Introducing Swift (developer.apple.com)
1216 points by falava on June 2, 2014 | 712 comments


As someone who always disliked Objective C, I think Swift looks very promising. I'll check it out right away :)

Software-wise, I feel these current WWDC announcements are the most exciting in years.

Looking at the Swift docs right now, I can see many interesting inspirations at work: there's some Lua/Go in there (multiple return values), some Ruby (closure passed as the last argument to a function can appear immediately after the parentheses), closure expressions, strong Unicode character support, a very very neat alternative to nullable types with "Optionals". Operators are functions, too.
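
To make a few of those concrete, here's a rough sketch (the function is made up, and this is later Swift syntax rather than the exact 2014 beta): a tuple return wrapped in an Optional, optional binding, and an operator passed as a function.

  func minMax(of values: [Int]) -> (min: Int, max: Int)? {   // tuple return type, wrapped in an Optional
      guard let first = values.first else { return nil }     // empty input: no value to return
      var lo = first, hi = first
      for v in values {
          lo = min(lo, v)
          hi = max(hi, v)
      }
      return (lo, hi)
  }

  if let bounds = minMax(of: [3, 1, 7]) {                    // optional binding: body runs only when a value exists
      print("min \(bounds.min), max \(bounds.max)")
  }

  let total = [1, 2, 3].reduce(0, +)                         // operators are functions, so `+` can be passed directly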

It has the concept of explicitly capturing variables from the surrounding context inside closures, like PHP does, instead of keeping the entire context alive forever like Ruby or JS.

Hell, there is even some shell-scripting thinking in there with shorthand arguments that can be used as anonymous parameters in closures, like "sort(names, { $0 > $1 } )".

Inside objects, properties can be initialized lazily the first time they're accessed, or even computed entirely dynamically. Objects can swap themselves out for new versions of themselves under the caller's nose by using the mutating keyword.
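
A small sketch of those two ideas (the types and values here are purely illustrative):

  class DataManager {
      lazy var cache: [String] = {       // not computed until the first access
          print("building cache")
          return ["warm", "data"]
      }()
  }

  struct Counter {
      var count = 0
      mutating func reset() {
          self = Counter()               // a mutating method may replace `self` with a fresh value
      }
  }

  let manager = DataManager()            // nothing printed yet
  _ = manager.cache                      // "building cache" printed on first access

  var c = Counter()
  c.count = 5
  c.reset()                              // c is now a brand-new Counter; count is back to 0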

There is the expected heavy-weight class/inheritance scheme which accommodates a lot of delegation, init options, bindings, and indirection (as is expected for a language that must among other things support Apple's convoluted UI API). But at least it's syntactically easier on the eyes now.

Automated Reference Counting is still alive, too - however, it's mostly under the hood now. Accordingly, there is a lot of stuff that deals with the finer points of weak and strong binding/counting.

Swift has a notion of protocols which as far as I can tell are interfaces or contracts that classes can promise to implement.
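
For example (a made-up protocol and type), a protocol just declares requirements, and a conforming type promises to implement them:

  protocol Summarizable {
      var summary: String { get }        // any conforming type must provide this property
  }

  struct Point: Summarizable {
      var x = 0.0, y = 0.0
      var summary: String { return "(\(x), \(y))" }
  }

  let items: [Summarizable] = [Point(x: 1, y: 2)]   // the protocol can also be used as a type
  for item in items {
      print(item.summary)
  }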

I think generally there are a few great patterns for method and object chaining, function and object composition in here.

The language has C#-style generics, and supports interesting type constraint expressions.
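
As a rough sketch of a constrained generic (the function is made up; the standard library has its own version of this), the type parameter below must conform to Equatable before `==` can be used on it:

  func firstIndex<T: Equatable>(of value: T, in items: [T]) -> Int? {
      var index = 0
      for item in items {
          if item == value { return index }   // legal only because T is constrained to Equatable
          index += 1
      }
      return nil
  }

  print(firstIndex(of: "b", in: ["a", "b", "c"]) ?? -1)   // prints 1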


I really don't see the Golang influence at all. The multiple-return-value semantic is closer to Ruby's than to Golang's; you're returning a tuple, which happens to have natural syntax in the language.

Defining Golang features that don't exist in Swift:

- Interface types with implicit adoption (Swift takes explicit protocols from ObjC)

- Error types

- Relatedly, the "damnable use requirement" and its interaction with error types and multiple value returns (ie, the reason Golang programs in practice check errors more carefully than C programs).

- Slice types

- Type switching (though, like Golang, it does have type assertions)

- "defer"

- Of course, CSP and the "select" statement.

Swift features that don't exist in Golang:

- Generics

- Optionals

- A conventional, full-featured class-model

Of the languages you could compare Swift to, Golang seems like one of the biggest reaches. Even the syntax is different.

(I like what I've read about Swift and expect to be building things in both Golang and Swift, and often at the same time).


This comment surprises me. It's factual but it's not really a response to an actual claim I made. Did you really perceive that I alleged an extreme similarity to Go in my comment? If so, it certainly wasn't intentional. I just said certain features reminded me of different languages, I didn't mean to assert these languages are actually incorporated into Swift.


No, no, I don't object to your comment. You're just not the only person I've seen making the comparison to Golang, and so I had a big comment bottled up. :)


Ah OK, I understand :)


My hat is off to you gentlemen. Such civility. Good day to you Sirs. Good day.


Including me: I think some of Swift's syntax looks like Go, while they actually don't share the same vision. Go tries to be a language great for systems programming, so it introduces channels and interfaces. But Swift wants to help GUI programming, and it needs the whole class system but without crazy stuff like channels.

But for other aspects that aren't related to those two things (the class system and concurrency), I would say they look quite similar.


For Go to be great for systems programming, the unsafe package needs a few more primitives and better control over the GC.


I don't know what this means. I've used Golang successfully for USB drivers, emulators, web and database servers, testing tools, a debugger, and cryptography. It seems evidently suited for systems programming as it stands.

Someone is always going to be out there saying that any language isn't ready for prime time because it lacks feature-X.


Multiple return values are, of course, much older than Ruby or Go(lang). The first language I used with them was Zetalisp (Lisp Machine Lisp) in 1979, though at about the same time, with the release of v7 Unix, C gained the ability to pass or return structs by value. Xerox PARC's Mesa appears to have had them a few years earlier. I don't know of any earlier examples.

I'm surprised not to hear Python mentioned, as it also has a tuple syntax.


It does share Go's `func` keyword, parens-free conditionals, and optional semicolons.
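
For what it's worth, a tiny sketch of those shared surface features, in Swift (the function is made up; the equivalent Go has much the same shape):

  func greet(name: String) -> String {   // the `func` keyword
      if name.isEmpty {                  // parens-free conditional
          return "hello, world"          // no semicolons required
      }
      return "hello, \(name)"
  }

  print(greet(name: "Swift"))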


That's really not a lot. The optional semicolons could also be influenced by BCPL or JavaScript.


Yeah, it's only the entire basic Syntax of the language they copied.

Yes, Swift's semantics are different (since it's essentially a domain-specific language designed to make writing Cocoa apps faster), but syntax-wise a Go programmer feels right at home reading Swift.


>Yeah, it's only the entire basic Syntax of the language they copied.

Because the keyword for functions, parens-free conditionals and optional semicolons are "the entire basic syntax" of Go, right?

Those are some of the most inconsequential details of Go syntax (all three of them), and of course all existed ages before Go.

Python, for one, has no semicolons and parens-free conditionals.


Only if by "entire" you mean the spelling of one keyword and the omission of parentheses. Everything else seems to be more related to C and various ActionScript/ECMAScript-like scripting languages.


> - Interface types with implicit adoption (Swift takes explicit protocols from ObjC)

Objective-C has informal protocols, and so does Swift.


Informal protocols and implicit interfaces are not the same thing. In particular, implicit interfaces are type checked statically at compile time, while informal protocols are checked dynamically at runtime.


I'm assuming OP means structural typing, which Objective-C does not support.


Not just generics, but pretty fleshed out generics, with type variables constrained by class or protocol.


(I like what I've read about Swift and expect to be building things in both Golang and Swift, and often at the same time).

How about a blog series where a developer implements something in Golang and/or Swift, then you explain how it's insecure? Then the developer tries to fix it and you explain something else that's insecure. Rinse, repeat.


I thought that was called "Hacker News comments."


Perhaps if you take language features directly, it's not a good comparison with Go.

There are some things that did strike me as similar. The approach Go takes is to bring C language to a more modern world (i.e. C without some of the language burdens that we know so well). Swift is attempting to do the same. The way it does type inference is nice.

var x = "Hi" reminds me of Go's const types. The ARC usage reminds me of Go's garbage collection (even though it's not the same thing). Basically, the parts that it omits from C are similar to the parts that Go takes out of C even though the language itself is different... thankfully.


> The approach Go takes is to bring C language to a more modern world

Like all the other thousands of languages with C based syntax.

> var x = "Hi" reminds me of Go's const types

Why does it remind you of Go and not of all the other languages that use 'var x = "Hi"' like JavaScript, ActionScript, C#, Scala, Kotlin?

> The ARC usage reminds me of Go's garbage collection

Why does it remind you of Go and not of all the other languages with garbage collection?


It reminds me of Go in what it omits from C. There are similarities. Go feels like a mix between Python and C.

I haven't gotten to Swift in a deep enough way, but it looks like it tried to tackle the same problems with the exception of concurrency. There are differences such as classes and generics in Swift. There are also similarities such as functions as first class citizens (or so it appears from the closures section of the free book).

All in all, it reminds me of Go just a bit. It doesn't remind me of all of those other languages that I do not know.


> Why does it remind you of Go and not of all the other languages with garbage collection?

You sound old. ;)


I think the style of defining functions is similar to Go. Not a carbon copy, but it "feels" Go-ish.


From a user's point of view, it's basically straight out of the Rust book: all the gravy, but with relaxed ownership and syntax.

It has it all [1]: static typing, type inference, explicit mutability, closures, pattern matching, optionals (with own syntax! also "any"), generics, interfaces, weak ownership, tuples, plus other nifty things like shorthand syntax, final and explicit override...

It screams "modern!", has all the latest circlejerk features. It even comes with a Light Table/Bret Victor-style playground. But it is still a practical language which looks approachable and straightforward.

Edit: [1]: well, almost. I don't think I've caught anything about generators, first-class concurrency and parallelism, or tail-call optimization, among others.


I don't really see anything but a superficial resemblance to rust, where both are borrowing ideas from the same place. Where Rust really differs from modern algol-family-languages-learning-from-30-year-old-functional-programming-research is in its strictness.

The optional type stuff is good, and it will definitely be a net safety improvement, but it's by no means attempting to approach a panacea to safety like Rust's strict static analysis does.

In particular, Swift gives you really simple outs in the form of the '!' unwrap and 'as' (should be 'as!', at least imo) downcast-and-unwrap operators, which result in run-time errors and will probably be seen as unremovable code smell in a couple of years.
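
To illustrate those two outs (a small sketch with made-up values; later Swift versions did indeed spell the forced downcast `as!`):

  let maybeName: String? = "swift"
  let name = maybeName!              // force unwrap: a run-time error if the value is nil

  let anything: Any = "hello"
  let s = anything as! String        // forced downcast: traps at run time if the cast fails
  print(name, s)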


Indeed, I'm not sure what Swift's concurrency story is yet. Other than that it's encouragingly similar to Rust (we're evolving in the right direction!), but not quite as low-level.


I'm sure that the main mechanism will be the existing dispatch queue libraries Apple's other languages use.


Most likely it will use Grand Central Dispatch, which is already part of iOS and Mac OS X.


The similarity to Rust should scare the hell out of Rust's creators and proponents.

Swift could very well render Rust almost totally irrelevant within the OS X and iOS sphere of software development. If we end up eventually seeing Swift implemented for other platforms, then the chances of Rust's long-term success diminish even more.

Things might have been different had a stable, even if somewhat imperfect, initial version of Rust had been released by now, thus allowing it to gain some adoption and traction.

I hope that the "But Rust isn't targeting those developers!" argument isn't used to try to justify this mistake, as well. Rust is already facing stiff competition from languages like Go, C++11, C++14, Scala and even Java 8.

With the announcement of Swift, Rust's niche and audience are getting smaller, further preventing the widespread adoption that's necessary for a programming language to become truly successful.


Oh hello again, Pacabel. I'm familiar with your game by now. :)

We're not scared in the slightest. I'll reconsider when Swift has inline ASM, allocators, linear types, move semantics by default, region analysis, and a type system that guarantees freedom from data races (oh, and when the code is open-sourced, and targets both Linux and Windows as a first-class citizen).

Swift isn't intended to be a systems language: it's an application language. Anyone who's capable of choosing Swift as of today was just as capable of choosing Objective-C yesterday. And as the Rust developers themselves have discovered over the past two years, deciding to become a systems language doesn't happen overnight.

(In fact, on a personal note, I'm ecstatic that Swift has been announced. ADTs! Optional types! Pattern matching! Think of how many features are no longer alien to people who want to learn Rust! And hell, the syntax is incredibly similar as well, which further reduces friction. As a hardcore Rust contributor, I want to shake the hand of each and every one of Swift's designers.)


Swift isn't intended to be a systems language

FWIW, Swift is categorized as a systems language in the opening pages. But, then, so is Go in its FAQ. To Swift's credit, at least it has deterministic memory management through ARC.


While having some support for garbage collection is good, reference counting is a rather expensive way to implement it for applications. This becomes especially bad on multicores, since it may dramatically increase the number of writes to shared object cache lines.


Wouldn't a lot of that be mitigated, though, by using tagged pointers in place of RC structs where possible? Seems like an obvious optimization.


Not really sure what the tag in the pointer would be used for. Could you give an example?

In general, reference counting has the problem that it needs to update the reference count. If you have a read-only data structure, these updates to the references will introduce writes that may severely impact performance, since a write introduces cache-consistency communication while reads are communication-free.


Yeah, you're right. I didn't think it through when I asked. I conflated this scenario with the technique they use to put small objects like NSNumber on the stack.


> Swift isn't intended to be a systems language: it's an application language.

It may not be ready as a systems language in its pre-1.0 form, but the Swift book claims that it's "designed to scale gracefully from ‘Hello World’ to an entire operating system", so Apple appears to have big goals.


I'm thinking about starting the "Rust contributor points out how Rust is a systems language and $language is an applications language" drinking game. At least now they'll focus on Swift instead of Go. I don't mean this to be rude; I've just noticed a similar set of usernames in threads about certain !Rust languages playing the underdog position and always feeling like they need to compare.

Given Rust's PR, speaking of that -- not a thread about Go passes without at least three pcwalton comments these days -- I actually broke down and gave it a try. I wrote a little Hello World server and then got lambasted by a friend of mine for not working functionally, since, in his words, "Rust is a functional language and the fact that it supports other paradigms is a mistake." I rm -rf'd it and am ignoring it for now, but I look forward to it stabilizing and maybe coming back to it.

Rust has potential but the PR needs to ease up just a little. There is room for more than one language in the world.


> not a thread about Go passes without at least three pcwalton comments these days

Well, every thread about Go inevitably has numerous comments comparing it to Rust, often erroneously, and pcwalton is one of the primary Rust developers.


A fair point.


Rust is not a functional programming language. Your friend is just wrong for taking you to task about that.


Sign me up for a "someone gets indignant about rust developers responding to other people's comments about rust" drinking game.

(small-time rust contributor here too)


I think you should look up what "indignant" means, then, for the benefit of all of us, demonstrate the anger in my comment that was not put there unconsciously by the reader.


I saw a lot of people mention ADT in relation to Swift but I haven't found examples in the documentation book I downloaded from Apple. Would you be kind enough to provide the example you saw? EDIT: My bad, page 40 in the section about protocols (unless I'm missing something).


It's on the bottom half of the page about enumerations. Typically languages have product types, but lack true sum types. Swift's enums provide such leverage.

That said, Swift's types are a bit less than recursive, so there's a bit of niggling still before you get to the affordances of something like Haskell's `data`.
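
Roughly along the lines of the book's barcode example (written here with later Swift case spelling), each case of an enum can carry its own associated values, and `switch` pattern-matches over them:

  enum Barcode {
      case upc(Int, Int, Int, Int)       // one case carries four Ints...
      case qrCode(String)                // ...the other carries a String
  }

  let code = Barcode.qrCode("ABCDEF")

  switch code {
  case .upc(let system, let manufacturer, let product, let check):
      print("UPC: \(system)-\(manufacturer)-\(product)-\(check)")
  case .qrCode(let payload):
      print("QR: \(payload)")
  }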


Thank you.


It's interesting that inline assembly is your first bullet point, since there's nothing I can think of that ruins a code file more than inline assembly in a host language. Put that crap in a .S file and link it in like everything else, for crying out loud. The one time you need inline assembly is when you don't want to build a function frame, such as a tight loop, but come on.

Also, even in systems, I can think of about once a decade I even need to write assembly, so... maybe grasping at straws a bit?


I wouldn't lead with inline assembler as the selling point of Rust. The main selling point of Rust is memory safety without garbage collection; it still is the only industry language that allows this (as reference counting is a form of GC).

That said, I think inline assembler is an important feature:

> It's interesting that inline assembly is your first bullet point, since there's nothing I can think of that ruins a code file more than inline assembly in a host language. Put that crap in a .S file and link it in like everything else, for crying out loud.

That's too slow. You need to give the optimizer more information than that, and the overhead of the procedure call can be significant.

> The one time you need inline assembly is when you don't want to build a function frame, such as a tight loop, but come on.

But that's a very important use case.

> Also, even in systems, I can think of about once a decade I even need to write assembly, so... maybe grasping at straws a bit?

It's all over the place in the Linux kernel.


> But that's a very important use case.

For Rust. I didn't make the comparison to Rust, I merely was intrigued by the choice of features and in which order to defend the comparison made by someone else. I see Rust and Swift as targeting entirely different things, at least at first (which means that Swift can certainly evolve inline assembly if it is so needed), and any comparison at this stage is pointless.

> It's all over the place in the Linux kernel.

Cool, that's one piece of software. I'll save you the next few: drivers and a couple files in a game engine. You're disputing my point how, exactly?


I'm confused that you seem to think that my bullet points are ordered from most to least important. It seems like quite a strange thing to attack! I'm sorry to disappoint you, but I do not run these comments by a team of editors first. :)

That said, judging by your other comments, you seem to be of the impression that the Rust team has some sort of vendetta against Go, and have concocted a vendetta in kind. Again, I must sadly disappoint you, but I strive to encourage a culture of respect in all the forums that I visit (and moderate).


If you find my comments disrespectful and think I'm implying you have a vendetta, you skimmed them and are disrespecting me by putting words in my mouth. I've simply noticed a trend of most commentary from Rust contributors taking the underdog position and participating in picking apart other (ostensibly) competing languages, and I think you guys should stop doing that.

The last sentence of the comment to which you hint is my thesis. There is no subtext. It's easy to perceive negative opinions as attacks and create an adversary from the person writing the opinion, but it's also a little bit disingenuous. It also, conveniently, creates a platform upon which nobody can disagree with you lest they be hostile and aggressive. You can then exit on the "high ground," as you've done here.

I meant no ill will. Best of luck, I guess.


> The one time you need inline assembly is when you don't want to build a function frame, such as a tight loop, but come on.

This is hilarious. Like anyone would ever use inline assembly in a tight loop.


My apologies, I didn't realize that expecting a programming language to have a stable syntax, stable semantics, a stable standard library and at least one stable and robust implementation before using it seriously in industry was merely a "game".

Perhaps this is news to you, but those of us who work on and are responsible for large-scale software systems tend to take such factors very seriously.

This may sound harsh, but it really doesn't matter what features and benefits Rust could potentially bring to the table if the lack of stability makes it unusable in practice today. A programming language that can't be seriously used might as well not even exist.

I don't doubt that Apple will have Swift available in a seriously usable form by this fall, and it's very likely that it will see rapid adoption soon after. I'm afraid I can't say the same about Rust and its supposed by-the-end-of-2014 1.0 release, given its current lack of stability and the rapidly-approaching end of the year.


You seem to desire both stability and a faster 1.0 release. The realistic choices are:

    1. Release fast and iterate
    2. Release fast and be stuck with mistakes
    3. Release slow
Option #1 breaks stability, so that's out.

Swift appears to be taking option #2 (Apple doesn't commonly break APIs, do they?), but we can't even really be sure because it hasn't been developed in the open the way that Rust has. It's possible that it's been in development as long as Rust, and we simply haven't heard about it yet. Either way, option #2 is a perfectly reasonable one to go with; it has served Java quite well (for a loose definition of fast), though it has required some creative approaches to language improvements.

Rust is taking option #3. C has been around for over 40 years now. If Rust hopes to supplant it, it seems reasonable to take a few extra months (or even an extra year) to put out a solid version 1 that won't hamstring the language or force a breaking change down the line.


Apple just announced in the Platform State of the Union that they won't guarantee source compatibility until Swift is released along with iOS 8 (changes to the language will require source conversions), so I believe they're taking a route closer to option #1.


No, I'd say Apple's approach was: methodically develop until very polished first, then announce after. You just didn't get to see the 0.1, 0.2, 0.3... versions.

This is actually nice, because knowing about languages years before they are production ready probably just slows developer adoption, since nobody is quite sure when they should trust their development process to a new language.


>My apologies, I didn't realize that expecting a programming language to have a stable syntax, stable semantics, a stable standard library and at least one stable and robust implementation before using it seriously in industry was merely a "game"

You also didn't realize that you just built the biggest strawman ever in the above sentence.

Enough with the "I want a stable Rust now". Rust, like any other language, takes years to stabilize. You just happen to see it happen in the open, whereas most other languages you get them at their 1.0 release.

>This may sound harsh, but it really doesn't matter what features and benefits Rust could potentially bring to the table if the lack of stability makes it unusable in practice today. A programming language that can't be seriously used might as well not even exist.

They could not give a flying duck about it being "seriously used today".

They'll start to care AFTER they release it as 1.0. They only released these 0.x versions to solicit ideas and improvements, not to get programmers to adopt it.


Well, we aren't actually seeing stabilization when it comes to Rust.

Assuming this stabilization actually does happen, whether it happens in public or private is irrelevant.

What matters is that we've seen C++ stabilize. We've seen Go stabilize. We've seen Scala stabilize. And now we'll likely see Swift stabilize, well before Rust does. They are all serious competitors to Rust.

As these other languages continue to evolve, but at the same time remaining usable, the relevance of Rust will continually decrease. It may still have drawing power today. A few years from now, it will have less appeal.


This is silly. Rust is on a similar time frame to Go in terms of stabilization (~2.5 years after release).

It's hard to make a comparison against Swift, which is a proprietary language developed for years behind closed doors. Presumably they're at 1.0 from the first day by design. You can't do that with open source languages.


* Scala has been around since 2004 (public release) -- 10 years ago. (Not sure when the first stable release was, but more than 3 years.)

* Go in 2009 (public) -- 6 years ago. 1.0 (first stable release) was released in 2012, so it took 3 years to stabilize.

* C++ in 1983 -- 31 years ago. ... It's been a long time.

* Clojure in 2007 -- 7 years ago. The creator took 2 1/2 years before releasing it to the public.

* Rust in 2012 -- 2 years ago.

It's pretty absurd to expect Rust to be stable right from the get go. The difference in all this is that most of those languages were closed before being released. Rust was open at a pretty early state.


You're probably right, but I think I've heard of Rust in public in 2010 and that it was started by Graydon in 2007.


Rust had been stewing around in Graydon's head for years, but he was never paid to work on it until 2009 (part-time, at that point). And he didn't have any paid employees to help him until 2010 (that would be pcwalton). And as far as I'm concerned, Rust development didn't actually start in earnest until mid-to-late 2011, when the compiler began bootstrapping. And the Rust that we know and love today didn't begin to take shape until 2012, when borrowed references started taking shape.

Personally, I consider the 0.1 release in January 2012 to mark Rust's "birthday". Everything prior to that was just gestation. :)


I can't tell if you like Rust or hate it. If you hate it, and you are right, then it will simply fade away and your comments will serve nothing more than being able to say, "I told you so." If you are wrong, then you end up looking a bit silly.

If you like it, perhaps you should be a little patient and give the creators the benefit of the doubt. No one wants a Rust 3.0 fiasco.

It's hard to encounter language issues without implementing a large project in the language. I am happy that they're taking the time to let the implementation of Servo help inform the design of Rust.


Rust's raison d'être is memory safety without garbage collection. Swift requires garbage collection to achieve memory safety. (Reference counting is a form of garbage collection.)

In other words, Rust is about safety with zero overhead over C++, and Swift is not zero-overhead. So the people who need Rust are not going to use Swift for Rust's domains. That's fine, as Apple wanted a language for iOS and Mac app development with tight integration with Objective-C, and from what I've seen they've done a great job of developing one.

> Things might have been different had a stable, even if somewhat imperfect, initial version of Rust had been released by now, thus allowing it to gain some adoption and traction.

Why are you so insistent that we freeze an unsafe version of a language that's designed for safety?


> Why are you so insistent that we freeze an unsafe version of a language that's designed for safety?

Pacabel has made a career of complaining about Rust being unstable.


At least he/she hasn't derailed the thread by complaining about every other project that Mozilla is working on. I count that as progress!


Are you honestly suggesting that Rust is stable at this point?

I think that the recent, and very disruptive, ~ and box changes should completely dispel that notion.

I'm merely pointing out the reality of the current situation, which some in the Rust community do not wish to acknowledge, for whatever reason. The situation has yet to change, so what I'm saying is still valid, and will remain so until some actual improvement does take place.

Now that we see yet another serious competitor in the form of Swift, what I've had to unfortunately be saying for some time now becomes more and more relevant. If Rust is to become relevant, it will need to be usable, and that will need to happen very quickly.


No, I'm not suggesting that Rust is stable. It wasn't even slightly implied by what I said. I was just pointing out that you're a broken record on this topic, to the point of being a troll (you seem to just ignore the meat of any response you get and only focus on the current state of Rust).

To be crystal clear: no-one is suggesting that Rust is stable and no-one is suggesting it is ready for adoption (if they are, they are wrong). However, being unstable now is very very different to not ever being stable.

In any case, Swift is only tangentially a Rust competitor as kibwen demonstrated.


Resort to name-calling if you really must. None of that will change reality.

Rust is not stable, as you yourself have readily admitted. What I've unfortunately had to be pointing out for such a long time now is absolutely correct.

We've been told that we can expect Rust 1.0 by the end of the year. As each month passes, it becomes less and less likely that we will actually see this. We are still seeing significant change, even as recently as the past month.

I think Rust could potentially be very useful. But that requires stability, and that in turn is something that appears more and more elusive each day.

It's easy to say that Swift isn't a competitor to Rust, but the reality is that it is. And unlike Rust, it will very, very likely be usable for serious apps within a few months. It will see the adoption that Rust could have had, had it been usable, further reducing Rust's future chances.


What have you been pointing out for so long? That Rust is unstable? That many people/companies won't use Rust while it is unstable? That there are other languages people can use instead?

All of those are highly uncontroversial and universally acknowledged by experienced Rust users.

Also, I don't understand how you have leapt from Rust being unstable now to Rust never being stable.

A 1.0 release by the end of the year doesn't seem at all unreasonable to me; I think you are expecting more from it than what the Rust team is looking for (and have stated publicly repeatedly): stabilising the core language.

Of course, a stable release of that form will still mean some libraries may be unstable (and so that Rust would be unsuitable for many corporate developments). These libraries will be stabilised progressively and iteratively.


>Are you honestly suggesting that Rust is stable at this point? I think that the recent, and very disruptive, ~ and box changes should completely dispel that notion.

No, he merely suggests that you bored a lot of people by repeating that it's unstable, instead of accepting the fact and using something else.

If being unstable is that bad, then by all means, go and use a stable language.


I, and many others, do use something else. That's the big problem facing Rust, whether or not its creators wish to admit this fact.

There are numerous alternatives to Rust that offer many of its benefits, but they're usable today. We can rely on them today, tomorrow, and likely for some time to come.

And by this fall, we'll likely have Swift as yet another option to add to our growing list.

I think Rust has a lot of potential. But each month that goes by squanders that potential. It has less and less of a chance of making a real impact the longer it isn't usable, especially while its competitors keep evolving.


>I, and many others, do use something else. That's the big problem facing Rust, whether or not its creators wish to admit this fact.

Yeah, and I listen to Rihanna instead of Jay Farrar. Obviously that's the big problem Jay is facing, and he should sound more like Rihanna to cater to my taste.


> Are you honestly suggesting that Rust is stable at this point?

No, he didn't say that anywhere.


I think you may be aiming for a level of safety, or perhaps a notion of "perfection", that isn't practically obtainable.

The recent, and rather disruptive, box changes are a good example of this. We see change, and those of us with existing Rust code sure do feel the change, but very little convergence seems to be happening.

Based on past trends, I would not be at all surprised if problems are found with the new approach as it becomes more widely used, and some other approach is then attempted.

Wheel-spinning is something that can quite easily happen with ambitious software projects. It's not a new phenomenon. But when facing ever-increasing competition, and other real-world constraints, it's often better to aim slightly lower and at least be mostly usable in practice.

A memory-safe programming language that can't actually be used is pretty much irrelevant. It's better to accept some slight amount of imperfection if that means it can actually be used.


> I think you may be aiming for a level of safety, or perhaps a notion of "perfection", that isn't practically obtainable.

I believe it is, as the basic structure and rules of the borrow check (which is the part of Rust that's truly unique) have proven themselves to be quite usable. The usability problems remaining are implementation and precision (e.g. issue #6393), not big problems that will require large redesigns.

> The recent, and rather disruptive, box changes are a good example of this. We see change, and those of us with existing Rust code sure do feel the change, but very little convergence seems to be happening.

Yes, it is. The number of outstanding backwards incompatible language changes is decreasing. People who do Rust upgrades notice how the language is changing less and less.

> A memory-safe programming language that can't actually be used is pretty much irrelevant. It's better to accept some slight amount of imperfection if that means it can actually be used.

"Some slight amount of imperfection" means not memory-safe. Java didn't settle for that, C# didn't settle for that, Swift isn't settling for that, and we aren't settling for it.


This is an honest question: Why do you seem to care so much? Rust is in my view a great project, that yes isn't quite there yet but is making great progress. I'm looking forward to using it when it is stable, and pcwalton and the other contributors are developers that I've looked up to for a number of years: I have nothing but faith in them.

At the end of the day, if Rust fails, well that will be a shame. But I'm seeing nothing that shows that it might, so I'm truly struggling to understand why you seem so upset by a new modern language trying to tackle big problems in ways that have never been done before. That's a good thing, as far as I'm concerned.


Having been in industry for a long time, I think that something like Rust would be hugely beneficial. It very well could solve some very real problems.

I bring this up again and again because I'd rather not see Rust fail. I'd much rather see a slightly flawed Rust that's actually usable in the short term, rather than a continually changing Rust that nobody will seriously adopt.

Rust has been in development for years now. That's a very long time in the software industry. A few years of development time without a stable release is understandable. But it's getting beyond that now.

Rust isn't quite there yet, but each day it edges closer to a Perl 6 type of disaster. Perl 6 offered some intriguing ideas, but it just isn't usable, and that's a shame. Meanwhile, other competitors have arisen and blown past it, which would render it far less useful even if it were ever properly implemented.

Given the increasingly stiff competition that Rust is facing, I suspect we'll see it end up like Haskell or D. Something usable is eventually produced, but it never sees the truly widespread adoption that it could have seen, had it been usable earlier on. It's not as bad as Perl 6's situation, but it is still unfortunate.


> Given the increasingly stiff competition that Rust is facing, I suspect we'll see it end up like Haskell or D. Something usable is eventually produced, but it never sees the truly widespread adoption that it could have seen, had it been usable earlier on.

I don't have much to say about D, but the history of Haskell implied by this sentence is hilariously wrong.

Go watch Simon Peyton Jones' talk about the history of Haskell: http://research.microsoft.com/en-us/um/people/simonpj/papers.... As well as being wonderfully entertaining, it explains the actual history of Haskell: it was designed to be a language with which various academic groups could do functional programming language research. The fact that Haskell has gradually grown more popular and now has mainstream appeal and some industrial users is quite a surprise to its creators.


> Rust has been in development for years now. That's a very long time in the software industry. A few years of development time without a stable release is understandable. But it's getting beyond that now.

Not for programming languages. These take years and years. Take a stab at any of the most popular languages. They weren't created 1-3 years ago. It takes time, and that's a good thing.


reference counting != garbage collection. Garbage collection takes CPU and lots of memory. Reference counting just ++ the reference count on allocation, and -- on free. Essentially no overhead.


C++ moving to a 3 year standard cycle is a much bigger 'threat' to rust. But really, the fact that there's so much actual investment in improving mainstream languages from various well-funded sources is probably a rising-tide-lifts-all-boats kind of thing.


Yes, I do agree that the situation is improving across the board.

But as an industry, we need practical solutions that are available now, even if somewhat flawed. We need languages we can use today, and know that the code we write today will still compile fine next week and next year, if not a decade or more from now.

Modern C++ is getting pretty good at offering this, while offering a far greater degree of safety. Go isn't bad, either. Scala has its drawbacks, but it's often a reasonable option, too. The key thing to remember is that all of these languages have offered developers a stable target, and they are seriously usable in the present.

Given the announcement of Swift, and given that Apple will very likely deliver on it by the fall, we very well could see it becoming a major player during 2015.

The safety benefits that Rust could theoretically or potentially offer are virtually useless to huge swaths of the industry as long as the predictability of a stable release just isn't there. The longer this wait goes on, the better the competition becomes, and the less relevant Rust will unfortunately become in the long term.


> But as an industry, we need practical solutions that are available now, even if somewhat flawed. We need languages we can use today, and know that the code we write today will still compile fine next week and next year, if not a decade or more from now.

By this logic we shouldn't invent any new programming languages at all. There's no such thing as a "practical solution that's available now"; everything takes time to develop.

> Modern C++ is getting pretty good at offering this, while offering far a greater degree of safety. Go isn't bad, either. Scala has its drawbacks, but it's often a reasonable option, too. The key thing to remember is that all of these languages have offered developers a stable target, and they are seriously usable in the present.

You aren't going to use those languages if you want memory safety without garbage collection. Because they can't offer zero-overhead memory safety without breaking existing code.


Swift's environment is also very similar to Elm's time travel debugger: http://debug.elm-lang.org/

Direct link to Elm's demo similar to Bret Victor's: http://debug.elm-lang.org/edit/Mario.elm (video: https://www.youtube.com/watch?v=RUeLd7T7Xi4)


I immediately thought about that as well. I wonder how they pull it off? Swift is not a functional language, so they just save every single variable, or what?


Time travel debugging has existed for a long time, and it's not limited to functional languages; the most obvious way they could do this is through checkpointing.


Checkpointing is a natural fit with the Cocoa API, which uses an event loop. Just save the state of the application after each event is handled.
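
A very rough sketch of that idea (everything here is made up, and this is certainly not how Apple implements the playground): snapshot the state after each event, and "time travel" by restoring an earlier snapshot.

  struct AppState {
      var counter = 0
  }

  var state = AppState()
  var checkpoints: [AppState] = []

  func handle(event: String) {
      state.counter += 1                 // ...mutate state in response to the event...
      checkpoints.append(state)          // checkpoint after the event is handled (structs copy by value)
  }

  func rewind(to index: Int) {
      state = checkpoints[index]         // restore an earlier snapshot
  }

  handle(event: "tap")
  handle(event: "tap")
  rewind(to: 0)
  print(state.counter)                   // 1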


Just a hunch: LLVM uses static single assignment (SSA), which is just that, saving every single variable change.


He mentioned the desire to drop the "C" from Objective-C, but I'm curious what this means for using C/C++ libraries now. Do they need to be wrapped in Objective-C before being visible in Swift?


Swift uses a special "bridging header" to expose Objective-C code to Swift. This header will presumably be processed by the Swift compiler.

In the other direction, Xcode will automatically generate an Objective-C header to expose your Swift code to Objective-C.


The iBooks guide references a "Using Swift with Cocoa and Objective-C" document. Is that where you got your information?



It seemed like it could interoperate with C just fine based on the slide talking about all three. Also, because it uses the Objective-C runtime and compiles to native, it might just see C functions as normal functions. Though the little I've looked at of the free book hasn't given me any hints about that yet.


"You cannot import C++ code directly into Swift. Instead, create an Objective-C or C wrapper for C++ code."


I suspect that as long as it compiles to LLVM, anything goes.


I hope so. Another stab in C's back.


Is it realistic to try to dive right in to the 500-page book they provided without a computer science background, just HTML/CSS/PHP self-taught experience, to learn the language? Or should I take other steps first?


Reading a book cover-to-cover is, for me, a bad way to learn a language. I usually pick a language up very quickly when I use it to reimplement something I'm already familiar with, especially something with annoying issues stemming from the language it's currently implemented in. If you don't have one of those, think about something you hate, and fix it with this.

If the book is good documentation, then use it. But you may benefit from focusing more on problems than completing a book.


Just read the first paragraph, and the conclusions if they have them, of each chapter. This will give you a good idea of what's there when you need it. Then I'd jump straight into tutorials.

Honestly, skimming 500 pages doesn't sound horribly hard to me. I've done that a few times to pick up something new. As ap said, you won't learn the language like that, but you will have a good reference to go and learn from after the fact.

After that you could probably work through the examples in said book, or at least the interesting ones.

P.S. The above steps are all I really learnt from my CS degree.


Skimming through it has been great. It's quite well-written and you'll get a lot of the concepts that the lang introduces even if the extent of your programming education is JS. Give it a try :)


And is this the exact same language? http://www.cs.cornell.edu/jif/swift/doc/index.html


No, it is completely unrelated. That is the other language called Swift.


The copy I have is only 366 pages, but it's 'converted' from the ebook so I'm not sure if that's a factor. A lot of the pages are dedicated to an examination of the grammar that's probably not relevant for a language overview and the rest is really readable and easily skimmed for interesting details. It's broken up with simple and clear examples every few paragraphs as well.

Definitely take a look through it. You definitely don't need to be a language nerd to understand it.


I'm 20% in, and you certainly should give it a try. It's very well-written, explains basic concepts really well, and has a lot of examples. It also has a good flow from the basic features to more advanced ones.


Great, thanks - will give it a shot!


I think it's a good idea only if you execute code in parallel, following all the examples. Otherwise there are a lot of notions that already exist in Objective-C that are glossed over and would be pitfalls for people new to the runtime.


I think everybody can see their own favorite language in it... and that's a good thing.

For me it looks like Scala + a sprinkle of C++14 :)


There are others too with "looks like Scala":

Jacob Leverich https://leverich.github.io/swiftislikescala/

Den Shabalin http://www.scribd.com/doc/227879724/Swift-vs-Scala-2-11


Indeed, I also think it's most similar to Scala.


> It has the concept of explicitly capturing variables from the surrounding context inside closures, like PHP does, instead of keeping the entire context alive forever like Ruby or JS.

Just as a point of fact, JavaScript -- at least the V8 implementation I'm most knowledgeable of -- doesn't "keep the entire context alive forever." Only variables used in the closure are allocated into the closure (i.e. on the heap); the others are allocated on the stack and disappear as soon as the function returns.

I don't use iTunes so can't read their book, but I wanted to ask: you say that ARC is still their GC strategy, correct? So reference cycles are still an issue? I'm surprised at this. I can see reference counting being a strategy for moving a non-GC'd language to GC (like Objective-C), but to start a new language with that constraint is surprising.


> Only variables used in the closure are allocated into the closure

I'm not sure that's true. Look at the following code:

  var x = 123;

  var f = function(xname) {
    eval('console.log('+xname+');');
  }

  f('x');
It's a dynamically named variable. Clearly, f() has access to the entire context that surrounds it, not just the objects explicitly used in the function code. In this example, the compiler could not possibly have known I was going to access x.

This means in Javascript, as well as in Ruby, when you hand a closure to my code, I can access the entire environment of that closure.

Contrast that with Lua, for example, where the compiler does indeed check whether an outer variable is being used and then it imports that variable from the context only.

PHP does it most explicitly, forcing the developer to declare what outer objects they want to have available within the function.


> In this example, the compiler could not possibly have known I was going to access x.

Right, but it knew you were going to use eval, and to support that, it had to allocate all local variables in the closure. That's why you saw this behavior. The same would happen if you used a 'with' construct.


> Right, but it knew you were going to use eval, and to support that, it had to allocate all local variables in the closure.

Wow, so there is actually special handling in the engine for this? So it does static analysis whenever it can, but not in these two cases?


Yes, the V8 compiler bails out of several optimizations if your function uses eval. You can see this in the profiler: functions which V8 wasn't able to optimize will have an alert sign next to them, and if you click it, it'll tell you what the issue was.


This is a bit troublesome!

  function f() {var x = 99; return function(a,b) {return a(b)};}
  f()(eval, 'console.log(x);')
  ReferenceError: x is not defined
  function f() {var x = 99; return function(a,b) {return eval(b)};}
  f()(eval, 'console.log(x);')
  99
  undefined


Yep, this is as per spec: http://www.ecma-international.org/ecma-262/5.1/#sec-10.4.2 . "Indirect" calls to eval (i.e. assigning eval to another variable, like you did by passing it as a param) are evaluated in terms of the global environment. "Direct" calls, like in your second example, use the local environment.


Very cool, thanks for clearing that up!


Yes, and in articles describing it (V8), they explicitly warn you not to use "eval" or "with" because of the performance impact.


In your example, x is still in scope when f('x') is called. It doesn't require a closure to work.


I think you misunderstand what I'm trying to say. The point is not that x should be out of scope (why would it be?)

The original assertion by curveship was that the outer context is not kept alive for the function, and that f() only gets access to the variables it explicitly imports from the outer context. And I thought this might be wrong, so I cooked up the example.

Again, this is not about scope. This is about the fact that the function itself keeps a reference to the entire context it was created in, as opposed to just the things it explicitly imports.

In this, it appears, Javascript works exactly as Ruby, which again makes the entire outer context available through the binding facility.

I'm sorry if that wasn't clear from my description.


Swift reuses the Objective-C runtime, so it had to be compatible in terms of memory management.


OK, thanks. I guess that makes sense. Now that the website is working, I see that they're aiming at fairly seamless interop between Swift and Objective-C, so I guess they need a similar memory strategy.


This is not accurate. SpiderMonkey and V8 still retain the entire outer scope if any of the variables are used.

See here for an example: https://www.meteor.com/blog/2013/08/13/an-interesting-kind-o...

This bug is still not fixed. There's an issue open for it on the V8 tracker, I believe. It seems to have not gotten fixed in either engine because it's a difficult problem that affects a small subset of JS applications.


So go ahead and run his test. Things have changed :). Memory builds up 1mb/second, then after a few seconds, you'll see it drop back to zero, as the GC runs.

V8 has seen a lot of really nice optimizations to closures over the last year. My favorite is that closures are no longer considered megamorphic.


Oh wow, when did that land? I was getting hit by that leak with JSIL in the last couple months. Can you link to the commit?


Not sure the date. I first noticed the change back in April, when I was profiling some code. Ask vegorov: http://mrale.ph/blog/2012/09/23/grokking-v8-closures-for-fun... .


I agree that Swift looks quite promising, though I'm a bit surprised that it doesn't offer any concurrency primitives like Go does. I only say this because "The Swift Programming Language" e-book claims that "it’s designed to scale from 'hello, world' to an entire operating system."


I'm not even an iOS developer but this is by far the most exciting thing I heard in the keynote.

As an amateur/hobbyist programmer who's self-taught with Ruby, JavaScript, etc., the one thing that was keeping me from experimenting with iOS apps was Objective-C. I know I could tackle it, but it's been hard to take the plunge.

I don't know much about Swift yet, but from what I've seen it looks very exciting. So if Apple's goal was to get new devs into the iOS world, at least from 10k feet, it's working.

I'm excited!


I'm not really that impressed--it looks like a hodgepodge of ideas from ES6, Ruby, Go, and maybe Rust, with a bit of backend work done to let it work on their existing infrastructure.

I dislike that Apple has continued the special snowflake approach, that for some reason we as developers need to learn yet another different-but-almost-the-same language to develop for them, instead of just adding proper support and documentation for an existing language. Why not just let us use ES6, or normal C/C++, or Java?

But instead, now there's yet another language without great innovation that is probably going to be badly supported outside of the Apple ecosystem but still will have enough fandom to keep it alive and make life annoying.

At least Google had the decency to pick a language everybody was already using and use that.

EDIT:

I feel bad for all the engineers stuck in those pixel mines, not allowed to talk about what they're doing, doomed to reinvent things that are on the way out just as they come in.


There are already MacRuby and RubyMotion. They tried using Java years ago. It failed. Developers didn't like it. Existing stuff simply doesn't mix that well with Cocoa and that style of programming. That is why something like Swift was needed.

I really don't get how you can bring up languages such as Rust and Go and still complain about Apple's special snowflake approach. Suddenly Apple is doing something developers have been demanding from them for years, and something lots of other companies like Google, Mozilla and Microsoft have already done. But oh no, because it is Apple, it is all wrong.


It's unfair to lump Mozilla in with the rest, since Rust isn't at all proprietary. It has been open source for a long long time: https://github.com/mozilla/rust


That is not quite right.

The Java/Objective-C bridge existed in the early days as they weren't sure if developers would pick Objective-C, so they decided to bet on two horses.

As Objective-C eventually won the hearts of Mac OS X developers, the bridge was deprecated, and a few years later so was the full Java support.


Suddenly Apple is doing something developers have been demanding from them for years and something lots of other companies like Google, Mozila and Microsoft has already done.

And yet they've decided to do it again, with yet another incompatible language! Joy of joys!

(And as for Java, it was my understanding that Apple had hobbled it by refusing to release updates on a timely basis.)


> Apple had hobbled it by refusing to release updates on a timely basis.

I can see how they could get tired of being forced to ship almost-monthly updates just to support an extra language with very limited adoption. If you have to make that sort of effort, you'll probably do it for your native tools only (like Microsoft does with .Net). Besides, Java apps on OSX looked better than Java apps on Windows, but they were still recognizably different from Obj-C ones.

I wish somebody would write an OS in Python 3...


"(And as for Java, it was my understanding that Apple had hobbled it by refusing to release updates on a timely basis.)"

That's a different, later issue.

Early on in the life of OS X, Apple offered a Java interface to the Cocoa class frameworks. In theory, you could write OS X applications using Java, calling into the Apple frameworks instead of using Swing or whatever Java frameworks.

This wasn't all that well supported, didn't perform well, and wasn't popular.


Sun should simply have hired some Mac people and done it themselves. Entrusting the success of your entire company (they changed their ticker symbol to JAVA!) to a 3rd-party vendor's whims was and is silly.


Agreed that the lack of using an existing (and open-source!) language is annoying and frustrating to deal with (think of where we'd be if they invested that time and effort into improving Ruby/Python/whatever instead!). But because of the desire for compatibility with Objective-C, and Apple's general desire to call all the shots regarding their ecosystem, this move doesn't surprise me in the least.


The fact that this has static typing is a huge difference to "just improving" ruby/python. That approach couldn't come close to getting the same early-error-catching dev experience, and performance. And amongst static languages, Apple wasn't likely to recommend C++ as simple, were they? And Rust/D are also quite low level, nor do they have the Objective-C legacy to consider. So really, you're probably left with C# (or maybe Java), and those are so old and large (esp. the libraries) by now that they're unlikely to naturally port to Apple's environment.

Frankly, a bit of a clean up every decade or two is not exactly often, right?


Apple consistently represents a step backwards for both developers and users, in terms of engineering and freedom, but they've amassed enough capital at this point that the hope of them simply withering on the vine and dying off is probably not going to happen.

At least Microsoft and Google show off their new projects and code so everyone can learn from them and read their research.


any proof to back up those claims?


http://research.microsoft.com/en-us/

http://research.google.com/

http://research.apple.com/

Hint: one of these things is not like the other...see if you can figure out which using only the power of curl.


What about the special snowflake projects of Google, Mozilla, or Sun? Apple's language development is no less valid than Google developing Go, or Mozilla developing Rust. This just shows your inherent bias.

I've been amazed recently by how many of the open-source projects we rolled into our Linux product were Apple-sourced: LLVM, Clang, libdispatch, WebKit, OpenCL, ZeroConf. Can't think of anything Google has done for me recently.

And if there is anyone who will knock this out of the park, it's Chris Lattner. LLVM, Clang, and OpenCL are all him. He has done more for compiler tech than anyone in 30 years.


>At least Google had the decency to pick a language everybody was already using and use that.

If you think Java is remotely comparable in power and expressiveness to Objective C, you should probably reconsider your line of work.

The rise in popularity of Java nearly drove me from the industry, it is such a verbose, half-baked pile of garbage. I could fill your browser with things you can do in Objective C that you cannot do in Java at all, and this incredible flexibility is why Apple is such an agile company with such limited head count.


I don't get the hate. Yeah, the syntax is unfamiliar, but once I got used to it I began to really enjoy Objective-C. YMMV etc., but it's now one of my fav languages - though I guess this is mostly due to Cocoa.


I also really like Obj-C now that I am familiar with it. I think the biggest pain point with iOS apps is understanding the way to build apps within the context of the iPhone (how to structure views, and the custom things like alert sheets, etc...) particularly if you are coming from a web app background. The syntax is quite nice (although sometimes verbose) once you get used to it.


I never understood what the fuss was all about either.

If you know one other language really well, Objective-C should take a week or two to get used to.

Understanding all the design patterns, Apple HIG, Xcode, profiling, libraries, debugging, app submission, etc. - these combined are where you'll sink your time learning iOS development. IMO, Objective-C is the easy part.


I recently translated one of my Apps from Android to iPhone.

I had 0 objective-C experience, but I made it work. It was a bit of a frustrating experience. Many times I found myself writing Objective-C boilerplate-ish code that I had 0 clue what it was doing, considering this is a hobby / for fun project I just wanted it working.

It's not easy to google the answer to, "Why did I just add this new keyword after this colon in this random .h file.."

I didn't want to spend the next month reading Objective-C for beginners, I know what a for loop is, I also know what constructors are. I just wanted to use the language.


You may know what a constructor is, but maybe not know what a designated initializer does. ;-)
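
To make that concrete, here's a minimal sketch (the class and property names are made up, not from any Apple framework): the designated initializer is the one that fully sets up every stored property, and a convenience initializer has to delegate to it.

    class Document {
        let name: String

        // Designated initializer: fully initializes every stored property.
        init(name: String) {
            self.name = name
        }

        // Convenience initializer: must delegate to a designated one.
        convenience init() {
            self.init(name: "Untitled")
        }
    }

    let doc = Document()    // goes through the convenience initializer
    println(doc.name)       // "Untitled"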


I felt the same when working on iOS. I felt I was writing way too much boilerplate code, while Android and Windows Phone just gave me a lot more "for free".


You've just described exactly what it feels like transitioning from iOS to Android development, too.


You may not hate Objective-C, but I doubt you love it either. Have you / would you ever use Objective-C to write a web back-end? To write a command-line tool?


I got started with WebObjects, a NeXT product a couple of years before Apple bought them. Yes, I've written wonderfully powerful web applications in Objective-C back when the rest of the web was being built using CGI and Perl scripts.

I loved Smalltalk and I love Objective-C at a deep level. The Objective-C runtime is incredibly powerful and its method dispatch is astonishingly efficient considering what it does. It is not as fast as vtables, but it isn't as fragile either.

It might well interest you to know that WebObjects (I'm talking 1997 here) ran on HP-UX, SunOS, AIX, and one other popular Unix of the day that slips my mind, and it too shipped with a lively scripting language called WebScript, which was not so different from a minimal Swift today.

The thing is, once you dig into the Objective-C runtime and spend a bit of time trying to write an interpreter, you start to realize that the interpreter almost writes itself. Swift is far from the first language built atop the Objective-C runtime.

Consider that FScript (http://www.fscript.org) has been around for well over a decade and does more or less the same thing, except it gives you something closer to Smalltalk than Javascript and it includes some advanced matrix manipulation goodies as well.

The majority of the people squealing with glee over the introduction to Swift seem to be the sort of people I wouldn't care to work with. If a bit of syntax puts you off so much, lord help you when a truly new paradigm hits.

Swift looks to have some nice features, but it seems to be missing the low level access to the runtime that advanced developers can use like default message handlers (forwardInvocation:/doesNotUnderstand:/methodForSelector: kinds of stuff) and the ability to fiddle method dicts at runtime which can be very useful for intercepting strange errors and unexpected code paths.

So, yes, I do LOVE Objective-C. It is my second favorite language to work in after Smalltalk, and to those claiming that Swift will help them move over from Android because it is less verbose - let's remember Java is the most boilerplate-per-capability language I've seen since COBOL. I don't know what those people are talking about.


I've done both, they were fun projects :)

The only thing that got in the way was the difficulty using the code away from OS X or iOS, and the fact that a lot of libraries for things like database access (especially those intended for iOS) were never intended to be used in a long running process. I found slow (3 week) memory leaks that someone writing an iOS app would never have hit.


I actually really like Objective-C and would totally use it as a back end language if there were good libraries to make use of. I've also written a couple of command line tools in Obj-C.


My dislike is that it uses [] for method calls. It's like making Objective-English where we swap Z and A and j for o, just for the hell of it.

If thzt sjunds like fun tj yju, thzn gj fjr Jboective-C.


It's not for the hell of it.

[ ] does not mean method call, it is the syntax for a message send.

Objective-C is a superset of C, adding a Smalltalk-like object system to C. The delimiters say "I am sending a message", which is different to a method call. Also, without them the language would be much more difficult to parse, and future changes to C could break the language. It's lasted well (first appeared in 1993). Not as long as Lisp, perhaps it needs more [ ] :)


> It's lasted well (first appeared in 1993).

1983, actually.


Thanks - I felt I should type 1983, but it felt wrong! I still had my Apple ][ back then.


Thanks. Just read up on messaging and now I like it even less :(

In Smalltalk and Objective-C, the target of a message is resolved at runtime, with the receiving object itself interpreting the message. ... A consequence of this is that the message-passing system has no type checking.

http://en.wikipedia.org/wiki/Objective_c#Messages


This is exactly what gives you the ability to easily wire up standard UI components and do things like KVO. KVO is really difficult in something like C++ (for example, it's practically impossible to do in Qt without a lot of templating/boilerplate code).


This is in my opinion the best thing about Objective-C; it clearly delineates the object/class and C dichotomy, making it easier for a C programmer (or a Smalltalk programmer!) to pick up. For years, the only changes from vanilla C were the brackets, "#import" and the @ literal syntax (IIRC).


Actually, if you ask me today, after dealing with Scala's idea of how the Option type should work, I might say that nil propagation is the best thing about Objective-C.


That's how I always felt. I liked the clear differentiation between C function calls and method calls on objects.


very genius response!


It's not hate, but Objective-C can be intimidating.


I just spent the past 2 months learning obj-c, about to release my first app and boom, X out obj-c. my luck.


90% of what you learned are Cocoa frameworks and Apple-flavored OOP patterns that will be totally applicable to apps written in Swift. Fear not!


I don't know very much at all about objective C, but the way these things generally work is that you will benefit from the experience as you learn new languages, as it will be an anchor of context against which you may base fresh perceptions.


You'll always be able to contribute to NeXTSTEP. It's not dead yet!


No worries, Objective-C is faaar from deprecated.


Objective C isn’t going anywhere.

Swift is shit. I suspect it will die in a couple years, like the misguided effort to get people to adopt the Java bridge or WebScript before that.


I don't think syntax is really the issue. Using objc these days is clunky for reasons besides syntax.


Like dealing with ARC, which is still clunky:

    @lazy var asHTML: () -> String = {
        [unowned self] in
        if let text = self.text {
            return "<\(self.name)>\(text)</\(self.name)>"
        } else {
            return "<\(self.name) />"
        }
    }
Excerpt From: Apple Inc. “The Swift Programming Language.” iBooks. https://itun.es/us/jEUH0.l


To someone on the outside of ObjC, its just SO DAMN VERBOSE. It's unapproachable the same way Java is unapproachable.


I understand why ObjC's syntax makes some people bristle, but I've never felt that way myself. It's sort of like the people that really hate Python for no other reason than the meaningful whitespace. It's unconventional, but once you understand the rationale for it it makes sense in a way that is at least forgivable if not likable.

There have been a lot of C-based object-oriented APIs over the years. GObject has a C API. On the Mac, there's Core Foundation and a bunch of other OS X APIs that are built on top of it. For over a decade on X11, before gtk and Qt even existed, the closest thing there was to a standard graphical environment was Motif (the corresponding desktop environment was CDE), and Motif was built on top of Xt. Xt was yet another C-based object system, although it was specialized for designing UI components.

This is all well and good but you end up with a ton of boilerplate code that does nothing but manage the lifecycles of the object instances (retain/release for example), and lends itself to extremely verbose function calls in place of object methods.

One possible solution is to put together some really elaborate preprocessor macros to make it look like you have extended the C language to include special syntax for your object system, so you can at least replace this:

    obj foo = obj_factory();
    int c = obj_getNumberOfElements(foo);

...with something more compact like this:

    obj foo = [Obj new];
    int c = [foo numberOfElements];

(the second example is ObjC-ish but the former is nothing in particular other than just what the typical C object APIs tend to look like)

The only catch is that the little mini-language you are extending C with using macros can't use existing C syntax, because you can only add to the language, not alter the behavior of existing operators. So, you can't just do method calls using a dot syntax on the instance (such as foo.numberOfElements()). So, you have to come up with something new. Maybe you always liked Smalltalk, and maybe you even based much of behavior of your object system on how Smalltalk objects behave and interact? If so, you might settle on the bracket notation. This has the added benefit of making it very clear when a chunk of code is run-of-the-mill C versus when the code is triggering the syntactic sugar you created with macros to add support for your object system to the C language.

C++ doesn't exist yet, or else you might've just gone with that instead of rolling your own thing. Eventually C++ does exist, and you start to feel a little primitive for sticking with the weird macro language. You eventually build your mini-language into a C compiler so you don't have to use the macros anymore. You experiment with some new alternatives to the syntax that are more conventional, but no one uses them. Many developers like that the non-C-ish syntax makes it easy to distinguish between straight C code vs. interactions with the object system, which has its own set of rules and conventions.

Anyway, that's mostly speculation, but something like that story is how I've always thought Objective-C evolved over the years. I don't mind it nearly as much as long as I don't think of it as a separate programming language from C (like C++ or Java or pretty much anything else these days), but rather think of it as C with some useful syntactic sugar that gets rid of a ton of boilerplate code for a particular C-based object-oriented API.


According to http://en.wikipedia.org/wiki/Objective-C#History, that's actually almost exactly how it came to be. (Apple even experimented with changing the syntax: http://en.wikipedia.org/wiki/Objective-C#.22Modern.22_Object...)


It really reeks of the '80s. I'd rather program in plain C.


I spent a lot of time trying to do stuff with Objective-C, but just hated the syntax. That's been the biggest thing keeping me from developing Mac OS X apps; I just prefer Ruby's simplicity. I'm going to seriously give Swift a try.


Yep, same here. It looks pretty JavaScript-y, which is familiar at least. I think this is a good move on Apple's part.


It's probably a wise decision to have an "Algol-patterned" language. No non-Algol-patterned language has ever become a mainstream programming language, to my knowledge.


I am not a programming language wonk, so I imagine most languages I am familiar with/know of are necessarily Algol-patterned. What are some non-Algol-patterned languages?


Lisp, Forth, Prolog (and Erlang), Smalltalk, Haskell, and Tcl all come to mind.


In particular, Obj-C = Smalltalk + C. If you subtract C from Obj-C, you'd most easily just end up with Smalltalk. But that's not the right move for mass adoption.


I agree with the first, but disagree with the second part:

COBOL, Fortran, JCL (not Turing complete, AFAIK), SQL, Excel, DOS batch files all were (fairly) mainstream at some time.


Fortran came before Algol and arguably influenced it[1]. I agree with COBOL and SQL in particular, though.

[1] http://www.digibarn.com/collections/posters/tongues/Computer...


The correctness of that image can be discussed. Fortran was specified in 1954, but the first compiler shipped in April 1957 (http://en.wikipedia.org/wiki/Fortran#History). That is earlier than Algol 58 (first two implementations in 1958 (http://en.wikipedia.org/wiki/ALGOL_58#Time_line_of_implement...), but close.

More importantly, "inspired by" does not imply that Fortran 58 is Algol-like (that same picture would declare Fortran Lisp-like, too)

For me, http://en.wikipedia.org/wiki/Fortran#Simple_FORTRAN_II_progr... certainly is nothing like Algol.


Ruby is simple and beautiful, isn't it? Too bad it never got the shower of money from big backers Javascript, PHP and now Swift got blessed with.


Beauty is in the eye of the beholder, but Ruby is anything but simple. It has one of the most complicated syntaxes of any programming language in common use.

Perl and C++ are still in the lead, but with stuff like the gratuitous introduction of alternate hash syntax, new-style lambdas, etc., Ruby is catching up.


Ruby's grammar is complex, but its object model is incredibly simple.


Introduction of a new hash syntax wasn't gratuitous really. I think the point was to make up for the lack of proper keyword arguments. Now that they're available, it's true that it doesn't have a reason to stand on its own, but it does make the code more readable and concise, as does the stabby lambda syntax. Though I do agree with your point on simplicity really, the language does offer way too many ways to do the same thing sometimes.




Agreed. I would go so far as to say that this was "one more thing" worthy.

It's definitely more exciting than something like an incremental update to the Apple TV.


My dad tuned out as the keynote got to this point, but for me (as a web developer... for now!) this was the highlight.


I feel the exact same way. For a while now I've been looking at other ways to develop for iOS, such as HTML5 with PhoneGap or C# with Xamarin, but it's always been a kludge.

Swift looks amazing and I'm really excited to try it out tonight! Great job Apple devs.


  > So if Apple's goal was to get new devs into the iOS world, at least
  > from 10k feet, it's working
They just announced Swift, at a conference for Apple developers, with live streaming that is only easily accessed from an iOS device. I think it is probably premature to pop the corks and celebrate the efficacy of the get-new-developers initiative.


As someone wise mentioned to me, Objective-C was 20% of the problem and Apple's silly rules and controls around app distribution are the other 80%. As someone who had their app available in the App Store for nearly 8 months, including 3 approved updates, before being (seemingly) arbitrarily rejected, I feel the pain of that other 80%.


How else are they supposed to announce it? It's simply that, an announcement. People are talking about it now and there's info on the Apple site. I see this as a huge push forward for new developers.


The announcement was fine; it is the "it's working" part that is odd, considering it is less than a day old. Let's see if it actually attracts new developers before we declare it a mighty success.


Well, based on the promise of immediate inclusion in the App Store and a very well-thought-out book about the language available for free, I'd say they're doing rather well so far already.


You mentioned things that are likely to bring about the desired result of creating new iOS developers. I am not disagreeing about the likelihood of success. I am simply saying that T + 8h is probably too soon to conclude that the program is successfully creating new iOS developers. To be honest, I think it is absurd to expect that such a program from any company could bring about the desired goal of creating new developers in less than eight hours.



I just skimmed the tour, and my impression is: Swift is a compiled, Objective-C compatible Javascript-alike with an ObjC-like object model, generics, and string interpolation. No exceptions. Based on LLVM and appears to inherit the same data structures as Cocoa apps (Dictionaries, Arrays, &c).

It feels very lightweight, sort of like an analog to what Javascript is in a browser.


I think it uses the Objective-C runtime directly, so it has access to all the frameworks and Swift classes can be loaded into Objective-C projects as well.

There are a few other languages that do this with the Obj-C runtime, for example a Lisp variant called Nu[0].

[0] http://programming.nu/


There's a Ruby implementation by a former Apple employee that does this as well: http://www.rubymotion.com/


Unfortunately it requires a $200 up front investment before you can even toy with the language. RubyMotion was my first thought when I saw the code happening on the keynote, but at least this will be free with the OS.


[1] Objective-Smalltalk http://objective.st/


> No exceptions.

This is a big shift. With such a rich type system (very Hindley-Milner .. even with "protocols" that feel like type classes?), there is no need for exceptions, for much the same reason that Haskell doesn't have exceptions in the core language, but only a monad. This would force error situations to be explicitly modeled in the types of objects returned by functions/methods. A good thing I think.

However, it does leave the hole of what if an ObjC framework you call on raises an exception? Can't handle it within Swift code? Another big omission in the manual is lack of mention of anything to do with concurrency, though "use GCD" is seen as the solution (Swift closures are compatible with ObjC blocks).
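
A rough sketch of what that explicit modeling of errors in return types could look like (the enum and function here are hypothetical, not anything from the standard library) - the caller can't get at the value without also deciding what to do about the failure case:

    enum ParseResult {
        case Success(Int)
        case Failure(String)
    }

    func parseAge(input: String) -> ParseResult {
        if let age = input.toInt() {
            return .Success(age)
        }
        return .Failure("'\(input)' is not a number")
    }

    switch parseAge("forty-two") {
    case .Success(let age):
        println("age is \(age)")
    case .Failure(let message):
        println(message)
    }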


I disagree. I use exceptions a lot in OCaml. For example, when implementing a recursive type-checker, you really want to be able to abort (all the recursive invocations) quickly if e.g. you find an undeclared variable or a type error. Using an ADT would IMO incur an unacceptable syntactic and cognitive overhead.


OCaml exceptions are a misnomer, since they are often used as a control flow primitive for non-exceptional circumstances. The point is that they are cheap. Contrast with Java, where you wouldn't want to use exceptions the way you use them in OCaml, and would instead favour other non-local exit primitives such as "return", "break" and "continue."

Haskell doesn't care about this stuff, because lazy evaluation gives you the same control-flow patterns, and the exception monad ends up operationally equivalent to checked exceptions, but now with possibly exception throwing values made first-class. I doubt the same can be said of Swift.


Dictionary isn’t NSDictionary or NSMutableDictionary because of type inference issues (“they can use any kind of object as their keys and values and do not provide any information about the nature of these objects”).


You unfortunately probably have to deal with exceptions when crossing into Objective-C land because of https://developer.apple.com/library/mac/documentation/cocoa/....


I'm not seeing the javascript-alike-ness. What caused that connection to jump out at you?

I see the standard static FP features (from ML, Haskell, Scala, F#) with the syntactic flavor of Rust and some C# tossed in.


JS has exceptions though. I didn't notice that bit until just now… hmm. Could turn into lots of return-checking boilerplate. I'm still excited about this, very much so, but I think exceptions are worth keeping.


It's also apparently super fast, and implicitly typed, and designed for safety.


[deleted]


In the keynote they said Swift is faster than Obj-C.


"The company says that Swift apps are significantly faster than Objective-C apps, outperforming them by over 93x."

With a graph showing ObjC at 127x faster than Python, Swift 220x faster than Python.

Thus the conclusion is 220 - 127, Swift is 93x faster than ObjC.

Someone needs to resit their GCSEs.


"93x faster" sounds roughly like a 46.5x improvement in marketing.


It's not only possible, it's not even uncommon for a C programmer to get a 90x improvement in speed in their own C program. If you have naive memory management, or incorrectly implemented concurrency or parallelism, you can easily lose 2 orders of magnitude of speed.


This. In my case a 1Mbyte memcpy in the middle of a loop this morning. Enough to blow the CPU cache out of the water...

300x improvement instantly by moving it out of the loop.


Are you sure it wasn't just because you were then no longer doing a large memcpy repeatedly?


Yes it was entirely covered by that :)

I think it was covered by "naive memory management" and "shitty outsourcing". I'm paid to fix their stuff.


Haha :) Maybe they shipped a better product, but the management said "No, it's not possible that this could run that fast. Something must be wrong.", so they put in some "waiting".


If the problem was just the time taken to do a 1MB copy inside a loop, why did you say the problem was clearing the CPU caches?


Because the CPU has 32k of cache in this case (ARM) so the memcpy was evicting the entire cache several times in the loop as a side effect of doing the work. The actual function of the loop had good cache locality as the data was 6 stack vars totalling about 8k.


So? Copying a megabyte is a really expensive thing to do inside a loop, even ignoring caches. (A full speed memcpy would take 40 microseconds, based on a memory bandwidth of 24 GB/s, which is a long time.)


My most painful personal experience dealing with this exact problem was with CUDA warps, during my undergrad research work.


Marketing are claiming a 91.3x.


For context: This story previously pointed to an article, but has now been changed to point to Apple.


"outperforming them by over 93x" is technically different than "93x faster"... Although I agree it is a cheap way to put it :)


Enumerations (from: https://developer.apple.com/library/prerelease/ios/documenta...):

Unlike C and Objective-C, Swift enumeration members are not assigned a default integer value when they are created. In the CompassPoints example above, North, South, East and West do not implicitly equal 0, 1, 2 and 3. Instead, the different enumeration members are fully-fledged values in their own right, with an explicitly-defined type of CompassPoint.

+100 for that. This will help developers avoid a whole class of bugs.

Enumerations also support associated values. Enums in .NET are very poorly defined. Looks like Swift got it right.
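
A quick sketch of what that looks like, including an associated value (the type and case names here are just for illustration):

    enum CompassPoint {
        case North, South, East, West
    }

    // Each member can carry its own payload, checked by the compiler.
    enum Barcode {
        case UPCA(Int, Int, Int)
        case QRCode(String)
    }

    var code = Barcode.QRCode("ABCDEFGH")
    switch code {
    case .UPCA(let numberSystem, let identifier, let check):
        println("UPC-A: \(numberSystem) \(identifier) \(check)")
    case .QRCode(let payload):
        println("QR code: \(payload)")
    }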


Indeed. Luckily C++11 took care of the issue on the C++ side.

http://en.wikipedia.org/wiki/C++11#Strongly_typed_enumeratio...


Do Swift's enumerations allow recursive definitions?


>Unlike C and Objective-C, Swift enumeration members are not assigned a default integer value when they are created

Well that’s gonna make storing persistent values tricky.


I find it a bit sad that with all of the languages that already exist, Apple found it necessary to invent a completely new one -- and then make it proprietary. Why not use Ruby, or Python, or JavaScript -- or even Go, Rust, Clojure, or Scala? (Yes, I realize that the latter two run on the JVM, which would have been problematic in other ways.)

Heck, they could have bought RubyMotion and made Ruby the high-level language of choice for development.

I realize that Apple has a long tradition of NIH ("not invented here"), and in many cases, it suits them, and their users, quite well. But there are so many languages out there already that it seems like a waste for Apple to create a new one. Just the overhead of developing the language, nurturing its ecosystem, and ensuring compatibility seems like it'll cost more time and money than would have been necessary if they had gone with an existing language.


>Why not use Ruby, or Python, or JavaScript -- or even Go, Rust, Clojure, or Scala? (Yes, I realize that the latter two run on the JVM, which would have been problematic in other ways.) Heck, they could have bought RubyMotion and made Ruby the high-level language of choice for development.

Because OBVIOUSLY none of them solve the problems they wanted to solve (interoperability with Objective-C, fast, native, IDE integration, etc.), including RubyMotion, which is a half-arsed implementation.


I'm not sure if you're kidding or not.

IDE integration for a new language? They wrote it themselves. Do you think it would have been harder to integrate an existing language? Fast & native are also trivially solvable.

I don't know about interop with Objective-C, that's probably the hardest part from your list.

But complaining about IDE integration when they're also the creators of the IDE is... silly...


First, I like how you break apart the issues I raised (like "IDE integration") when I said that they wanted to solve ALL these problems at once.

So, even if just adding IDE integration for an existing language was easier than creating a new one, using an existing language wouldn't solve their other issues (e.g. Obj-C interoperability with message passing, protocols, named parameters et al). And RubyMotion wouldn't permit all the optimizations they did, nor the kind of type safety they added.

>But complaining about IDE integration when they're also the creators of the IDE is... silly...

We're not talking about PyCharm-level IDE integration here. Not even about the current level of Obj-C/C++ integration Xcode offers (for which they had to create LLVM tooling and LLDB to enable all the features they wanted to offer). It goes beyond that.


I see that you don't really understand what's needed for real IDE integration. Please, understand one of the main reasons Apple created Clang... (hint: because the GCC guys wouldn't take their patches to improve Obj-C & add facilities for IDE integration fast enough)

Clang was easier to integrate with an IDE than GCC, and I strongly believe (after seeing what Apple showed yesterday) that Swift integration is even simpler.

(They must have made a new LLVM front-end that embraces IDEs as well as or better than Clang does.)

So no, it's not silly to design the language for better integration with an IDE that you also control.

Cheers.


Well, that may be true for GCC but Ruby, Python & co are well integrated into many third party IDEs. So that point, at least, is moot.


"Ruby, Python & co are well integrated into many third party IDEs" perhaps you're not familiar with the level of IDE integration we're talking about here.

Most (if not all) IDE's Ruby and Python integration is BS.

We're talking about real AST-based highlighting and suggestions, auto fixes, autocomplete for all available APIs (AND your own custom modules), integration with the debugger and the build system, and in Swift's case also integration with the REPL, LightTable-style live variables and a Bret Victor-inspired live coding environment.

This is not your grandfather's PyCharm.


Probably because they wanted a statically typed language. Something that didn't require the JVM and wasn't backed by Google.


My sense is they wanted "Their" language, as opposed to Go (Google) or Java (Oracle) or another tied in to a vendor.


Why does Google get kudos for inventing new languages (Go), and Apple gets mocked? (Swift)


Apple likes to control the whole product as much as possible.

The iOS/OSX ecosystem is absolutely big enough to support an exclusive language (see Objective-C), and Apple chose to create a new language that matched their goals instead of adapt something they don't control and isn't ideal.

Makes perfect sense, and Swift was by far the most impactful announcement at WWDC.


In one word: Lockin


Apple is a big, rich corporation. But in that campus there are still human developers.

Radical hypothesis: what if this started as a pet project, got the attention of more employees, then management (with or without convincing from said developers). Management sees the value in the project, and funds it officially. You know, like other big, rich corporations...such as Google.


When the project lead is one of the creators of LLVM (arguably the most fundamental low-level project in Apple after actual kernels), this sort of scenario is improbable.

Much more probable is that somebody asked top-developer Lattner for "a simpler language to compete with Java/Dalvik and C# with more casual developers" and he came up with Swift. The name itself is a message: "this thing is quick - quick to learn and quick to run, unlike VM-based stuff that must translate to Obj-C (fast to learn, slow to run) or Obj-C itself (fast to run, slow to learn)".


I would've voted for buying/licensing Xamarin instead; C#/F# would've given them everything, including a lot of fresh programmers. On the other hand, I'm happy they didn't, as they would've tossed out Android, and (at least for me) there is no alternative yet to the speed/power of development here.


Apple has enough programmers developing for their platforms.


Depends what you call 'enough' :) I would say there is no 'enough', but hey.


Probably because they wanted one that integrated well with their existing frameworks, and which really took advantage of their investment in LLVM.


> Looking for the Swift parallel scripting language? Please visit http://swift-lang.org

Apple knew there was Swift-Lang, and still called this Swift. At least they link to it from their website!


Phrased with admirable restraint:

"The Swift parallel scripting language web is experiencing heavy load due to Apple's announcement of a new language by the same name. We'll have a raft of new web servers online shortly to handle this trafic. Please check back in a few hours! -- The Swift team..."


Their icon even looks the same. It's also almost identical to the icon used by http://swift.im

Sigh.


The Swift is a bird. It's unsurprising that the icons are of birds.

http://en.m.wikipedia.org/wiki/Swift


Not sure about identical icon -- there's only so many ways to draw a silhouette of a bird.


"The Swift parallel scripting language web is experiencing heavy load due to Apple's announcement of its new Swift language.

We'll have new web servers online shortly to handle this traffic.

Please check back in a few hours!

-- The Swift team..."


I guess they thought, if google can get away with it, so can they.


Oh God, they just compared the speed of Objective C, Swift and... Python! It's nice to see Swift being faster than Objective C, etc., but what has Python got to do with coding native iOS/OS X apps? Of course it's going to fail at speed when compared to a static compiled language.

What a weird and pointless comparison, imo (I mean the inclusion of Python, seems so random to me).


It's neither weird nor pointless. They're going for something that is as comfortable to work with as modern dynamic languages, for something that eschews the cruft of ObjC. Python is simply a very popular representative for this type of language.

Of course, it's an easy target. But I can see why they went for it.


Yeah, I kind of get it after some additional thought (and reading through the replies). Maybe I missed out on some detail (not watching the live video, just a text stream) – now that I've taken a look at the syntax, the comparison seems more valid. Still, comparing this to Javascript would've been more interesting (webapps, all that stuff).


Apple pushed the speed of Javascript in the newest version of Safari earlier in the keynote. They probably try not to compete with themselves.


I can see a reason: it's gunning for the mindshare of the kind of developers who like the features in Python. But you're right that performing faster than Objective-C tells you what you need to know, and showing the Python comparison first was just showmanship.

I would have been interested in a JavaScript comparison, which I'm sure we'll be seeing from third parties soon.


Swift looks a lot like a scripting/dynamic programming language. For example, this is a complete Swift program:

    println("Hello")
I think the idea was that Swift will be as easy to code as Python and faster than Objective-C.


Dynamic has nothing to do with it. printfn "Hello" is a complete program in F#, too.

There's no reason C#, Java, etc. can't have such things exposed, but they get so wrapped up in their boilerplate and OO overhead they choose not to.


I mean the inclusion of Python, seems so random to me

It's supposed to be a "mainstream" scripting language. Also, it's an easy way for them to get favorable numbers for their presentation.


On the Ars liveblog, I didn't see any mention of Swift support for other platforms. Are we going to see Swift on Linux and Windows as well?


I didn't either, but if not, my bet is that outside programmers will make an open source version (like Mono to C#).


According to one Apple developer who works on the project, it will _likely_ be open sourced, though the developer says they can't promise that:

http://forums.somethingawful.com/showthread.php?threadid=363...


Could you post the message (or thread) somewhere? I think that post is paywalled.


Sure, sorry - I'm not sure which SA threads are free and which require payment. This is the relevant bit:

"Is this under NDA?

No.

Is this open source?

Not yet. It probably will be, but I can't make promises. Right now, our repository still has a lot of history that we don't want to make public, and we have a lot of work to do before we release."


Thanks.


Swift on the server with some modern CSP-flavored concurrency mechanisms would rock! Then combine that with a Swift-like language that compiles to Javascript, and that language could take over the web.


Yes. But what has that got to do with writing native apps?


It has everything to do with modern languages with powerful features.


Nothing. Don't look too far into it.

They probably just used Python as an example of something most people would know. It's probably the best-known scripting language out there, so why not use it.

Besides, ALL benchmarks are pointless, even when you have the source of the benchmarks and know what software/hardware they ran on.


I don't know much Python, but Swift seems to be aiming more in that direction than Ruby or JS, for instance.

On the superficial level, there's the use of line breaks to separate statements (although ; can also be used); on a deeper level, there's an accent on accessibility and no-nonsense behaviour, where other languages might have made more compromises on readability, for instance.


Python was mentioned in a "generic object sort" benchmark. It's likely that the purpose of that slide was to show that Swift is a lot faster even when dealing with dynamic dispatch.


Perhaps because the syntax of Swift is so simple that it almost looks (and hopefully feels) like a scripting language? (type inference, no semicolons, automatically managed memory)
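
A trivial sketch of that script-like feel - these few top-level lines are a complete program, with inferred types, no semicolons, and string interpolation:

    // Types are inferred; no semicolons or main() needed.
    let languages = ["Objective-C": 1983, "Swift": 2014]
    for (name, year) in languages {
        println("\(name) appeared in \(year)")
    }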


Runtime speed is only tangentially related to whether or not they are compiled ahead of time.

I guess the comparison with Python is that Swift code looks more like Python than C.


It's a comparison of how fast they made a dynamically typed language. (Yes, Objective-C is considered dynamically typed as well.)


I think it's the most popular obj-c/cocoa binding.


First question that comes to mind: how open is this language?

(I can't find any references to it)


Swift was developed internally at Apple.

In the intro of the book "The Swift Programming Language," which Apple just released on the iTunes book store, it says:

“Swift has been years in the making. Apple laid the foundation for Swift by advancing our existing compiler, debugger, and framework infrastructure. We simplified memory management with Automatic Reference Counting (ARC). Our framework stack, built on the solid base of Foundation and Cocoa, has been modernized and standardized throughout. Objective-C itself has evolved to support blocks, collection literals, and modules, enabling framework adoption of modern language technologies without disruption.”

Book is available here:

https://itunes.apple.com/us/book/swift-programming-language/...


Extremely closed, at least at the moment. Can't even read the language reference without Apple products.


You can actually read the language reference with a "standard browser":

https://developer.apple.com/library/prerelease/ios/documenta...


Hmm, can't seem to change page in Chrome. In Firefox, it works slightly better but still doesn't really. I'm guessing it only works on Safari? How can they release a language and not have proper documentation that is accessible by anyone?

It'd be great if someone could get the book, convert it to PDF and post it online.


I can switch pages in both Chrome and Firefox.

ePub[1] has been posted elsewhere in this comment thread.

[1] https://news.ycombinator.com/item?id=7835994 (accidentally said PDF instead of epub before)



“Apple Inc. Copyright © 2014 Apple Inc. All rights reserved.

No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, mechanical, electronic, photocopying, recording, or otherwise, without prior written permission of Apple Inc., with the following exceptions: Any person is hereby authorized to store documentation on a single computer or device for personal use only and to print copies of documentation for personal use provided that the documentation contains Apple’s copyright notice.”


Works fine here in FireFox 29.0 on Solaris no less.


Doesn't change pages when I click on stuff here.


I had to open things in a new tab (in chrome) to see them (odd, but it worked?)

Edit: Seems to be working now anyways :).


Works here in Firefox.


Works for me in Chrome


Hard to say, as yet. I'd expect it to be pretty open in the end, though; they're using LLVM and the Objective C runtime, both of which are open and usable on other platforms. I guess we'll see if they release the compiler.


Objective-C was created by modifying GCC and hence had to be released as open source due to the licensing of GCC (I believe) [1]. Swift is an Apple invention and I would predict that they have no intention of releasing its source code for availability on other platforms. Apple is a very closed ecosystem.

[1] http://en.wikipedia.org/wiki/Objective-C#Popularization_thro...


You are correct that the initial NeXTSTEP implementations of Objective-C extended GCC.

However, GCC is no longer used by the Apple toolchain. They could easily close up their work on Clang and LLVM, and haven't. Further, they're obviously still contributing - they've recently added their ARM64 backend to LLVM.

The Apple software ecosystem isn't particularly closed. Much like iOS and OS X, they're based on a pretty open foundation, with proprietary libraries on top.


This will almost certainly happen when Xcode 6 gets released publicly.


This is my problem with the language. The language looks really nice, and the interactive environment too, but I have no Apple device to use it on. I would like it if there was at least a compiler for Linux.

I have the same problem with using C#/.NET (outside of work); it isn't a language I want to use at home or want to deploy onto a server.


1) This looks much less intimidating than Obj-C, I may finally write an iOS app.

2) Hopefully this puts some pressure on Google to make Go on Android easy (although I'm a Java guy myself).

3) What about swift-lang (http://webcache.googleusercontent.com/search?q=cache%3Aswift...). It has the same name and almost the same icon. Did they work with these people or just screw them?


Re: 3)...

Look at the bottom of the page. There's a link to the swift-lang site...from an Apple web property. That's probably more publicity than the swift-lang folks could have hoped for in a million years.


My first instinct was to be cautious about new languages from Apple - Dylan was supposed to be something awesome until Apple cancelled it. But I only learned of the existence of Dylan years after it was cancelled. Looked awesome, but it was so niche I didn't want to spend time learning it.

So I took a moment to look at why Dylan was cancelled.[1] Very interesting stuff. What it came down to was:

    - Apple was in dire financial straits

    - Apple needed to axe all projects that didn't show commercial viability

    - At the time, when Apple was transitioning to PowerPC, Dylan was 68K only, and needed another year or two to be ported

    - Most damning, the project was not finished - it wasn't even in the optimization stage.

None of these factors are in play here. So. My worries are assuaged. I do want to learn this, and it looks really easy to pick up so far.

I'm really curious now about two (unrelated) things:

1) is this good enough to build web apps with? 2) how would one manage the transition of an Obj-C based project to a Swift-based one? Assume I don't have the budget or manpower to perform a ground-up rewrite.

[1] http://en.wikipedia.org/wiki/History_of_the_Dylan_programmin...


Based on my skimming of the Swift documentation, Dylan is a much nicer language (both in its original form and in its modern OpenDylan incarnation). That said, Swift just needs to improve on Objective C, which does not set the bar high.


You can mix Objective-C and Swift code in a project, so it would be easy to progressively port code one class at a time.

Porting NSView subclasses first would give you a win, because if you use Swift the classes can draw themselves at design time in Interface Builder. (Objective-C view subclasses just draw a white rectangle in IB.)


We're writing a story for The Next Web on Swift. If anyone's interested in being interviewed for an article, can you flick me an email on owen@thenextweb.com with brief answers to some or all of the following questions. I'd love to talk to anyone who's used Objective-C before and share your opinions/experience:

1) How does Apple releasing Swift make you feel as an Objective-C developer?

2) Are you excited to code using Swift?

3) What about Swift makes you most excited?

4) Do you worry about upskilling to Swift?

5) How do you think Swift will change the way you work?

6) What concerns do you have about swift?

Keen to understand how this impacts people and share that if you have time to talk to me :)

If you don't want to email, just reply here.


Thanks for all the responses - we ran this :) http://thenextweb.com/apple/2014/06/03/developers-apples-swi...


The live REPL is totally out of Bret Victor, very impressive.



Bret Victor used to work in R&D at Apple, so that probably isn't a coincidence


That. Was. Awesome. Totally Bret Victor type of stuff. I was just meaning to play with Light Table but as an iOS developer.. oh my god.


Checkout Scala worksheets (Eclipse and IDEA) and LightTable too.


Yeah.. Because Scala doesn't have a metric ton of problems. AVOID!


Scala doesn't have a metric ton of problems. It has some problems, sure, but it has a lot of nice features as well.


Plus 'metric' must only accompany 'shitload' or other potty-mouthed quantifiers. Troll fail


Basically, he just came up with something using FP that was more demo-able than somewhat hacky things people were already doing in other languages/environments. (For which, he deserves tons of credit.)


Bret Victor's demos really have nothing to do with FP. The textual code demos in his learnable programming essay are all JavaScript and fairly imperative.


As were a lot of the things you could do years earlier in Smalltalk. FP with referential transparency makes it a whole lot easier and slicker. (Can also do it with data flow analysis.)


By 'FP' here you guys mean 'functional programming?'


Yes


Or Scala worksheets in Eclipse, or any Lisp implementation.


First thing I thought. I wonder if Bret had any input into this? Very happy to see his ideas in a production environment anyway.


Unsurprisingly, I looked at the screenshot, came back to this comments page and ctrl+f'd for "Bret".


Here's the book for those without access to an iTunes account:

http://kristofferr.com/files/The%20Swift%20Programming%20Lan...


Thanks :) BTW, I've used this Firefox extension to view it: https://addons.mozilla.org/en-US/firefox/addon/epubreader/


Swift reminded me of CoffeeScript a little, in a good sense (judging by what they showed during WWDC demo). Complexity and low-levelness of Objective-C is (was?) how I justified my reluctance to program for Apple devices, so I'll be looking forward to Swift.

The IDE they demoed looks very interesting on its own—it reminded me of Bret Victor's posts (http://worrydream.com/LearnableProgramming/). Immediate interactive code visualization, quite impressive.


We changed the url for this story from http://thenextweb.com/apple/2014/06/02/apple-announces-swift.... If we pick the wrong url, happy to change it again, so let us know. The goal is to have one discussion about each significant announcement, and have the best url for each.


The demo from the WWDC keynote is quite impressive. Unfortunately, this site seems to have been slashdotted. (Basically, Swift is "Apple acquires their own LightTable.") It's touted as a language for parallelism. I'm curious about its concurrency primitives. Since distribution is shown as a top feature, I'm going to guess that it has an Erlang-like actor model.

Having ARC and not needing GC will end up being a big fundamental advantage for its parallelism story. (The problem with GC is that one thread does work, then a GC thread comes along and possibly causes an additional cache miss.)


Swift doesn't seem to have anything to do with the parallel language at swift-lang.org. In any case, reference counting is disastrous for the parallelism story. GC thread coming along and causing an additional cache miss is way better than having to do atomic operations on reference counts all the time.


Swift doesn't seem to have anything to do with the parallel language at swift-lang.org.

Whoops. Should've corrected that when I copied the comment over.

In any case, reference counting is disastrous for the parallelism story. GC thread coming along and causing an additional cache miss is way better than having to do atomic operations on reference counts all the time.

Why are atomic reference counts necessary? You wouldn't generally need them with an Erlang-like Actor model or for special concurrency primitives like Go channels. (That is to say, you'd only need them in the special mechanisms.)


I'm not sure you can be Objective-C compatible without a shared heap. If you have a shared heap, you need atomic reference counting.


No. If you take the attitude that you're only covered if you use the concurrency primitives correctly, then you don't need atomic reference counting for everything. Basically, the programmer can use CSP to ensure that only one thread is messing around with any given section of the heap at a time, and the language implementers could say you're SOL if you do otherwise. (That probably isn't the Apple way, though.)


Swift uses the Obj-C runtime and interoperates with Obj-C code. Those languages assume a shared heap. If Swift modules didn't perform atomic reference counts, that would quite likely break Obj-C code operating on the same heap.


It should still be possible to have the compiler interpose mechanisms between everything else and the Swift code, such that your Swift code has a section of heap all to itself. By the time you're done with that, you're halfway to having implemented your own Erlang on the Obj-C runtime. That might be worth doing, though.


You'd break a ton of Cocoa APIs too.


Swift is based on the ObjC runtime, which means you have a shared heap and the possibility of multiple threads adjusting counts at the same time.


Yes, but with something CSP-derived, you could design a runtime where you're covered if you use the concurrency facilities correctly, and you're SOL if you don't. Then only the concurrency primitives need atomic refcounts.


Possibly, but it's very handy to let immutable data be read by any thread that wants to. If, however, it's very slightly mutable due to a ref counter, you have to atomically manage the counter, even for what should be free immutable reference.


Possibly, but it's very handy to let immutable data be read by any thread that wants to. If, however, it's very slightly mutable due to a ref counter, you have to atomically manage the counter, even for what should be free immutable reference.

I'm managing something like this in Go. There are no refcounts, but everything is very much mutable. I'm basically arranging for a span of time where I know nothing unprotected by a channel is going to be mutated, then I simply let every thread in the app that cares to read data from every part of the heap, but only during this span of time. The same technique could be applied to a ref counted app. (It would probably work best for games that have a tick.)


Interesting.

I still think it would be hard to apply to a ref counter app, since you'd need to keep track of change in ref count for later cleanup (thread-local per object maybe? sounds inefficient), but I now will admit that it sounds possible.


I haven't seen a single mention of parallelism in the 600+ page language manual. This will probably be done through libraries.


Reading people compare Swift to other languages is pretty hilarious. OCaml.. Haskell.. CoffeeScript.. Ruby.. Go... Kotlin... JavaScript.. Scala... No one is saying it so I will: It looks like damn Java 8.

It is probably not a good sign that it can be immediately compared to every modern (and not so modern) language in existence.


And Swift seems to resemble another Apple language, Dylan, which was based on Scheme/Lisp. All languages since then are in trouble!

I really like that it is incorporating good parts of many, more terse languages. Nothing wrong with selectively absorbing good ideas.

But a huge part is the interactive nature. I dabbled in Smalltalk some years ago and have been annoyed at compiled languages ever since, resorting to things like http://injectionforxcode.com to gain some of that back (on iOS).

Having a live environment can only be appreciated once it has been taken away. I think developing for Apple devices might just become one of the more pleasant programming experiences.


It does look a bit like Dylan. I wonder if it has more than a passing resemblance. Also, I wonder if they are doing SBCL-style real-time compiling.


I really hope this language is going to become an open standard quickly. Here's hoping the language design is not too tightly coupled to the OS APIs.


This is going to be as open as objective-C. (Whether objective-C is open is left as an exercise for the reader.)


One thing I'm very interested in knowing is how this affects the whole 'hybrid/web app' space.

Many web developers (like myself) have used Phonegap/Cordova in conjunction with tools like the Ionic Framework for our apps, primarily due to the nearly esoteric (for some of us) nature of Obj-C, but Swift almost looks like JS, which certainly has motivated me to learn it and use it in future apps.

I wonder if the aforementioned tools will lose market share because of that. Let's see.


(disclaimer: Ionic creator here): I think Swift will definitely get more people building iOS apps. But we still see a ton of demand from Ionic devs for Android support (perhaps more than iOS!), so unless the world moves 100% to iOS we think Ionic will still be incredibly important.


I found the notion of "Optionals" surprising and a bit hard to handle at first. In Objective C it was really easy to lazily allow values to be nil and still do things on them, so it's a bit of a departure.

Thinking about it a bit longer, is it because of the clear distinction between non-nullable values and optionals that the compiler can optimise the code so much more? (I am thinking about the xx-times-faster-than-Objective-C claims.)


This might increase performance, but I'm pretty sure it's mostly there for safety. It forces the programmer to check for nil. It's like Haskell's "maybe" type.


Or Scala's Option or Java 8's new Optional (http://docs.oracle.com/javase/8/docs/api/java/util/Optional....)


I think this is much less to do with performance and more to do with safety. If anything can be nil, NPEs are a fact of life. If you're forced to annotate for the compiler which values can be nil, and then forced to handle the nil case when you consume them, the problem disappears.
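
For anyone who hasn't seen the syntax yet, a minimal sketch of what that annotation and consumption look like:

    // The type says this value may be nil; the compiler makes you unwrap it.
    var middleName: String? = nil

    if let name = middleName {
        println("Middle name is \(name)")
    } else {
        println("No middle name")
    }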


I don't think the problem disappears so easily. It puts a significant burden on the programmer (off the top of my head: if I have a set of 20 properties in an object, all optionals, I'll have to unwrap them all, even if I can guarantee by knowledge of the data that they're not nil). It will still throw an error if a value became nil after the test, and there is still the implicit unwrapping system that would cause runtime exceptions on nil values.

I see the value of the safeguard, but it seems cumbersome as a language-level rule; I hated the boilerplate in Java, and it goes a bit in the same direction. I'm not sure I like the bureaucracy of explicitly testing every single nillable variable before using it, but I'd love to be proven wrong.


I was looking through the language reference that was published, and the patterns it uses to deal with Optionals aren't that bad. Having the question mark helps a great deal as a mental check when comparing syntaxes.

In my opinion, having that as a language rule will probably force people to design their classes to have things set at initialization more often than not.


> In my opinion, having that as a language rule will probably force people to design their classes to have things set at initialization more often than not.

Good point. There is some interesting work in JavaScript, like Om, on mainly using immutable objects; it could go in the same direction.


It's because it has a grown-up type system. This is a great step in the right direction. ObjC's treatment of nil was handy for a long time, but a basic Option type is far better to show the intent of code.


I'm interested to see the license under which Swift is released, but it isn't mentioned anywhere. Is it under an EULA or released under some open source license?


Licensing is specified in the language specification document:

> No licenses, express or implied, are granted with respect to any of the technology described in this document. Apple retains all intellectual property rights associated with the technology described in this document. This document is intended to assist application developers to develop applications only for Apple-branded products.

It is pretty closed, if you ask me. Legally binding a language to a specific brand of product is a new low.


My thoughts while browsing the site:

- function-level type inference much like Rust

- no constness in the type-system (I like it)

- classes are reference types, structs are value types, much like D and C#

- runtime-dispatched OO interfaces called "protocols". Blends the difference between runtime and compile-time polymorphism. Classes, structs and enums can implement a protocol. Available as first-class runtime values, so protocol dispatch will be slow like in Golang.

- enumerations are much like OCaml ADTs: can be parameterized by a tuple of values, value types, recursive definitions (nice)

- worrying focus on properties.

- strange closure syntax

- optional chaining, another anti-feature in my eyes

- normal arithmetic operator throws a trap on integer overflow (!). This must be incredibly slow.

- looks like Array is a fat slice to a reference-counted array

- operator overloading is in, supercharged with custom operators, custom precedence (!?)

- builtin tuples syntax

- break with C integer promotion, like Rust.

- no pointers

- convenience is a keyword!

- no exceptions (!)

- unsigned integers: check

- type inference is "bidirectional by expression or statement"

- classes have deterministic destructors, structs have no destructors

- It seems the only RAII source is through RC and classes.

- no single root class

Make your own opinion.


Trap on Overflow is not that expensive: http://www.sei.cmu.edu/reports/10tn008.pdf

I am thrilled they did this. Implicit Overflow is the stupidest shit I know that everyone takes for granted.

Also with CPU support it could be free, and adds negligible complexity to the silicon.
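
For illustration, a tiny sketch of what the book describes: ordinary arithmetic traps on overflow, while the special overflow operators (prefixed with &) wrap around. This is the documented beta behaviour, untested here:

    let maxByte: UInt8 = 255
    // let boom = maxByte + 1    // ordinary + traps on overflow instead of wrapping
    let wrapped = maxByte &+ 1   // overflow operator: wraps around to 0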


The part of that paper that says how expensive Trap on Overflow is:

> At the `-02` optimization level, our compiler prototype showed only a 5.58% slowdown when running the SPECINT2006 macro-benchmark. Although that percentage represents the worst-case performance for AIR integers (because no optimizations were performed), it is still low enough for typical applications to enable this feature in deployed systems.


500+ comments and the term asynchronous does not appear once. It is a platform pain point, several languages have baked-in support for async scenarios, and Apple comes up with a whole new language and ignores it; a forum full of language geeks talks about the language and no one points out that async support is missing.


Haha, I was searching for the "async" and "await" keywords and only your post mentioned them


I don't have access to a machine with the new Xcode yet, but reading the book in iBooks, I found:

    NOTE

    For the best experience, open this chapter as a playground in Xcode.
    Playgrounds allow you to edit the code listings and see the result immediately.
Downloading gives me https://developer.apple.com/library/prerelease/ios/documenta..., a zip file with about 50 html files, a .css, about 40 .swift files and two files Results.playgrounddata and contents.xcplayground

Does that mean that playgrounds can be used for literate programming?


Swift has also pattern matching which I think is really awesome.



This is its greatest feature if you ask me... Imperative, classical OO language WITH pattern matching. It's a crazy world we live in.


Yes. They even have ranges in the patterns inside tuples. Not sure if any other language has that particular syntax.
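
A small sketch of what that looks like (hypothetical values, syntax per the beta book):

    let point = (3, 7)
    switch point {
    case (0...5, 0...5):
        println("inside the 5x5 box")
    default:
        println("outside the box")
    }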


F# has that. And also allows you to define your own constructors to pattern match on anything you want.


There's one thing that bothers me about Swift, and I feel like I must not be getting it. For the most part it looks like a very well-designed language, and the choices they made are extremely pragmatic. But the way collection mutability is determined seems positively insane. You can't have a mutable reference to an immutable array, or vice-versa. I don't get the reasoning behind that.


I can't check it, but i presume you can do something like Rust and redefine immutable variables.


I'm a bit surprised by this move. I see that there are some advantages to this new language, but Objective-C is not as unapproachable as the unwashed masses make it out to be.

If Apple wanted to add official support for a new language I would think it would have been a better move to use something that already has an established following and could potentially attract new developers over. Something like Ruby/Python/Lua would seem to fit the bill nicely.

We've already seen Ruby can be done successfully on Mac with MacRuby and RubyMotion, but it never gets full support from Apple.

Adding an additional programming language that binds me only to Mac platforms doesn't give me a whole lot of incentive.


Ok, what about runtime support for older devices/iOS versions? They didn't say anything about it.


I wrote a test app with the newest Xcode and simulator. It does work with iOS 7, with iPad Air, iPhone 4s, iPhone 5, etc. I would imagine that it will support at least 6.0 and maybe even back to 5.0.

EDIT: Tested with 6.x and works.


The keynote says it uses the Objective-C runtime and it compiles to native code -- so one would imagine that it's compatible with older devices and versions. But that is no guarantee.


They said that they were accepting apps built with Swift for iOS 8 and OSX 10.10 so sounds like it won't work with older OS version. They weren't all that specific about it though so I could be wrong.


New language runs on Obj-C runtime. Maybe apps written in Swift can run smoothly on existing OS.


Perhaps the bridging to all the Cocoa libs won't be supported pre iOS8


Not sure. They said apps with Swift code would be accepted once iOS 8 was released, so possibly not.


They mentioned complete compatibility with C so it sounds like it should be ok.


It'll be available for all devices supported by iOS 8, so going back to iPhone 4S.

The runtime of Swift is also the runtime of Objective-C, but the runtime might need some upgrades to fully support the Swift semantics in a safe manner.

EDIT: Correction. It's available also on iOS 7. Just confirmed by Apple. Great :)


Do you have a link for the confirmation?


It was clarified during the "State of the Union" talk immediately following the keynote.

The video should be available on Apple's site (or if not, very shortly).


What I don't really see in the docs is how calls to Objective-C methods are sorted out. For instance, if I have an Objective-C class with a method

    -(void)addNumber:(NSNumber*)num withString:(NSString*)str;
How is this called in swift? Is it

    myobj.addNumber(42, withString:"Hello World")?


  myobj.addNumber(42, withString:"Hello World")
edit: I see you removed your equals - your guess is now correct :)

for more details, see this link on apple's docs (may change)

https://developer.apple.com/library/prerelease/ios/documenta...


There is a section in the docs about it. Similar to what you propose. https://developer.apple.com/library/prerelease/ios/documenta...


    UIColor *color = [UIColor colorWithRed:0.5 green:0.0 blue:0.5 alpha:1.0];


    let color = UIColor(red: 0.5, green: 0.0, blue: 0.5, alpha: 1.0)
Wow, this just brings up more questions, is the compiler removing the "colorWith" semantics or do you write your own translator?


The docs definitely indicate that some magic is occurring, but they don't say exactly how. Perhaps if the first parameter contains the word "With"?

They do explicitly state that initializers will get "init" and "initWith" stripped off and whatever follows becomes the first parameter. The fact that "colorWithRed" is translated to "red" might indicate that it is looking for the keyword "With"


I suspect they wrote a new init function specifically for Swift (as an extension), but I could be wrong.


They have a concept of implicit/explicit "external parameter names".
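
For example, a hedged sketch of how a pure-Swift function can declare those external names itself (the function and names here are made up):

    func add(number: Int, withString str: String) {
        println("\(number) / \(str)")
    }

    add(42, withString: "Hello World")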


In AppleScriptObjC, Objective-C methods are converted from

    -(void)addNumber:(NSNumber)num withString:(NSString)str;
to

    addNumber_withString(num, str)
Swift could be different, though.


The whole function name is addNumber:withString:

so if it has to be something, it might be myobj.addNumberWithString(42, "Hello world")


Somebody leak this manual please for the love of everything that is holy

... well I meant put it somewhere I could download from my Debian


Looks like they've learned a lot from Haskell (but with none of the parts of Haskell that force you to construct things purely).

Looking forward to the Lambda the Ultimate discussions on this new language.


Can anybody find a link to the Swift guide that they mentioned in Keynote? All I see is Taylor Swift books.



I've been getting books by Jonathan Swift. What's in your search history? :-)


LOL


A lot of commenters here are asking whether it will be open sourced, I'm curious, specific to those who think it should be open sourced: why? I'm not really curious about the philosophical reasons, but really the practical ones. How would Swift being open source help you as a developer? It's clearly targeted at iOS and Mac OS X, so does this mean you won't write Mac OS X or iOS apps if it's not open source, or did you hope that you could write Swift code on other platforms?


Two big ones off the top of my head:

    - Porting to other platforms/community implementations (think Mono)
    - Developer input: being able to submit bugs, influence or even just *watch* the trajectory of the language, get a deeper understanding of how various components are really implemented


A reason in addition to ics’s two reasons: If Swift is open-sourced, it will be more popular, because it might be used by developers on other platforms too. Those extra developers would write more libraries and documentation that would improve the ecosystem of the language.


Automatic Reference Counting

“Swift uses Automatic Reference Counting (ARC) to track and manage your app’s memory usage. In most cases, this means that memory management “just works” in Swift, and you do not need to think about memory management yourself. ARC automatically frees up the memory used by class instances when those instances are no longer needed.”

Excerpt From: Apple Inc. “The Swift Programming Language.” iBooks. https://itun.es/il/jEUH0.l


It "automatically" works until it doesn't. Pointer semantic is the first thing i've looked, and unsurprisingly, they still have the "weak / strong" pointer semantic.

You can still end up pointing to nil and crash the program because of it, and you can still have retain cycles, which create memory leaks.

That language really looks more like an attempt at a good compromise than at a revolution or a breakthrough. In a way, that feels like a much safer choice.


Hopefully there are some of Bret Victor’s "lost" ideas in there: http://worrydream.com/Apple/


The documents for iOS 8 show all examples in Objective-C. Can't wait for them to be updated to Swift. I'd love to start with 'getting started' and work my way through the rest of the docs. I'm a programmer but could never stomach Objective-C.


Some of the new docs I've seen let you select Objective-C, Swift, or both.


Doesn't look like "Start Developing iOS Apps Today" has an option to switch to Swift.



Just glanced thru the Swift book in about 3 hours. Conclusion: all your programming language are belong to Swift, mostly stolen good ideas, some innovations, a few gripes.

I can say Swift takes inspiration and improves on at least these languages:

C:

typealias

struct

control structures

labeled statements AKA gotos

varargs

C++:

default arguments

class instance construction syntax

// comment

superclass, implementing protocol declaration syntax

semi-virtual class init, deinit

Go:

No parentheses around the condition part of control statements

Unicode identifiers

shorthand for signed and unsigned integer types U?Int(8|16|32|64)

C#:

in-out params

properties

subscript access of class member values

Objective-C:

ARC

protocols

extensions

param names as method names

willSet/didSet

nil?

Java:

enum

@final

super keyword

override method keyword

Scala:

Local type inference, a blend of ML-flavored FP with OOP without the noise and, believe it or not, even more powerful in specifying generic type constraints. No stupid JVM type erasure either, so you can actually create an instance of a generic type, just like C++ templates.

Self:

self

Python:

for i in enumerate(seq)

for key, value in dictionary

Type(value) explicit type conversion syntax

No public/private/protected class member access modifier bullshit

Array literals, dictionary is also like Python but use [] instead of {}

Ruby:

0..100, 100_000

Lisp:

closures

Scheme, Coffeescript:

? optional type modifier

Bash:

$0, $1... inside short callback closures

Innovations

---------------

break-less switch, optional fall-thru, comma as multiple case, case can be any value of any type, condition or a type constraint for pattern matching, supports method call shorthand

generic type constraint queries

overflow operators

@prefix, @postfix, @infix, @assignment modifiers for operator overloading

Trailing closure as partial function application

Gripes

------

Seems like array[4..6] is even more useless than Javascript's Array#slice, and a far cry from Python's slices.

No set literals and list/set/dict comprehension.

Nothing for concurrency???? No yield, no generators, no channels, not even the synchronized keyword.

There are no decorators or annotations, and Swift isn't Objective-C, so what's with the odd-ball @ modifiers?

I don't see namespaces as mentioned in the WWDC slides, and goto is definitely still here so you might just write another gotofail.

Looks like Swift is Apple's answer to Go, Rust, Scala, Java, PyObjC/RubyMotion, Unity, Xamarin and all these HTML5 + JS/Phonegap people. I'll definitely pay attention to Swift. If the performance results hold up, Swift + iOS8 will definitely leave Android's ancient Java 5 crap way out in the dust.

https://itunes.apple.com/us/book/swift-programming-language/...


Concurrency will probably be handled using Grand Central Dispatch's[0] queues.

[0] http://en.wikipedia.org/wiki/Grand_Central_Dispatch


You're right. I can remember an example with something like:

    dispatch_async(queue) { /* code here */ }

That's an idiomatic adaptation of dispatch_async to support closures. I guess this might internally be converted to a block.
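
A hedged sketch of what that might look like with the existing GCD C API imported into Swift (assuming the dispatch functions come through as ordinary functions, which is what the beta docs suggest):

    import Foundation

    dispatch_async(dispatch_get_main_queue()) {
        println("hello from a closure on the main queue")
    }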


Good list, I think the generics notation was inspired by C# but with the improvement of moving the generic constraints inside the <>'s


All of these features make my eyes glaze over. I hope it's not as bad as it looks. I much prefer a language with only a few core concepts that everything else builds off of rather than one that packs all of the latest PL research into the compiler.


> break-less switch, optional fall-thru

Like Go and Dart?


There's actually a new keyword called `fallthrough`.


Interesting definition of 'new' you're using there.

http://golang.org/ref/spec#Fallthrough_statements


Ha thanks! It's been a while since I looked at Go. I wish I could edit the comment now!


The `enum` part of the language seems to be Haskellish algebraic types - like you can have enum "cases" with parameters in addition to just named enumerations .. and these enums can have methods. Cool!
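
Something like this sketch, loosely adapted from the style of the book's barcode example (exact syntax may still shift during the beta):

    enum Barcode {
        case UPCA(Int, Int, Int)
        case QRCode(String)

        func describe() -> String {
            switch self {
            case .UPCA(let system, let product, let check):
                return "UPC-A: \(system)-\(product)-\(check)"
            case .QRCode(let text):
                return "QR code: \(text)"
            }
        }
    }

    let code = Barcode.QRCode("ABCDEF")
    println(code.describe())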


This is a hidden killer feature


For those without iBook, here's an excerpt from "The Swift Programming Language": http://pastebin.com/xsr401gt



It works in Calibre, provided you can get your hands on the epub file in some way...


I got it from Apple's ebook store. It showed up in ~/Library/Containers/com.apple.BKAgentService/Data/Documents/iBooks/Books


Unicode variables, I love it:

let 🐶🐮 = "dogcow"

Moof!


You can do this in C# (and presumably other languages). I've seen people using Greek symbols in mathy code before. It's kind of fun.


It's fun once. It's impractical for someone editing the code later though, I don't want to have to look up the unicode character each time I want to remember it, and I don't want to have to copy and paste it either, it's better to stick with the characters available on a keyboard.


OS X handles some of this by assigning mnemonics to keyboard keys with the meta key (on mac keyboards, Option, or the "windows" key on the standard layout) held down.

For instance, the registered trademark symbol ® is just option+r. ∑ is option-w. Diacritics are two-stroke combinations, to get é, you'd type option-e, which puts the ´ on the screen, and then type the e to complete the character.

Some of them definitely make more sense than others. ∑ looks like a sideways 'W", the trademark symbol is just a circled r, and the diacritic marks fit the character you'd commonly associate them with. (Guess what letter you hit to get ¨ over a letter?)

Beats copy/pasting out of character map, anyways!


"OS X handles some of this by assigning mnemonics to keyboard trademark symbol ® is just option+r. ∑ is option-w."

"And, after they ran out of mnemonics, they sprinkled the rest of the characters on the keyboard (almost; I think they tried hard to keep things memorable, but some combinations are just of the "if you don't know it, you'll never guess" variety. The Apple logo is on the k key, for instance (IIRC). Mnemonic? MaKintosh?)

"Diacritics are two-stroke combinations, to get é, you'd type option-e, which puts the ´ on the screen, and then type the e to complete the character."

That's the old way. Recent OS X has 'hold down the e key, a menu pops up, click the desired variant or type the digit shown next to it'.


Ah neat, I wasn't aware of this. Every mac I use has the press-and-hold mechanic disabled to support a higher repeat rate. I might have to go back and play with it some more.


> Some of them definitely make more sense than others, but it sure beats copypasting out of character map!

Sure, but even making your own keyboard layout beats copypasting out of character map for any characters that you use regularly (and if you switch from US-English to US-International as your base layout, you get a lot characters that aren't on US-English for free without making a new layout.)


Except ∑ is an upper-case sigma, which corresponds to s in the Latin alphabet, so why not option+s?


I guess it depends what country you're currently in. I know some Chinese developers would prefer some Chinese characters, because those happen to be the ones available on their keyboard.


I have to agree. I had a love affair with unicode put directly in LaTeX markup (delta, integral signs, element-of, etc...) and it was very fun at first. Then I had to send the paper to a collaborator while writing a follow-up article together. I ended up removing all the unicode, and in subsequent work I didn't do it any more.


Why would you want to put math symbols in LaTeX using unicode? What would be the advantage over using a package like amsmath or even the native math environment? I'm genuinely curious.


Because then instead of writing \int, you can have an actual integral sign, etc., and it's more readable.


I won't mind it if the original author also has a comment signifying what unicode character the variable refers to. This will reduce some pain though not all of it.


Bluntly, though, that's a deficiency in your OS user interface or text editor.


I'm stuck doing most of my work in php, so this is kind of neat. (I also like that the example in the text resurrects the famous 30 year old dogcow joke.)


The only practical use is that we can assign functions to a variable named λ, or define a type alias to that.

On a more serious note, several languages have that, Ruby included.


It's not entirely useless to be able to use natural-language words/names for variables (if your language happens not to be English). For example, when teaching kids how to code, why should proper spelling be an error (e.g. having to name a "crow" "krake" rather than the proper "kråke")? It just adds another barrier to learning.

I think we're approaching the level of unicode penetration that this shouldn't really be much of a problem.

Incidentally, I think this will be much more useful in Japan and possibly parts of China than in the parts of the world that speak an Indo-European language (and therefore have an easier time learning English).

But those areas that don't speak English natively are of course just a small market. /s


I've only seen one thing actually use it in Go:

    	func Polar(x complex128) (r, θ float64) {
    		return Abs(x), Phase(x)
    	}
    
It seems to be good for documentation reasons.


This should make for interesting entries in upcoming IOS (International Obfuscated Swift) competitions.


As someone whose biggest hurdle was the syntax of Objective-C, this is absolutely massive personally. Just the other day, my friend and I were discussing how hard Objective-C is to learn properly. Of course, the jury is still out on Swift until I read further on it, but it can't possibly be worse than Objective-C.

Looking forward to it.


"When Apple announced Swift at WWDC, it got the largest cheer out of the developer audience than any other single feature." :D Sigh of relief!


HR departments around the world are going to have to update their mobile dev job postings to include "3+ years of experience with Swift."


In the playground REPL, am I missing an easy way to display errors as they happen?

It seems any error messages aren't visible by default. Xcode shows a red "!" disc next to the line, and that's it.

The usual shortcuts for "Jump to next/previous issue" are disabled. Opening the issues tab with command-4 works, but it's empty. Apparently I have to mouse over and click on the tiny red disc to see any error message at all, and then it displays as text that can't be selected or copied.

EDIT: ctrl-command-M turns "Show all issues" on or off. It seems to be a little buggy, which may be why it's off by default. Hopefully we'll get the ability to copy the error text in the next refresh.


Now, if Go was implemented on Android, I'd probably develop a lot more for mobile.


Look into Kotlin for Android development. It's a nice-looking language.


Anyone know how Swift might achieve its claimed speedup vs. Objective-C? I can't see how it could get the advertised numbers without method inlining, which appears to be incompatible with the dynamic object model it inherits from Objective-C.


Why would method inlining be incompatible with dynamic dispatch?


I was thinking more about runtime method replacement.


Less ambiguity, allowing the compiler to optimize more?


I'm reading the manual and liking nearly everything. But then I stumble across:

> Alternatively, remove a key-value pair from a dictionary with the removeValueForKey method.

Is this a case where an Objective-C dev got to choose the method names? Why not dict.delete() or similar?


I’m with you, and that’s one thing I really don’t appreciate about Obj-C and Apple APIs. You find yourself looking at classes with names like UICollectionViewDelegateFlowLayout and methods called minimumLineSpacingForSectionAtIndex.

I’m not a one-character variable name type of person, but this makes my fingers (and my brain) ache, and for me makes the code harder to comprehend (wood for the trees, I guess, or something like that)


You mean an API that you can actually read and understand? How dare they take your 1970s 80-char display away from you!


You mean a methodNameThatOverlyExplainsEverythingEvenThoughTheContextMakesItClear()?


That sounds bad. Heh, you can define your own extension and rename that method.


It seems perfectly serviceable, but I have to admit that my reflex response on opening the page was 'Oh god, another language?'

I can't tell if Apple is proposing this as a great new language everyone should use, or whether it's only intended for developers using Apple hardware and so represents a sort of lock-in strategy. I don't have an opinion on the language itself - it seems to have several neat features that make it easier/safer than competing languages like js, but presumably there are a few shortcomings as well.


So, I picked up Objective-C a few weeks ago, and I've been struggling (only coming from a Python background, with only the CS-knowledge I've picked up along the way). I just figured it would be fun to be able to make some apps. What would your advice be? Stick with Objective C, or switch over to learning Swift? Swift looks a lot more friendly, but I don't want to sell myself short. I'm also thinking big picture, where learning Obj-C might eventually be helpful in learning other languages.


I know neither.

I'd skip Objective-C and learn Swift to actually make a thing.

If you want general language knowledge, a really hairy production language and toolset isn't the place to look. You'll be fighting with lots of incidental stuff along the way.

Learning different kinds of languages will help you learn more languages.


It is exciting, the language is full of modern stuff... This is what I catch at a glance of the book:

- Optionals (Java's @Nullable)

- Tuples

- Functions as first class citizens

- let vs var (immutable vs mutable)

- Operators are functions

- Closures

- Extensions (Adding things to an existing class)

- Value object (struct - are passed by value -- and so are Strings!!! )

- Reference Objects (class)

- Generics (lets hope it will be better than Java's version)

- External Parameters ??

- @final keyword (to prevent overrides - like Java's final)

It kinda looks like C# meets Ruby meets the let keyword. Very complex...

And More

- object reference operator === and !==

- typealias (~ typdef)

- Optional Binding (if let x = y.f() { } else {})

- for-in loops (for i in 0...count)

- The default behavior of switch is not to fallthrough


A lot of the syntax is incredibly similar to Rust.


I've been looking into Rust recently, so I wasn't sure if it was just the freshness in my mind or what. Glad to see I wasn't imagining things.


The syntax is so similar that I'm quite curious to know whether the Swift developers have been watching us from afar. :) That said, it could just as easily be convergent evolution... which would be quite an impressive validation of our syntax decisions.


Yes, but Swift looks much cleaner thanks to ARC memory management. I wish Rust had something similar; all those sigils make it messy.


ARC is the reason Swift won't make it into the systems programming world. I - a firmware engineer - am glad Rust team stuck to their guns and kept it 0 overhead.


What sigils? All Rust has is & now.

Using ARC removes control from the programmer and has a significant runtime overhead (atomic reference counts), which violates Rust's zero-overhead principle. Swift works well when you need deep Objective-C integration, of course.


Could "'" count as type-parameter sigil for lifetimes? It could at least look like a sigil on a first look.


Isn't that essentially what Rc<T> or Gc<T> are? Rust obviously can't get rid of the other types, since safe manual memory management is one of their core use cases, but it does support automatic memory management.


err.. Rust doesn't have either '@' or '~' any more. And there is "std::rc::Rc<T>" in case you want reference counting. A big fat difference is that you are not forced to use it.


I really wish we could play with it, without having to be a paid member.

It's crazy how Apple is always so scared to release dev tools; in the end it will be out there on BitTorrent anyway...


I am pretty sure you can sign up without paying - but your code will be limited to running on the simulator. The $99 is to build for device, and submit to the store.


You can't download Beta versions of dev tools without having a paid subscription, only released versions.


I forgot about that, thank you.


Visual Studio isn't free in that regard either. What exactly are you comparing this release to?


http://fsharp.org/use/linux/ feel free to give F# a try. Apple hardware ( or MSFT tax) not required.


VS has non-paid versions. And C# and F# compilers are open source.


Yes, but you need Windows and a Windows PC to run them. I guess you could get Linux for C# and F# but you still need to buy a PC.


How are you doing development without a non-mobile OS? But I imagine http://www.tryfsharp.org/ works even for mobile.


No, F# runs everywhere C# does - you don't need a PC.


Swift is designed to make many common C and Objective-C errors less likely, but at least one class of bugs could be ascendant: off-by-one errors in ranges. Swift's ".." and "..." range operators are reversed compared to Ruby and CoffeeScript.

Swift's way is arguably more sensible: the longer operator makes a longer range. But reversing the way two similar-looking operators work relative to at least two other languages popular with the target audience is bound to lead to errors as programmers switch contexts.

Just the fact of having the two operators in the language together is dangerous, since they look similar and switching them will lead to weird bugs instead of immediate compile-time or runtime errors. Switching their meanings makes this more pernicious.

Time to prime our eyeballs to look out for this one.

[1] Swift book: “Use .. to make a range that omits its upper value, and use ... to make a range that includes both values.”

Excerpt From: Apple Inc. “The Swift Programming Language.” iBooks. https://itun.es/us/jEUH0.l

[2] Ruby: "Ranges constructed using .. run from the beginning to the end inclusively. Those created using ... exclude the end value." [http://www.ruby-doc.org/core-2.1.2/Range.html]

[3] CoffeeScript: "With two dots (3..6), the range is inclusive (3, 4, 5, 6); with three dots (3...6), the range excludes the end (3, 4, 5)".
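
To make the difference concrete, a quick Swift-side sketch per the beta book quoted in [1] (this is the 2014 beta syntax; it may well change):

    for i in 0..3 { println(i) }    // omits the upper value: prints 0, 1, 2
    for i in 0...3 { println(i) }   // includes both values: prints 0, 1, 2, 3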


So Google...still not going to push Go to Android, already?


It's more likely they will push Dart as the next Android language.


Go is more of a server-side language; there's no need for it on Android.


Swift looks promising and looks like a step in the right direction. However, looking at the reference and everything, I fail to find an answer to this question. Does the concept of private members (methods and variables) in Swift's object system not exist? It looks like every single variable is exposed fully without any way to prevent it from being so.


One question for the audience: I've had a look at the Swift iBook and noticed that dictionaries are strongly typed. Which is great in a way, but now I wonder:

How would you create a JSON deserializer (which conveniently deserializes into NSArray and NSDictionary of anything)? I didn't see any "object" or "id" type equivalent.


There is a special type called AnyObject that represents Objective-C id values.
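
So a deserializer can hand back values typed as AnyObject and you downcast on the way out. A hedged sketch (the dictionary here stands in for whatever a deserializer would return; with Foundation imported, strings bridge to Objective-C objects):

    import Foundation

    // pretend this came back from a deserializer
    let parsed: Dictionary<String, AnyObject> = ["name": "Swift", "vendor": "Apple"]

    if let name = parsed["name"] as? String {
        println("language name: \(name)")
    }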


There will be the Swift language developer center at http://swiftlang.eu - right now it redirects to the Apple website.

The Swift eBook is available at http://book.swiftlang.eu

What would you like to see there beyond reference docs, guides and examples?


At a glance, it looks like ScalaScript


There is the official Swift Book: The Swift Programming Language by Apple Inc.

Book Link: https://itunes.apple.com/us/book/the-swift-programming-langu...

Swift is a new programming language for creating iOS and OS X apps.

Swift builds on the best of C and Objective-C, without the constraints of C compatibility. Swift adopts safe programming patterns and adds modern features to make programming easier, more flexible, and more fun. Swift’s clean slate, backed by the mature and much-loved Cocoa and Cocoa Touch frameworks, is an opportunity to reimagine how software development works.

This book provides:

- A tour of the language.

- A detailed guide delving into each language feature.

- A formal reference for the language.

Hope it helps.


Perhaps it was just for the benchmark, but it seemed ambitious that they're testing encryption algorithms with it already. Does anyone know if Swift, by design, could help avoid issues like they've had recently with Secure Transport?


Inferred types, first class functions, closures, great speed -- this looks very promising.


Is anyone getting the vibe that swift is golang for writing Cocoa apps, or is it just me?


To me it feels more like a kind of Vala for Cocoa.


Just you, Golang would never buy in to modern language features like generics.


just you


www.mcs.anl.gov/papers/P1818.pdf

Swift: a language for parallel scripting, 2011, mostly from University of Chicago

Anyone knows if it's related?


Right yeah it's completely unrelated. My bad.


Anybody have any reference to language docs?


It just became available in the US ibook store

https://itunes.apple.com/us/book/swift-programming-language/...


And if anyone else accidentally tried to get it in the UK store first, follow these instructions to get the US store back: https://discussions.apple.com/message/22116986#22116986

basically you just need to scroll to the bottom of the bookstore front page and touch your username to logout and then log back in.


I'm in the UK and this link is working for me, but the one posted earlier with "gb" in the URL isn't.


Yes, Apple has posted a pre-release version here:

https://developer.apple.com/library/prerelease/ios/documenta...


Supposed to be on ibooks



Ugh, not available in Canada. Way to kill my excitement, Apple.

Edit: Canadian link worked! Sorry Apple.



Note: replace "gb" in the URL with your country as appropriate.


Your other post is dead, just FYI. The filters will kill duplicate posts from the same user (within a time period?) even if under different threads.


Here are some coding styles I found for Swift: http://codingstyleguide.com/lang/23/swift


Someone bought swiftcasts.com today ... http://whois.domaintools.com/swiftcasts.com


This is a fascinating development. I wonder how Swift will impact the many cross-platform mobile frameworks. Objective-C was a big barrier for many beginners and small companies and a free and easier development language provided by Apple and supported with good docs and third-party tutorials will likely command a good amount of mind-share. It's going to be an interesting few months in the mobile development world.


So, they have the weird syntax the MacRuby people were using. I really hate that they removed the Smalltalk selector syntax and replaced it with the half-thingy.


Any ideas on how easy it will be to slurp JSON with Swift?


Define easy?

`NSJSONSerialization` worked pretty well with iOS 5.x and above.

Not sure if you're looking for direct JSON -> Object serialization.
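
A rough sketch of calling it from Swift (error handling mostly elided; the bridging details are my assumption from the beta docs):

    import Foundation

    let json = "{\"language\": \"Swift\"}"
    let data = (json as NSString).dataUsingEncoding(NSUTF8StringEncoding)!

    var error: NSError?
    let object: AnyObject? = NSJSONSerialization.JSONObjectWithData(data,
        options: nil, error: &error)

    if let dict = object as? NSDictionary {
        println(dict["language"])    // the value for "language", still typed loosely
    }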


Ah yes, that will do. Thanks!


I'm assuming that there's no memory safety guarantee for Swift code that interacts with Objective C code. Can anyone shed any light on this?


Did anybody find information regarding namespaces?


Does this mean I can finally get around to porting my phonegap plugins over to iOS without having to dive deeply into Objective-C?


Swift looks quite similar to javascript - it may be fairly easy to port, I hope.


What kind of benchmark produced 220 times something faster than Python? My guess is that they did it on a mobile device and used an application like this: https://itunes.apple.com/us/app/python-2.7-for-ios/id4857298...


> What kind of benchmark produced 220 times something faster than Python?

Python and Ruby are resource-profligate dynamic languages in comparison to other dynamic langs like Lua and Smalltalk. If you are surprised that someone could come up with a benchmark that disadvantages Python by a factor of 200, then you have a lot of neat reading to look forward to in language implementation and Python internals. For some reason speed is easy for our reptilian brains to grasp at. It's not the be-all end-all of a language.


Eh, something on that magnitude is certainly doable: http://benchmarksgame.alioth.debian.org/u64q/benchmark.php?t...

Python's pretty slow.


If you're doing anything at all computational (that doesn't fit into the numpy/etc solution spaces), it's pretty easy to get those kinds of speedups.


I wonder how many years, n, from now it takes for people to claim they have N > n years of experience with Swift on their CV.


Presumably 0, since there are already people who can legitimately claim multiple years of experience with Swift.


I am surprised no one mentioned Erlang!

Tuples and pattern matching are the bread and butter of Erlang and deserve to be mentioned as one of the sources of inspiration for the Swift language.

The '_' character used to ignore some values in loops/patterns was also taken from Erlang (for _ in array { ... }).


Question: It sounds like the Xcode 6 beta is available on the dev center but I can't find it. Do you have to be a paying developer to have access to it, or does anyone know if it's going to be made available for free to (unpaid) registered developers?


Pre-release software is usually for paying devs only. Xcode is listed under the iOS 8 beta tab which is hidden to non-paying devs.


To access the Xcode 6 beta you must be a registered developer. Non-beta releases of Xcode are free to all.


I can't think of a reason why they wouldn't allow unpaid devs to play with Swift.


To get more devs to pay.


Slightly disappointed that I won't be able to try it out yet, because I'm not a Mac Developer. I can't get the XCode 6 beta without it, so I'd have to cough up $99 to try a new language... It seems to me like that might hurt its adoption.


I think you should be able to get it when it's out of beta without the $99 thing. Also I have paid my $99 and have not been able to get it yet.


Noob here. I don't understand... so what happens to Objective-C? Why would you code an iOS app with one language instead of the other? Why would you use both? That just sounds like a pain. Is Swift the evolution of Objective-C or something?


As a C# developer, this language feels familiar to me. I don't know why...


One of the things I look for in a language right off the bat, as it's a sign that powerful features can be built as libraries later, is some type of reflection api. There appears to be none (though attributes seem cool).


Do we know if enums can be recursive?

i.e.

    enum BinaryTree {
        case Leaf(Int)
        case Node(BinaryTree, BinaryTree)
    }


I saw somewhere else today that someone tried it and it crashed the compiler, so I guess not. The string "recur" doesn't even occur once in the iBook. It seems, though, that you might be able to use a combination of objects and enums to get that.


I'm super excited about Swift as Objective-C was always a barrier for me as I dislike it very much. This was the greatest news from Apple today, I hope to see compilers on other platforms as well soon.


"innovative" "new" "concise" "expressive" "lightning-fast"

God, I love Apple. Now I just wish real innovative languages could market themselves as efficiently.


Maybe I'll pick up iOS dev. Do you still have to have a Mac to dev?


Yes, you need a Mac, especially if you want to be able to use Xcode and Swift.

If you are short on money, you can try to buy a second hand iMac or a MacMini (just be sure it will support the next OS X version 10.10).


You do need to have a Mac to do dev, although you can go the Hackintosh route. I just built one using tonymac86's guide. There are a few gotchas, but it just basically works, and can be a good way to get started without buying something brand new.


I'm facing the same entry barrier. I am considering buying the hardware for it at this point


In my experience it's hard work without a mac.


As someone who has written apps in Lua using Corona SDK, this is exciting. The syntax looks a lot like Lua/Ruby and we are not stuck using the watered-down version of Lua that Corona provides.


Now, where to read some code examples? They may say it's better, but until I read some code I remain unconvinced (still, anything sounds better than Objective-C).


I just started learning Objective-C a week ago. Was almost finishing the Stanford iOS 7 course.

Coming from a Ruby background I couldn't be more surprised and excited!


It looks pretty nice, I hope they didn't forget about concurrency and parallelism. I don't see anything talking about this in the iBook.


I'm sure that the main mechanism will be the existing dispatch queue libraries Apple's other languages use.


They link over to this site from the Apple page about Swift http://swift-lang.org/ - I couldn't tell if they are the same thing or if Apple Swift is just based off of this..?


your trolling.


I actually was not, I didn't understand the relationship between the two languages. I see now that they are not related at all. By the way, it's "You're trolling"


So is swift an extension of c like obj-c is? In theory, can I have a program/app that uses swift, obj-c, and c syntax all in the same file?


Same project, not mixed in files.


No built in support for concurrency or parallelism :(


Its closest relative is Kotlin, as far as I can see. It shares a lot of the same functionality, from the null-checking to generics, etc.


As a detached Apple-related news follower, can some please update me: Is Swift going to become the new main development language for iOS?


If it's actually faster than Objective-C like Apple claims, then I'd bet on it, yeah.


Even if it's not faster, yeah. Most apps don't have any performance bottlenecks, and the ones that do can just drop down to Obj-C/C/C++


As someone coming from Python and Java but always dissuaded by Objective C's menacing syntax, I am 100% behind the new change.


Any hints on how hard it is to call C libraries from within Swift? This might be a great way to quickly develop a native UI.


I dunno, but you could always call the C libraries from Objective-C, and call the Objective-C from Swift.


I hope some of the debugging UX will cross over to Obj-C dev; my experience is that this is the weak point of the Xcode IDE.


What is the advantage of "func funcName() -> returnType{ }" over "returnType funcName(){ }"?


When your function returns a function, it's cleaner.

    func funcName() -> Int -> String {
        return { (i: Int) -> String in return "\(i)" }
    }


Why do we have to wait until fall to submit apps that use Swift, when Swift works with iOS 6 and iOS 7?


I feel like I could do most of this stuff with lua...but I haven't touched lua on iOS in years.


ObjC and its quirks was a major reason i never did Mac development. This sounds more interesting.


Will it still cost me an Arm, a Leg, and my first born to obtain the dev tools and compile code?


The beta currently requires that you are a member of one of the Developer programs (iOS or Mac) which costs $99 per year.

Release versions of Xcode are freely available at no cost (except that you need a Mac)


Oh! Finally! swf files on iOS :-)


[deleted]


> language website: http://swift-lang.org/

Nope, completely different. Really scummy of Apple to just nick the name of an existing language for their new one, it's been five minutes and people are already confused.


I don't know if it's scummy, but at least they link to the swift-lang.org website at the bottom of the page.


You mean like Google did for Go? Meh


Just like that.


that site's not related to Apple's Swift language


Nested multiline comments. :-)


Now I wish I knew of a tool to filter out HN comments older than x hours.


I'm a learning addict. Has somebody tasted it and is this worth it?


I like the "value binding" for switch statements with tuples.


This is probably the biggest announcement from a developer perspective. Swift looks like a language in which you can code as fast as you code in languages like Ruby or Python, while having the speed and performance of a language like Objective-C.


Swift was the biggest announcement today. Looking forward to it.


Looks neat, but I'm disappointed that Apple didn't go with Ruby for their next-generation language. Things like MacRuby and Ruby Motion make it seem like that was a possibility, albeit a pretty distant one.



Does anyone know when the new (beta) Xcode will come out?


Disregard: I wasn't signed in. You can download Xcode 6 now and test it out!


The beta is already available to download from the iOS/Mac dev center.


> you don’t even need to type semi-colons.

But you still need to type curly braces. Which are utterly redundant with indentation, a pain to type, a brainless task that languages should take care of, and a source of bugs.



Can't find the link to download Xcode beta!


You have to be a developer but... [xcode6 link] https://developer.apple.com/devcenter/mac/index.action

Edit: You then have to click "OS X Yosemite Developer Preview". Then scroll down, it's at the bottom of the page


Ah, you have to pay to play!


There are cool things in Swift, but I hope they just promote Javascript to a system language so that all platforms can go towards a single code base.


iPhone apps need to be more webscale and NoSQL compliant too. And what about event based? (Hmm...)


So is Swift a managed language or what?


It uses Automatic Reference Counting like Objective C. You must avoid strong reference cycles by declaring your fields weak and optional or unowned. It has deterministic deinit.
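
A minimal sketch of the weak-reference case (class names made up, roughly in the style of the book's example):

    class Person {
        var home: Apartment?
        deinit { println("person deallocated") }
    }

    class Apartment {
        weak var tenant: Person?    // weak: doesn't keep the Person alive,
                                    // so Person <-> Apartment isn't a retain cycle
        deinit { println("apartment deallocated") }
    }

Without the weak keyword, a Person holding its Apartment and the Apartment holding its Person back would keep each other alive forever under ARC.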


Anyone have any links or resources?


Has some features from Lisp.


So does every language.


I was implying a feature that is known to be in Lisp but not in most other languages, such as the read–eval–print loop.


Do they have a pdf book?


Anyone else see quite a lot of Scala here?


Yup...


You appear to be advocating a new:

[ ] functional [X] imperative [X] object-oriented [X] procedural [X] stack-based

[ ] "multi-paradigm" [ ] lazy [ ] eager [X] statically-typed [ ] dynamically-typed

[ ] pure [X] impure [ ] non-hygienic [ ] visual [ ] beginner-friendly

[ ] non-programmer-friendly [ ] completely incomprehensible

programming language. Your language will not work. Here is why it will not work.

You appear to believe that:

[X] Syntax is what makes programming difficult

[ ] Garbage collection is free [ ] Computers have infinite memory

[ ] Nobody really needs:

    [X] concurrency  [ ] a REPL  [ ] debugger support  [ ] IDE support  [ ] I/O

    [X] to interact with code not written in your language
[ ] The entire world speaks 7-bit ASCII

[X] Scaling up to large software projects will be easy

[X] Convincing programmers to adopt a new language will be easy

[X] Convincing programmers to adopt a language-specific IDE will be easy

[ ] Programmers love writing lots of boilerplate

[ ] Specifying behaviors as "undefined" means that programmers won't rely on them

[ ] "Spooky action at a distance" makes programming more fun

Unfortunately, your language (has/lacks):

[ ] comprehensible syntax [ ] semicolons [ ] significant whitespace [ ] macros

[X] implicit type conversion [ ] explicit casting [ ] type inference

[ ] goto [ ] exceptions [ ] closures [X] tail recursion [ ] coroutines

[ ] reflection [X] subtyping [X] multiple inheritance [ ] operator overloading

[X] algebraic datatypes [ ] recursive types [ ] polymorphic types

[X] covariant array typing [ ] monads [ ] dependent types

[ ] infix operators [ ] nested comments [ ] multi-line strings [ ] regexes

[ ] call-by-value [ ] call-by-name [ ] call-by-reference [ ] call-cc

The following philosophical objections apply:

[ ] Programmers should not need to understand category theory to write "Hello, World!"

[ ] Programmers should not develop RSI from writing "Hello, World!"

[ ] The most significant program written in your language is its own compiler

[X] The most significant program written in your language isn't even its own compiler

[X] No language spec

[X] "The implementation is the spec"

   [X] The implementation is closed-source  [ ] covered by patents  [ ] not owned by you
[X] Your type system is unsound [ ] Your language cannot be unambiguously parsed

   [ ] a proof of same is attached

   [ ] invoking this proof crashes the compiler
[X] The name of your language makes it impossible to find on Google

[ ] Interpreted languages will never be as fast as C

[ ] Compiled languages will never be "extensible"

[ ] Writing a compiler that understands English is AI-complete

[ ] Your language relies on an optimization which has never been shown possible

[ ] There are less than 100 programmers on Earth smart enough to use your language

[ ] ____________________________ takes exponential time

[ ] ____________________________ is known to be undecidable

Your implementation has the following flaws:

[ ] CPUs do not work that way

[ ] RAM does not work that way

[ ] VMs do not work that way

[ ] Compilers do not work that way

[ ] Compilers cannot work that way

[ ] Shift-reduce conflicts in parsing seem to be resolved using rand()

[ ] You require the compiler to be present at runtime

[ ] You require the language runtime to be present at compile-time

[ ] Your compiler errors are completely inscrutable

[ ] Dangerous behavior is only a warning

[ ] The compiler crashes if you look at it funny

[ ] The VM crashes if you look at it funny

[ ] You don't seem to understand basic optimization techniques

[ ] You don't seem to understand basic systems programming

[ ] You don't seem to understand pointers

[ ] You don't seem to understand functions

Additionally, your marketing has the following problems:

[X] Unsupported claims of increased productivity

[X] Unsupported claims of greater "ease of use"

[ ] Obviously rigged benchmarks

   [ ] Graphics, simulation, or crypto benchmarks where your code just calls

       handwritten assembly through your FFI

   [ ] String-processing benchmarks where you just call PCRE

   [ ] Matrix-math benchmarks where you just call BLAS
[ ] Noone really believes that your language is faster than:

    [ ] assembly  [ ] C  [ ] FORTRAN  [ ] Java  [ ] Ruby  [ ] Prolog
[ ] Rejection of orthodox programming-language theory without justification

[ ] Rejection of orthodox systems programming without justification

[ ] Rejection of orthodox algorithmic theory without justification

[ ] Rejection of basic computer science without justification

Taking the wider ecosystem into account, I would like to note that:

[ ] Your complex sample code would be one line in: _______________________

[ ] We already have an unsafe imperative language

[X] We already have a safe imperative OO language

[ ] We already have a safe statically-typed eager functional language

[ ] You have reinvented Lisp but worse

[ ] You have reinvented Javascript but worse

[X] You have reinvented Java but worse

[ ] You have reinvented C++ but worse

[ ] You have reinvented PHP but worse

[ ] You have reinvented PHP better, but that's still no justification

[ ] You have reinvented Brainfuck but non-ironically

In conclusion, this is what I think of you:

[X] You have some interesting ideas, but this won't fly.

[ ] This is a bad language, and you should feel bad for inventing it.

[ ] Programming in this language is an adequate punishment for inventing it.


In addition to being a space-hogging recapitulation of an old, boring Usenet joke, a lot of your checks are wrong; some of them are because the Usenet joke is lame, but some of them are just you missing things, like the fact that Swift interacts directly with C/ObjC, uses the same IDE as Cocoa developers already use, &c.

This lame Usenet joke was a way of punching down at people joining language newsgroups trying to get people to pay attention to their half-baked language ideas. This, on the other hand, is a language that was introduced on stage at WWDC by Chris Lattner. You can see how especially clumsy the joke is by the fact that you checked off a reason Swift "wasn't going to fly". Obviously, it's going to "fly" just fine on the Mac and in iOS.


I posted the checklist partly in jest, but also because I think it illustrates how every programming language ends up stuck with the same trade-offs.


Well, I thought it was funny and I enjoyed reading through the list. You're absolutely right that it provides an insight into the past language efforts and the trade-offs that they inevitably encounter.

I hope you remember that when someone tells you something like 'it doesn't work here', it's an opinion rather than a fact, regardless of how they phrase it.

The USENET era, while sometimes dated, was probably one of the times in 'geek' history where we were the closest to one another. The internet was interpersonal, and I'm glad that someone is still trying to propagate the humor and spirit from that time.


I'm not actually familiar with the context in which this checklist was originally written. But I really love it for two reasons:

First, it demonstrates how programming languages have been making the same tradeoffs for years, to the point that someone was able to make a checklist of what's wrong with any programming language that still works years later.

Second, you can fill this out for any of the big programming languages and many will do very badly. It shows how whether a programming language succeeds is unrelated to how good it is. An actually accurate checklist would have one item:

Your programming language will succeed because:

[X] It is tied to a popular platform.


It really doesn't work here, but also, while I appreciate the spirit you pasted it in, that wasn't the spirit it was written in. (And it takes up a huge amount of space).


It is kind of huge. I tried deleting all the non-checked ones but that defeats the purpose, which is for everyone to form their own opinion of what should be checked.


WTF is usenet?


You make some good points but this post is coming across as really snarky. Might be worth lightening the tone some.



upvoted but needs editing


While half of the things you checked in that copy-paste are wrong, let me take the bait and say why it will work:

- it's not just an arbitrary language, it's the new officially sanctioned programming interface for Apple's gigantic, wildly profitable ecosystem

- it's much more concise than Objective-C

- it's faster than Objective-C

- it's safer than Objective-C

- it has more features than Objective-C

- ...yet it's fully compatible with Objective-C and C code running alongside in the same app.

And last, but not least, if you know how Apple works, you know that somewhere in the range of WWDC 2016-2018, Craig Federighi will be on stage, showing a pie chart and saying "Over 92% of developers have ported their apps to Swift, so screw the other 8%, we're discontinuing Objective-C".

Moving fast and constant change is the name of the game at Apple.


1) So was Java for a bit. It sucked. Developers ignored it.

2) UIApplication.sharedApplication.delegate vs UIApplication.sharedApplication().delegate - nope. It isn’t more concise. It appears to be a wee bit more verbose.

3) Some benchmarks on the web are saying otherwise. It adds extra bridging and ARC for numbers. I don’t see how it can be faster, and even if it is - nobody cares about speed. If I need to beat it, I can drop to C, IMP cache, and kick Swift's sorry little ass.

4) Safer - like the TSA says flying without nail clippers is safer? It doesn’t solve any problems I actually have. I think my last type error was in 2005 - took about a minute and a half to find it. I routinely work in dynamically typed languages and avoid statically typed languages like the plague they are. I’m not opposed to type annotations when they help, but these just look like cargo cultism - like a lot of Swift’s silly “features”.

5) It has fewer features where it counts - basically a less capable object model, a weaker meta model, and more burdensome interaction with C code than Objective C, and a number of useful Objective C features have been walled off. I have Objective C code that cannot be written in Swift.

6) Except for that kind of dynamic stuff like performSelector: afterDelay: withArguments: and that pesky NSInvocation that isn’t available.

There are better projects around than this pile of crap. I will not be porting anything. Quite a lot of code I have can’t be ported.

Java killed WebObjects. Lets not let Swift kill Cocoa.


How is Swift safer than Obj-C?

I'm not sarcastic or anything by the way, I just don't understand yet the foundation behind that statement.


Are we talking about type safety or general security? Either way, Swift does better than ObjC in both regards.

ObjC was exceptionally type-unsafe. Any object type could be implicitly converted to and from the `id` type:

    NSString* foo = @"hello world!";
    id bar = foo; // no warning
    NSDictionary* baz = bar; // no warning either: bar is an id
The Foundation collections all used `id` for the values and keys when they had keys. `NSDictionary` will accept any object as a key, even though it will only work if the key is copiable and hashable, and it does mean that you could have keys of fundamentally different types in the same collection. Values can be heterogeneous too.

As far as type safety goes, Swift is on par with modern languages.
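
As a rough sketch of the difference (hypothetical values): in Swift the element types are part of the collection's type, so mismatches are compile-time errors rather than latent surprises.

    var ages: [String: Int] = ["Ada": 36]
    ages["Grace"] = 45
    // ages[42] = "oops"    // rejected by the compiler: keys must be String, values Int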

ObjC also inherited several security issues of the C language, like unchecked arrays and unchecked arithmetic. Swift performs bounds checking by default (you can manipulate raw arrays with `UnsafePointer<T>`) and has checked arithmetic by default (you can allow overflows by prefixing the operator with `&`, so `&*`, `&+`, etc). It also never requires you to allocate and deallocate buffers yourself.
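
A minimal sketch of those defaults (made-up values, not from the post above):

    let xs = [1, 2, 3]
    // xs[9]               // out of bounds: traps at runtime instead of reading stray memory

    let a: UInt8 = 250
    // let b = a + 10      // checked arithmetic: traps at runtime on overflow
    let c = a &+ 10        // the &-prefixed operator opts into wrapping; c == 4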

So Swift is more secure because it has no buffer overflows, no integer overflows (though they're more an issue when you have buffer overflows) and no unsafe memory management. These are by far the three most commonly exploited vulnerabilities in software.


Thanks for your elaborate answer; I very much appreciate it. After these discussions, Swift sounds like it adopts the best of the worlds of fast and dangerous (C, C++, Fortran), slow and easy (Python, Ruby) and slow and safe (Haskell) languages (I'm listing the extreme examples here from my perspective rather than the actual influences on Swift). I especially like the 'safe by default - unsafe optional' stance, since this gives you the ability to create a working program quickly, but still optimise it later by introducing unsafe behaviour. This could allow Swift to be used for HPC purposes further down the road (provided [1] substantial parts get Open Sourced, [2] it gets implemented on Linux and [3] it gets support for parallel computing at the node, core and accelerator level). Without having worked with it yet, here's what I'd love to get in order to use it for HPC:

- UnsafeButWarn compiler option -> replaces safe versions of arithmetic / arrays with unsafe ones, but issues runtime warnings whenever unsafe behaviour has occurred. Meant for debugging the program into never going unsafe, such that the checks can be disabled.

- Unsafe compiler option -> replaces safe operations with unsafe ones. Meant for the production version that has been extensively tested but needs to run as fast as possible.


Dynamic typing is not "type unsafe". Objective C objects are completely "type safe". They are actually true objects in that they can receive and potentially handle any message sent to them.

This is an advanced but powerful feature and is in fact exactly what Dr Alan Kay meant when he coined the term "Object Oriented". Any system that has abstract data types that does not have a default message handling capability is not actually "Object Oriented" and the "type safety" brigade strikes me as a bit like depression era prohibitionists. They can't handle it so nobody should have it.

The "type safety" thing is a myth. I have worked on colossally sized systems written in PERL/Mason, PHP, Ruby, Smalltalk, Java, and C++. You'll note that all but the last two are "type unsafe" by your definition and yet I think my last "type related" error was sometime around 2003.

Also, not all collections are meant to be homogeneous.

>ObjC also inherited several security issues of the C language, like unchecked arrays

Objective C developers use NSArray and NSString. Raw arrays and pointers to raw memory are beyond exceptionally rare in Objective C. This is just more prohibitionist propaganda. In the cases where there really are raw memory accesses, you need them - but for the most part Objective C provided safe alternatives, and they were used heavily (NSData, for instance, rather than raw arrays, and NSString instead of arrays of char).

Nobody is going to do audio or video processing in Swift. The CoreAudio team used C/C++ because when you need performance, you need performance and bounds checking is inefficient.

I heard this sermon from the C++ people in 1992, and then the Java people in 1998 - there's nothing new here and the myth of "type safety" vs "type unsafe" still isn't really true. It is just a lot of baggage and lost capability we are being sold here.

Raw C is potentially unsafe. Objective C - not so much. You can write quite a lot without even using a pointer (apart from id - a safe dynamically typed pointer).


For one, Swift avoids what Tony Hoare calls his Billion Dollar Mistake (conflating optionality and reference types). Because Tony Hoare introduced null references to Algol W, most Algol-derived and Algol-inspired statically typed languages since have made all pointers/references nullable. Some, like Java, have bolted on Lint-like null checkers as an afterthought (see @Nullable, @NotNull) outside of the official compiler.

It's really nice to have the compiler not allow callers to pass null to functions where passing null wouldn't make sense, isn't useful, or the programmer was just too lazy to implement null checks. More than half of the functions I write have inputs where null wouldn't have an obvious meaning.
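
A small sketch of what that looks like (hypothetical functions; the compiler rejects any caller that tries to pass nil to the first one):

    func greet(name: String) -> String {
        return "Hello, \(name)"       // name can never be nil here
    }

    func greetIfKnown(name: String?) -> String {
        if let n = name {             // optional binding forces the nil check
            return "Hello, \(n)"
        }
        return "Hello, stranger"
    }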

Objective-C's treatment of nil receivers is particularly handy for programming in the small and particularly bad for programming in the large. Sending a message to nil (similar to calling a method on null in many other languages) does nothing other than returning 0 cast to the type of variable you're storing the return value in. Chugging along accidentally accumulating nulls in many cases is worse than segfaulting and giving a nice stack trace.
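
In Swift the analogous nil-propagation has to be spelled out with optional chaining, so it can't accumulate silently (a sketch with made-up types):

    class Account { var balance: Double = 0 }
    class Customer { var account: Account? }     // a customer may have no account

    let customer = Customer()
    let balance = customer.account?.balance      // Double?, not a silent 0
    let shown = balance ?? 0.0                   // nil is handled where we decide, not implicitly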

Certainly in the realm of automated trading software, segfaulting when hitting a programming error is preferable to most other failure modes. Of course, failing to compile is often (though not always) the best failure mode.


It does not have features that often lead to bugs. E.g. the switch statement does not allow fall through and enforces a default case.


The lack of fall through makes me a bit sad since I think that it's a very powerful programming construct and it usually only hurts people new to the concept - but of course one can live without it.


Three things.

First of all, while you can't do this:

    case val:
    case val:
    case val:
        ...
You can do this in Swift, which does the same thing:

    case val, val, val:
        ...
Second, you can fall through in Swift, but you need to ask for it:

    case val:
        ...
        fallthrough
    case val:
        ...
So don't be sad.
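
Concretely, a small sketch with made-up status codes:

    let status = 301
    var message = ""

    switch status {
    case 301, 302, 307:          // several values share one case
        message = "redirect"
    case 400:
        message = "bad request, "
        fallthrough              // falling through is opt-in, never accidental
    case 404:
        message += "client error"
    default:                     // the switch must be exhaustive
        message = "other"
    }
    // status == 301  ->  message == "redirect"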

And third... Seriously now, it's a "very powerful programming concept"? I bet that 9 times out of 10, when you used this very powerful programming concept in other languages, you did it by accident, not because you needed its pawa.


Thanks for the clarification on how to use it in Swift - like this it's indeed the perfect solution by enforcing it to be explicit. Having multiple cases rolled into one is also a very nice option. The snark wasn't really called for though. No, I don't 'use' it by accident, that would in most cases just become a bug. I use it when it makes code more concise rather than having nested ifs. Consider for example ACL rules where you set different privileges by default according to an access level.


There's no real objective measure for either 'conciseness' or 'safety', in general terms. Whether or not it's faster in the real world is yet to be seen, and creeping featuritis has never been the hallmark of a great programming language.

It makes good press release and marketing speak, but those assertions are a long way from making the OP 'wrong'.


There are objective measures. There aren't necessarily 'absolute' ones.

Lisp is relatively more concise than C or Java for the same tasks.

Java is relatively safer than C for the same tasks.


How objective can something be if the criteria vary depending on the comparison?


I'm not sure what you mean.

Lines of code is an objective measure. How meaningful that might actually be, well, that's subjective.


So you take a language with null pointers, unsafe pointer access, possibility of array and buffer overflows (C and Objective-C).

And then you take another where you have no null pointers, no direct pointer access, and no possibility for array and buffer overflows (Swift).

And you say "nah, I look at these two, and I can't objectively tell which is safer"? Are you fuckin' kidding me?

Regarding speed, it's been shown faster by benchmarks, and the very reason Swift doesn't mix C types and Objective-C types is so the compiler can better reason about and optimize the resulting code.


I think you're putting words into his mouth, and perhaps intentionally misrepresenting what he wrote.

Swift was announced just a few hours ago. While we can make assumptions about its safety based on its feature set, or we can assume that the very vague performance details Apple provided are valid in a real-world setting, none of this has really been tested or verified independently yet.

What cratermoon wrote is perfectly legitimate. Let's try Swift out in the real world, perhaps for a year or two. Let's get some real data. Then let's analyze that actual data, rather than engaging in unsubstantiated speculation, or trusting some vague performance graphs shown in a conference presentation.


I'm not misrepresenting what he wrote, I'm directly responding to it.

While you may not be familiar with recent language development and modern LLVM-based languages in general, to me and many others what was announced at WWDC today wasn't just a set of buzzwords and marketing speak.

Anyone who has been into language design and watched the keynote today can make a pretty good guess about the properties of Swift. There's a book with hundreds of pages of examples and description of the language semantics on iBooks right now. I've been reading it before I posted. I've been playing with the beta Xcode as well.

Swift is not revolutionary in terms of its feature set. But what makes it quite interesting is the fact this is now an official language for Apple app development, and it's quite a bit ahead of Objective-C in... yes. Safety, performance and conciseness. Fact.


I'd prefer to see how it fares in the real world before jumping to such conclusions like you seem to be so willing to do.

You may not have been around to experience it first-hand, but we heard a lot of claims made back in the 1990s about how Java and the JVM would increase security and safety.

The arguments made then even overlap with some of those being made in this case! The lack of direct pointer access and manipulation, automated memory management and better bounds checking are some examples of the arguments used then and now.

Yet if you work with computers at all, I'm sure you'd know that many of those claims did not materialize. Flaws have been found in the various implementations of Java and its VM, and these have affected its security and safety in very serious and significant ways, many times over.

Perhaps things are different in the case of Swift. But we won't be able to say for sure until later on, once it has undergone some significant real-world use.


And you're telling me... with a straight face... that Java's lack of direct pointer access, automatic memory management and bounds checking do not make it safer and more secure compared to C and C++?

You really should stop talking.


That's obviously not what I wrote, and you would know this had you bothered to read my comment.

I'm saying that such features alone do not actually guarantee safety, if the language's implementation happens to have flaws.

There hasn't been sufficient time and opportunity to see what Swift is like in practice. It's premature to say anything conclusive about it at this point, aside from stating that we don't yet have enough information about it.


The "flaws" discovered in Java aren't in the language, but in the most popular JVM implementations, most of which - surprise - are implemented in C (for the lack of a better alternative).

Writing in Java still remains monumentally harder to fuck up than writing all your code in C or C++.

Furthermore, while Java relies on said relatively complicated interpreter + JIT virtual machine for its execution, Swift has no such virtual machine. All code is analyzed statically and compiled to machine code. The Objective-C runtime which it uses (which is not new - it's the same fucking Objective-C runtime) is a tiny C library, which implements a few basic low-level routines, such as dynamic message-to-method resolution.

So next time it's best for you to shut your mouth if you're ignorant about an issue, than telling people who know better than you to "wait for conclusive evidence".


I am looking forward to seeing if Graal [1] replaces HotSpot in Java 9, thus reducing the amount of C/C++ code in the reference implementation.

[1] Oracle's JIT implemented in Java, from the meta-circular JVM Maxine


And quite a bit behind in flexibility and dynamism.

Giant step backwards. Vtables? I don't think so.

override keywords? WTF? It's not so much that they removed the C as the "Object".


Thanks for posting!


Oh sweet!

Another shrub in the walled garden!


Logo looks a lot like a hammer and sickle.


Great!!!!! I never liked Objective-C anyways


This is massive.


Hijacking another programming language name for your own cheap JS-syntax clone? Nice move, jerks!


Is that it? You have nothing more to say? JS, really? Can I say Scala?



"...we wondered what we could do without the baggage of C."

Is that tongue in cheek? It's not even a particularly large, encumbered language, C.


Heartbleed. The majority of all SSL keys on the internet compromised. All ~2 billion humans on the internet required to change their passwords due to a single mistake by a single programmer using C. That's billions of human beings wasting hours either changing all their passwords or having their money, identities, medical records, and more stolen because they didn't. Having their accounts hijacked. For all we know, totalitarian governments have already exploited this to monitor citizens and torture or kill them.

If that isn't enough, how about goto fail? All the IIS exploits in v4/5? Various Windows RPC overflows, WMF overflows, SQL Slammer, et al? How many billions in damages have been caused by stack smashing and buffer overflows? How many millions of hours of manpower wasted cleaning up after these errors? Toyota killed some people because their dumb code overwrote memory, blasting the OS task tables causing the watchdog task to stop getting CPU time, meaning nothing provided a stopgap against unintended acceleration. People are literally dying because we can't fucking let go of C.

C is like saying "forget seat belts, child seats, anti-lock brakes, and adaptive steering! How can I power-slide? I want full control; I need to pump the brakes. People should just drive better, then we'd have fewer accidents".

We've been trying to "drive better" for decades (Valgrind, lint, code reviews, static analysis tools, education, ASLR, NX protection, et al). We still regularly see massive security-smashing epic failures.

It hasn't worked. Furthermore, the C standard library has been shown to contain enough ROP gadgets to be Turing-complete in the presence of a buffer overflow. So no matter what you do, the presence of a single stack smash is enough to allow code execution, subject to payload size limits and execution time.

At some point we have to admit C is no longer acceptable. Not for libraries, not for drivers, not for operating systems. It has to go.

All the performance benefits ever derived from writing everything in C have been more than erased, by orders of magnitude, by the damage caused from even simple innocent mistakes.

Software allows us as programmers to greatly magnify our impact on the world; we like to think of that in positive ways. But the inverse is also true: thanks to the continued use of non-memory-safe languages we have the power to negatively affect the world on a massive scale.

It is unethical to continue writing code in non-memory-safe C or C-based languages, for any purpose. Period.


It is unethical to continue writing code in non-memory-safe C or C-based languages, for any purpose. Period.

I'm looking forward to seeing your new operating system and managed runtime written entirely using garbage-collected languages!


I don't necessarily agree with the post you're replying to, but Rust is a memory-safe language without any garbage collection whatsoever. Proper unique pointers and move semantics are basically magic.


I'd be interested to know more about what you think of xenadu02's post. Practically, we can't completely stop coding in C/C++ yet, but a large class of software can be written in safer languages already, and it seems to me that once Rust is mature, we should strongly prefer it over C or C++. The security problems with non-memory-safe languages are really that bad.


We have to be realistic. C is never going away. C++ is never going away. When has any entrenched programming language ever gone away? As someone who has been paid to write code in RPG (https://en.wikipedia.org/wiki/IBM_RPG#Example_code), I can confirm: never ever, ever ever. The best that we can do is to offer an alternative.

The reason why I put so much effort into Rust is because people who need to write the software in this space have literally no alternative that is not unsafe. Even if they cared about safety, they're screwed! Say that they need to write a library that can be written once and called from any language. That means, effectively, that they need to write that library in a language that 1) can expose a C-compatible interface, and 2) can run without a runtime. Which means, practically, that their choices of programming language are either 1) C or 2) C++. Despite Heartbleed, nobody's rushing to rewrite OpenSSL in ML. And I sure hope nobody's rushing to rewrite it in Rust either (we have no idea yet how Rust would fare for crypto, and we need time to figure that out). But once Rust is ready, you will at least have a choice. Memory safety will no longer be something that you leave on the table out of necessity.

I feel like the vast majority of the new programming languages coming out these days were conceived to make programming more pleasurable for the programmer. And yeah, I'm a programmer too, and I dislike many of the languages that I am forced to use every day. But Rust isn't about making programmers happy (although it seems to do that entirely by accident); it's about making users safer. Fewer vulnerabilities, fewer angles of attack for the baddies to exploit. And hey, if it makes software crash less, I guess that's cool too.


> We have to be realistic. C is never going away. C++ is never going away.

Not in the coming years, but it eventually will become a legacy language like RPG is, confined to old boxes running on long term maintenance contracts.

All that's needed is a few mainstream OSes where those languages are no longer part of the standard SDK. Like, for example, Microsoft just did with C as of Windows 8. Even their latest C99 compatibility changes were only done because they are required by C++11/14, nothing else.

> I feel like the vast majority of the new programming languages coming out these days were conceived to make programming more pleasurable for the programmer.

This was already possible with Lisp, Smalltalk, Mesa/Cedar, and Modula-2 back when C was created, but then AT&T had a better relationship with universities than Xerox PARC and ETHZ did.


What you're saying is perhaps ideal. The unfortunate reality, however, is that we really aren't seeing the "once Rust is mature" part actually happening.

There is still a large amount of change happening to the language and to the standard libraries. Some of this change has been of a here-and-there nature, where it's like they're trying to find an optimal or perfect solution that most likely does not exist.

C++11 (and C++14) may not offer the level of safety that Rust potentially could, but unlike Rust it's usable today, and using modern techniques does a reasonable job of avoiding dangerous situations.

It's been claimed that Rust will have stabilized and 1.0 will be released before the end of the year. Given that we're already into June, this becomes more and more doubtful each day. Now Rust is facing even more competition with this announcement of Swift. The longer we're forced to wait for a stable, seriously-usable release of Rust, the less viable Rust will become.


Parent poster didn't imply that the replacement would be garbage collected, just memory safe, which is a pretty big difference. I think you will see it sooner than you may think, it seems the tide may be turning.


Alright, I missed that nuance. I don't think I've ever worked with a language that provided memory safety without garbage collection.


The sad part is that already in the '70s there were better alternatives, but the UNIX creators just decided to ignore them and create their own language.


So they could implement SpaceWar on the PDP-11.


>Heartbleed. Majority of all SSL keys on the internet compromised. All ~2 billion of humans on the internet required to change their passwords due to a single mistake by a single programmer using C.

You can write shit code in any language - even Swift.

>how about goto fail?

I haven’t seen a goto in 20+ years. Strawman much?

>C is like saying "forget seat belts, child seats, anti-lock brakes, and adaptive steering! How can I power-slide? I want full control; I need to pump the brakes. People should just drive better, then we'd have fewer accidents”.

Yeah, and I suppose you’d prefer lumberjacks use rubber axes so they wouldn’t hurt themselves - or the trees for that matter. Life is dangerous. Get over it.

>At some point we have to admit C is no longer acceptable.

You bubble wrap your kids and lobby for lower jungle gyms at your kids schools too?

Swift doesn’t solve these problems and you sound like a self righteous ninny.

But hey - that’s why there are scripting languages - for people who can’t deal with the machine. Pick one and go for it - but your scripting language isn’t suitably performant for things like audio processing (CoreAudio is in C/C++), real time control with tight tolerances, etc.

Grow the fuck up.

Sometimes you have to get your hands dirty and think hard and make stuff work. Your Swift code isn’t actually “safer” in the same way that the TSA hasn’t made flying safer - it is all security theater.


> you sound like a self righteous ninny [...] Grow the fuck up

Personal attacks are not allowed on Hacker News. Please don't post anything like this.


How about if you're developing in a field where the overhead of higher-level languages would prohibit the device from being made at all?

Do you just put on your idealist hat and say, "Sorry! I can't develop a medical device for you. You'll just have to die. At least no one was required to be careful about their programming."


> It is unethical to continue writing code in non-memory-safe C or C-based languages, for any purpose. Period.

And since memory-safe languages are written in non-memory-safe languages (i.e. C and C++), writing code in them is unethical as well, right?


Well, most implementations of Ada, Oberon and Modula-2 compilers, among many others, were bootstrapped - no C code in sight.


i registered just to say, "you're hired!"


So what other language can we use today on basically any imaginable platform, while still retaining the extreme degree of control, the excellent interoperability, and the near-optimal runtime performance of C?

Some will say Rust, but we're years away from that being realistic. C++, using modern techniques, is perhaps the only feasible response.

While there may be some validity to your claim about "billions of human beings wasting hours" due to vulnerabilities in C code, we can't forget that the alternatives would also suffer from significant forms of waste.

If using a language with slower runtime performance, for example, people will need to wait longer for their computations to complete. More powerful, or even just more, hardware will be needed to alleviate these delays. Slower runtime performance also often results in much higher energy consumption. The costs just keep adding up and up.

Forcing billions of people to use far less efficient software, while requiring far more powerful hardware, on a continual and ongoing basis, for decade upon decade, could very well generate waste that far, far exceeds that of dealing with an occasional flaw in widely-used C code. I just can't seriously buy your "All the performance benefits ever derived from writing everything in C have been more than erased, by orders of magnitude, by the damage caused from even simple innocent mistakes." argument.

C won't be going anywhere until somebody provides a practical alternative that offers benefits without any downsides. It's as simple as that.


C is dead on Windows and I hope Microsoft does not change its mind.

C is almost dead on Android, with its minimal exposure on the NDK and I hope Google does not change its mind about it.

C is now being killed on MacOS X and I look forward to Swift's success.

So this leaves out the embedded industry (slowly moving to Ada and Java on IoT) and the hardcore UNIX guys.


Hahahah, C is dead? Come on, don't start that kind of baiting. High-level efficiency-oriented languages are high-level efficiency-oriented languages and low-level performance-oriented languages are low-level performance-oriented languages. This has been true and will continue to be true. Notice the number of high-level languages that have waxed and waned while C has continued to flourish...


I suspect they were referring more to the syntactic baggage in Objective-C – which is at least partially a result of the fact that ObjC was originally implemented as a simple C preprocessor. To avoid syntactic conflicts with C, ObjC uses lots of weird syntactic constructs that contribute to the language's characterization as ugly.

A good example is the liberal use of the @ character to denote ObjC literals – even in front of strings and numbers, because raw C strings/numbers have different semantics and won't play well with the standard ObjC APIs.


That's good context, thanks. When I dabbled in Objective-C, I found the excessive bracketing strange. Thanks for the background.


Is that tongue in cheek? It's not even a particularly large, encumbered language, C.

This would make a lot more sense if you knew about the history behind Objective-C.

http://en.wikipedia.org/wiki/Smalltalk


That's a good point, thank you!


It is when you have to maintain compatibility with it in your new shiny language that's based on it. C alone may not be particularly large or encumbered (though it does have that vexing parse), but as the foundation of other languages it can cause problems.


Nothing about concurrency. Nice. STOP MAKING NEW LANGUAGES!!! This is like environment pollution and should be shunned.


[deleted]


It turns out that this is a different language. Apple overloaded the name "Swift".


Best part: one of this Swift's authors is called Wozniak.

https://webcache.googleusercontent.com/search?q=cache:Qvltqf...



This appears to be a different swift.


Is that the same language? It looks completely different.


Wrong language.


“You also don’t need to write semicolons at the end of every statement.”

Please. Please. PLEASE don't be whitespace delimited!


Please. Please. PLEASE don't be whitespace delimited!

One of the prime examples I give for how HN/reddit has gone downhill was being downvoted by some clueless hipsters for suggesting that it would be easy to cross-compile from a whitespace-delimited language to a non-whitespace-delimited one. It's not some kind of huge fundamental divide in languages. For any non-whitespace delimited context free language, it should be possible to write a homomorphic transformation to a whitespace delimited one.

This is something that people should consider trivial and be beyond discussing.


That does not sound like a practical solution for any language that offers a standardized toolchain or no explicit compilation as features. For those languages adding a homebrew translation step results in discarding unrelated features that the user might have wanted, and completely isolates you from intermingling your code with the rest of the language community.

Syntax is not the most important thing in a language but it's not nothing either.


That does not sound like a practical solution for any language that offers a standardized toolchain or no explicit compilation as features.

Should be doable as an editor plugin.


If the solution for a feature intended to save time and reduce hassle ends up being "write and maintain your own compiler extension" it's a bad feature. Period.

I'm not a fan of whitespace delimiting or type inference because while they seem like they save you time and effort, I've found that in the long run you end up spending more time debugging your indentations here or type declarations there than you would have spent just explicitly declaring them in the first place.


I'm not sure how much Python you've written, but I personally have never had to debug problems with whitespace. (I've had to debug plenty of problems in other languages where the whitespace implied one thing but the curly braces said another.)


If the solution for a feature intended to save time and reduce hassle ends up being "write and maintain your own compiler extension" it's a bad feature. Period.

Note that your contextual value of "save time and reduce hassle" is mostly programmer dependent.


Practically speaking I'd prefer to work with a language and toolchain that doesn't require me to do that kind of stuff (write my own cross compiler from a derivative of the language that has all my own preferences for syntax). I'd constantly worry about maintaining that piece and having that cut into my quality & productivity. I think applied language theory as it pertains to productivity is fair game for a good discussion.

That said, whoever downvoted you for that reaffirms my belief that HN should have limited metamoderation with some kind of exponential backoff of mod privileges for egregiously poor moderation.


"Clueless hipsters" will attract downvotes.


I just wouldn't bother to downvote for such a minor, subjective tone issue. Anyway, sorry, I'll end it here. Don't want to detract from the Swift discussion.

It makes me sad that companies are coming up with their own language stacks. Obviously Google vis a vis Oracle makes it a good move, if a company has the scale to accomplish and maintain it. Not to be a downer on what seems like a nice language, but other than that reason I see no need for Swift.


I didn't use that term until after the downvotes.


The post in this thread uses the term "clueless hipsters" in its first sentence. That phrase will attract downvotes. If you care about downvotes stop using needlessly polarising terminology. If you don't care about downvotes, well, do whatever.


I care more about calling out the cluelessness than the downvotes. If it gets attention and brings some bits of CompSci to the attention of people who wouldn't otherwise know, then all the better.


Ruby has neither semicolons nor whitespace syntax


Ruby has both semicolon and newline statement termination.

    > x = 1; y = 2; z = x + y
     => 3


Actually, the rules are the same as Swift's: a newline is the end of a statement; multiple statements on the same line require a semicolon.
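
A quick sketch of the Swift side:

    let x = 1; let y = 2    // a semicolon only separates statements on one line
    let z = x + y           // a newline ends the statement, so no semicolon is needed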


Oh. I was rather hoping it would be!

But still, there's not a lot to hate. It looks like a cleaner javascript at first glance. Not quite as pretty as Python but then, what is. :)


Common Lisp.



You're ok. It's {}


Me (running arch linux): Oooh! I'd love to learn a new language. And great, they have a free ebook it looks like to describe it!

Apple: To download Apple Inc.’s 'The Swift Programming Language', you need to have iTunes.

Me: What?

Apple: Using a 64-bit edition of Windows? On a Mac?

Me: No.

That experience just killed a potential programmer for you right there, Apple.

I had a few hours to kill, and was pumped to jump on the next apple cash cow and help us both, but you literally killed my ability to download the manual or learn anything more about it for a few days, by which time I'll probably be onto something else.

They have this: https://developer.apple.com/library/prerelease/ios/reference... But it doesn't look nearly as in depth as the ebook would be.



If you're an Arch user, why would you learn a language that won't ever compile on anything other than OSX?


Because money.



That's a 474 page ebook that I can only load in a web browser... Not exactly easy reading.

But thank you!


Our mentors & curriculum developers went bananas when they heard about Swift – so we announced the first course teaching Swift: https://www.thinkful.com/a/dlp/learn-blue/base/IOS-002



