Hacker News
Introducing Swift (developer.apple.com)
1216 points by falava on June 2, 2014 | 712 comments



As someone who always disliked Objective C, I think Swift looks very promising. I'll check it out right away :)

Software-wise, I feel these current WWDC announcements are the most exciting in years.

Looking at the Swift docs right now, I can see many interesting inspirations at work: there's some Lua/Go in there (multiple return values), some Ruby (closure passed as the last argument to a function can appear immediately after the parentheses), closure expressions, strong Unicode character support, a very very neat alternative to nullable types with "Optionals". Operators are functions, too.
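
A rough, invented sketch of a few of those (tuple return, optionals, trailing closure with shorthand arguments), spelled in present-day Swift syntax, which differs in small details from the 2014 beta:

  // A function that returns a tuple, wrapped in an optional because an
  // empty array has no answer.
  func bounds(of values: [Int]) -> (min: Int, max: Int)? {
      if values.isEmpty { return nil }
      var lo = values[0], hi = values[0]
      for v in values {
          lo = min(lo, v)
          hi = max(hi, v)
      }
      return (lo, hi)
  }

  // Unwrapping the optional, plus a trailing closure with shorthand arguments.
  if let b = bounds(of: [3, 1, 4, 1, 5]) {
      print("min \(b.min), max \(b.max)")
  }
  let descending = [3, 1, 4, 1, 5].sorted { $0 > $1 }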

It has the concept of explicitly capturing variables from the surrounding context inside closures, like PHP does, instead of keeping the entire context alive forever like Ruby or JS.

Hell there is even some shell scripting thinking in there with shorthand arguments that can be used as anonymous parameters in closures, like "sort(names, { $0 > $1 } )".

Inside objects, properties can be initialized lazily the first time they're accessed, or even updated entirely dynamically. Objects can swap themselves out for new versions of themselves under the caller's nose by using the mutating keyword.
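
A small made-up example of both ideas (current Swift spelling):

  struct Counter {
      var count = 0

      // Not created until the first time it's read.
      lazy var history: [Int] = []

      // A mutating method on a value type may even replace `self` outright.
      mutating func reset() {
          self = Counter()
      }
  }

  var c = Counter()
  c.count = 3
  c.reset()   // c is now a fresh Counter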

There is the expected heavy-weight class/inheritance scheme which accommodates a lot of delegation, init options, bindings, and indirection (as is expected for a language that must among other things support Apple's convoluted UI API). But at least it's syntactically easier on the eyes now.

Automatic Reference Counting is still alive, too - however, it's mostly under the hood now. Accordingly, there is a lot of stuff that deals with the finer points of weak and strong binding/counting.
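
In practice that shows up as `weak`/`unowned` annotations and closure capture lists; a hedged, invented illustration:

  class Person {
      let name: String
      init(name: String) { self.name = name }

      // weak: doesn't keep the other object alive, so no reference cycle.
      weak var roommate: Person?
  }

  class Controller {
      var onTap: (() -> Void)?
      func refresh() {}

      func configure() {
          // The capture list keeps `self` weak inside the closure.
          onTap = { [weak self] in
              self?.refresh()
          }
      }
  }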

Swift has a notion of protocols which as far as I can tell are interfaces or contracts that classes can promise to implement.
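
A minimal invented example of that contract:

  protocol Describable {
      var summary: String { get }
  }

  struct Point: Describable {
      var x = 0.0, y = 0.0
      var summary: String { return "(\(x), \(y))" }
  }

  func describe(_ item: Describable) {
      print(item.summary)
  }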

I think generally there are a few great patterns for method and object chaining, function and object composition in here.

The language has C#-style generics, and supports interesting type constraint expressions.
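
For instance, a constraint that the type parameter conform to a protocol (invented example):

  // T can be any type, as long as it conforms to Comparable.
  func largest<T: Comparable>(_ items: [T]) -> T? {
      var best: T? = nil
      for item in items {
          if best == nil || item > best! {
              best = item
          }
      }
      return best
  }

  largest([3, 1, 4])        // Optional(4)
  largest(["a", "c", "b"])  // Optional("c")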


I really don't see the Golang influence at all. The multiple-return-value semantic is closer to Ruby's than to Golang's; you're returning a tuple, which happens to have natural syntax in the language.

Defining Golang features that don't exist in Swift:

- Interface types with implicit adoption (Swift takes explicit protocols from ObjC)

- Error types

- Relatedly, the "damnable use requirement" and its interaction with error types and multiple value returns (ie, the reason Golang programs in practice check errors more carefully than C programs).

- Slice types

- Type switching (though, like Golang, it does have type assertions)

- "defer"

- Of course, CSP and the "select" statement.

Swift features that don't exist in Golang:

- Generics

- Optionals

- A conventional, full-featured class-model

Of the languages you could compare Swift to, Golang seems like one of the biggest reaches. Even the syntax is different.

(I like what I've read about Swift and expect to be building things in both Golang and Swift, and often at the same time).


This comment surprises me. It's factual but it's not really a response to an actual claim I made. Did you really perceive that I alleged an extreme similarity to Go in my comment? If so, it certainly wasn't intentional. I just said certain features reminded me of different languages, I didn't mean to assert these languages are actually incorporated into Swift.


No, no, I don't object to your comment. You're just not the only person I've seen making the comparison to Golang, and so I had a big comment bottled up. :)


Ah OK, I understand :)


My hat is off to you gentlemen. Such civility. Good day to you Sirs. Good day.


Including me, I think some of Swift's syntax looks like Go, while they actually don't share the same vision. Go tries to be a language great for systems programming, so it introduces channels and interfaces. But Swift wants to help GUI programming, and it needs the whole class system but without crazy stuff like channels.

But for other aspects not related to these two (class system, concurrency), I would say they look quite similar.


For Go to be great for systems programming, the unsafe package needs a few more primitives and better control over the GC.


I don't know what this means. I've used Golang successfully for USB drivers, emulators, web and database servers, testing tools, a debugger, and cryptography. It seems evidently suited for systems programming as it stands.

Someone is always going to be out there saying that any language isn't ready for prime time because it lacks feature-X.


Multiple return values are, of course, much older than Ruby or Go(lang). The first language I used with them was Zetalisp (Lisp Machine Lisp) in 1979, though at about the same time, with the release of v7 Unix, C gained the ability to pass or return structs by value. Xerox PARC's Mesa appears to have had them a few years earlier. I don't know of any earlier examples.

I'm surprised not to hear Python mentioned, as it also has a tuple syntax.


It does share Go's `func` keyword, parens-free conditionals, and optional semicolons.
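
For the record, that overlap looks like this on the Swift side (invented snippet):

  func classify(n: Int) -> String {
      if n > 0 {                  // no parentheses required around the condition
          return "positive"
      }
      return "non-positive"       // and no semicolons anywhere
  }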


That's really not a lot. The optional semicolons could also be influenced by BCPL or JavaScript.


Yeah, it's only the entire basic syntax of the language they copied.

Yes, Swift's semantics are different (since it's essentially a domain-specific language designed to make writing Cocoa apps faster), but syntax-wise a Go programmer feels right at home reading Swift.


>Yeah, it's only the entire basic syntax of the language they copied.

Because the keyword for functions, parens-free conditionals, and optional semicolons are "the entire basic syntax" of Go, right?

Those are some of the most inconsequential details of Go syntax (all three of them), and of course all existed ages before Go.

Python, for one, has no semicolons and parens-free conditionals.


If you mean by "entire" the spelling of one keyword and the omitting of parentheses. Everything else seems to be more related to C and various Action/ECMAScript like scripting languages.


> - Interface types with implicit adoption (Swift takes explicit protocols from ObjC)

Objective-C has informal protocols, and so does Swift.


Informal protocols and implicit interfaces are not the same thing. In particular, implicit interfaces are type checked statically at compile time, while informal protocols are checked dynamically at runtime.


I'm assuming OP means structural typing, which Objective-C does not support.


Not just generics, but pretty fleshed out generics, with type variables constrained by class or protocol.


(I like what I've read about Swift and expect to be building things in both Golang and Swift, and often at the same time).

How about a blog series where a developer implements something in Golang and/or Swift, then you explain how it's insecure? Then the developer tries to fix it and you explain something else that's insecure. Rinse, repeat.


I thought that was called "Hacker News comments."


Perhaps if you take language features directly, it's not a good comparison with Go.

There are some things that did strike me as similar. The approach Go takes is to bring C language to a more modern world (i.e. C without some of the language burdens that we know so well). Swift is attempting to do the same. The way it does type inference is nice.

var x = "Hi" reminds me of Go's const types. The ARC usage reminds me of Go's garbage collection (even though it's not the same thing). Basically, the parts that it omits from C are similar to the parts that Go takes out of C even though the language itself is different... thankfully.


> The approach Go takes is to bring C language to a more modern world

Like all the other thousands of languages with C based syntax.

> var x = "Hi" reminds me of Go's const types

Why does it remind you of Go and not of all the other languages that use 'var x = "Hi"' like JavaScript, ActionScript, C#, Scala, Kotlin?

> The ARC usage reminds me of Go's garbage collection

Why does it remind you of Go and not of all the other languages with garbage collection?


It reminds me of Go in what it omits from C. There are similarities. Go feels like a mix between Python and C.

I haven't gotten into Swift in a deep enough way, but it looks like it tried to tackle the same problems, with the exception of concurrency. There are differences, such as classes and generics in Swift. There are also similarities, such as functions as first-class citizens (or so it appears from the closures section of the free book).

All in all, it reminds me of Go just a bit. It doesn't remind me of all of those other languages that I do not know.


> Why does it remind you of Go and not of all the other languages with garbage collection?

You sound old. ;)


I think the style of defining functions is similar to Go. Not a carbon copy, but it "feels" Go-ish.


From a user's point of view, it's basically straight out of the Rust book: all the gravy, but with relaxed ownership and syntax.

It has it all [1]: static typing, type inference, explicit mutability, closures, pattern matching, optionals (with their own syntax! also "any"), generics, interfaces, weak ownership, tuples, plus other nifty things like shorthand syntax, final and explicit override...

It screams "modern!", has all the latest circlejerk features. It even comes with a light-table/bret-victor style playground. But is still a practical language which looks approachable and straightforward.

Edit: [1]: well, almost. I don't think I've caught anything about generators, first-class concurrency and parallelism, or tail-call optimization, among others.


I don't really see anything but a superficial resemblance to rust, where both are borrowing ideas from the same place. Where Rust really differs from modern algol-family-languages-learning-from-30-year-old-functional-programming-research is in its strictness.

The optional type stuff is good, and it will definitely be a net safety improvement, but it by no means attempts to be a panacea for safety the way Rust's strict static analysis does.

In particular, Swift gives you really simple outs in the form of the '!' unwrap and 'as' (should be 'as!', at least IMO) downcast-and-unwrap operators, which result in run-time errors and will probably be seen as unremovable code smell in a couple of years.
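
For reference, both outs look roughly like this (later Swift did in fact rename the forced downcast to `as!`, which is the spelling used here):

  let maybeName: String? = nil
  let n = maybeName!              // compiles, but traps at run time because the value is nil

  let anything: Any = 42
  let s = anything as! String     // compiles, but traps at run time: 42 is not a String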


Indeed, I'm not sure what Swift's concurrency story is yet. Other than that it's encouragingly similar to Rust (we're evolving in the right direction!), but not quite as low-level.


I'm sure that the main mechanism will be the existing dispatch queue libraries Apple's other languages use.


Most likely it will use Grand Central Dispatch, which is already part of iOS and Mac OS X.
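
The existing GCD queues are directly callable from Swift; a tiny sketch (modern `DispatchQueue` spelling -- the 2014 release exposed the C-style `dispatch_async` functions instead):

  import Dispatch

  let queue = DispatchQueue(label: "com.example.work")   // label is invented
  queue.async {
      print("runs off the calling thread")
  }
  queue.sync {
      print("runs after the async block, in step with the caller")
  }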


The similarity to Rust should scare the hell out of Rust's creators and proponents.

Swift could very well render Rust almost totally irrelevant within the OS X and iOS sphere of software development. If we end up eventually seeing Swift implemented for other platforms, then the chances of Rust's long-term success diminish even more.

Things might have been different had a stable, even if somewhat imperfect, initial version of Rust been released by now, thus allowing it to gain some adoption and traction.

I hope that the "But Rust isn't targeting those developers!" argument isn't used to try to justify this mistake, as well. Rust is already facing stiff competition from languages like Go, C++11, C++14, Scala and even Java 8.

With the announcement of Swift, Rust's niche and audience are getting smaller, further preventing the widespread adoption that's necessary for a programming language to become truly successful.


Oh hello again, Pacabel. I'm familiar with your game by now. :)

We're not scared in the slightest. I'll reconsider when Swift has inline ASM, allocators, linear types, move semantics by default, region analysis, and a type system that guarantees freedom from data races (oh, and when the code is open-sourced, and targets both Linux and Windows as a first-class citizen).

Swift isn't intended to be a systems language: it's an application language. Anyone who's capable of choosing Swift as of today was just as capable of choosing Objective-C yesterday. And as the Rust developers themselves have discovered over the past two years, deciding to become a systems language doesn't happen overnight.

(In fact, on a personal note, I'm ecstatic that Swift has been announced. ADTs! Optional types! Pattern matching! Think of how many features are no longer alien to people who want to learn Rust! And hell, the syntax is incredibly similar as well, which further reduces friction. As a hardcore Rust contributor, I want to shake the hand of each and every one of Swift's designers.)


Swift isn't intended to be a systems language

FWIW, Swift is categorized as a systems language in the opening pages. But, then, so is Go in its FAQ. To Swift's credit, at least it has deterministic memory management through ARC.


While having some support for garbage collection is good, reference counting is a rather expensive way to implement it for applications. This becomes especially bad on multicores, since it may dramatically increase the number of writes to shared object cache lines.


Wouldn't a lot of that be mitigated, though, by using tagged pointers in place of RC structs where possible? Seems like an obvious optimization.


Not really sure what the tag in the pointer would be used for. Could you give an example?

In general, reference counting has the problem that it needs to update the reference count. If you have a read-only data-structure these updates to the references will introduce writes that may severely impact performance since a write introduces cache consistency communication, while reads are communication-free.


Yeah, you're right. I didn't think it through when I asked. I conflated this scenario with the technique they use to put small objects like NSNumber on the stack.


> Swift isn't intended to be a systems language: it's an application language.

It may not be ready as a systems language in its pre-1.0 form, but the Swift book claims that it's "designed to scale gracefully from ‘Hello World’ to an entire operating system", so Apple appears to have big goals.


I'm thinking about starting the "Rust contributor points out how Rust is a systems language and $language is an applications language" drinking game. At least now they'll focus on Swift instead of Go. I don't mean this to be rude; I've just noticed a similar set of usernames in threads about certain !Rust languages playing the underdog position and always feeling like they need to compare.

Given Rust's PR, speaking of that -- not a thread about Go passes without at least three pcwalton comments these days -- I actually broke down and gave it a try. I wrote a little Hello World server and then got lambasted by a friend of mine for not writing it functionally, since, in his words, "Rust is a functional language and the fact that it supports other paradigms is a mistake." I rm -rf'd and am ignoring it for now, but I look forward to it stabilizing and maybe coming back to it.

Rust has potential but the PR needs to ease up just a little. There is room for more than one language in the world.


> not a thread about Go passes without at least three pcwalton comments these days

Well, every thread about Go inevitably has numerous comments comparing it to Rust, often erroneously, and pcwalton is one of the primary Rust developers.


A fair point.


Rust is not a functional programming language. Your friend is just wrong for taking you to task about that.


Sign me up for a "someone gets indignant about rust developers responding to other people's comments about rust" drinking game.

(small-time rust contributor here too)


I think you should look up what "indignant" means, then, for the benefit of all of us, demonstrate the anger in my comment that was not put there unconsciously by the reader.


I saw a lot of people mention ADT in relation to Swift but I haven't found examples in the documentation book I downloaded from Apple. Would you be kind enough to provide the example you saw? EDIT: My bad, page 40 in the section about protocols (unless I'm missing something).


It's on the bottom half of the page about enumerations. Typically languages have product types, but lack true sum types. Swift's enums provide such leverage.

That said, Swift's types are a bit less than recursive, so there's a bit of niggling still before you get to the affordances of something like Haskell's `data`.
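
For anyone who wants a concrete example, this is roughly what the enum chapter describes (invented types, later Swift spelling; later versions also added `indirect` to address the recursion point above):

  enum Shape {
      case circle(Double)                 // associated value: radius
      case rectangle(Double, Double)      // associated values: width, height
  }

  // Pattern matching binds the associated values out of each case.
  func area(_ shape: Shape) -> Double {
      switch shape {
      case .circle(let r):
          return Double.pi * r * r
      case .rectangle(let w, let h):
          return w * h
      }
  }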


Thank you.


It's interesting that inline assembly is your first bullet point, since there's nothing I can think of that ruins a code file more than inline assembly in a host language. Put that crap in a .S file and link it in like everything else, for crying out loud. The one time you need inline assembly is when you don't want to build a function frame, such as a tight loop, but come on.

Also, even in systems, I can think of about once a decade I even need to write assembly, so... maybe grasping at straws a bit?


I wouldn't lead with inline assembler as the selling point of Rust. The main selling point of Rust is memory safety without garbage collection; it still is the only industry language that allows this (as reference counting is a form of GC).

That said, I think inline assembler is an important feature:

> It's interesting that inline assembly is your first bullet point, since there's nothing I can think of that ruins a code file more than inline assembly in a host language. Put that crap in a .S file and link it in like everything else, for crying out loud.

That's too slow. You need to give the optimizer more information than that, and the overhead of the procedure call can be significant.

> The one time you need inline assembly is when you don't want to build a function frame, such as a tight loop, but come on.

But that's a very important use case.

> Also, even in systems, I can think of about once a decade I even need to write assembly, so... maybe grasping at straws a bit?

It's all over the place in the Linux kernel.


> But that's a very important use case.

For Rust. I didn't make the comparison to Rust, I merely was intrigued by the choice of features and in which order to defend the comparison made by someone else. I see Rust and Swift as targeting entirely different things, at least at first (which means that Swift can certainly evolve inline assembly if it is so needed), and any comparison at this stage is pointless.

> It's all over the place in the Linux kernel.

Cool, that's one piece of software. I'll save you the next few: drivers and a couple files in a game engine. You're disputing my point how, exactly?


I'm confused that you seem to think that my bullet points are ordered from most to least important. It seems like quite a strange thing to attack! I'm sorry to disappoint you, but I do not run these comments by a team of editors first. :)

That said, judging by your other comments, you seem to be of the impression that the Rust team has some sort of vendetta against Go, and have concocted a vendetta in kind. Again, I must sadly disappoint you, but I strive to encourage a culture of respect in all the forums that I visit (and moderate).


If you find my comments disrespectful and think I'm implying you have a vendetta, you skimmed them and are disrespecting me by putting words in my mouth. I've simply noticed a trend of most commentary from Rust contributors taking the underdog position and participating in picking apart other (ostensibly) competing languages, and I think you guys should stop doing that.

The last sentence of the comment you're hinting at is my thesis. There is no subtext. It's easy to perceive negative opinions as attacks and create an adversary from the person writing the opinion, but it's also a little bit disingenuous. It also, conveniently, creates a platform upon which nobody can disagree with you lest they be hostile and aggressive. You can then exit on the "high ground," as you've done here.

I meant no ill will. Best of luck, I guess.


> The one time you need inline assembly is when you don't want to build a function frame, such as a tight loop, but come on.

This is hilarious. Like anyone would ever use inline assembly in a tight loop.


My apologies, I didn't realize that expecting a programming language to have a stable syntax, stable semantics, a stable standard library and at least one stable and robust implementation before using it seriously in industry was merely a "game".

Perhaps this is news to you, but those of us who work on and are responsible for large-scale software systems tend to take such factors very seriously.

This may sound harsh, but it really doesn't matter what features and benefits Rust could potentially bring to the table if the lack of stability makes it unusable in practice today. A programming language that can't be seriously used might as well not even exist.

I don't doubt that Apple will have Swift available in a seriously usable form by this fall, and it's very likely that it will see rapid adoption soon after. I'm afraid I can't say the same about Rust and its supposed by-the-end-of-2014 1.0 release, given its current lack of stability and the rapidly-approaching end of the year.


You seem to desire both stability and a faster 1.0 release. The realistic choices are:

    1. Release fast and iterate
    2. Release fast and be stuck with mistakes
    3. Release slow
Option #1 breaks stability, so that's out.

Swift appears to be taking option #2 (Apple doesn't commonly break APIs, do they?), but we can't even really be sure because it hasn't been developed in the open the way that Rust has. It's possible that it's been in development as long as Rust, and we simply haven't heard about it yet. Either way, option #2 is a perfectly reasonable one to go with; it has served Java quite well (for a loose definition of fast), though it has required some creative approaches to language improvements.

Rust is taking option #3. C has been around for over 40 years now. If Rust hopes to supplant it, it seems reasonable to take a few extra months (or even an extra year) to put out a solid version 1 that won't hamstring the language or force a breaking change down the line.


Apple just announced in the Platform State of the Union that they won't guarantee source compatibility until Swift is released along with iOS 8 (changes to the language will require source conversions), so I believe they're taking a route closer to option #1.


No, I'd say Apple's approach was: methodically develop until very polished first, then announce after. You just didn't get to see the 0.1, 0.2, 0.3... versions.

This is actually nice, because knowing about languages years before they are production ready probably just slows developer adoption, because nobody is quite sure when they should trust their development process to a new language.


>My apologies, I didn't realize that expecting a programming language to have a stable syntax, stable semantics, a stable standard library and at least one stable and robust implementation before using it seriously in industry was merely a "game"

You also didn't realize that you just built the biggest strawman ever in the above sentence.

Enough with the "I want a stable Rust now". Rust, like any other language, takes years to stabilize. You just happen to see it happen in the open, whereas with most other languages you first get them at their 1.0 release.

>This may sound harsh, but it really doesn't matter what features and benefits Rust could potentially bring to the table if the lack of stability makes it unusable in practice today. A programming language that can't be seriously used might as well not even exist.

They could not give a flying duck about it being "seriously used today".

They'll start to care AFTER they release it as 1.0. They only released these 0.x versions to solicit ideas and improvements, not to get programmers to adopt it.


Well, we aren't actually seeing stabilization when it comes to Rust.

Assuming this stabilization actually does happen, whether it happens in public or private is irrelevant.

What matters is that we've seen C++ stabilize. We've seen Go stabilize. We've seen Scala stabilize. And now we'll likely see Swift stabilize, well before Rust does. They are all serious competitors to Rust.

As these other languages continue to evolve, but at the same time remaining usable, the relevance of Rust will continually decrease. It may still have drawing power today. A few years from now, it will have less appeal.


This is silly. Rust is on a similar time frame to Go in terms of stabilization (~2.5 years after release).

It's hard to make a comparison against Swift, which is a proprietary language developed for years behind closed doors. Presumably they're at 1.0 from the first day by design. You can't do that with open source languages.


* Scala has been around since 2004 (public release) -- 10 years ago. (Not sure when the first stable release was, but more than 3 years.)

* Go in 2009 (public) -- 6 years ago. 1.0 (first stable release) was released in 2012, so it took 3 years to stabilize.

* C++ in 1983 -- 31 years ago. ... It's been a long time.

* Clojure in 2007 -- 7 years ago. The creator took 2 1/2 years before releasing it to the public.

* Rust in 2012 -- 2 years ago.

It's pretty absurd to expect Rust to be stable right from the get go. The difference in all this is that most of those languages were closed before being released. Rust was open at a pretty early state.


You're probably right, but I think I first heard of Rust publicly in 2010, and that it was started by Graydon in 2007.


Rust had been stewing around in Graydon's head for years, but he was never paid to work on it until 2009 (part-time, at that point). And he didn't have any paid employees to help him until 2010 (that would be pcwalton). And as far as I'm concerned, Rust development didn't actually start in earnest until mid-to-late 2011, when the compiler began bootstrapping. And the Rust that we know and love today didn't begin to take shape until 2012, when borrowed references started taking shape.

Personally, I consider the 0.1 release in January 2012 to mark Rust's "birthday". Everything prior to that was just gestation. :)


I can't tell if you like Rust or hate it. If you hate it, and you are right, then it will simply fade away and your comments will serve nothing more than being able to say, "I told you so." If you are wrong, then you end up looking a bit silly.

If you like it, perhaps you should be a little patient and give the creators the benefit of the doubt. No one wants a Rust 3.0 fiasco.

It's hard to encounter language issues without implementing a large project in the language. I am happy that they're taking the time to let the implementation of Servo help inform the design of Rust.


Rust's raison d'être is memory safety without garbage collection. Swift requires garbage collection to achieve memory safety. (Reference counting is a form of garbage collection.)

In other words, Rust is about safety with zero overhead over C++, and Swift is not zero-overhead. So the people who need Rust are not going to use Swift for Rust's domains. That's fine, as Apple wanted a language for iOS and Mac app development with tight integration with Objective-C, and from what I've seen they've done a great job of developing one.

> Things might have been different had a stable, even if somewhat imperfect, initial version of Rust had been released by now, thus allowing it to gain some adoption and traction.

Why are you so insistent that we freeze an unsafe version of a language that's designed for safety?


> Why are you so insistent that we freeze an unsafe version of a language that's designed for safety?

Pacabel has made a career of complaining about Rust being unstable.


At least he/she hasn't derailed the thread by complaining about every other project that Mozilla is working on. I count that as progress!


Are you honestly suggesting that Rust is stable at this point?

I think that the recent, and very disruptive, ~ and box changes should completely dispel that notion.

I'm merely pointing out the reality of the current situation, which some in the Rust community do not wish to acknowledge, for whatever reason. The situation has yet to change, so what I'm saying is still valid, and will remain so until some actual improvement does take place.

Now that we see yet another serious competitor in the form of Swift, what I've had to unfortunately be saying for some time now becomes more and more relevant. If Rust is to become relevant, it will need to be usable, and that will need to happen very quickly.


No, I'm not suggesting that Rust is stable. It wasn't even slightly implied by what I said. I was just pointing out that you're a broken record on this topic, to the point of being a troll (you seem to just ignore the meat of any response you get and only focus on the current state of Rust).

To be crystal clear: no-one is suggesting that Rust is stable and no-one is suggesting it is ready for adoption (if they are, they are wrong). However, being unstable now is very very different to not ever being stable.

In any case, Swift is only tangentially a Rust competitor as kibwen demonstrated.


Resort to name-calling if you really must. None of that will change reality.

Rust is not stable, as you yourself have readily admitted. What I've unfortunately had to be pointing out for such a long time now is absolutely correct.

We've been told that we can expect Rust 1.0 by the end of the year. As each month passes, it becomes less and less likely that we will actually see this. We are still seeing significant change, even as recently as the past month.

I think Rust could potentially be very useful. But that requires stability, and that in turn is something that appears more and more elusive each day.

It's easy to say that Swift isn't a competitor to Rust, but the reality is that it is. And unlike Rust, it will very, very likely be usable for serious apps within a few months. It will see the adoption that Rust could have had, had it been usable, further reducing Rust's future chances.


What have you been pointing out for so long? That Rust is unstable? That many people/companies won't use Rust while it is unstable? That there are other languages people can use instead?

All of those are highly uncontroversial and universally acknowledged by experienced Rust users.

Also, I don't understand how you have leapt from Rust being unstable now, to Rust never being stable.

A 1.0 release by the end of the year doesn't seem at all unreasonable to me; I think you are expecting more from it than what the Rust team is looking for (and have stated publicly repeatedly): stabilising the core language.

Of course, a stable release of that form will still mean some libraries may be unstable (and so that Rust would be unsuitable for many corporate developments). These libraries will be stabilised progressively and iteratively.


>Are you honestly suggesting that Rust is stable at this point? I think that the recent, and very disruptive, ~ and box changes should completely dispel that notion.

No, he merely suggests that you bored a lot of people by repeating that it's unstable, instead of accepting the fact and using something else.

If being unstable is that bad, then by all means, go and use a stable language.


I, and many others, do use something else. That's the big problem facing Rust, whether or not its creators wish to admit this fact.

There are numerous alternatives to Rust that offer many of its benefits, but they're usable today. We can rely on them today, tomorrow, and likely for some time to come.

And by this fall, we'll likely have Swift as yet another option to add to our growing list.

I think Rust has a lot of potential. But each month that goes by squanders that potential. It has less and less of a chance of making a real impact the longer it isn't usable, especially while its competitors keep evolving.


>I, and many others, do use something else. That's the big problem facing Rust, whether or not its creators wish to admit this fact.

Yeah, and I listen to Rihanna instead of Jay Farrar. Obviously that's the big problem Jay is facing, and he should sound more like Rihanna to cater to my taste.


> Are you honestly suggesting that Rust is stable at this point?

No, he didn't say that anywhere.


I think you may be aiming for a level of safety, or perhaps a notion of "perfection", that isn't practically obtainable.

The recent, and rather disruptive, box changes are a good example of this. We see change, and those of us with existing Rust code sure do feel the change, but very little convergence seems to be happening.

Based on past trends, I would not be at all surprised if problems are found with the new approach as it becomes more widely used, and some other approach is then attempted.

Wheel-spinning is something that can quite easily happen with ambitious software projects. It's not a new phenomenon. But when facing ever-increasing competition, and other real-world constraints, it's often better to aim slightly lower and at least be mostly usable in practice.

A memory-safe programming language that can't actually be used is pretty much irrelevant. It's better to accept some slight amount of imperfection if that means it can actually be used.


> I think you may be aiming for a level of safety, or perhaps a notion of "perfection", that isn't practically obtainable.

I believe it is, as the basic structure and rules of the borrow check (which is the part of Rust that's truly unique) have proven themselves to be quite usable. The usability problems remaining are implementation and precision (e.g. issue #6393), not big problems that will require large redesigns.

> The recent, and rather disruptive, box changes are a good example of this. We see change, and those of us with existing Rust code sure do feel the change, but very little convergence seems to be happening.

Yes, it is. The number of outstanding backwards incompatible language changes is decreasing. People who do Rust upgrades notice how the language is changing less and less.

> A memory-safe programming language that can't actually be used is pretty much irrelevant. It's better to accept some slight amount of imperfection if that means it can actually be used.

"Some slight amount of imperfection" means not memory-safe. Java didn't settle for that, C# didn't settle for that, Swift isn't settling for that, and we aren't settling for it.


This is an honest question: Why do you seem to care so much? Rust is in my view a great project, that yes isn't quite there yet but is making great progress. I'm looking forward to using it when it is stable, and pcwalton and the other contributors are developers that I've looked up to for a number of years: I have nothing but faith in them.

At the end of the day, if Rust fails, well that will be a shame. But I'm seeing nothing that shows that it might, so I'm truly struggling to understand why you seem so upset by a new modern language trying to tackle big problems in ways that have never been done before. That's a good thing, as far as I'm concerned.


Having been in industry for a long time, I think that something like Rust would be hugely beneficial. It very well could solve some very real problems.

I bring this up again and again because I'd rather not see Rust fail. I'd much rather see a slightly flawed Rust that's actually usable in the short term, rather than a continually changing Rust that nobody will seriously adopt.

Rust has been in development for years now. That's a very long time in the software industry. A few years of development time without a stable release is understandable. But it's getting beyond that now.

Rust isn't quite there yet, but each day it edges closer to a Perl 6 type of disaster. Perl 6 offered some intriguing ideas, but it just isn't usable, and that's a shame. Meanwhile, other competitors have arisen and blown past it, rendering it far less useful even if it were ever properly implemented.

Given the increasingly stiff competition that Rust is facing, I suspect we'll see it end up like Haskell or D. Something usable is eventually produced, but it never sees the truly widespread adoption that it could have seen, had it been usable earlier on. It's not as bad as Perl 6's situation, but it is still unfortunate.


> Given the increasingly stiff competition that Rust is facing, I suspect we'll see it end up like Haskell or D. Something usable is eventually produced, but it never sees the truly widespread adoption that it could have seen, had it been usable earlier on.

I don't have much to say about D, but the history of Haskell implied by this sentence is hilariously wrong.

Go watch Simon Peyton Jones' talk about the history of Haskell: http://research.microsoft.com/en-us/um/people/simonpj/papers.... As well as being wonderfully entertaining, it explains the actual history of Haskell: it was designed to be a language with which various academic groups could do functional programming language research. The fact that Haskell has gradually grown more popular and now has mainstream appeal and some industrial users is quite a surprise to its creators.


> Rust has been in development for years now. That's a very long time in the software industry. A few years of development time without a stable release is understandable. But it's getting beyond that now.

Not for programming languages. These take years and years. Take a stab at any of the most popular languages. They weren't created 1-3 years ago. It takes time, and that's a good thing.


reference counting != garbage collection. Garbage collection takes CPU and lots of memory. Reference counting just increments the reference count on allocation and decrements it on free. Essentially no overhead.


C++ moving to a 3 year standard cycle is a much bigger 'threat' to rust. But really, the fact that there's so much actual investment in improving mainstream languages from various well-funded sources is probably a rising-tide-lifts-all-boats kind of thing.


Yes, I do agree that the situation is improving across the board.

But as an industry, we need practical solutions that are available now, even if somewhat flawed. We need languages we can use today, and know that the code we write today will still compile fine next week and next year, if not a decade or more from now.

Modern C++ is getting pretty good at offering this, while offering a far greater degree of safety. Go isn't bad, either. Scala has its drawbacks, but it's often a reasonable option, too. The key thing to remember is that all of these languages have offered developers a stable target, and they are seriously usable in the present.

Given the announcement of Swift, and given that Apple will very likely deliver on it by the fall, we very well could see it becoming a major player during 2015.

The safety benefits that Rust could theoretically or potentially offer are virtually useless to huge swaths of the industry as long as the predictability of a stable release just isn't there. The longer this wait goes on, the better the competition becomes, and the less relevant Rust will unfortunately become in the long term.


> But as an industry, we need practical solutions that are available now, even if somewhat flawed. We need languages we can use today, and know that the code we write today will still compile fine next week and next year, if not a decade or more from now.

By this logic we shouldn't invent any new programming languages at all. There's no such thing as a "practical solution that's available now"; everything takes time to develop.

> Modern C++ is getting pretty good at offering this, while offering far a greater degree of safety. Go isn't bad, either. Scala has its drawbacks, but it's often a reasonable option, too. The key thing to remember is that all of these languages have offered developers a stable target, and they are seriously usable in the present.

You aren't going to use those languages if you want memory safety without garbage collection. Because they can't offer zero-overhead memory safety without breaking existing code.


Swift's environment is also very similar to Elm's time travel debugger: http://debug.elm-lang.org/

Direct link to Elm's demo similar to Bret Victor's: http://debug.elm-lang.org/edit/Mario.elm (video: https://www.youtube.com/watch?v=RUeLd7T7Xi4)


I immediately thought about that as well. I wonder how they pull it off? Swift is not a functional language, so do they just save every single variable, or what?


Time travel debugging has existed for a long time, and it's not limited to functional languages; the most obvious way they could do this is through checkpointing.


Checkpointing is a natural fit with the Cocoa API, which uses an event loop. Just save the state of the application after each event is handled.


Just a hunch: LLVM uses static single assignment, which is just that, saving every single variable change.


He mentioned the desire to drop the "C" from Objective-C, but I'm curious what this means for using C/C++ libraries now. Do they need to be wrapped by Objective-C before being visible in Swift?


Swift uses a special "bridging header" to expose Objective-C code to Swift. This header will presumably be processed by the Swift compiler.

In the other direction, Xcode will automatically generate an Objective-C header to expose your Swift code to Objective-C.


The ibooks guide references a "Using Swift with Cocoa and Objective-C" document. Is that where you got your information?



It seemed like it could interoperate with C just fine, based on the slide talking about all three. Also, because it uses the Objective-C runtime and compiles to native, it might just see C functions as normal functions. Though the little I've looked at of the free book hasn't given me any hints about that yet.


"You cannot import C++ code directly into Swift. Instead, create an Objective-C or C wrapper for C++ code."


I suspect that as long as it compiles to LLVM, anything goes.


I hope so. Another stab in C's back.


Is it realistic to try to dive right in to the 500-page book they provided without a computer science background, just HTML/CSS/PHP self-taught experience, to learn the language? Or should I take other steps first?


Reading a book cover-to-cover is, for me, a bad way to learn a language. I usually pick things up very quickly when I use them to implement something I'm already familiar with, ideally something with annoying issues stemming from the language it's currently implemented in. If you don't have one of those, think about something you hate, and fix it with this.

If the book is good documentation, then use it. But you may benefit from focusing more on problems than completing a book.


Just read the first paragraph, and the conclusions if they have them, of each chapter. This will give you a good idea of what's there when you need it. Then I'd jump straight into tutorials.

Honestly, skimming 500 pages doesn't sound horribly hard to me. I've done that a few times to pick up something new. As ap said, you won't learn the language like that, but you will have a good reference to go and learn from after the fact.

After that you could probably work through the examples in said book, or at least the interesting ones.

P.S. The above steps are all I really learnt from my CS degree.


Skimming through it has been great. It's quite well-written and you'll get a lot of the concepts that the lang introduces even if the extent of your programming education is JS. Give it a try :)


And is this the exact same language - http://www.cs.cornell.edu/jif/swift/doc/index.html


No, it is completely unrelated. That is the other language called Swift.


The copy I have is only 366 pages, but it's 'converted' from the ebook so I'm not sure if that's a factor. A lot of the pages are dedicated to an examination of the grammar that's probably not relevant for a language overview and the rest is really readable and easily skimmed for interesting details. It's broken up with simple and clear examples every few paragraphs as well.

Definitely take a look through it. You definitely don't need to be a language nerd to understand it.


I'm 20% in, and you certainly should give it a try. It's very well-written, explains basic concepts really well, and has a lot of examples. It also has a good flow from the basic features to more advanced ones.


Great, thanks - will give it a shot!


I think it's a good idea only if you execute code in parallel, following all the examples. Otherwise there are a lot of notions that already exist in Objective-C that are glossed over and would be pitfalls for people new to the runtime.


I think everybody can see their own favorite language in it... and that's a good thing.

For me it looks like Scala + a sprinkle of C++14 :)


There are others too with "looks like Scala":

Jacob Leverich https://leverich.github.io/swiftislikescala/

Den Shabalin http://www.scribd.com/doc/227879724/Swift-vs-Scala-2-11


Indeed, I also think it's most similar to Scala.


> It has the concept of explicitly capturing variables from the surrounding context inside closures, like PHP does, instead of keeping the entire context alive forever like Ruby or JS.

Just as a point of fact, javascript -- at least the v8 implementation I'm most knowledgeable of -- doesn't "keep the entire context alive forever." Only variables used in the closure are allocated into the closure (i.e. on the heap), the others are allocated on the stack and disappear as soon as the function returns.

I don't use iTunes so can't read their book, but I wanted to ask: you say that ARC is still their GC strategy, correct? So reference cycles are still an issue? I'm surprised at this. I can see reference counting being a strategy for moving a non-GC'd language to GC (like Objective-C), but to start a new language with that constraint is surprising.


> Only variables used in the closure are allocated into the closure

I'm not sure that's true. Look at the following code:

  var x = 123;

  var f = function(xname) {
    eval('console.log('+xname+');');
  }

  f('x');
It's a dynamically named variable. Clearly, f() has access to the entire context that surrounds it, not just the objects explicitly used in the function code. In this example, the compiler could not possibly have known I was going to access x.

This means in Javascript, as well as in Ruby, when you hand a closure to my code, I can access the entire environment of that closure.

Contrast that with Lua, for example, where the compiler does indeed check whether an outer variable is being used and then it imports that variable from the context only.

PHP does it most explicitly, forcing the developer to declare what outer objects they want to have available within the function.


> In this example, the compiler could not possibly have known I was going to access x.

Right, but it knew you were going to use eval, and to support that, it had to allocate all local variables in the closure. That's why you saw this behavior. The same would happen if you used a 'with' construct.


> Right, but it knew you were going to use eval, and to support that, it had to allocate all local variables in the closure.

Wow, so there is actually special handling in the engine for this? So it does static analysis whenever it can, but not in these two cases?


Yes, the V8 compiler bails out of several optimizations if your function uses eval. You can see this in the profiler: functions which V8 wasn't able to optimize will have an alert sign next to them, and if you click it, it'll tell you what the issue was.


This is a bit troublesome!

  function f() {var x = 99; return function(a,b) {return a(b)};}
  f()(eval, 'console.log(x);')
  ReferenceError: x is not defined
  function f() {var x = 99; return function(a,b) {return eval(b)};}
  f()(eval, 'console.log(x);')
  99
  undefined


Yep, this is as per spec: http://www.ecma-international.org/ecma-262/5.1/#sec-10.4.2 . "Indirect" calls to eval (i.e. assigning eval to another variable, like you did by passing it as a param) are evaluated in terms of the global environment. "Direct" calls, like in your second example, use the local environment.


Very cool, thanks for clearing that up!


Yes, and in articles describing it (V8), they explicitly warn you not to use "eval" or "with" because of the performance impact.


In your example, x is still in scope when f('x') is called. It doesn't require a closure to work.


I think you misunderstand what I'm trying to say. The point is not that x should be out of scope (why would it be?)

The original assertion by curveship was that the outer context is not kept alive for the function, and that f() only gets access to the variables it explicitly imports from the outer context. And I thought this might be wrong, so I cooked up the example.

Again, this is not about scope. This is about the fact that the function itself keeps a reference to the entire context it was created in, as opposed to just the things it explicitly imports.

In this, it appears, Javascript works exactly as Ruby, which again makes the entire outer context available through the binding facility.

I'm sorry if that wasn't clear from my description.


Swift reuses the Objective-C runtime, so it had to be compatible in terms of memory management.


OK, thanks. I guess that makes sense. Now that the website is working, I see that they're aiming at fairly seamless interop between Swift and Objective-C, so I guess they need a similar memory strategy.


This is not accurate. SpiderMonkey and V8 still retain the entire outer scope if any of the variables are used.

See here for an example: https://www.meteor.com/blog/2013/08/13/an-interesting-kind-o...

This bug is still not fixed. There's an issue open for it on the V8 tracker, I believe. It seems to have not gotten fixed in either engine because it's a difficult problem that affects a small subset of JS applications.


So go ahead and run his test. Things have changed :). Memory builds up 1mb/second, then after a few seconds, you'll see it drop back to zero, as the GC runs.

V8 has seen a lot of really nice optimizations to closures over the last year. My favorite is that closures are no longer considered megamorphic.


Oh wow, when did that land? I was getting hit by that leak with JSIL in the last couple months. Can you link to the commit?


Not sure the date. I first noticed the change back in April, when I was profiling some code. Ask vegorov: http://mrale.ph/blog/2012/09/23/grokking-v8-closures-for-fun... .


I agree that Swift looks quite promising, though I'm a bit surprised that it doesn't offer any concurrency primitives like Go does. I only say this because "The Swift Programming Language" e-book claims that "it’s designed to scale from 'hello, world' to an entire operating system."


I'm not even an iOS developer but this is by far the most exciting thing I heard in the keynote.

As an amateur/hobbyist programmer who's self-taught with Ruby, JavaScript, etc., the one thing that was keeping me from experimenting with iOS apps was Objective-C. I know I could tackle it, but it's been hard to take the plunge.

I don't know much about Swift yet, but from what I've seen it looks very exciting. So if Apple's goal was to get new devs into the iOS world, at least from 10k feet, it's working.

I'm excited!


I'm not really that impressed--it looks like a hodgepodge of ideas from ES6, Ruby, Go, and maybe Rust, with a bit of backend work done to let it work on their existing infrastructure.

I dislike that Apple has continued the special snowflake approach, that for some reason we as developers need to learn yet another different-but-almost-the-same language to develop for them, instead of just adding proper support and documentation for an existing language. Why not just let us use ES6, or normal C/C++, or Java?

But instead, now there's yet another language without great innovation that is probably going to be badly supported outside of the Apple ecosystem but still will have enough fandom to keep it alive and make life annoying.

At least Google had the decency to pick a language everybody was already using and use that.

EDIT:

I feel bad for all the engineers stuck in those pixel mines, not allowed to talk about what they're doing, doomed to reinvent things that are on the way out just as they come in.


There are already MacRuby and RubyMotion. They tried using Java years ago. It failed. Developers didn't like it. Existing stuff simply doesn't mix that well with Cocoa and that style of programming. That is why something like Swift was needed.

I really don't get how you can bring up languages such as Rust and Go, and complain about Apple's special snowflake approach. Suddenly Apple is doing something developers have been demanding from them for years, and something lots of other companies like Google, Mozilla and Microsoft have already done. But oh no, because it is Apple, it is all wrong.


It's unfair to lump Mozilla in with the rest, since Rust isn't at all proprietary. It has been open source for a long long time: https://github.com/mozilla/rust


That is not quite right.

The Java/Objective-C bridge existed in the early days as they weren't sure if developers would pick Objective-C, so they decided to bet on two horses.

As Objective-C eventually won the hearts of Mac OS X developers, the bridge was deprecated and a few years later the full Java support.


Suddenly Apple is doing something developers have been demanding from them for years and something lots of other companies like Google, Mozila and Microsoft has already done.

And yet they've decided to do it again, with yet another incompatible language! Joy of joys!

(And as for Java, it was my understanding that Apple had hobbled it by refusing to release updates on a timely basis.)


> Apple had hobbled it by refusing to release updates on a timely basis.

I can see how they could get tired of being forced to ship almost-monthly updates just to support an extra language with very limited adoption. If you have to make that sort of effort, you'll probably do it for your native tools only (like Microsoft does with .Net). Besides, Java apps on OSX looked better than Java apps on Windows, but they were still recognizably different from Obj-C ones.

I wish somebody would write an OS in Python 3...


"(And as for Java, it was my understanding that Apple had hobbled it by refusing to release updates on a timely basis.)"

That's a different, later issue.

Early on in the life of OS X, Apple offered a Java interface to the Cocoa class frameworks. In theory, you could write OS X applications using Java, calling into the Apple frameworks instead of using Swing or whatever Java frameworks.

This wasn't all that well supported, didn't perform well, and wasn't popular.


Sun should simply have hired some mac people and done it themselves. Entrusting the success of your entire company ( they changed their ticker symbol to JAVA!) to a 3rd party vendor's whims was and is silly.


Agreed that the lack of using an existing (and open-source!) language is annoying and frustrating to deal with (think of where we'd be if they invested that time and effort into improving Ruby/Python/whatever instead!). But because of the desire for compatibility with Objective-C, and Apple's general desire to call all the shots regarding their ecosystem, this move doesn't surprise me in the least.


The fact that this has static typing is a huge difference to "just improving" ruby/python. That approach couldn't come close to getting the same early-error-catching dev experience, and performance. And amongst static languages, Apple wasn't likely to recommend C++ as simple, were they? And Rust/D are also quite low level, nor do they have the Objective-C legacy to consider. So really, you're probably left with C# (or maybe Java), and those are so old and large (esp. the libraries) by now that they're unlikely to naturally port to Apple's environment.

Frankly, a bit of a clean up every decade or two is not exactly often, right?


Apple consistently represents a step backwards for both developers and users, in terms of engineering and freedom, but they've amassed enough capital at this point that the hope of them simply withering on the vine and dying off is probably not going to happen.

At least Microsoft and Google show off their new projects and code so everyone can learn from them and read their research.


any proof to back up those claims?


http://research.microsoft.com/en-us/

http://research.google.com/

http://research.apple.com/

Hint: one of these things is not like the other...see if you can figure out which using only the power of curl.


What about the special snowflake projects of Google, Mozilla, or Sun? Apple's language development is no less valid than Google developing Go, or Mozilla developing Rust. This just shows your inherent bias.

I've been amazed recently how many of the open-source projects we rolled into our Linux product were Apple-sourced: LLVM, Clang, libdispatch, WebKit, OpenCL, zeroconf. Can't think of anything Google has done for me recently.

And if there is anyone who will knock this out of the park, it's Chris Lattner. LLVM, Clang, and OpenCL are all him. He has done more for compiler tech than anyone in 30 years.


>At least Google had the decency to pick a language everybody was already using and use that.

If you think Java is remotely comparable in power and expressiveness to Objective C, you should probably reconsider your line of work.

The rise in popularity of Java nearly drove me from the industry; it is such a verbose, half-baked pile of garbage. I could fill your browser with things you can do in Objective-C that you cannot do in Java at all, and this incredible flexibility is why Apple is such an agile company with such limited head count.


I don't get the hate. Yeah, the syntax is unfamiliar, but once I got used to it I began to really enjoy Objective-C. YMMV etc., but it's now one of my favourite languages - though I guess this is mostly due to Cocoa.


I also really like Obj-C now that I am familiar with it. I think the biggest pain point with iOS apps is understanding the way to build apps within the context of the iPhone (how to structure views, and the custom things like alert sheets, etc...) particularly if you are coming from a web app background. The syntax is quite nice (although sometimes verbose) once you get used to it.


I never understood what the fuss was all about either.

If you know one other language really well, Objective-C should take a week or two to get used to.

Understanding all the design patterns, the Apple HIG, Xcode, profiling, libraries, debugging, app submission, etc. - that, combined, is where you'll sink your time learning iOS development. IMO, Objective-C is the easy part.


I recently translated one of my apps from Android to iPhone.

I had zero Objective-C experience, but I made it work. It was a bit of a frustrating experience: many times I found myself writing Objective-C boilerplate-ish code without a clue what it was doing. Considering this is a hobby / for-fun project, I just wanted it working.

It's not easy to google the answer to, "Why did I just add this new keyword after this colon in this random .h file.."

I didn't want to spend the next month reading Objective-C for beginners; I know what a for loop is, and I know what constructors are. I just wanted to use the language.


You may know what a constructor is, but maybe not know what a designated initializer does. ;-)


I felt the same when working on iOS. I felt I was writing way too much boilerplate code, while Android and Windows Phone just gave me a lot more "for free".


You've just described exactly what it feels like transitioning from iOS to Android development, too.


You may not hate Objective-C, but I doubt you love it either. Have you / would you ever use Objective-C to write a web back-end? To write a command-line tool?


I got started with WebObjects, a NeXT product, a couple of years before Apple bought them. Yes, I've written wonderfully powerful web applications in Objective-C back when the rest of the web was being built with CGI and Perl scripts.

I loved Smalltalk and I love Objective-C at a deep level. The Objective-C runtime is incredibly powerful and its method dispatch is astonishingly efficient considering what it does. It is not as fast as vtables, but it isn't as fragile either.

It might well interest you to know that WebObjects (I'm talking 1997 here) ran on HP-UX, SunOS, AIX, and one other popular Unix of the day that slips my mind, and it shipped with a lively scripting language called WebScript, which was not so different from a minimal Swift today.

The thing is, once you dig into the Objective-C runtime and spend a bit of time trying to write an interpreter, you start to realize that the interpreter almost writes itself. Swift is far from the first language built atop the Objective-C runtime.

Consider that F-Script (http://www.fscript.org) has been around for well over a decade and does more or less the same thing, except it gives you something closer to Smalltalk than JavaScript, and it includes some advanced matrix manipulation goodies as well.

The majority of the people squealing with glee over the introduction of Swift seem to be the sort of people I wouldn't care to work with. If a bit of syntax puts you off so much, lord help you when a truly new paradigm hits.

Swift looks to have some nice features, but it seems to be missing the low-level access to the runtime that advanced developers can use, like default message handlers (forwardInvocation:/doesNotUnderstand:/methodForSelector: kinds of stuff) and the ability to fiddle with method dicts at runtime, which can be very useful for intercepting strange errors and unexpected code paths.

So, yes, I do LOVE Objective-C. It is my second favorite language to work in after Smalltalk. And to those claiming that Swift will help them move over from Android because it is less verbose - let's remember that Java is the most boilerplate-per-capability language I've seen since COBOL. I don't know what those people are talking about.


I've done both, they were fun projects :)

The only thing that got in the way was the difficulty of using the code away from OS X or iOS, and the fact that a lot of libraries for things like database access (especially those intended for iOS) were never meant to be used in a long-running process. I found slow (3-week) memory leaks that someone writing an iOS app would never have hit.


I actually really like Objective-C and would totally use it as a back end language if there were good libraries to make use of. I've also written a couple of command line tools in Obj-C.


My dislike is that it uses [] for method calls. It's like making Objective-English where we swap Z and A and j for o, just for the hell of it.

If thzt sjunds like fun tj yju, thzn gj fjr Jboective-C.


It's not for the hell of it.

[ ] does not mean method call, it is the syntax for a message send.

Objective-C is a superset of C, adding a Smalltalk-like object system to C. The delimiters say "I am sending a message", which is different from a method call. Also, without them the language would be much more difficult to parse, and future changes to C could break the language. It's lasted well (first appeared in 1993). Not as long as Lisp; perhaps it needs more [ ] :)


> It's lasted well (first appeared in 1993).

1983, actually.


Thanks - I felt I should type 1983, but it felt wrong! I still had my Apple ][ back then.


Thanks. Just read up on messaging and now I like it even less :(

In Smalltalk and Objective-C, the target of a message is resolved at runtime, with the receiving object itself interpreting the message. ... A consequence of this is that the message-passing system has no type checking.

http://en.wikipedia.org/wiki/Objective_c#Messages


This is exactly what gives you the ability to easily wire up standard UI components and do things like KVO. KVO is really difficult in something like C++ (for example, it's practically impossible to create in Qt without a lot of templating/boilerplate code).


This is in my opinion the best thing about Objective-C; it clearly delineates the object/class and C dichotomy, making it easier for a C programmer (or a Smalltalk programmer!) to pick up. For years, the only changes from vanilla C were the brackets, "#import" and the @ literal syntax (IIRC).


Actually, if you ask me today, after dealing with Scala's idea of how the Option type should work, I might say that nil propagation is the best thing about Objective-C.
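(Swift's optional chaining looks like a type-checked take on the same convenience; a tiny sketch with made-up types:)

    class Author { var name: String? }
    class Book { var author: Author? }

    let book: Book? = Book()
    // Evaluates to nil if any link in the chain is nil, rather than crashing.
    let authorName: String? = book?.author?.name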


That's how I always felt. I liked the clear differentiation between C function calls and method calls on objects.


very genius response!


It's not hate, but Objective-C can be intimidating.


I just spent the past two months learning Obj-C, I'm about to release my first app, and boom - out goes Obj-C. My luck.


90% of what you learned are Cocoa frameworks and Apple-flavored OOP patterns that will be totally applicable to apps written in Swift. Fear not!


I don't know very much at all about objective C, but the way these things generally work is that you will benefit from the experience as you learn new languages, as it will be an anchor of context against which you may base fresh perceptions.


You'll always be able to contribute to NeXTSTEP. It's not dead yet!


No worries, Objective-C is faaar from deprecated.


Objective C isn’t going anywhere.

Swift is shit. I suspect it will die in a couple years, like the misguided effort to get people to adopt the Java bridge or WebScript before that.


I don't think syntax is really the issue. Using objc these days is clunky for reasons besides syntax.


Like dealing with ARC, which is still clunky:

    @lazy var asHTML: () -> String = {
        [unowned self] in
        if let text = self.text {
            return "<\(self.name)>\(text)</\(self.name)>"
        } else {
            return "<\(self.name) />"
        }
    }
Excerpt From: Apple Inc. “The Swift Programming Language.” iBooks. https://itun.es/us/jEUH0.l


To someone on the outside of ObjC, it's just SO DAMN VERBOSE. It's unapproachable the same way Java is unapproachable.


I understand why ObjC's syntax makes some people bristle, but I've never felt that way myself. It's sort of like the people that really hate Python for no other reason than the meaningful whitespace. It's unconventional, but once you understand the rationale for it it makes sense in a way that is at least forgivable if not likable.

There have been a lot of C-based object-oriented APIs over the years. GObject has a C API. On the Mac, there's Core Foundation and a bunch of other OS X APIs that are built on top of it. For over a decade on X11, before gtk and Qt even existed, the closest thing there was to a standard graphical environment was Motif (the corresponding desktop environment was CDE), and Motif was built on top of Xt. Xt was yet another C-based object system, although it was specialized for designing UI components.

This is all well and good but you end up with a ton of boilerplate code that does nothing but manage the lifecycles of the object instances (retain/release for example), and lends itself to extremely verbose function calls in place of object methods.

One possible solution is to put together some really elaborate preprocessor macros to make it look like you have extended the C language to include special syntax for your object system, so you can at least replace this:

    obj foo = obj_factory(); int c = obj_getNumberOfElements(foo);

...with something more compact like this:

    obj foo = [Obj new]; int c = [foo numberOfElements];

(the second example is ObjC-ish but the former is nothing in particular other than just what the typical C object APIs tend to look like)

The only catch is that the little mini-language you are extending C with using macros can't use existing C syntax, because you can only add to the language, not alter the behavior of existing operators. So, you can't just do method calls using a dot syntax on the instance (such as foo.numberOfElements()). So, you have to come up with something new. Maybe you always liked Smalltalk, and maybe you even based much of behavior of your object system on how Smalltalk objects behave and interact? If so, you might settle on the bracket notation. This has the added benefit of making it very clear when a chunk of code is run-of-the-mill C versus when the code is triggering the syntactic sugar you created with macros to add support for your object system to the C language.

C++ doesn't exist yet, or else you might've just gone with that instead of rolling your own thing. Eventually C++ does exist, and you start to feel a little primitive for sticking with the weird macro language. You eventually build your mini-language into a C compiler so you don't have to use the macros anymore. You experiment with some new alternatives to the syntax that are more conventional, but no one uses them. Many developers like that the non-C-ish syntax makes it easy to distinguish between straight C code vs. interactions with the object system, which has its own set of rules and conventions.

Anyway, that's mostly speculation, but something like that story is how I've always thought Objective-C evolved over the years. I don't mind it nearly as much as long as I don't think of it as a separate programming language from C (like C++ or Java or pretty much anything else these days), but rather think of it as C with some useful syntactic sugar that gets rid of a ton of boilerplate code for a particular C-based object-oriented API.


According to http://en.wikipedia.org/wiki/Objective-C#History, that's actually almost exactly how it came to be. (Apple even experimented with changing the syntax: http://en.wikipedia.org/wiki/Objective-C#.22Modern.22_Object...)


It really reeks of the '80s. I'd rather program in plain C.


I spent a lot of time trying to do stuff with Objective-C, but just hated the syntax. That's been the biggest thing keeping me from developing Mac OS X apps; I just prefer Ruby's simplicity. I'm going to seriously give Swift a try.


Yep, same here. It looks pretty JavaScript-y, which is familiar at least. I think this is a good move on Apple's part.


It's probably a wise decision to have an "Algol-patterned" language. No non-Algol-patterned language has ever become a mainstream programming language, to my knowledge.


I am not a programming language wonk, so I imagine most languages I am familiar with or know of are necessarily Algol-patterned. What are some non-Algol-patterned languages?


Lisp, Forth, Prolog (and Erlang), Smalltalk, Haskell, and Tcl all come to mind.


In particular, Obj-C = Smalltalk + C. If you subtract C from Obj-C, you'd most easily just end up with Smalltalk. But that's not the right move for mass adoption.


I agree with the first, but disagree with the second part:

COBOL, Fortran, JCL (not Turing complete, AFAIK), SQL, Excel, DOS batch files all were (fairly) mainstream at some time.


Fortran came before Algol and arguably influenced it[1]. I agree with COBOL and SQL in particular, though.

[1] http://www.digibarn.com/collections/posters/tongues/Computer...


The correctness of that image is debatable. Fortran was specified in 1954, but the first compiler shipped in April 1957 (http://en.wikipedia.org/wiki/Fortran#History). That is earlier than Algol 58 (first two implementations in 1958: http://en.wikipedia.org/wiki/ALGOL_58#Time_line_of_implement...), but close.

More importantly, "inspired by" does not imply that Fortran 58 is Algol-like (that same picture would declare Fortran Lisp-like, too)

For me, http://en.wikipedia.org/wiki/Fortran#Simple_FORTRAN_II_progr... certainly is nothing like Algol.


Ruby is simple and beautiful, isn't it? Too bad it never got the shower of money from big backers that JavaScript, PHP, and now Swift got blessed with.


Beauty is in the eye of the beholder, but Ruby is anything but simple. It has one of the most complicated syntaxes of any programming language in common use.

Perl and C++ are still in the lead, but with stuff like the gratuitous introduction of alternate hash syntax, new-style lambdas, etc., Ruby is catching up.


Ruby's grammar is complex, but its object model is incredibly simple.


Introduction of a new hash syntax wasn't gratuitous really. I think the point was to make up for the lack of proper keyword arguments. Now that they're available, it's true that it doesn't have a reason to stand on its own, but it does make the code more readable and concise, as does the stabby lambda syntax. Though I do agree with your point on simplicity really, the language does offer way too many ways to do the same thing sometimes.




Agreed. I would go so far as to say that this was "one more thing" worthy.

It's definitely more exciting than something like an incremental update to the Apple TV.


My dad tuned out as the keynote got to this point, but for me (as a web developer... for now!) this was the highlight.


I feel the exact same way. For a while now I've been looking at other ways to develop for iOS, such as HTML5 with PhoneGap or C# with Xamarin, but it's always been a kludge.

Swift looks amazing and I'm really excited to try it out tonight! Great job Apple devs.


  > So if Apple's goal was to get new devs into the iOS world, at least
  > from 10k feet, it's working
They just announced Swift, at a conference for Apple developers, with live streaming that is only easily accessed from an iOS device. I think it is probably premature to pop the corks and celebrate the efficacy of the "get new developers" initiative.


As someone wise mentioned to me, Objective-C was 20% of the problem, and Apple's silly rules and controls around app distribution are the other 80%. As someone who had their app available in the App Store for nearly 8 months, including 3 approved updates, before being (seemingly) arbitrarily rejected, I feel the pain of that other 80%.


How else are they supposed to announce it? It's simply that, an announcement. People are talking about it now and there's info on the Apple site. I see this as a huge push forward for new developers.


The announcement was fine, it is the "its working" part that is odd considering it is less than a day old. Let's see if it actually attracts new developers before we declare it a mighty success.


Well, based on the promise of immediate inclusion in the App Store and a very well-thought-out book about the language available for free, I'd say they're doing rather well so far.


You mentioned things that are likely to bring about the desired result of creating new iOS developers. I am not disagreeing about the likelihood of success. I am simply saying that T + 8h is probably too soon to conclude that the program is successfully creating new iOS developers. To be honest, I think it is absurd to expect that such a program from any company could achieve the goal of creating new developers in less than eight hours.



I just skimmed the tour, and my impression is: Swift is a compiled, Objective-C-compatible JavaScript-alike with an ObjC-like object model, generics, and string interpolation. No exceptions. Based on LLVM, and it appears to inherit the same data structures as Cocoa apps (dictionaries, arrays, &c).

It feels very lightweight, sort of like an analog to what Javascript is in a browser.


I think it uses the Objective-C runtime directly, so it has access to all the frameworks and Swift classes can be loaded into Objective-C projects as well.
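For example (a trivial, hypothetical sketch; the class name is made up), subclassing NSObject appears to be enough to make a Swift class visible to Objective-C code:

    import Foundation

    // Inheriting from NSObject exposes this Swift class to the Objective-C
    // runtime, so existing Objective-C code can message it like any other object.
    class Greeter: NSObject {
        func greet(name: String) -> String {
            return "Hello, \(name)"
        }
    }

    let g = Greeter()
    println(g.greet("world"))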

There are a few other languages that do this with the Obj-C runtime, for example a Lisp variant called Nu[0].

[0] http://programming.nu/


There's a Ruby implementation by a former Apple employee that does this as well: http://www.rubymotion.com/


Unfortunately it requires a $200 up front investment before you can even toy with the language. RubyMotion was my first thought when I saw the code happening on the keynote, but at least this will be free with the OS.


[1] Objective-Smalltalk http://objective.st/


> No exceptions.

This is a big shift. With such a rich type system (very Hindley-Milner... even with "protocols" that feel like type classes?), there is no need for exceptions, for much the same reason that Haskell doesn't have exceptions in the core language, but only a monad. This would force error situations to be explicitly modeled in the types of objects returned by functions/methods. A good thing, I think.

However, it does leave a hole: what if an ObjC framework you call into raises an exception? Can you handle it within Swift code? Another big omission in the manual is any mention of concurrency, though "use GCD" is seen as the solution (Swift closures are compatible with ObjC blocks).
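A minimal sketch of what that explicit error modeling might look like (hypothetical names; just an enum with associated values, not any blessed library type):

    // Either a parsed value or a description of what went wrong.
    enum ParseResult {
        case Success(Int)
        case Failure(String)
    }

    func parseAge(input: String) -> ParseResult {
        if let age = input.toInt() {
            return .Success(age)
        }
        return .Failure("'\(input)' is not a number")
    }

    // The caller is forced to handle both outcomes at the call site.
    switch parseAge("42") {
    case .Success(let age):
        println("age is \(age)")
    case .Failure(let message):
        println("error: \(message)")
    }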


I disagree. I use exceptions a lot in OCaml. For example, when implementing a recursive type-checker, you really want to be able to abort (all the recursive invocations) quickly if e.g. you find an undeclared variable or a type error. Using an ADT would IMO incur an unacceptable syntactic and cognitive overhead.


OCaml exceptions are a misnomer, since they are often used as a control-flow primitive for non-exceptional circumstances. The point is that they are cheap. Contrast with Java, where you wouldn't want to use exceptions the way you use them in OCaml, and would instead favour other non-local exit primitives such as "return", "break" and "continue."

Haskell doesn't care about this stuff, because lazy evaluation gives you the same control-flow patterns, and the exception monad ends up operationally equivalent to checked exceptions, but now with possibly exception throwing values made first-class. I doubt the same can be said of Swift.


Dictionary isn’t NSDictionary or NSMutableDictionary because of type inference issues (“they can use any kind of object as their keys and values and do not provide any information about the nature of these objects”).


You unfortunately probably have to deal with exceptions when crossing into Objective-C land because of https://developer.apple.com/library/mac/documentation/cocoa/....


I'm not seeing the javascript-alike-ness. What caused that connection to jump out at you?

I see the standard static FP features (from ML, Haskell, Scala, F#) with the syntactic flavor of Rust and some C# tossed in.


JS has exceptions though. I didn't notice that bit until just now… hmm. Could turn into lots of return-checking boilerplate. I'm still excited about this, very much so, but I think exceptions are worth keeping.


It's also apparently super fast, and implicitly typed, and designed for safety.


[deleted]


In the keynote they said Swift is faster than Obj-C.


"The company says that Swift apps are significantly faster than Objective-C apps, outperforming them by over 93x."

With a graph showing ObjC at 127x faster than Python, Swift 220x faster than Python.

Thus the conclusion was apparently 220 - 127 = 93, i.e. Swift is "93x faster" than ObjC.

Someone needs to resit their GCSEs.
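(Taking the chart at face value, the ratio would be 220 / 127 ≈ 1.7, i.e. Swift roughly 1.7x faster than Objective-C on that benchmark, not 93x.)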


"93x faster" sounds roughly like a 46.5x improvement in marketing.


It's not only possible, it's not even uncommon for a C programmer to get a 90x improvement in speed in their own C program. If you have naive memory management, or incorrectly implemented concurrency or parallelism, you can easily lose two orders of magnitude of speed.


This. In my case a 1Mbyte memcpy in the middle of a loop this morning. Enough to blow the CPU cache out of the water...

300x improvement instantly by moving it out of the loop.


Are you sure it wasn't just because you were then no longer doing a large memcpy repeatedly?


Yes it was entirely covered by that :)

I think it was covered by "naive memory management" and "shitty outsourcing". I'm paid to fix their stuff.


Haha :) Maybe they shipped a better product, but the management said "No, it's not possible that this could run that fast. Something must be wrong.", so they put in some "waiting".


If the problem was just the time taken to do a 1MB copy inside a loop, why did you say the problem was clearing the CPU caches?


Because the CPU has 32k of cache in this case (ARM) so the memcpy was evicting the entire cache several times in the loop as a side effect of doing the work. The actual function of the loop had good cache locality as the data was 6 stack vars totalling about 8k.


So? Copying a megabyte is a really expensive thing to do inside a loop, even ignoring caches. (A full speed memcpy would take 40 microseconds, based on a memory bandwidth of 24 GB/s, which is a long time.)


My most painful personal experience dealing with this exact problem was with CUDA warps, during my undergrad research work.


Marketing are claiming a 91.3x.


For context: This story previously pointed to an article, but has now been changed to point to Apple.


"outperforming them by over 93x" is technically different than "93x faster"... Although I agree it is a cheap way to put it :)


Enumerations (from: https://developer.apple.com/library/prerelease/ios/documenta...):

Unlike C and Objective-C, Swift enumeration members are not assigned a default integer value when they are created. In the CompassPoints example above, North, South, East and West do not implicitly equal 0, 1, 2 and 3. Instead, the different enumeration members are fully-fledged values in their own right, with an explicitly-defined type of CompassPoint.

+100 for that. This will help developers avoid a whole class of bugs.

Enumerations also support associated values. Enums in .NET are very poorly defined. Looks like Swift got it right.
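A quick sketch in the style of the language guide (names illustrative):

    // Members are fully-fledged values of type Barcode, and each case can
    // carry its own associated data.
    enum Barcode {
        case UPCA(Int, Int, Int, Int)
        case QRCode(String)
    }

    var product = Barcode.UPCA(8, 85909, 51226, 3)
    product = .QRCode("ABCDEFGHIJKLMNOP")

    switch product {
    case .UPCA(let numberSystem, let manufacturer, let productCode, let check):
        println("UPC-A: \(numberSystem), \(manufacturer), \(productCode), \(check)")
    case .QRCode(let code):
        println("QR code: \(code)")
    }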


Indeed. Luckily C++11 took care of the issue on the C++ side.

http://en.wikipedia.org/wiki/C++11#Strongly_typed_enumeratio...


Do Swift's enumerations allow recursive definitions?


>Unlike C and Objective-C, Swift enumeration members are not assigned a default integer value when they are created

Well that’s gonna make storing persistent values tricky.


I find it a bit sad that with all of the languages that already exist, Apple found it necessary to invent a completely new one -- and then make it proprietary. Why not use Ruby, or Python, or JavaScript -- or even Go, Rust, Clojure, or Scala? (Yes, I realize that the latter two run on the JVM, which would have been problematic in other ways.)

Heck, they could have bought RubyMotion and made Ruby the high-level language of choice for development.

I realize that Apple has a long tradition of NIH ("not invented here"), and in many cases, it suits them, and their users, quite well. But there are so many languages out there already that it seems like a waste for Apple to create a new one. Just the overhead of developing the language, nurturing its ecosystem, and ensuring compatibility seems like it'll cost more time and money than would have been necessary if they had gone with an existing language.


>Why not use Ruby, or Python, or JavaScript -- or even Go, Rust, Clojure, or Scala? (Yes, I realize that the latter two run on the JVM, which would have been problematic in other ways.) Heck, they could have bought RubyMotion and made Ruby the high-level language of choice for development.

Because OBVIOUSLY none of them solve the problems they wanted to solve (interoperability with Objective-C, fast, native, IDE integration, etc.) - including RubyMotion, which is a half-arsed implementation.


I'm not sure if you're kidding or not.

IDE integration for a new language? They wrote it themselves. Do you think it would have been harder to integrate an existing language? Fast and native are also trivially solvable.

I don't know about interop with Objective-C, that's probably the hardest part from your list.

But complaining about IDE integration when they're also the creators of the IDE is... silly...


First, I like how you break apart the issues I raised (like "IDE integration") when I said that they wanted to solve ALL these problems at once.

So, even if just adding IDE integration for an existing language was easier than creating a new one, using an existing language wouldn't solve their other issues (e.g. Obj-C interoperability with message passing, protocols, named parameters, et al.). And RubyMotion wouldn't permit all the optimizations they did, nor the kind of type safety they added.

>But complaining about IDE integration when they're also the creators of the IDE is... silly...

We're not talking about PyCharm-level IDE integration here. Not even about the current level of Obj-C/C++ integration Xcode offers (for which they had to create LLVM tooling and LLDB to enable all the features they wanted to offer). It goes beyond that.


I see that you don't really understand what's needed for real IDE integration. Please consider one of the main reasons Apple created Clang (hint: because the GCC guys wouldn't take their patches to improve Objective-C and add facilities for IDE integration fast enough).

Clang was easier to integrate with an IDE than GCC, and I strongly believe (after seeing what apple showed yesterday) that swift integration is even simpler.

( They must have made a new LLVM front-end to embrace IDEs equally or better than Clang )

So no, it's not silly to design for better integration with an IDE that you also control.

Cheers.


Well, that may be true for GCC but Ruby, Python & co are well integrated into many third party IDEs. So that point, at least, is moot.


"Ruby, Python & co are well integrated into many third party IDEs" perhaps you're not familiar with the level of IDE integration we're talking about here.

Most (if not all) IDEs' Ruby and Python integration is BS.

We're talking about real AST-based highlighting and suggestions, auto-fixes, autocomplete for all available APIs (AND your own custom modules), integration with the debugger and the build system, and in Swift's case also integration with the REPL, Light Table-style live variables and a Bret Victor-inspired live coding environment.

This is not your grandfather's PyCharm.


Probably because they wanted a statically typed language - something that didn't require the JVM and wasn't backed by Google.


My sense is they wanted "their" language, as opposed to Go (Google) or Java (Oracle) or another one tied to a vendor.


Why does Google get kudos for inventing new languages (Go), and Apple gets mocked? (Swift)


Apple likes to control the whole product as much as possible.

The iOS/OSX ecosystem is absolutely big enough to support an exclusive language (see Objective-C), and Apple chose to create a new language that matched their goals instead of adapting something they don't control and that isn't ideal.

Makes perfect sense, and Swift was by far the most impactful announcement at WWDC.


In one word: lock-in.


Apple is a big, rich corporation. But in that campus there are still human developers.

Radical hypothesis: what if this started as a pet project, got the attention of more employees, then management (with or without convincing from said developers). Management sees the value in the project, and funds it officially. You know, like other big, rich corporations...such as Google.


When the project lead is one of the creators of LLVM (arguably the most fundamental low-level project in Apple after actual kernels), this sort of scenario is improbable.

Much more probable is that somebody asked top-developer Lattner for "a simpler language to compete with Java/Dalvik and C# with more casual developers" and he came up with Swift. The name itself is a message: "this thing is quick - quick to learn and quick to run, unlike VM-based stuff that must translate to Obj-C (fast to learn, slow to run) or Obj-C itself (fast to run, slow to learn)".


I would've voted for buying/licensing Xamarin instead; C#/F# would've given them everything, including a lot of fresh programmers. On the other hand, I'm happy they didn't, as Android support would've been tossed out, and (at least for me) there is no alternative yet with the same speed/power of development.


Apple has enough programmers developing for their platforms.


Depends what you call 'enough' :) I would say there is no 'enough', but hey.


Probably because they wanted one that integrated well with their existing frameworks, and which really took advantage of their investment in LLVM.

