Smashing Swift (nomothetis.svbtle.com)
281 points by afthonos 1312 days ago | 143 comments

Compiler bugs really don't worry me - despite the name, this is really an alpha release. However, I ran up against the lack of generic protocols myself today (for those with access, there's an interesting debate at https://devforums.apple.com/thread/230611?tstart=0). It seems like a deliberate design choice, but one I'm not really sure about.

The primary vibe I'm getting from Swift is pragmatism. They had various red lines to follow: must be as fast as Objective C; must have a minimal to non-existent runtime (so AOT compilation all the way); mustn't have garbage collection; must interoperate with Objective C/normal C with ease; etc. This has led to a language which, despite being at version 0.1, has a fair few oddities and warts already. For example, the "almost but not quite immutable" arrays seem to be an artefact of absolutely wanting array performance to be equal to standard C. Typealiases vs parametric protocols seem to reflect a desire to have as much type information fixed as soon as possible.

It seems that rather than design a language where they might not have solutions for their red lines up front, they've designed a language where they can provide them. Rather than make theoretically "better" choices the compiler can't deliver perfectly in V1, but can in V4 or 5, they've gone for "yeah, we can almost certainly ship that". Given that this is essentially Apple's private language, I assume they'll be quite aggressive about deprecating features and moving people onto the "better" solutions (higher-kinded types, richer immutability etc) when they can deliver them whilst meeting the core goals. It's an interesting approach - most other languages are happy to take a few years to get going, whereas Swift seems to want to go from 0 to 60 in 4 months. It reminds me most of C# 1.0, but with harder restrictions on what they've been told to deliver. At the moment, it's interesting, and a big leap from Objective C. By V3, it might be "excellent".

>However, I ran up against the lack of generic protocols myself today (for those with access, there's an interesting debate at https://devforums.apple.com/thread/230611?tstart=0). It seems like a deliberate design choice, but one I'm not really sure about.

So did I! A few more related links if you're interested in this question:


http://www.artima.com/weblogs/viewpost.jsp?thread=270195 (this is about Scala, but Swift somewhat follows Scala with this design)


Very interesting, thanks.

I also ran up against another annoying bug/oversight/v0.1-ism: You can't inherit from a generic class without becoming generic yourself. So, even if you fully instantiate your parent's type information, you have to be generic as well! For example:

    class Foo<T> { ... }
    class Bar : Foo<String> { ... }
You'd expect Bar to be a non-generic type, but this throws a compiler error. You have to declare:

    class Foo<T> { ... }
    class Bar<String> : Foo<String> { ... }
Which seems just odd. The best workaround I've found thus far is

    class Foo<T> { ... }
    class BarClass<String> : Foo<String> { ... }
    typealias Bar = BarClass<String>
Bleh. I've filed a radar against this one - it seems like an annoying oversight, even for this early version.

I think you (and @twic in discussion below) misunderstand what's going on here.

`class Bar<String> : Foo<String>` is exactly the same as `class Bar<T> : Foo<T>`. The identifier `String` in the declaration of `Bar` does not refer to the built-in type `String`; it is a placeholder for any type.

In other words, you CAN have `Bar<Int>`, `Bar<Float>` and `Bar<everything else>` after using that declaration of `Bar`.
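A minimal sketch of that shadowing (written in current Swift syntax, so spelling may differ slightly from the 2014 beta; `Foo`/`Bar` are the hypothetical classes from the thread):

```swift
class Foo<T> { var value: T? }

// "String" below is just the *name* of Bar's type parameter;
// it shadows Swift.String rather than referring to it.
class Bar<String>: Foo<String> {}

let b = Bar<Int>()  // legal: the parameter named "String" is bound to Int
b.value = 42
```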

Well, derived classes are meant to be drop-in replacements for the base class. In that sense it totally makes sense that you can't have a non-generic child class of a generic base class. Otherwise it encourages designs where classes are treated as method dumps.

No. If the child class binds the type parameter of the base class (as in this example), then it's meaningless to think of that class as being generic. In this example, Bar is always Bar<String> - you can't have a Bar<Integer> or a Bar<anything else>. So why not just call it Bar? FWIW, this is how it works in Java, and it seems to work.
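For what it's worth, later Swift toolchains do allow exactly this; a minimal sketch in current syntax (`Foo`/`Bar` as in the thread's example):

```swift
// A non-generic subclass that binds the base class's type parameter:
class Foo<T> { var value: T? }
class Bar: Foo<String> {}  // Bar itself is not generic

let bar = Bar()
bar.value = "hello"        // T is fixed to String here
```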

Indeed, calling it Bar<String> seems really weird to me. Usually, in a declaration, the thing inside the angle brackets is the name of a parameter, which can be bound to various types:

    class Foo<T>
But in the subclass, it's the name of a type:

    class Bar<String>
Essentially, one is a declaration, and the other is an expression. Like an l-value and an r-value. I don't think I've ever seen a language which allows both of those in the same place.

I don't think Java is a good example for OOP best practice. For exactly these reasons. I can't think of a reason this should be allowed. It screams "I'm trying to misuse class hierarchy to do specialization instead of using composition".

Don't mix up your polymorphism.

The fact that an instance of A is assignment compatible with a location of type B does not imply that a class of type A is assignment compatible with a location of type class of B.

This is not the case for languages like C#, Java, C++ etc. (despite those languages not having class types); if a subclass does not define all the constructors of its ancestor class, the subclass is not a "drop-in replacement".

Delphi does have class value polymorphism that mirrors instance polymorphism, and it can be a source of confusion, not to mention type holes. For A inheriting from B, you can construct a value of type A using a constructor of B if you assign A to a location of type class of B; A's constructor won't run, and its assumptions and invariants won't necessarily hold. It's one of several problems that makes designing robust classes in Delphi awkward.

Exactly what I was going to say. This seems like a good design decision, especially because the typealias hack offers a work around if you really need to.

Not classes; instances. So instances of Foo : Bar<T> should be drop-in replacements for instances of Bar<T>.

In Java, to clean up the brackets (exposed via APIs) I'll do something like:

  class NodeList extends ArrayList<Node> { ... }

This makes sense - Apple's philosophy is "Everything Apple" and so they have a vested interest in designing a language that gets things moving on their platform, rather than being the best language it can be right now.

If, and this is a big if, Apple is actually interested in Swift in the long term, then they will open up the language to community development so that these sorts of things can be added by the community.

My suspicion is, though, that Swift will be improved by Apple alone and that it will be pragmatically discarded when the time is appropriate. Which doesn't mean it won't have a long life, just an undignified exit.

This kind of misunderstands Apple's approach to introducing technology. Apple seldom introduces a technology as an intermediary solution or discards it soon after. (The last time I can remember that happening was GC in Objective-C, which was replaced by ARC.) They'll double down on a technology even if it has algorithmic flaws, and attempt to fix those flaws.

One of the recent technologies that falls into this category is Auto Layout. (The performance of Auto Layout in iOS 7 for slightly more complex table view cells is pretty bad.)

Apple's main interest is always Apple, which makes sense because they are a for-profit company, just as Rails is always Basecamp. Apple's technologies have fast-paced development because Apple dogfoods them: WebKit in Safari, LLVM in Xcode. They've invested hugely in these areas. That doesn't mean they don't open up; both technologies I mentioned are open source and have had a huge impact on the community as a whole.

E.g. RubyMotion makes use of LLVM to allow Ruby to be compiled for both iOS and Android.

People like to think Apple is a closed company (as marketed by its competitors), but people who know the company well know that is far from true. They might not be the most open of companies, but they have open-sourced technologies that are awesome in their own right. LLVM is just one of them: open sourced, and it won the ACM award. That by itself is an achievement hard to beat.

> They might not be the most open of companies, but they have open-sourced technologies that are awesome in their own right. LLVM is just one of them: open sourced, and it won the ACM award. That by itself is an achievement hard to beat.

I don't understand how a community project became "Apple's achievement."

Apple did not open source LLVM. LLVM was an open source project (since 2000), which Apple used and open sourced their contributions to on the way.

Also true of WebKit. I don't think it makes sense to draw conclusions about a project which started as a closed in-house affair based on projects which were built on open source ones, as the parent comment has.

I have only a passing familiarity with Objective C. Why is one of the "red lines" having no garbage collection?

Apple did add garbage collection to Objective C, but it only lasted a few years before being pulled. Two less important reasons are that it's hard to integrate with the otherwise trivial C(++) interop, and that the frameworks just weren't designed for it.

To me, the more important answers are deterministic destruction, and no GC pauses. All of Objective C is reference counted these days, with retain/releases inserted by the compiler (so all you have to do as a programmer is resolve cyclic references with weakness). Thus, you know exactly when an object will die, and have its dealloc method called. You're also sure you'll never end up in a situation where memory pressure causes system hitching due to GC. Given that a key platform for Objective C is iOS (low memory, "low" CPU), and that Apple's trademark tends to be fluid UI, avoiding these problems is really helpful.
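The deterministic-destruction point can be sketched in Swift (the `Resource` class and `closed` flag are made up for illustration; `deinit` plays the role of Objective-C's `dealloc`):

```swift
// A minimal sketch of deterministic destruction under reference counting.
var closed = false

class Resource {
    deinit { closed = true }  // the analogue of dealloc: runs at the final release
}

func work() {
    let r = Resource()
    _ = r
}   // r's reference count hits zero here, so deinit runs *now*

work()
// `closed` is already true: cleanup happened at scope exit, not at a future GC pass
```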

To be fair, reference counting isn't strictly deterministic either. The moment you mix in multithreading or call out to code you don't control and whose refcounting behavior could change over time, you can no longer know when an object will die.

As for memory pressure, reference counting does generally behave better, but in some situations you can end up doing much worse. If you manage to generate a lot of autoreleased objects (harder to do these days with ARC, but still possible) then you can end up getting your process terminated due to the memory pressure of should-be-dead-but-not-yet-freed objects.

I think that the C interop problems are the real killer. The other problems with garbage collection can potentially be solved, but as long as C is in the picture, you're doomed to a sort of halfway land where none of the good GC techniques are available to you.

I'm not very familiar with the implementation of programming languages, so maybe the terminology is subtly different, but how are either of those situations not deterministic?

When calling out to other code you don't control, you lose determinism in the sense of being able to exactly predict when objects get destroyed when you write the code. The refcount semantics of the code you're calling can change while still maintaining correctness, and this can cause your objects to be destroyed differently. Accidentally relying on this has been the cause of many OS-version compatibility problems on the Mac over the years.

For multithreading, I thought that would be fairly obvious. Once two or more threads hold ownership over a single object, you can no longer be sure which thread will perform the final decrement (at least in the general case) and so you don't know exactly when the object will be destroyed, or even which thread it will be destroyed on.
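The "final decrement" idea can be shown single-threaded for simplicity (names are made up): whichever reference dies last determines when the object dies.

```swift
// Which reference dies last decides when deinit runs.
var freed = false
class Obj { deinit { freed = true } }

var a: Obj? = Obj()
var b: Obj? = a          // reference count is now 2
a = nil                  // count drops to 1: object still alive
let aliveAfterFirstRelease = !freed
b = nil                  // the *final* decrement: deinit runs here
```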

As others have pointed out, it's deterministic in the sense that the memory is reclaimed when the ref count goes to zero, and the ref count is always well-defined. I would have used the word "predictable", in that `delete foo` will always release memory, but `foo.Decrement()` may or may not, and local reasoning may not suffice.

It's still deterministic, strictly speaking. Maybe we need more subtle terminology here, like "fog of war" or "situational awareness" – as in "even with reference counting, the programmer may lose situational awareness of deallocation when calling into libraries beyond the fog of war."

In this case, I think this means it's not deterministic within the context of a single thread of execution.

Objective-C uses Automatic Reference Counting (garbage collection has been deprecated - and ARC works a lot better than the GC implementation did), and as Swift has to use Objective-C's memory model, it must also use ARC.
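The "resolve cycles with weakness" part of ARC can be sketched as follows (hypothetical `Parent`/`Child` classes; only the weak back-reference matters):

```swift
// Under ARC, a weak back-reference breaks what would otherwise be a retain cycle.
var parentFreed = false

class Parent {
    var child: Child?
    deinit { parentFreed = true }
}

class Child {
    weak var parent: Parent?  // weak: does not keep the parent alive
}

var p: Parent? = Parent()
p!.child = Child()
p!.child!.parent = p  // back-reference is weak, so there is no retain cycle
p = nil               // both objects are freed; a strong back-reference would leak
```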

> must be as fast as Objective C; must have a minimal to non-existent runtime (so AOT compilation all the way); mustn't have garbage collection, must interoperate with Objective C/normal C with ease; etc.

This sounds an awful lot like what C++11/14 would bring, with a few platform specific extensions... but then again, no vendor, and especially not Apple, likes the thought of working with a language that's really trying to be cross-platform and a language that "rewards smart programmers" ( http://channel9.msdn.com/Events/Lang-NEXT/Lang-NEXT-2014/Key... ), as opposed to being easy-to-teach or idiot-proof...

Still, I look at Swift through low-level-colored glasses: I don't care if this or that category theory aspect is covered or not. I don't care if you can imagine some nicer syntax for something (everybody can). I really care how well and how often Swift can be used instead of C, without being slower or using more resources.

I know, <your-favorite-language-almost-nobody-uses> is nicer. But for me, if it has GC, it's a no-go. I see Swift as something where I win when I can use it instead of writing C, not as something where I lose because most of the problems I solve by writing software can't be solved in <some other language>.

Swift is an impressive language for v0.1, very pragmatic in the aspects I care about. And there's a chance it will be even more expressive in some later version. Nice. Even as it is, it's much nicer than any other option for the purposes for which it was designed.

As a fairly inexperienced programmer, can someone please explain why everyone is so happy there is no garbage collection going on?

Shouldn't that be something we would want?

(Is it because control over this allows us to do extra things?)

There's a helpful bit of writing about garbage collection on http://sealedabstract.com/rants/why-mobile-web-apps-are-slow...

It boils down to there being a big performance penalty on garbage collection in memory-constrained environments.

Wow, that's a great article. I'm currently using mobile web stuff (Ionic, Angular, PhoneGap) to prototype some possible mobile apps. It's convenient, but I've been worrying about the long-range performance issues. This gives me a much more detailed way to think about it. That Apple got rid of garbage collection is especially telling.

Thanks! The linked article is absolutely a must-read for discussing the real limitations.

Reference counting has an incremental performance burden because of its predictable object freeing, while garbage collection has a periodic pause. On mobile devices you often have to worry about response times; it is a lot easier to reason 'we need to ease up on such-and-such at this time' and be sure you've solved the problem. Garbage collection pauses can be less straightforward to tackle.

Garbage collection causes problems with memory pressure and pauses.

ARC is very little different for the programmer, but it suffers less from these problems and is easier to integrate with C code.

No reason for GC.

Note: My post was an answer to some of the comments appearing here and not the response to the original article, which wasn't so negative as the mentioned claims.

The language is not "innovative" as they claim, so people are expecting at least some learning from the mistakes of prior languages. They could've hired some smart people from more software-oriented companies or something.

>The language is not "innovative" as they claim

Not innovative compared to what? Haskell? Unfinished betas like Rust? Some language 100 people use (plus 1-2 banks and a couple of universities)?

This is a language that will jump into the top ten of most-used languages in a year or so, just because it's used in a hugely popular development platform. And it had to fulfil several constraints to achieve that.

>They could've hired some smart people from more software-oriented companies or something

Because Chris Lattner, the guy behind the infrastructure of tens of new languages and one of the most popular C/C++ compilers, is not smart enough, right? Or the team that created Swift?

And which company would that be? Go, for one, is like a 1980 language compared to Swift. And C# had 14 years to evolve, and didn't have the same functionality constraints Swift is handicapped with at all.

It is somewhat unfair to dismiss Rust as an unfinished beta in comments on an article that demonstrates how the Swift compiler isn't even up to beta standards.

Swift is a nice language, for sure. It picks up the nicest features of many other languages and doesn't trade that off with many annoyances (from what I've seen so far). It also appears to be a very accessible language. It breaks with older languages where those older languages are bad (eg "==" vs "===") but mostly sticks to things that work and are familiar. I think these criteria are important for what I consider "good language design".

Swift is not in any reasonable sense an innovative language. There is really nothing in there that is new. Now you can get worked up about that, or just accept that innovativeness is really far less important (or perhaps even undesirable) than "good design" for Swift. Especially given its unique position.

>It is somewhat unfair to dismiss Rust as an unfinished beta in comments on an article that demonstrates how the Swift compiler isn't even up to beta standards.

I agree, didn't intend to dismiss Rust per se -- I like Rust.

Just wanted to give a perspective. Swift is in beta (and buggy), but Rust (if the OP compared it to that), isn't even syntactically stable.

Based on what we know, Swift will be in a (pretty much) stable condition come Fall, for the OS X release.

Given the constraints, it's certainly very innovative.

Just to be clear: innovative for Apple development community or innovative generally? Please list some/all innovations, I would be happy to reconsider.

Apple GUI development is obviously a major constraint. Another is: no GC, in order to avoid collections happening whenever they want and thus making smoothness of the UI impossible.

I still claim there isn't at this moment any more innovative language that can provide what Swift provides, on any GUI platform you care to observe: meaning that level of support for the platform's native GUI, with that performance and elegance.

Filling a niche is not being innovative. It's totally fine, and it might be a big deal, but if you take the language by itself it doesn't bring anything new to the table. But I might be wrong, so feel free to point to an innovative language feature.

> meaning that level of support for the platform's native GUI with that performance and elegance.

Seems like that criterion would bias the choice of most innovative language quite heavily towards languages developed for/by Apple, no?

It's the most innovative performant language for any GUI. There are maybe more innovative but less usable languages. If nothing better exists for any other platform, that's certainly not Apple's fault.

Wow, when I went to bed this comment was at 2 karma. Now it is at a negative (-3) and so are all the comments in this chain of replies that was critical of Apple's product, when before they were not greyed out. Fancy that. :)

Suggesting anything "critical of Apple's product" automatically gets voted down just makes me roll my eyes.

What if it's because the claims in the mentioned comments were actually unsubstantiated? Read the article linked by aidanhs:


From the article:

"no, your experience with server-side programming does not adequately prepare you to “think small” and correctly reason about mobile performance."

> What if it's because the claims in the mentioned comments were actually unsubstantiated? Read the article linked by aidanhs:

> https://news.ycombinator.com/item?id=7898305

Red herring. aidanhs's reply was to someone who genuinely asked why garbage collection is problematic. The comments I'm talking about have not mentioned garbage collection, just questioned what innovative really means. I guess you then tried to infer that they thought garbage collection vs. reference counting doesn't matter, i.e. isn't innovative? I find that to be an unfair inference. They might just as well think that something like that has already been done/is being done, e.g. with Rust and how it relegates any automatic memory management to opt-in libraries.

So it seems to be a disagreement about what is really worthy of being called innovative, not about any technical merit, as the comments that I'm talking about have not questioned that.

> From the article:

> "no, your experience with server-side programming does not adequately prepare you to “think small” and correctly reason about mobile performance."

Straw man, unless you somehow know that all of these people only have experience with server-side programming.

I see the source of your confusion now. To me innovation means something absolutely new, not re-applying/combining prior art. Wikipedia agrees: "Innovation differs from improvement in that innovation refers to the notion of doing something different rather than doing the same thing better."

Apple is doing something different and new compared to all available languages capable of producing fast, non-stuttering GUI-based applications that work smoothly even on smartphones and demand minimal resources. And that's not a small feat by any comparison.

It's like you'd read about the first usable jet-powered car and then claim "but it's not innovative, there were jet powered planes already."

You're talking incredible nonsense. I can create fast, non-stuttering GUI applications on phones with minimal resources (talking <$100 phones here) using F#. F# is far more feature-rich than Swift is or ever will be. I have been able to do so ever since Windows Phone first came out. But that's not something F# does; rather, Windows Phone happens to be very good at non-stuttering GUI-based applications that work smoothly even on smartphones with minimal resources.

Don't pretend iOS' fluidness is some merit of Objective-C or Swift, because it is a merit of the operating system.

Do read the article here linked by aidanhs:


then complain to all the authors who measured all that they measured (a lot of links and diagrams there) instead of complaining to me. It will still be an attempt to confront all the real measurements with the anecdotal evidence of your one particular case, unless you manage to disprove all the claims listed there.

Especially if you manage to disprove this:


You'll be celebrated as the biggest genius of our times.

How is memory usage relevant to creating fluid/responsive applications? How is JavaScript DOM management even comparable to native UIs where the UI thread runs with realtime priority in native unmanaged code? And when did we stop talking about programming languages and start talking about runtimes and operating systems?

You were talking about a language's capabilities for creating fast, responsive UIs on limited hardware. There are no language features that contribute to that; it is the operating system and runtime that do. Are these things well done on Apple systems? Yes, absolutely. But that has no relation whatsoever to the discussion of whether Swift is an innovative language or not. Apple could have bolted Common Lisp, Fortran or Rust onto that runtime, and that would have no relevance at all to the discussion about the innovativeness of those languages.

Windows Phone also has a good runtime, despite running various garbage-collected languages. Fluid apps are created for it all the time, in various languages, on low-memory systems. But that doesn't make Visual Basic an innovative language.

> How is memory usage relevant to creating fluid/responsive applications?

Mobile phones are generally memory constrained as compared to desktop computers, because memory takes space and power. In addition, their memory bandwidth is usually _extremely_ constrained as compared to desktop computers, making big copies, etc, expensive.

> How is memory usage relevant to creating fluid/responsive applications?

Unless there is an understanding of that, there's no point in arguing further. The whole mentioned article explains, among other topics, exactly that.

You're showing some incredible arrogance here by not addressing my other points. You fail to address the point how specifically the Swift language (not the Swift runtime) is so innovative that it enables fast apps on resource constrained devices. Because I can name dozens more languages that could do the same with native access to the Apple runtime.

> I can name dozens more languages that could do the same

How about naming one that actually does the same now, with native access to any resource-constrained GUI platform? F# certainly can't avoid the graph quoted: the y axis is the slowdown as soon as there isn't enough memory. That is the point you yourself claim you don't understand, so how can we discuss anything further? I actually wrote my comments assuming that the people discussing would understand the issues, which I assumed to be well known and thoroughly documented, among other places, in Drew Crawford's article.

@acqq Don't hang me up on a figure of speech. It's frustratingly rude. I do understand how memory management relates to fluid applications. I don't understand how that point relates to Swift's innovativeness, and you give the impression that you don't either. Please at least have the courtesy to show how the Swift language has an innovation that allows for responsive UIs. There are dozens of languages that are or can be reference counted.

There is absolutely nothing in the software world (or anywhere really) that meets your criteria for innovation then. I guarantee that absolutely anything you consider innovative fails the test you just described.

> To me innovation means something absolutely new, not re-applying/combining prior art.

You may have considerable difficulty finding a language that anyone uses that is innovative, in that case. Most new programming language concepts are explored in research projects, and _then_ incorporated into languages.

I successfully got the Functor example to work like this:

    protocol Functor {
        typealias T
        typealias FunctorResult
        func map<P>(mappingFunction: T -> P) -> FunctorResult
    }

    extension Dictionary : Functor {
        func map<P>(mappingFunction: ValueType -> P) -> Dictionary<KeyType, P> {
            var newDict: Dictionary<KeyType, P> = [:]
            for (key, value) in self {
                newDict[key] = mappingFunction(value)
            }
            return newDict
        }
    }

It's hard to implement generic functors as a protocol without either runtime code generation or generalized dynamic dispatch.

The typical efficient implementation of a protocol (aka an interface) is a vtable - an array of pointers to functions. When you instantiate a generic in a statically compiled language, that usually clones the body of the generic thing (method or type) with references to the generic parameters replaced with the type arguments; and the new, cloned body is handed off to codegen.

But if you instantiate via a protocol reference, the compiler doesn't statically know the implementation of the thing you're trying to call - it can't see the body of code to clone. It's an indirect function call through a variable, and without a restricted language, analyzing it quickly runs into the halting problem.

With runtime code generation, the problem can be solved with sufficient magic - the runtime can rewrite as necessary. That's how .NET implements this.

Alternatively, if the body of each generic method is compiled using some form of dynamic dispatch - e.g. every operation on a value whose type is a generic parameter is done via a table of function pointers, and this table is passed into the generic method at the point of every call - then it can work, at some cost in speed. Haskell works like this, AIUI.

Java does it even more simply, with full-fat dynamic dispatch - all generic parameters turn into Object, and method bodies get runtime casts inserted as necessary.
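The dictionary-passing strategy described above can be hand-rolled as a sketch in Swift (all names here are made up): instead of cloning the generic body per type, the operations on the unknown type arrive as an explicit table of functions.

```swift
// The "dictionary": a table of operations for some type T,
// passed to the generic function instead of being compiled into it.
struct ShowDict<T> {
    let describe: (T) -> String
}

func render<T>(_ items: [T], _ dict: ShowDict<T>) -> [String] {
    // Every operation on T is an indirect call through the table.
    return items.map(dict.describe)
}

let intShow = ShowDict<Int>(describe: { "int(\($0))" })
let result = render([1, 2], intShow)  // ["int(1)", "int(2)"]
```

This is essentially what a compiler emits for unspecialized generics: the witness table is the `ShowDict` value, passed implicitly.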

That doesn't actually work, because you're limiting the return type of the method in the implementation of the functor. So by implementing that protocol, you could have one mapping from Int to String (say), but not another from Int to Double.

Put another way, you need to specify FunctorResult at implementation time, and the whole point is to only have to specify it at call time.

This code works:

    let d = [1: 1, 2: 2, 3: 3]
    let intToInt = d.map { $0 + 1 }
    var IntToString = d.map { String($0) }

The code works, but the type of the generic is not guaranteed at compile time. FunctorResult could be anything at all. There is no compile-time obligation for the code to return a Dictionary<KeyType, P>.

The more canonical example of a Functor is really the Optional. The mapping method for an optional looks like this:

  func fmap<B>(f: A -> B) -> Optional<B> {
    switch self {
    case Some(let a):
      return Some(f(a))
    case None:
      return None
    }
  }
However, with your protocol, I can define the mapping function for the Optional as:

  func fmap<B>(f: A -> B) -> B[] {
    switch self {
    case Some(let a):
      return [f(a)]
    case None:
      return B[]()
    }
  }
This would pass the type-checker, but is not what a Functor does.

Some folks are working on a library for this already: https://github.com/maxpow4h/swiftz

Yup, we have a "Functor": https://github.com/maxpow4h/swiftz/blob/master/swiftz/Unsoun... It requires constructing a "Functor" value manually; otherwise it works fine. You can still write functions over an A in Functor. The Result enum bug presented can be worked around by boxing the type, as in https://github.com/maxpow4h/swiftz/blob/master/swiftz/List.s...

A workaround exists for the third problem: remove the default argument. If you want an empty init, provide one.

Sure, it's buggy, but you can express these programs.

Thanks for the Box suggestion; I'll try it and see how it works for me. I wouldn't have thought of that workaround.

I don't understand what a functor is or why an array is a functor. Can anyone please explain? I read the Wikipedia article on category theory and some Stack Overflow questions, and I'm still not sure I understand why an array is a functor. It seems some languages have their own meaning for what a "functor" is, confusing the issue.

My initial guess is that a functor is just sort of like a function that casts, or does something like return an interface type from a type that implements the interface or a base class. Is that right? I still don't understand why an array is a functor though; I'd imagine it would at least have to be a function?


Actually I think he's talking about Haskell's version of an iterable, and it would then make sense to call an array a functor.


Why is that called a functor instead of an iterable or enumerable though?

A functor is a thing that can be mapped over. For example:

    map abs [-1, 2, 3] => [1, 2, 3]
Arrays are the simplest example, and at first glance a functor looks like an iterable. However functors retain shape whereas iterables don't. For example, if we had a tree (represented visually):

    oak = -1
          / \
         2   3
If we used oak as an iterable, we would lose the structure of the tree:

    map abs iter(oak) => [1, 2, 3]
However, if the tree belongs to the functor typeclass (i.e. implements the functor interface), then:

    map abs oak => 1
                  / \
                 2   3
The alternatives to functors are:

1. mutate the existing data structure

2. copy a new one and then mutate (two passes)

3. create a new one while mutating (one pass, increased complexity / bugs)
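To make the functor-style alternative concrete, here's a minimal sketch in Java (class and method names are mine, for illustration): map builds a fresh list in a single pass and leaves the original untouched, which is exactly the property the three alternatives above give up.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Functor-style map for lists: one pass, no mutation of the input,
// same length and order in the output.
class MapExample {
    static <A, B> List<B> map(Function<A, B> f, List<A> xs) {
        List<B> result = new ArrayList<>(xs.size());
        for (A x : xs) {
            result.add(f.apply(x)); // apply f to each element in turn
        }
        return result;
    }
}
```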

>However functors retain shape whereas iterables don't.

Wow. Such a simple sentence yet this is the first time I have read it, and it makes the concept much more clear than hundreds of articles before did. Thank you!

I wish all FP features would be explained so simply.

You can build a bit of mathematical intuition for the concept now that you get the basic idea. Try and work out from the functor laws why functors preserve shape. The laws are very straightforward:

    map id c = id c
(Mapping the identity function is the same as simply applying the identity function -- or with a little more category theory, the functor maps the identity function in the base category to the identity function in the functor's category)

    map f (map g c) = map (f . g) c
The second law is also simple -- mapping one function over the container and then mapping another function is exactly the same as mapping the composition of the two functions over the container. This law is the basis of stream fusion, a very important optimization that allows you to take two traversals of a container and turn them automatically into just one traversal.

The preservation of structure follows from just these two laws and parametricity (The element type of the containers is generic and therefore unknown, this greatly restricts what operations are available on the elements of the container. You can't, for example, conjure up a new value of the element type to insert.). I strongly recommend trying to figure out how.
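The two laws can be spot-checked mechanically on a concrete functor. A small sketch in Java (names are mine), using lists as the container:

```java
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

// Spot-checking the functor laws on Java lists.
class FunctorLaws {
    static <A, B> List<B> map(Function<A, B> f, List<A> xs) {
        return xs.stream().map(f).collect(Collectors.toList());
    }

    // Law 1: map id c == id c
    static boolean identityLaw(List<Integer> xs) {
        return map(Function.identity(), xs).equals(xs);
    }

    // Law 2: map f (map g c) == map (f . g) c
    static boolean compositionLaw(List<Integer> xs,
                                  Function<Integer, Integer> f,
                                  Function<Integer, Integer> g) {
        return map(f, map(g, xs)).equals(map(g.andThen(f), xs));
    }
}
```

Checking the laws on sample data is of course not a proof, but it's a useful sanity check when you write a new functor instance.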

Though if you take this intuition too far it tends to break down, for instance IO and ST both define functor instances but it doesn't really make sense to talk about a functor over IO preserving shape.

I get it now, thank you for this great explanation, and all the other explanations as well. I think it warrants its own wikipedia page.

The name "Functor" can be confusing; it's a thing you apply a function to, not a function itself.

A Functor is a type t parameterized by some other type a, such that if you have a function from type a to type b, you can apply that function to a t of a and get a t of b.

Most commonly, a functor will be a container of values of type a, and the mapping consists of applying the function to each value, resulting in a container of values of type b.

Other common functors include a computation that returns type a (use composition to apply the function to the result to get a computation that returns type b), or a possibly null pointer to type a (map null to null, map pointer to a to pointer to b).
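The "possibly null pointer" case is small enough to sketch directly (a hypothetical `Nullable` helper for illustration; `java.util.Optional.map` plays the same role in real Java code):

```java
import java.util.function.Function;

// The nullable functor described above: map sends null to null and
// applies the function to a non-null value.
final class Nullable {
    static <A, B> B map(Function<A, B> f, A value) {
        return value == null ? null : f.apply(value);
    }
}
```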

In an alternate universe where functional programming was invented by programmers rather than mathematicians, map would probably be called something like convertWith and a functor would probably be called something like a convertible.

If you think of a function as being a way of converting from one kind of thing to another, then a functor is an object which has a number of things which you can convert to some other kind (a number which may be one or zero), using a method of your choice, and map is the name of the method you use to do that.

In Java we would call it "Mappable". A Mappable interface would have one method: map(). It hasn't been all that useful until recently due to lack of closures, but perhaps it would make sense in Java 8?

"Functor" is a terrible name that only a mathematician could like.

Just be thankful they didn't name it "Carnap" after the guy who introduced the idea. At least "functor" is moderately descriptive.

Remind me: can you declare a Java interface with a method signature that returns the same type as the class that eventually implements the interface? You wouldn't want the map method on mappable to simply return another Mappable, because that could be any other Mappable (i.e. a CustomList could implement map to return a CustomTree, when the functor concept requires it to return another CustomList).

You can do that partially. Because of the sub-classing and covariant return-types, I think you cannot prevent a CustomCollection from returning a CustomList. The definition would be:

  public interface Mappable<T extends Mappable<T>> {
      T map();
  }

I would bail on type checking and use an assertion. It's not reasonable to try to statically check everything in Java that you could in some other languages.

You can't do it directly, but there is a standard idiom for doing something like that - define the interface like this:

  interface Functor<A, R extends Functor<A, R>>
And then in the implementing class, bind R to the implementing class:

  class Option<A> implements Functor<A, Option<A>>
However, this doesn't let you write an actual functor. Try to write the map method on it:

    class Option<A> implements Functor<A, Option<A>> {
        private final A value;
        public Option(A value) { this.value = value; }
        public <B> Option<A> map(Function<A, B> fn) {
            // er, i want to return an Option<B>, not an Option<A>
        }
    }
The problem is that you don't want to return an instance of the type of the receiver, you want to return an instance of a similar type with a different type parameter (Option<B> rather than Option<A>).

You would be absolutely fine if your functions were functions from values of some type to other values of that type (eg functions from integers to integers), because then A = B and you can get away with this. But there's no way to extend this hack to let you return an Option<B> from map.

The closest you can get in Java is probably this:

    interface Functor<A, R extends Functor<?, R>>

    class Option<A> implements Functor<A, Option<?>> {
        private final A value;
        public Option(A value) { this.value = value; }
        public <B> Option<?> map(Function<A, B> fn) {
            return new Option<B>(value != null ? fn.apply(value) : null);
        }
    }
There, you leave the element type of the returned functor undefined, as a wildcard, which lets you slip an Option<B> out as a return value.

The problem with that is that it's useless. Given:

        Option<String> x;
        Function<String, Integer> fn;
Then mapping the function over the option gives you:

        Option<?> y = x.map(fn);
An Option<?>. Which is no use to man or beast, because it's lost the type information.

What you really want to be able to write is:

  class Option<A> implements Functor<A, Option<_>>
Where the underscore is borrowed from Scala to mean "nothing to see here, move along please", and leaves that reference to Option in its unbound form. And then have a way of binding it right in the definition of the map function. But that's higher-kinded types, and Java doesn't have that.

If you were absolutely desperate to do this in Java, like if terrorists burst in and put a gun to your head and told you to do it, you could restore the type information by pebble-dashing the functor with some simple reflection:

    interface Functor<A, R extends Functor<?, R>> {
        <B> R map(Function<A, B> fn);
        <B> Functor<B, R> as(Class<B> b);
    }

    class Option<A> implements Functor<A, Option<?>> {
        // other stuff as above
        public <B> Option<B> as(Class<B> b) {
            return new Option<B>(b.cast(value));
        }
    }
Which, given the above definitions of x and fn, lets you write:

        Option<Integer> y = x.map(fn).as(Integer.class);
And even the same thing but typed as functors, to show you there's nothing up my sleeve:

        Functor<String, Option<?>> fx = x;
        Functor<Integer, Option<?>> fy = fx.map(fn).as(Integer.class);
And as a separate function which contains no mention of the concrete functor class anywhere (although it does still need a type token for the result functor's parameter):

    private <A, B, F extends Functor<?, F>> Functor<B, F> applyAbstractly(Functor<A, F> fx, Function<A, B> fn, Class<B> b) {
        return fx.map(fn).as(b);
    }
But to be honest, all of this is a bit like trying to carve a fine wooden sculpture with a chainsaw. If you want to do this, don't use Java. If you want to use Java, don't do this.

What discipline did the word "map" come from? ;) Is "map" even descriptive? In a programming concept, the first time I heard it I thought it was referring to the data structure of the same name, which is pretty different.

Yes, good point. I suppose replaceEach() would be a more descriptive name, so the interface might be HasReplaceEach. It's not as easy to talk about, though. Still better than "functor".

(The function being passed in to map() is an alternate way to represent a very large map, in a mathematical sense.)

Exactly. We'd only call the interface 'Mappable' if the method was called 'map', and that's something that's come from mathematical terminology.

(I'd probably have called the 'Map' class 'Dictionary', if that hadn't already been taken!)

I'd avoid 'replaceEach', because it sounds like it mutates the receiver. Maybe 'withEachReplaced'? Way clunky. In Smalltalk, this method is called 'collect', which is better than 'map', to my eyes, but still not all that obvious [1].

[1] Smalltalk has the names collect, inject, select, and reject for its map, foreach, filter, and complementary filter methods, because of a song: http://smalltalkzen.wordpress.com/2011/02/02/arlo-guthrie-an...

The Smalltalk version of foreach() is called #do:. #inject: is like reduce().

That blog post inspired a fun discussion in the Smalltalk community on what the semantics of #infect: and #neglect: should be!

Mathematicians prefer obscure technical terms to misleading and restrictive metaphors, because obscure terms force you to refer solely to (and eventually internalise) a precise and abstract definition.

I think this is the case for functors, because I've yet to see a metaphor for them which isn't misleading or restrictive. "replaceEach", for instance, would confuse me. I like working with parser combinators, where parsers are functors. If I replaceEach a parser, what have I replaced? The results of the parse? The parser hasn't been run on an input yet. Maybe it never will.

If I replaceEach a promise, what am I replacing? The result of the promise? Maybe the promise will be forever blocked, or maybe I'll cancel it.

If I replaceEach a continuation, what am I replacing?

> because obscure terms force you to refer solely to (and eventually internalise) a precise and abstract definition.

Unless some C++ people decide that “functor” is a wonderful name for a stateful function-like thing which somewhat resembles closures/lambdas but has nothing whatsoever to do with the functors from functional programming.

Well, yes, mathematicians like to be very precise about exactly what a very general abstraction entails, to the point where you are being very precise about how two almost entirely different concepts have almost but not quite nothing in common.

These fine distinctions don't belong in code meant for a general audience. Instead, don't try to generalize so much. It's not important (and in fact it's confusing) that replaceEach can be generalized to work with a parser. Parsers can have a different function with its own name and nobody needs to know that it's sort of the same concept.


More please !

Also guys, please check this new submission.


Alice and Bob play a cryptographic tetris game. Some french phd bloke has written it.

Would you please stop putting "/word" at the top of your comments? Gimmicks like that don't work here.


It is a pattern, yes. Let me show you a pattern that I have observed in Hacker News and forums since the demise of USENET.

USENET had an elegant way of ignoring without hellbanning, which the negative karma has done to me now.

Would you please study the history of Stalinism ?

Let me summarise it for you.

Ideology -- pg.

Yeah, Marx is a good guy too.

It is made of dictators and the inner ring -- Admins and Karma users like you. These people have special "powers" of punishment.

Admins have far greater powers of purging and renaming things.

Then the party cadre -- people who submit and actually keep the things in motion. They are usually not aware of the "inner ring", users like you. They want to help the lay people.

In fact no one even knows who is in the inner ring. The inner ring is clueless of the Admins.

Then there are lay users like me, who lurk around and want to help but are generally punished.

If you think I am shitting you consider the rate of new users becoming high karma users in HN over the years.

You can use d3 and it will go the top of HN.

Your pedantry is just arbitrary. There is nothing really substantial in it. Rather than judging on the basis of content, you are judging me on the "words" I use.

Gimmick: An ingenious or novel device, scheme, or stratagem, especially one designed to attract attention or increase appeal.

Why thank you, it is novel :)

Please understand the democratic roots of the word "forum" and take it seriously.

More precisely it is an endofunctor in the category whose objects are Swift types and morphisms are functions between Swift types. All this really means is that given a Swift type, say Int or String, there are Arrays of Int or String, and given a function like showInt(a : Int) : String, a functor gives you a function from an Array of Int to an Array of String (in this case it would just apply showInt to each element of the Array and the result would be the Array of String), in such a way that if you have two functions (to keep with the example, say toUppercase(s : String) : String), the function you get by applying the functor to the composition (toUppercase . showInt) is the same as the composition of functions gotten from the application of the functor to the individual functions. In addition it is required that the identity function is mapped to the identity function.

Thank you for this. I find explanations that eschew category theory bizarre, this really cleared things up for me (having spent a lot of time with functors mathematically, and almost no time with them in a programming context).

I appreciate orbifold taking the time to reply, but I couldn't understand his answer at all, even after perfectly understanding what others have said. There are 3 sentences in that paragraph, the second of which is very large and difficult to parse.

Why do you get a composition of functions by applying a functor to individual functions? It seems like a functor is just something you can map over. When a functor is mapped over the end result is another functor. Not a function nor a composition of functions.

I was unsure what notation to use, so I used english instead. In Haskell a Functor f is characterized by a higher order function fmap :: (a -> b) -> (f a -> f b), which is required to satisfy the laws

fmap (g . f) = (fmap g) . (fmap f)


fmap id = id

where (.) :: (b -> c) -> (a -> b) -> (a -> c) is the function composition operator and id :: a -> a the identity function. Those are the laws the last two sentences try to phrase in english. In mathematics a functor F between categories C and D maps objects in C to objects in D and any morphism f: X -> Y in C to a morphism F f : F X -> F Y in D, in such a way that for morphisms f : X -> Y and g : Y -> Z in C one has F (g . f) = F g . F f and F id_X = id_{F X}. So you see Haskell and math notation are almost identical, although you can express the laws only as compiler rules in Haskell.

I think it gets worlds better with diagrams. I'll try to revisit this and provide some, when I've a bit more time.

> Why do you get a composition of functions by applying a functor to individual functions? It seems like a functor is just something you can map over. When a functor is mapped over the end result is another functor. Not a function nor a composition of functions.

A function is a Functor. ;)

If you have a function f and a function g, fmap f g = h, and h is a function. A more common way to write this is function composition of f and g, which can be written: f . g. The dot (.) is function composition. Which means that fmap f g = f . g, which means that fmap = (.).
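In Java terms this is just `andThen`/`compose` on `java.util.function.Function` (a sketch; the `fmap` name here is mine):

```java
import java.util.function.Function;

// For the function functor, fmap is composition: fmap f g = f . g,
// i.e. mapping f over a function g yields x -> f(g(x)).
class FunctionFunctor {
    static <A, B, C> Function<A, C> fmap(Function<B, C> f, Function<A, B> g) {
        return g.andThen(f);
    }
}
```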

> When a functor is mapped over the end result is another functor. Not a function nor a composition of functions.

But they were describing the case when the Functor in question is a function.

If something is Iterable, it's a Functor, but not the other way around. Iterable implies there's an order to the elements, for instance. It also implies that you could get ahold of the elements if you chose.

Types which are Functors may have orders and may provide access to the elements, but the Functor interface does not provide those means. This allows you to instantiate more restrictive types. For instance (in Haskell notation)

    data Pretend a = Pretend
is a data type with only one element (`Pretend`) that pretends to be a container. Consider the two types

    Array Int
    Pretend Int
You can still consider Pretend to be a Functor (the mapping function is just a no-op) but it certainly isn't iterable.
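The same `Pretend` type can be sketched in Java (names are mine): a one-value type whose map is a no-op. It satisfies the functor laws trivially, yet there is nothing to iterate over.

```java
import java.util.function.Function;

// A lawful functor with no elements: mapping any function over it
// just returns the single shared value.
final class Pretend<A> {
    private static final Pretend<?> INSTANCE = new Pretend<>();
    private Pretend() {}

    @SuppressWarnings("unchecked")
    static <A> Pretend<A> get() { return (Pretend<A>) INSTANCE; }

    <B> Pretend<B> map(Function<A, B> f) {
        return get(); // no elements, so there is nothing to apply f to
    }
}
```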

In Haskell, Iterable is called Foldable and effectively is the following interface

    class Foldable c where
      toList :: c a -> [a]
but `Foldable` is used because typically instead of converting it to a list you want to fold over the elements

      foldr :: Foldable c => (a -> b -> b) -> b -> c a -> b
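That fold signature translates fairly directly to Java (a sketch; the `Foldables` name is mine, and I specialize the container to `List` since Java lacks higher-kinded types):

```java
import java.util.List;
import java.util.function.BiFunction;

// A right fold over a list, mirroring (a -> b -> b) -> b -> c a -> b:
// combine each element with an accumulator, starting from init.
class Foldables {
    static <A, B> B fold(BiFunction<A, B, B> step, B init, List<A> xs) {
        B acc = init;
        // walk right-to-left to match the Haskell step order
        for (int i = xs.size() - 1; i >= 0; i--) {
            acc = step.apply(xs.get(i), acc);
        }
        return acc;
    }
}
```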

The term "functor" is badly overloaded in general -- almost every language that uses the term has its own definition. In C++, it's an object that can be called as a function (by virtue of defining 'operator()') [0]. In Prolog, it's the head and arity of a term [1]. In ML, it's a function from structures to structures [2].

[0] http://www.stanford.edu/class/cs106l/course-reader/Ch13_Func...

[1] http://www.swi-prolog.org/pldoc/man?predicate=functor/3

[2] http://en.wikipedia.org/wiki/Standard_ML#Module_system

Let's begin with the array. Imagine I have an array of characters:

var a = ['a', 'b', 'c']

I'd like instead to have the index of their position in the alphabet. That's another array of the same length that would look like this:

var i = [0, 1, 2]

To each element in a, there corresponds an element i, at the same index. And the operation we use is the same for each element:

f('a') = 0

f('b') = 1

f('c') = 2

But we could also want an array that gives us the next letter in the alphabet, like so:

var n = ['b', 'c', 'd']

Here the function is:

g('a') = 'b'

g('b') = 'c'

g('c') = 'd'

But we're very much doing the same thing. We call it mapping. So Arrays in Swift have a map function defined that works like this:

var i2 = a.map(f)

We're passing the function to apply to each member, and we get an array back, of the same length, with the result of doing so.

A functor is a generalization of that concept. It takes a type parametrized by another type, say Box<V>, and a function from the parametrized type to another type, say f: V -> W. Then, according to rules that are specific to Box, if you map f over Box<V>, you'll get a Box<W> out.

The key is that exactly how a Box<V> becomes a Box<W> is up to Box.
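A minimal sketch of that Box example in Java (the class is hypothetical, just to make the shape of the idea concrete):

```java
import java.util.function.Function;

// Mapping f over a Box<V> produces a Box<W>; Box itself decides how.
// Here its rule is simply "apply f to the contents".
final class Box<V> {
    private final V value;
    Box(V value) { this.value = value; }
    V get() { return value; }

    <W> Box<W> map(Function<V, W> f) {
        return new Box<>(f.apply(value));
    }
}
```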

> Why is that called a functor instead of an iterable or enumerable though?

Probably because 1) it is inspired by category theory rather than some interface/signature from another programming language 2) "Iterable"/"Enumerable" might be misleading.

2): It might make sense for things like arrays, where you can think of the function (fmap) as enumerating/iterating over the array and then, for each value in the array, uses the supplied function on it. But Functors aren't just things that you can iterate over; functions themselves are functors, and function composition is fmap. But how does the iteration-intuition make sense in this case? Other functors are more a kind of "box" than a collection that can be iterated over (for example: the Option/Maybe type). Sure, you can say that you can implement fmap on Maybe by iterating over the data that it holds, namely the single data that it holds, but that doesn't really say much.

I've had a very similar experience: I try to port non-trivial but still rather simple functional code, for example containing a couple of nested closures, and the compiler crashes. And it seems that the Xcode playground, the command-line compiler and the REPL all behave a bit differently. In one case the playground works but the REPL crashes; in another, the compiler works but the REPL has trouble figuring out types.

I have absolutely no experience implementing compilers, but to a layman these inconsistencies seem very odd.

Microsoft's work on C# compiler (Roslyn) is amazing: they offer compiler-as-a-library so the IDE, plugins and debugger use the same backend as the compiler itself. I wish at some point Apple could adopt a similar approach.

Hmm, Apple went there before Roslyn with Clang/LLVM (whose creator is the designer of Swift and works for Apple).

The idea behind LLVM was to have the various compiler stages and tooling be re-usable and plugin like, unlike the monolithic design GCC had.

And Apple already uses the same "compiler-as-a-library" approach, even for Objective-C, to implement the compiler, syntax highlighting, AST-based auto-completion, debugging, error-fix suggestions and other stuff.

I'm pretty sure that the case with Swift is the same. It's just that it's not stable yet, so you can get crashes at various stages of all those pipelines.

>The idea behind LLVM was to have the various compiler stages and tooling be re-usable and plugin like, unlike the monolithic design GCC had.

More specifically, the idea behind LLVM was "life-long program compilation and optimization" in stages at translation time, link time, install time, and runtime. The natural way to realize this was modular components at different levels working around a good serializable IR.

If your goal is a traditional once-and-done AOT compiler, you might be forgiven for architecting something more heavily coupled and interdependent like GCC, whose IR was an afterthought (by 15 years.) LLVM's focus has shifted somewhat, but those original designs created a kind of serendipitous foundation.

That's basically what drove the design of clang. No doubt the approach with swift will be similar.

Check out LLVM (http://llvm.org)

I'm pretty sure that's what they've always done, at least for the time they've used LLVM.

Just to point out, the last two sound like it's due to the current implementation (compiler) being new, so it isn't so upsetting at all.

Now, not supporting functors is odd, though.

Yeah, that's why I said the crashes gave me hope. It seems they are trying to support those use cases, but haven't worked out the kinks. I'm totally fine with that.

Why is it odd? Is there a historical precedent for some other language exposing higher kinded types this early in the lifecycle? (Prior to 1.0, that is?)

Ωmega, maybe? Is that what we're expecting of the new "normal" already?

It's not odd that Haskell didn't have them in 1.0, because they were not conceptualized until '93[1], after the Haskell report had been published in '90. They made it into the Haskell report 1.3 ('96)[2].

We should not be comparing the development of languages now to 20 years ago - there is an enormous amount we have learned, and these insights should be fundamental considerations when designing a new language. You can't just "tack" things onto an existing language and expect it to be elegant - tacking on leads to huge languages like C++.

And yes, we should expect it as "normal" for new languages, because it is what people have come to expect to have available - although some implementations leave a lot to be desired.

If we consider how C# implements Functors for example, we see that it requires special language support, where the compiler essentially pattern matches over your code, looking for a method named "Select", with a specific signature taking a generic type and a func. This implementation completely misses the point that Functors themselves, while useful, are not special - they are a specialization of a more general concept we have - typeclasses and higher-kinded polymorphism. C# also adds the ability to create Monads, Foldable, Comonads etc, using similar patterns - but you can't create your own typeclasses, and are left to the language designers to add a desired feature.

The decision to add them onto C# like this was made not without knowledge of this, but out of consideration of working with the existing language, which was designed without their consideration, hence, why they're a pretty ugly design compared to the simplicity of Haskell's implementation.

[1]:http://www.cs.tufts.edu/comp/150GIT/archive/mark-jones/fpca9... [2]:http://research.microsoft.com/en-us/um/people/simonpj/papers...

Haskell and Scala both seem to have survived adding it later. Rust won't have it by 1.0 either, and this never seems to come up in threads about Rust. I agree it will be a nice feature, if/when it comes. I just think Swift is being held to a higher standard because, well, Apple.

By the way, thank you for the timeline for Haskell. That's exactly what I was wanting to know.

> Rust won't have it by 1.0 either, and this never seems to come up in threads about Rust.

It does come up a lot in threads about Rust in other communities :)

We've discussed how HKT could integrate into the system and I think we have a pretty good concept of how it would work. But I would caution that uniqueness and low-level memory management can often throw a wrench into the common use cases you might think of for HKT, and functional features in general.

FWIW. map() already exists. It's a global function, rather than being a protocol method, and it uses overloading. So you can still define functions merely by defining a new overload for map() that operates on your type.

Sigh, I typed "functors", not "functions", which should make a bit more sense. Autocorrect changed that without me noticing.

After reading this, I tried to find out which versions of Haskell and Scala introduced higher kinded types. I'm sure it wasn't among the first features. Does anybody know the history of either?

AFAIK you can't do Functor in Rust yet either, and for the same reason.

It's amusing to see Apple cheerleaders defend Swift as being good enough for version 0.1 when supposedly Apple is the company that only releases products when they're complete/polished/etc. Apple themselves proclaim it's "Ready Today" on their marketing page as well. (https://developer.apple.com/swift/)

Apple does that with _consumer_ products (well, usually; please ignore Siri). They've historically been much more willing to ship developer stuff that's very rough around the edges; the iPhone OS 2.0 SDK shipped in a barely usable state, for instance, and Xcode 4 was a similar story.

Fair enough. :)

To me, the things I tried weren’t insane; they seemed like the obvious things to try.

These are obvious things to try, from the POV of environments like Scala. That's the wrong context from which to evaluate this language. Swift is a more modern language for iOS and is most usefully evaluated in that context. (Though it is also important to know what Swift is not, in language terms.)

Since Bolts/BFTask was mentioned, I'd like to point out that you can use the Bolts-iOS library in Swift. I added a pull request with the code examples in Swift here: https://github.com/BoltsFramework/Bolts-iOS/pull/37/files


I agree with what Chris Granger said in https://twitter.com/ibdknox/status/473912605350719488.

I think Rust is a good example of developing a language in the open.

That said it's great to see Bret's ideas see more implementations.

If you 'agree' with Chris Granger, perhaps you can name one of the mistakes.

Also - what good is Rust being developed in the open if it can't be used for production apps yet?

Because it's not expected to be developed in the open ever and will therefore never be useful for production apps?

> I love what the compiler crashes let me hope it will do

These rose-colored glasses are getting ridiculous.

By my reading the author made that statement not naively, but with full awareness of its slightly ridiculous nature. I think that whooshing sound was their joke going right over your head.

Author here. To be fully honest, it's somewhere between the two. Yes, I'm aware of the irony of the statement. But what I failed to make clear in the post is that the code in the last two examples passes the type checker. Where it fails is at the intermediate representation stage. So the grammar of the language supports the constructs but the backend hasn't caught up yet.

It's possible, of course, that Apple will modify the grammar to make these use cases impossible. But I would be very surprised. I can't blame anyone for accusing me of rose-tinted glasses until we know for sure, though.

An early beta compiler crashes! Oh, noes, Apple is doomed!

I think the point here is that if the compiler crashes, that implies that it is trying to do the thing, as opposed to saying "No, filthy programmer, I will not permit you to do that thing."

Swift seems like a nice lang, but I'm surprised that a proprietary, single-platform language is getting so much traction on HN.

Is it because Objective-C is as shitty as it seems at first sight (to devs coming from C-like langs)?

iOS is arguably one of the three top consumer software platforms (Web, Android, iOS) and a lot of people reading HN are developing for it. Personally, as a programmer, I would find Swift articles interesting and educating, even if I wasn't developing for iOS, as Swift has adopted a lot of concepts from modern statically typed languages.

Objective-C is far from being shitty. Your comment, on the other hand, is.

You are overreacting. I'm not saying it's shitty - but it SEEMS to be, at least for most Java/C# devs I know, AT FIRST SIGHT.

My hypothesis was: Objective-C is "not nice" (sorry, I'm not familiar with HN political correctness) therefore Swift feels like a true relief for iOS devs and thus this much traction.

It seems these days there are a lot of Java and C# focused people who are unfamiliar with what came earlier. They have a hard time making these sorts of comparisons.

To me the comparison that makes more sense is going from C to objc. Or alternatively, comparing objc and C++ (especially C++ as practiced in the 1990s, not the RAII or template patterns that emerged later).

Say you're looking at the landscape as it existed a while back, and you've decided to make language changes to C in order to add objects. On the one hand you have C++ as it existed then: a language that can't seem to figure itself out, that has a very complex syntax and lots of troubling situations you can walk into.

Then along comes objc. In contrast to C++, the language delta from C is tiny and easy to keep in your head. Especially before Apple started doing all this compiler-fu of recent years, it felt like just a few keywords bolted on, and most of the important additions seemed to be in the runtime library.

When I first saw objc it kind of clicked for me as the anti-C++, in a way that is truer to C and doesn't add a lot of complexity. At the time that seemed refreshing, though I think I see more good in C++ now that RAII and templates are more of a thing. (If only there existed a set of 2 people who could agree on what C++ features to use.)

Comparing one 80's language to another 80's language hardly demonstrates how great it is in 2014 ;)

Objective-C is a great language; at least a lot of us who have been developing in it for years think so. We're excited about Swift because although ObjC has modernized a great deal, there are limitations to how far it can be pushed. Swift's clean block syntax is a good example here. Objective-C, if you're not turned off by the syntax, can be a joy to program in.
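To make the block-syntax point concrete, here is a rough sketch of the comparison (the Objective-C version is shown in a comment; the Swift version uses the current `sorted(by:)` spelling rather than the 2014-era free function, and the variable names are illustrative):

```swift
// Objective-C block syntax, for comparison (as it would appear in ObjC source):
//   NSArray *sorted = [names sortedArrayUsingComparator:^NSComparisonResult(id a, id b) {
//       return [a compare:b];
//   }];

let names = ["Chris", "Alex", "Barb"]

// Swift closure, full form: explicit parameter and return types.
let sortedFull = names.sorted(by: { (a: String, b: String) -> Bool in
    return a < b
})

// Trailing closure with shorthand argument names - the "clean" form.
let sortedShort = names.sorted { $0 < $1 }

print(sortedShort)  // ["Alex", "Barb", "Chris"]
```

The Swift shorthand is what people usually mean by "clean block syntax": no caret, no spelled-out `NSComparisonResult`, and types inferred from context.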

It doesn't even have namespaces. ARC basically made "struct" obsolete (goodbye easy literal data instantiations), and it doesn't have a lot of the things programmers have come to expect even in compiled languages these days. Now don't get me wrong, for a 30 year old language it's pretty good, but calling it great may be a little overstated.

I don't mind the syntax, but I don't think it's the nicest language out there feature-wise.

I would be surprised if Apple did not BSD license the source code for Swift by the end of the year, given that they have been almost single-handedly sponsoring the framework (and employing the main developers) it's built on, LLVM, for years now - keeping it open source when there's no license requirement for them to do so.

You can expect reasonable behaviour on HN unless it has something to do with Apple..

"She" the programmer. Seriously this sounds ridiculous and is the outcome of the shameful male "betafication" women try to establish these days. I say "he" the programmer.

Okay, enough bullshit, and to the point: Swift is a preview. It will get better and better from day to day. It isn't there to mimic Scala or Lisp paradigms. It is there to get the job done, and it will get the job done.

That's just unnecessary political correctness. It's very hip amongst a very special crowd of people. Using multiple genders somehow fixes the gender imbalance in the craft.

> I say "he" the programmer.

Well, aren't you wonderful? Why, you'll probably win the Nobel Prize for Protecting the Poor Oppressed Men!

Alternating he and she for an abstract person is quite common amongst people who don't use singular they.

Exactly. How will wee poor, oppressed male programmers survive as only 80 or 90% of the field?

For those wondering about the weird "betafication" thing, I recommend reading this fine blog: http://wehuntedthemammoth.com/

It's a trope of the self-anointed "men's rights" crowd, from people who are in a continual hysteria about how society is stopping them from being proper alpha males. Never quite getting that passive-aggressive anonymous-coward Internet message board comments are not how chimpanzee males work out their dominance issues.
