
Smashing Swift - afthonos
http://nomothetis.svbtle.com/smashing-swift
======
archgrove
Compiler bugs really don't worry me - despite the name, this is really an
alpha release. However, I ran up against the lack of generic protocols myself
today (for those with access, there's an interesting debate at
[https://devforums.apple.com/thread/230611?tstart=0](https://devforums.apple.com/thread/230611?tstart=0)).
It seems like a deliberate design choice, but one I'm not really sure about.

The primary vibe I'm getting from Swift is _pragmatism_. They had various red
lines to follow: must be as fast as Objective-C; must have a minimal to non-
existent runtime (so AOT compilation all the way); mustn't have garbage
collection; must interoperate with Objective-C/normal C with ease; etc. This
has led to a language which, despite being at version 0.1, already has a fair
few oddities and warts. For example, the "almost but not quite immutable"
arrays seem to be an artefact of absolutely wanting array performance to be
equal to standard C. Typealiases vs. parametric protocols seem to reflect a
desire to have as much type information fixed as early as possible.

It seems that rather than design a language where they might not have
solutions for their red lines up front, they've designed a language where they
can provide them. Rather than make theoretically "better" choices that the
compiler can't deliver perfectly in V1 but could in V4 or V5, they've gone for
"yeah, we can almost certainly ship that". Given that this is essentially Apple's
private language, I assume they'll be quite aggressive about deprecating
features and moving people onto the "better" solutions (high-kinded types,
richer immutability etc) when they can deliver them whilst meeting the core
goals. It's an interesting approach - most other languages are happy to take a
few years to get going, whereas Swift seems to want to go from 0 to 60 in 4
months. It reminds me most of C# 1.0, but with harder restrictions on what
they've been told to deliver. At the moment, it's interesting, and a big leap
from Objective C. By V3, it might be "excellent".

~~~
japhyr
I have only a passing familiarity with Objective C. Why is one of the "red
lines" having no garbage collection?

~~~
archgrove
Apple did add garbage collection to Objective-C, but it only lasted a few
years before being pulled. Two lesser reasons are that it's hard to
integrate with the otherwise trivial C/C++ interop, and that the frameworks
just weren't designed for it.

To me, the more important answers are deterministic destruction and the
absence of GC pauses. All of Objective-C is reference counted these days, with
retains/releases inserted by the compiler (so all you have to do as a
programmer is break cyclic references with weak references). Thus, you know
exactly when an object will die and have its dealloc method called. You're
also sure you'll never end up in a situation where memory pressure causes
system hitching due to GC. Given that a key platform for Objective-C is iOS
(low memory, "low" CPU), and that Apple's trademark tends to be fluid UI,
avoiding these problems is really helpful.
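A minimal sketch of that determinism in modern Swift syntax (the `Node` type and the logging are invented for illustration): dropping the last strong reference runs `deinit` immediately, and the `weak` back-pointer is what keeps the pair collectible at all.

```swift
// Sketch (modern Swift, hypothetical Node type): ARC frees an object
// the instant its last strong reference disappears, so deinit runs at
// a predictable point in the program.
var log: [String] = []

class Node {
    var next: Node?       // strong reference
    weak var prev: Node?  // weak back-reference: breaks the retain cycle
    let name: String
    init(name: String) { self.name = name }
    deinit { log.append("\(name) freed") }
}

var a: Node? = Node(name: "a")
var b: Node? = Node(name: "b")
a!.next = b               // a retains b
b!.prev = a               // weak: b does not retain a

a = nil                   // a's count hits zero: "a freed" logged here
b = nil                   // now b's count hits zero: "b freed" logged here
```

With a strong `prev` instead of a weak one, neither object would ever be freed; that's the one cycle-breaking job left to the programmer.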

~~~
mikeash
To be fair, reference counting isn't strictly deterministic either. The moment
you mix in multithreading or call out to code you don't control and whose
refcounting behavior could change over time, you can no longer know when an
object will die.

As for memory pressure, reference counting does generally behave better, but
in some situations you can end up doing much worse. If you manage to generate
a lot of autoreleased objects (harder to do these days with ARC, but still
possible) then you can end up getting your process terminated due to the
memory pressure of should-be-dead-but-not-yet-freed objects.

I think that the C interop problems are the real killer. The other problems
with garbage collection can potentially be solved, but as long as C is in the
picture, you're doomed to a sort of halfway land where none of the good GC
techniques are available to you.

~~~
baddox
I'm not very familiar with the implementation of programming languages, so
maybe the terminology is subtly different, but how are either of those
situations not deterministic?

~~~
bmm6o
As others have pointed out, it's deterministic in the sense that the memory is
reclaimed when the ref count goes to zero, and the ref count is always well-
defined. I would have used the word "predictable", in that `delete foo` will
always release memory, but `foo.Decrement()` may or may not, and local
reasoning may not suffice.
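That distinction can be shown in a tiny Swift sketch (the `Resource` type and event log are invented): releasing a reference is always a decrement, but whether it frees anything depends on counts you may not see locally.

```swift
// Sketch: dropping one strong reference may or may not deallocate,
// depending on how many other references exist elsewhere.
var events: [String] = []

class Resource {
    deinit { events.append("freed") }
}

var first: Resource? = Resource()
var second = first            // reference count is now 2

first = nil                   // decrement to 1: nothing is freed
events.append("first dropped")
second = nil                  // decrement to 0: deinit runs here
```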

------
acqq
Still, I look at Swift through low-level-colored glasses: I don't care whether
this or that category theory aspect is covered. I don't care if you can
imagine some nicer syntax for something (everybody can). What I really care
about is how well and how often Swift can be used instead of C, without being
slower or using more resources.

I know, <your-favorite-language-almost-nobody-uses> is nicer. But for me, if
it has GC, it's a no-go. I see Swift as something where I win when I can use
it instead of writing C, not as something where I lose, since most of the
problems I solve by writing software can't be solved in <some other
language>.

Swift is an impressive language for v0.1, very pragmatic in the aspects I care
about. And there's a good chance it will be even more expressive in
a later version. Nice. Even as it is, it's much nicer than any of the other
options for the purposes for which it was designed.

~~~
bobbles
As a fairly inexperienced programmer, can someone please explain why everyone
is so happy there is no garbage collection going on?

Shouldn't that be something we would want?

(Is it because control over this allows us to do extra things?)

~~~
aidanhs
There's a helpful bit of writing about garbage collection at
[http://sealedabstract.com/rants/why-mobile-web-apps-are-slow/](http://sealedabstract.com/rants/why-mobile-web-apps-are-slow/)

It boils down to there being a big performance penalty on garbage collection
in memory-constrained environments.

~~~
wpietri
Wow, that's a great article. I'm currently using mobile web stuff (Ionic,
Angular, PhoneGap) to prototype some possible mobile apps. It's convenient,
but I've been worrying about the long-range performance issues. This gives me
a much more detailed way to think about it. That Apple got rid of garbage
collection is especially telling.

------
jmah
I successfully got the Functor example to work like this:

    
    
        protocol Functor {
            typealias T
            typealias FunctorResult
            func map<P>(mappingFunction: T -> P) -> FunctorResult
        }
        
        extension Dictionary : Functor {
            func map<P>(mappingFunction: ValueType -> P) -> Dictionary<KeyType, P> {
                var newDict:Dictionary<KeyType, P> = [:]
                for (key, value) in self {
                    newDict[key] = mappingFunction(value)
                }
                return newDict
            }
        }

~~~
afthonos
That doesn't actually work, because you're limiting the return type of the
method in the implementation of the functor. So by implementing that protocol,
you could have _one_ mapping from Int to String (say), but not another from
Int to Double.

Put another way, you need to specify FunctorResult at implementation time, and
the whole point is to only have to specify it at call time.

~~~
jmah
This code works:

    
    
        let d = [1: 1, 2: 2, 3: 3]
        let intToInt = d.map { $0 + 1 }
        var IntToString = d.map { String($0) }

~~~
afthonos
The code works, but the type of the generic is not guaranteed at compile time.
FunctorResult could be anything at all. There is no compile-time obligation
for the code to return a Dictionary<KeyType, P>.

The more canonical example of a Functor is really the Optional. The mapping
method for an optional looks like this:

    
    
      func fmap(f: A -> B) -> Optional<B> {
        switch self {
        case Some(let a):
          return Some(f(a))
        case None:
          return None
        }
      }
    

However, with your protocol, I can define the mapping function for the
Optional as:

    
    
      func fmap(f: A -> B) -> B[] {
        switch self {
        case Some(let a):
          return [f(a)]
        case None:
          return B[]()
        }
      }
    

This would pass the type-checker, but is not what a Functor does.

------
leorocky
I don't understand what a functor is or why an array is a functor. Can anyone
please explain? I read the Wikipedia article on category theory and some
Stack Overflow questions, and I'm still not sure I understand why an array is
a functor. It seems some languages have their own meaning for what a "functor"
is, which confuses the issue.

My initial guess is that a functor is just sort of like a function that casts
or does something like return an interface type from a type that implements
the interface or a base class. Is that right? I still don't understand why an
array is a functor though, I'd imagine it would at least have to be a
function?

Edit:

Actually I think he's talking about Haskell's version of an iterable, and it
would then make sense to call an array a functor.

[https://en.wikibooks.org/wiki/Haskell/The_Functor_class](https://en.wikibooks.org/wiki/Haskell/The_Functor_class)

Why is that called a functor instead of an iterable or enumerable though?

~~~
wting
A functor is a thing that can be mapped over. For example:

    
    
        map abs [-1, 2, 3] => [1, 2, 3]
    

Arrays are the simplest example, but an array also looks like an iterable.
However, functors retain shape whereas iterables don't. For example, if we had
a tree (represented visually):

    
    
        oak = -1
              / \
             2   3
    

If we used oak as an iterable, we would lose the structure of the tree:

    
    
        map abs iter(oak) => [1, 2, 3]
    

However, if the tree belongs to the functor typeclass (i.e. implements a
functor interface), then:

    
    
        map abs oak => 1
                      / \
                     2   3
    

The alternatives to functors are:

1. mutate the existing data structure

2. copy a new one and then mutate it (two passes)

3. create a new one while mutating (one pass, increased complexity / bugs)
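The tree example above can be sketched in modern Swift (the `Tree` type is hypothetical); the point is that `map` reproduces the node structure exactly rather than flattening it.

```swift
// Sketch: a binary tree whose map preserves shape, unlike iterating
// it into a flat array.
indirect enum Tree<T> {
    case leaf
    case node(T, Tree<T>, Tree<T>)

    func map<U>(_ f: (T) -> U) -> Tree<U> {
        switch self {
        case .leaf:
            return .leaf
        case let .node(value, left, right):
            // Rebuild the same structure, transforming only the values.
            return .node(f(value), left.map(f), right.map(f))
        }
    }
}

extension Tree: Equatable where T: Equatable {}

let oak: Tree<Int> = .node(-1,
                           .node(2, .leaf, .leaf),
                           .node(3, .leaf, .leaf))
let absOak = oak.map { abs($0) }   // same shape, values 1, 2, 3
```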

~~~
dscrd
>However functors retain shape whereas iterables don't.

Wow. Such a simple sentence yet this is the first time I have read it, and it
makes the concept much more clear than hundreds of articles before did. Thank
you!

I wish all FP features would be explained so simply.

~~~
acomar
You can build a bit of mathematical intuition for the concept now that you get
the basic idea. Try and work out from the functor laws why functors preserve
shape. The laws are very straightforward:

    
    
        map id c = id c
    

(Mapping the identity function is the same as simply applying the identity
function -- or with a little more category theory, the functor maps the
identity function in the base category to the identity function in the
functor's category)

    
    
        map f (map g c) = map (f . g) c
    

The second law is also simple -- mapping one function over the container and
then mapping another is exactly the same as mapping the composition of the two
functions over the container. This law is the basis of stream fusion, an
important optimization that allows you to take two traversals of a container
and automatically turn them into just one.

The preservation of structure follows from just these two laws plus
parametricity. (The element type of the container is generic and therefore
unknown, which greatly restricts what operations are available on the elements
of the container. You can't, for example, conjure up a new value of the
element type to insert.) I strongly recommend trying to figure out how.
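Both laws are easy to check on one concrete functor. A small sketch in Swift, using Array's built-in map:

```swift
// Sketch: verifying both functor laws on Swift's Array.
let c = [-3, 0, 7]

// Law 1: mapping the identity function changes nothing.
let identityMapped = c.map { $0 }          // == c

// Law 2: map f . map g == map (f . g)
let f: (Int) -> Int = { $0 * 2 }
let g: (Int) -> Int = { $0 + 1 }
let twoPasses = c.map(g).map(f)            // two traversals
let onePass = c.map { f(g($0)) }           // the fused single traversal
// twoPasses == onePass == [-4, 2, 16]
```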

------
dirtyaura
I've had a very similar experience: I try to port non-trivial, but still
rather simple, functional code, for example containing a couple of nested
closures, and the compiler crashes. And the Xcode playground, the command-line
compiler, and the REPL all seem to behave a bit differently. In one case the
playground works but the REPL crashes; in another, the compiler works but the
REPL has trouble figuring out the types.

I have absolutely no experience implementing compilers, but to a layman
these inconsistencies seem very odd.

~~~
danabramov
Microsoft's work on the C# compiler (Roslyn) is amazing: they offer the
compiler as a library, so the IDE, plugins, and debugger use the same backend
as the compiler itself. I wish Apple would adopt a similar approach at some
point.

~~~
coldtea
Hmm, Apple went there before Roslyn with Clang/LLVM (whose creator is the
designer of Swift and works for Apple).

The idea behind LLVM was to have the various compiler stages and tooling be
re-usable and plugin like, unlike the monolithic design GCC had.

And Apple already uses the same "compiler-as-a-library" approach, even for
Objective-C, to implement the compiler, syntax highlighting, AST-based auto-
completion, debugging, error-fix suggestions, and other things.

I'm pretty sure that the case with Swift is the same. It's just that it's not
stable yet, so you can get crashes at various stages of all those pipelines.

~~~
0x09
>The idea behind LLVM was to have the various compiler stages and tooling be
re-usable and plugin like, unlike the monolithic design GCC had.

More specifically, the idea behind LLVM was "life-long program compilation and
optimization" in stages at translation time, link time, install time, and
runtime. The natural way to realize this was modular components at different
levels working around a good serializable IR.

If your goal is a traditional once-and-done AOT compiler, you might be
forgiven for architecting something more heavily coupled and interdependent
like GCC, whose IR was an afterthought (by 15 years). LLVM's focus has shifted
somewhat, but those original designs created a kind of serendipitous
foundation.

------
noobermin
Just to point out, the last two sound like they're due to the current
implementation (compiler) being new, so they aren't so upsetting at all.

Now, not supporting functors is odd, though.

~~~
pohl
Why is it odd? Is there a historical precedent for some other language
exposing higher-kinded types this early in its lifecycle? (Prior to 1.0, that
is?)

Ωmega, maybe? Is that what we're expecting of the new "normal" already?

~~~
sparkie
It's not odd that Haskell didn't have them in 1.0, because they were not
conceptualized until '93[1], after the Haskell report had been published in
'90. They made it into the Haskell report 1.3 ('96)[2].

We should not be comparing the development of languages now to 20 years ago -
there is an enormous amount we have learned, and these insights should be
fundamental considerations when designing a new language. You can't just
"tack" things onto an existing language and expect it to be elegant - tacking
on leads to huge languages like C++.

And yes, we should expect it as "normal" for new languages, because it is what
people have come to expect to have available - although some implementations
leave a lot to be desired.

If we consider how C# implements Functors, for example, we see that it
requires special language support: the compiler essentially pattern matches
over your code, looking for a method named "Select" with a specific signature
taking a generic type and a func. This implementation completely misses the
point that Functors themselves, while useful, are not special - they are a
specialization of a more general concept: typeclasses and higher-kinded
polymorphism. C# also lets you create Monads, Foldables, Comonads, etc. using
similar patterns - but you can't create your own typeclasses, and are left
waiting for the language designers to add each desired feature.

The decision to add them onto C# like this was made not without knowledge of
all this, but out of the need to work with the existing language, which
_was_ designed without them in mind - hence why they're a pretty ugly design
compared to the simplicity of Haskell's implementation.

[1]: [http://www.cs.tufts.edu/comp/150GIT/archive/mark-jones/fpca93.pdf](http://www.cs.tufts.edu/comp/150GIT/archive/mark-jones/fpca93.pdf)

[2]: [http://research.microsoft.com/en-us/um/people/simonpj/papers/history-of-haskell/history.pdf](http://research.microsoft.com/en-us/um/people/simonpj/papers/history-of-haskell/history.pdf)

~~~
pohl
Haskell and Scala both seem to have survived adding it later. Rust won't have
it by 1.0 either, and this never seems to come up in threads about Rust. I
agree it will be a nice feature, if/when it comes. I just think Swift is being
held to a higher standard because, well, Apple.

By the way, thank you for the timeline for Haskell. That's exactly what I was
wanting to know.

~~~
pcwalton
> Rust won't have it by 1.0 either, and this never seems to come up in threads
> about Rust.

It does come up a lot in threads about Rust in other communities :)

We've discussed how HKT could integrate into the system and I think we have a
pretty good concept of how it would work. But I would caution that uniqueness
and low-level memory management can often throw a wrench into the common use
cases you might think of for HKT, and functional features in general.

------
eridius
FWIW, map() already exists. It's a global function rather than a protocol
method, and it uses overloading. So you can still define functions merely by
defining a new overload of map() that operates on your type.
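A sketch of that overloading pattern, using a hypothetical Box type (Swift 1.0's global `map(sequence, transform)` worked this way; the free-function-overload pattern still compiles in current Swift):

```swift
// Sketch (hypothetical Box type): defining a new overload of a free
// map function that operates on your own type, in the style of early
// Swift's global map.
struct Box<T> {
    let value: T
}

func map<T, U>(_ box: Box<T>, _ transform: (T) -> U) -> Box<U> {
    return Box(value: transform(box.value))
}

let answer = map(Box(value: 21)) { $0 * 2 }   // Box(value: 42)
```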

~~~
eridius
Sigh, I typed "functors", not "functions", which should make a bit more sense.
Autocorrect changed that without me noticing.

------
pohl
After reading this, I tried to find out which versions of Haskell and Scala
introduced higher kinded types. I'm sure it wasn't among the first features.
Does anybody know the history of either?

AFAIK you can't do Functor in Rust yet either, and for the same reason.

------
ksk
It's amusing to see Apple cheerleaders defend Swift as being good enough for
version 0.1 when supposedly Apple is this company that only releases products
when they're complete/polished/etc. Apple itself proclaims it's "Ready
Today" on their marketing page as well.
([https://developer.apple.com/swift/](https://developer.apple.com/swift/))

~~~
rsynnott
Apple does that with _consumer_ products (well, usually; please ignore Siri).
They've historically been much more willing to ship developer stuff that's
very rough around the edges; the iPhone OS 2.0 SDK shipped in a barely usable
state, for instance, and Xcode 4 was a similar story.

~~~
ksk
Fair enough. :)

------
stcredzero
_To me, the things I tried weren’t insane; they seemed like the obvious things
to try._

These are obvious things to try from the POV of environments like Scala.
That's the wrong context in which to evaluate this language. Swift is a more
modern language for iOS and is most usefully evaluated in that context.
(Though it is also important to know what Swift _is not_ in language
terms.)

------
gfosco
Since Bolts/BFTask was mentioned, I'd like to point out that you can use the
Bolts-iOS library in Swift. I added a pull request with the code examples in
Swift here: [https://github.com/BoltsFramework/Bolts-iOS/pull/37/files](https://github.com/BoltsFramework/Bolts-iOS/pull/37/files)

------
blahber
/annoyed

I agree with what Chris Granger said in
[https://twitter.com/ibdknox/status/473912605350719488](https://twitter.com/ibdknox/status/473912605350719488).

I think Rust is a good example of developing a language in the open.

That said it's great to see Bret's ideas see more implementations.

~~~
gress
If you 'agree' with Chris Granger, perhaps you can name one of the mistakes.

Also - what good is Rust being developed in the open if it can't be used for
production apps yet?

~~~
dignati
Because it's not expected to be developed in the open ever and will therefore
never be useful for production apps?

------
stefantalpalaru
> I love what the compiler crashes let me hope it will do

These rose-colored glasses are getting ridiculous.

~~~
andybak
By my reading, the author made that statement not naively, but with full
awareness of its slightly ridiculous nature. I think that whooshing sound was
their joke going right over your head.

~~~
afthonos
Author here. To be fully honest, it's somewhere between the two. Yes, I'm
aware of the irony of the statement. But what I failed to make clear in the
post is that the code in the last two examples passes the type checker. Where
it fails is at the intermediate representation stage. So the grammar of the
language supports the constructs but the backend hasn't caught up yet.

It's possible, of course, that Apple will modify the grammar to make these use
cases impossible. But I would be very surprised. I can't blame anyone for
accusing me of rose-tinted glasses until we know for sure, though.

------
tkubacki
Swift seems like a nice lang, but I'm surprised that a proprietary, single-
platform language is getting so much traction on HN.

Is it because Objective-C is as shitty as it seems at first sight (to devs of
C-like langs)?

~~~
rimantas
Objective-C is far from being shitty. Your comment, on the other hand, is.

~~~
tkubacki
You are overreacting. I'm not saying it's shitty - but it SEEMS to be, at
least to most Java/C# devs (that I know), AT FIRST SIGHT.

My hypothesis was: Objective-C is "not nice" (sorry, I'm not familiar with HN
political correctness), therefore Swift feels like a true relief for iOS devs,
and thus all this traction.

~~~
asveikau
It seems these days there are a lot of Java- and C#-focused people who are
unfamiliar with what came earlier. They have a hard time making these sorts of
comparisons.

To me the comparison that makes more sense is going from C to objc. Or
alternatively, comparing objc and C++ (especially C++ as practiced in the
1990s, not the RAII or template patterns that emerged later).

Say you're looking at the landscape as it existed a while back, and you've
decided to make language changes to C in order to add objects. On the one hand
you have C++ as it existed then: a language that can't seem to figure itself
out, with a very complex syntax and lots of troubling situations you can
walk into.

Then along comes objc. In contrast to C++, the language delta from C is tiny
and easy to keep in your head. Especially before Apple started doing all this
compiler-fu of recent years, it felt like just a few keywords bolted on, and
most of the important additions seemed to be in the runtime library.

When I first saw objc it kind of clicked for me as the anti-C++, in a way that
is truer to C and doesn't add a lot of complexity. At the time that seemed
refreshing, though I think I see more good in C++ now that RAII and templates
are more of a thing. (If only there existed a set of 2 people who could agree
on what C++ features to use.)

~~~
hetman
Comparing one 80's language to another 80's language hardly demonstrates how
great it is in 2014 ;)

------
hit8run
"She" the programmer. Seriously this sounds ridiculous and is the outcome of
the shameful male "betafication" women try to establish these days. I say "he"
the programmer.

Okay, enough bullshit, and to the point: Swift is a preview. It will get
better and better from day to day. It isn't there to mimic Scala or Lisp
paradigms. It is
there to get the job done and it will get the job done.

~~~
rsynnott
> I say "he" the programmer.

Well, aren't you wonderful? Why, you'll probably win the Nobel Prize for
Protecting the Poor Oppressed Men!

Alternating he and she for an abstract person is quite common amongst people
who don't use singular they.

~~~
wpietri
Exactly. How will wee poor, oppressed male programmers survive as only 80 or
90% of the field?

For those wondering about the weird "betafication" thing, I recommend reading
this fine blog:
[http://wehuntedthemammoth.com/](http://wehuntedthemammoth.com/)

It's a trope of the self-anointed "men's rights" crowd, from people who are in
a continual hysteria about how society is stopping them from being proper
alpha males. Never quite getting that passive-aggressive anonymous-coward
Internet message board comments are not how chimpanzee males work out their
dominance issues.

