
Swift Resistance - mpweiher
http://owensd.io/2015/01/29/swift-resistance.html
======
austinz
The article discusses two Swift versions of the same program, one that uses
UnsafeMutablePointer (which is basically direct C-style memory access), and
one that uses Swift's built-in Array type. The former runs well, the latter is
slow.

If you look at the Objective-C code, BackingBuffer is a C struct containing a
C array of Pixels (themselves each a 32-bit C struct).

The problem is that the best analogy to Swift's Array type is not the C array
in Objective-C - it's the Foundation class NSArray. For the given use case,
most Objective-C developers would eschew NSArrays and classes serving as data
bags in favor of fixed-length C arrays containing C structs. C arrays don't
perform bounds checking, they require you to manually decide when to allocate,
copy, and free the backing memory, they require you to know how much space
each piece of data requires and work with the data accordingly, and so forth.

The fact that NSArrays can't hold value types, only references to classes,
complicates things a bit, but I doubt the Objective-C code would be anywhere
near as fast if the backing store were an NSArray holding NSValues or
something similar. Comparing the Swift implementation using Arrays to the
Objective-C implementation using C arrays is not really fair.

~~~
lerno
We've essentially been told to use Swift arrays as we would C arrays, so your
analogy is flawed. Another premise for Swift is that it should be usable for
low level, high performance programming but "safer than C". This is something
we don't see in practice.

Furthermore, using UnsafeMutablePointer in Swift makes the program an order
of magnitude larger than the corresponding C program, with no gain in
performance, readability, or safety. It is a last resort, not something you
should use all of the time.

Consequently I find it hard to agree with you.

~~~
jlarocco
It's odd that you said "We've essentially been told ...". Will Apple reject
apps using C arrays or something?

In any case, I don't really see the problem here. Every feature of every
language has some kind of performance trade-off. And it's natural to expect
that higher-level abstractions in higher-level languages will have some
performance overhead versus lower-level abstractions.

In this case, there's no shortage of alternatives, so pick one and get on with
it. If it ends up being too slow, go back and look at the alternatives and
swap out Swift arrays for C arrays or NSArray or whatever.

~~~
lerno
No, but we have been told that the performance characteristics of the native
Swift array should be the same as an array in C.

Judging from what the compiler team at Apple regularly writes in forums and
on Twitter, they seem to consider the performance issues for arrays (which
actually show up in other circumstances as well) to be a very real issue.

When you say "swap for a C array", I suppose you mean we allocate a chunk of
memory as in C, then access it using unsafe methods (there being no "C array"
in Swift). Unfortunately, working with UnsafeMutablePointer and friends is so
verbose and clumsy that it makes for a very bad experience.
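For a sense of what that route looks like, here is a minimal sketch in present-day Swift syntax (the API names differed somewhat at the time of this thread); the 32-bit Pixel struct is an assumption modeled on the article's backing buffer:

```swift
// Manual, C-style memory management via UnsafeMutablePointer:
// allocate, initialize, index, then free by hand.
struct Pixel {
    var red: UInt8, green: UInt8, blue: UInt8, alpha: UInt8
}

let count = 640 * 480
let pixels = UnsafeMutablePointer<Pixel>.allocate(capacity: count)
pixels.initialize(repeating: Pixel(red: 0, green: 0, blue: 0, alpha: 255),
                  count: count)

pixels[0].blue = 0xFF          // no bounds checking, as with a raw C array
let firstBlue = pixels[0].blue

// Manual lifetime management, as in C: forgetting either call is a leak.
pixels.deinitialize(count: count)
pixels.deallocate()
```

Every allocation, initialization, and deallocation step that C makes implicit or terse has to be spelled out, which is the verbosity being complained about.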

~~~
Glide
Isn't there toll-free bridging between a Swift Array and an NSArray? It would
make a lot of sense for the performance characteristics of the two to be the
same if that were the case.

Yeah, and swapping to a C array isn't that simple in Swift. It seems easier
to just write Objective-C code and call that from Swift when needed.

~~~
mikeash
There's bridging, but it's not toll-free. You have to convert between them,
which is pretty slow. Toll-free bridging means that no conversion is
necessary, e.g. you can take a CFArray and just treat it like an NSArray and
everything still works with no conversion.

~~~
eridius
Conversion may or may not be slow, depending on the elements involved.
Specifically, if you're converting from Array<T> to NSArray where T is a
class or @objc protocol type, it's guaranteed to be O(1). Converting from
other types (such as Array<Int>) will do an O(n) copy, as it has to bridge
each element (e.g. Int would bridge to NSNumber).

Conversely, treating any NSArray as an Array is always just a call to
-copyWithZone (which is O(n) for mutable arrays and O(1) for immutable
arrays), although the docs say that the elements are type-checked upon the
first element access (though presumably converting to [AnyObject] is always
free).
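A small sketch of the two bridging shapes described above (hedged: the exact costs are implementation details of the Swift runtime, but the O(1)/O(n) distinction is as stated):

```swift
import Foundation

// Class elements: bridging [NSString] to NSArray needs no per-element
// work, so it is O(1).
let objects: [NSString] = ["a", "b", "c"]
let cheap: NSArray = objects as NSArray

// Value-type elements: each Int has to be boxed into an NSNumber,
// so bridging [Int] to NSArray is an O(n) copy.
let ints = [1, 2, 3]
let boxed: NSArray = ints as NSArray

// Going back the other way: an NSArray viewed as a Swift Array.
let back = cheap as! [NSString]
```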

~~~
lerno
The conversion is only supposed to happen once, but there was a bug (I don't
know if it has been fixed yet; it was discovered quite a while ago) which
caused the Swift <-> ObjC NSDictionary conversion to be performed each time
an access was made.

~~~
eridius
Can you give some more information? This is the first I've heard of that. The
only thing that comes to mind that would explain this sort of behavior is if
you actually make a copy of the Dictionary first and then access that copy;
since it's a copy, the conversion would presumably invoke the normal copy-on-
write behavior and make a copy of the backing storage, while leaving the
original Dictionary un-converted. In that case, any subsequent copy + access
would have to re-convert. And if that's what's going on, then that sounds like
expected behavior, not a bug.

An example of what I'm talking about would be something like

    
    
      let dict = self.someDictionary // dict is a copy, not a reference
      let x = dict["foo"] // this would then convert
      let y = dict["bar"] // but this wouldn't
    

If you mean that the `let y = dict["bar"]` would convert as well, then I'm a
bit skeptical, because it seems implausible that the conversion code in
Dictionary would even be capable of "converting" a native storage to a native
storage (i.e. it's reasonable to expect that the conversion code explicitly
converts an NSDictionary to a native storage).

~~~
lerno
I am not sure of the exact behaviour. I only read something Lattner wrote as
a comment on very slow behaviour when passing things back and forth to
CoreGraphics that relied on bridging Swift arrays/dictionaries into their CF
counterparts.

In the debug trace, it appeared that the Swift hash map got bridged over and
over again, every time it was read. Since this was a set of attributes for
CoreGraphics rendering, the dict/array conversion ended up costing orders of
magnitude more than the rendering itself did.

From what I remember, Lattner (it might have been someone else on the
compiler team) said that such a conversion should only have needed to happen
once, not at every access to an element. Something like that.

This issue started to occur when it was no longer possible to directly create
NSDictionary/NSArray from Swift.

I may also misremember things. Best would be to ask on the Apple developer
forums.

~~~
eridius
> _This issue started to occur when it was no longer possible to directly
> create NSDictionary /NSArray from Swift._

Huh? You can still create NSDictionary / NSArray in Swift using the exact same
API you would in Obj-C. That's always been possible. And in fact Swift even
extends NSArray / NSDictionary to support the Swift literal syntax.

------
unsigner
> "I cannot do it because the performance is so flipping terrible in debug
> mode. Trying to debug your app with optimized code is just a pain."

I've been shipping PC and console games in C++ since 2000. Debug builds have
ALWAYS been utterly unusable for us - what we do is run in "release" mode with
optimizations, ASSERTs, and debug prints enabled, then when the problem
arises, turn off optimizations in a handful of related files and recompile.
Then we ship another, "goldmaster" version which strips out the asserts and
prints.

~~~
repsilat
If this performance-critical part of the code is battle-tested, they could
always compile this bit with optimisations and the rest in debug mode, too.

~~~
drewcrawford
I wish this were true, but unfortunately you can't mix unoptimized and
optimized code in Swift. File dupes of rdar://19527999.

------
oofabz
He has the solution to his own problem. He just needs to use the unsafe array.

People have been writing quality games for decades using unmanaged memory
access. It is not some ticking time bomb that will inevitably blow up your
app. It just requires a little more care to get right. Considering this guy is
manipulating pixels directly and writing his own blitter, I believe he has the
skills to pull it off.

I can understand why he would prefer to use managed code but it's not
compatible with his goals and he has a clear alternative. His issue is not
Swift's fault, it's his refusal to use the tools that Swift gives him.

~~~
lerno
In theory Swift's arrays are little more than a C array plus a length, and
this is how we have been encouraged to use them.

The issue here is not about "managed code", but about how the Swift compiler
behaves without any optimization passes. Currently (and it should be noted
that this is a recognized issue), unoptimized builds are orders of magnitude
slower than ObjC, because Swift will automatically refcount code that in ObjC
is either not refcounted at all or uses manual RC. With optimization, Swift
can eliminate many of those operations, put things on the stack, etc., but we
are at the mercy of the optimizer for that.

Compare this to the (comparatively) predictable behaviour you have in C.

Yes, we can in theory use UnsafeMutablePointer everywhere, but at that point
it's much more efficient to write the function in C instead.

~~~
mikeash
I would wager that the problem is that Swift arrays aren't just a pointer plus
a length, but rather a pointer plus a length plus copy-on-write semantics. You
probably need some smart optimizations to eliminate the copies, and I'd put
good odds on the performance problems here coming from lots of unnecessary
array copies.
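A minimal sketch of the copy-on-write semantics in question: assignment shares storage, and the copy only happens when one side mutates.

```swift
// Assignment alone does not copy: both variables share one backing buffer.
var a = [1, 2, 3]
var b = a

// Mutating b triggers the copy-on-write: b gets its own storage,
// and a is left untouched.
b[0] = 99

// In an unoptimized hot loop, every mutation pays for this uniqueness
// check (plus retain/release traffic), which is where unnecessary
// copies can come from if the optimizer doesn't eliminate them.
```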

~~~
lerno
I'd have to check in detail, but just looking at the profiler, about half of
the time is in the actual setter code and half in bounds checking. If
copy-on-write were the culprit, you'd expect bounds checking to be much
cheaper than the set, wouldn't you? But maybe it does some other magic as
well. One thing you do notice is that release/retain calls are _everywhere_.

------
josephlord
This PR puts the Swift array on even terms, I think:

[https://github.com/owensd/handmade-
swift/pull/1](https://github.com/owensd/handmade-swift/pull/1)

The final keyword means that at least one indirection is avoided for each
access to the buffer. The speedup is significant. I do wonder, though,
whether the lookup could be cached if the compiler were a bit smarter.

There is also the option of using a pure function rather than accessing the
instance buffer for fairly similar performance (see:
[https://github.com/owensd/handmade-
swift/pull/2](https://github.com/owensd/handmade-swift/pull/2))

------
josephlord
One significant thing slowing this down is all the access to the shared
mutable buffer. I'm confident it would go quicker if renderWeirdGradient were
changed to be a pure function that either returned a new populated buffer or
took the buffer as an argument and returned the result.

An even quicker fix that would bring some improvement would be to make the
buffer final, so that fewer indirections are required to read and write it.

As others have already said, UnsafeMutableBufferPointer may be appropriate
for performance-critical inner loops.
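A sketch of both suggestions; the names are illustrative, loosely modeled on the post's BackingBuffer rather than taken from the actual Handmade Swift code:

```swift
// `final` rules out subclassing, so the compiler can access `pixels`
// directly instead of going through dynamic dispatch.
final class BackingBuffer {
    let width: Int
    let height: Int
    var pixels: [UInt32]

    init(width: Int, height: Int) {
        self.width = width
        self.height = height
        self.pixels = Array(repeating: 0, count: width * height)
    }
}

// Function-style variant: the buffer comes in as an argument and goes
// out as the result, instead of being re-read through shared mutable
// instance state on every access.
func filled(_ buffer: BackingBuffer, with value: UInt32) -> BackingBuffer {
    for i in 0..<buffer.pixels.count {
        buffer.pixels[i] = value
    }
    return buffer
}
```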

~~~
josephlord
I've just tried making the buffer final and it results in a speedup of about
8x (not rigorously measured): from ~0.02 to between 0.0045 and 0.0025 on my
computer.

------
GuiA
Be a good citizen: file a bug report every time you encounter problems with
your tools.

~~~
chris_wot
Are Apple responsive?

~~~
jakobegger
In the last year I filed 7 bug reports to Apple.

One was closed as a duplicate. For another one they requested additional
information.

The other 5 reports received no response at all.

------
josephlord
I've posted this response: [http://blog.human-friendly.com/the-importance-of-
being-final](http://blog.human-friendly.com/the-importance-of-being-final)

Separately, I got excellent optimised performance with a final class and this
renderWeirdGradient method, which outperforms the Objective-C version by 40%
in optimised builds:

    
    
        func renderWeirdGradient(blueOffset: Int, _ greenOffset: Int) {
            let height = buffer.height
            let width = buffer.width
        buffer.pixels.withUnsafeMutableBufferPointer { pixels -> () in
                for y in 0..<height {
                    let row = y * width
                    for x in 0..<width {
                        let i = row + x
                        pixels[i].green = Byte((y + greenOffset) & 0xFF)
                        pixels[i].blue = Byte((x + blueOffset) & 0xFF)
                    }
                }
            }
            self.needsDisplay = true
        }

------
oulu2006
I can't help but feel like we continue to just go backwards; it doesn't look
like much time was really spent thinking about what Swift should solve and
how to solve it. I thought this was an interesting read:
[http://owensd.io/2014/09/24/swift-
experiences.html](http://owensd.io/2014/09/24/swift-experiences.html)

I know saying this is a bit terse, but a native Clojure compiler for iOS would
have been the best thing Apple could have done -- let someone else with far
better language skills who has already done a lot of thinking about values,
identity, state and time solve it and then just use the work.

~~~
klibertp
Considering that Apple created Dylan, I'd argue that they have all the
"language skills" needed. But Dylan was a failure, partly because of
politics, but also because most programmers are conservative (I'd say dumb
code monkeys, but that isn't exactly true) and don't like their "new"
languages and tools to be "too new".

Clojure is a really great language, solving real problems in a nice way. But
in the language popularity charts I just checked, it's well outside the top
20, close to Forth(!) on one side and Erlang on the other. Clojure (and
Erlang, and Forth of course, but also Haskell, OCaml, F#, Smalltalk, Io, and
many more) is just _too new_, too unfamiliar, too intimidating for our
conservative programmers to consider using. And for Apple that was probably
the reason for rolling out a somewhat "normal" language instead of something
really good.

~~~
tempodox
How can you say that languages that have been around since the 1970s are _too
new_???

~~~
klibertp
For mainstream programmers they are, in the sense that they have features
those programmers have never seen before. You know, as in "it's something new
for me".

The fact that some of those languages objectively date from the '50s and
'60s (like Lisp in '58, and ML in the early '70s, IIRC), which makes them
ancient by today's standards, doesn't matter. It's really sad. I devoted a
couple of years to learning about and trying to use such languages (see
here[1] if you want), but I'm in a very small minority; most young
programmers never use anything other than the 1-3 core languages they
learned. The number of known languages increases with years of experience,
but it's still biased towards currently mainstream languages. Which are
mostly crap.

Anyway, that's how it is: most programmers are very conservative in their
choice of tools they use and feel no need to look for alternative tools to
use.

[1]
[https://klibert.pl/articles/programming_langs.html](https://klibert.pl/articles/programming_langs.html)

------
zelos
The last benchmarks[1] I saw for Swift showed it's generally pretty slow
without optimisations on, but various differences from Objective-C mean the
compiler can optimise it much more aggressively, so with optimisations it's
generally faster than Objective-C (and at times actually comparable to pure
C[2]).

[1] [http://www.jessesquires.com/apples-to-apples-part-
two](http://www.jessesquires.com/apples-to-apples-part-two) [2]
[http://www.jessesquires.com/apples-to-apples-part-
three/](http://www.jessesquires.com/apples-to-apples-part-three/)

~~~
mpweiher
Those benchmarks (part 2) were ludicrously flawed, comparing native integers
in Swift with NSNumber objects in Objective-C.

That the standard library qsort() routine, which relies on function pointers
and pointers to the objects being sorted, has overhead is also obvious; a
truly native implementation is still much faster[1].

[1] [http://blog.metaobject.com/2014/09/no-virginia-swift-is-
not-...](http://blog.metaobject.com/2014/09/no-virginia-swift-is-
not-10x-faster.html)

------
geoelectric
I like how he blew past the suggestion in the comments to turn off bounds
checking (which I assume shouldn't otherwise affect the ability to debug),
when that looks almost certainly like the reason the array version is a
couple of orders of magnitude slower when unoptimized.

------
zghst
Instead of wholly claiming Swift is slow (with your limited expertise in the
language, and with no optimizations enabled), why don't you reach out to see
what you can do to improve performance? Did you ask anyone whether you are
doing things the Swift way?

~~~
lstamour
This blog post is a reaching out, and the article clearly points out that with
optimizations it runs fine, but that you can't always run with optimizations
enabled, e.g. when debugging or stepping through sections.

------
CmonDev
A gentle introduction to Apple marketing: "Swift" doesn't mean "swift" -
terms and conditions apply (not to mention the name was stolen from the
original Swift language - the modifications made to its Wikipedia page were
swift indeed).

------
te_chris
My problem with Swift is even simpler than this guy's.

I have a new client build and really wanted to use Swift for the compile-time
safety features. Then I started looking at iOS 7 compatibility... total
fucking joke. By cutting off what is still a very popular device, especially
by revenue in the case of my client, Apple have basically made it impossible
to use this language as long as people have customers who use an iPhone 4. In
this case, for my client, that's fucking heaps of them.

It doesn't have to be this way, though: they could prioritise adding support
for static libs, then all developers would be able to switch over, as we'd be
able to keep using the existing libraries we rely on. But they just don't
seem to give even half a shit about making it something people might actually
use.

/end rant

~~~
sneak
Or they could just wait a year for the iPhone 4 to go away, and focus their
attention on building a watch for people to firehose money at them.

Seems pretty simple to me. Skate to where the puck is going.

~~~
jakobegger
I think the puck is going in the direction of an ever increasing number of
people who buy used iPhones. So I don't think that the iPhone 4 is going away
soon.

Looking around, I see that a lot of people who can't afford (or don't want to
afford) new iPhones are starting to buy used iPhones instead of cheap new
phones.

~~~
illumen
I've seen this in various circles. People seem to be getting used iPhones
pretty cheap. Some of them even buy apps.

