
Swift is not - mpweiher
http://www.splasmata.com/?p=2798
======
adamnemecek
These tests are meaningless without comparing the produced assembly (maybe the
obj-c compiler is very well optimized since they've been doing that for quite
a while now) and even then these microscopic tests don't really tell you much
about performance in real scenarios. Swift has the advantage, as far as
performance goes, that methods need not be dispatched dynamically. In Obj-C,
every single time you call a method, you call objc_msgSend, which executes at
least 12 (IIRC) additional instructions, or more depending on the situation.
Plus, as far as I know, in Swift not all objects are heap-allocated, which
will also make things faster.
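
To make the dispatch difference concrete, here's a minimal sketch (a toy
type, just for illustration):

    // Swift: calls on structs (and final classes) can be resolved at compile
    // time, so the optimizer is free to inline them away entirely.
    struct Counter {
        var value = 0
        mutating func bump() { value += 1 }  // static dispatch, inlinable
    }

    var c = Counter()
    c.bump()  // no objc_msgSend; can compile down to a single add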

~~~
stormbrew
I'd go further: Never trust a benchmark you can't run yourself. That goes for
Apple's, and it appears to go for the ones in this post as well.

And microbenchmarks (and it doesn't get much more micro than "loop without
doing anything") in particular have always been very poor predictors of real
performance in my experience. They can be useful when trying to improve a
specific area, but they should not be used for bold pronouncements.

~~~
CmonDev
Well, Apple is known for misleading benchmarks, and that guy is not.

~~~
stormbrew
I don't think it really matters what reputation anyone has, if you throw
benchmarks out into the world and make claims about them, you should throw the
code out too so people can repeat it and find flaws.

I'm not suggesting malicious intent at all here. People make mistakes, and
contrary to what seems to be popular opinion, _good_ benchmarks are _hard_.

------
cowsandmilk
The author did not say whether he compiled with optimizations, which appear
to be very important for Swift being "swift".[1]

[1]
[https://twitter.com/Catfish_Man/status/473752917347139584](https://twitter.com/Catfish_Man/status/473752917347139584)
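
(For anyone wanting to check: the optimization level can be set explicitly
when compiling from the command line. The flag names below are from current
swiftc toolchains; the beta Xcode exposes the same choices as build settings.)

    swiftc -Onone bench.swift       # no optimization (Debug default)
    swiftc -O bench.swift           # full optimization ("Fastest")
    swiftc -Ounchecked bench.swift  # "Fastest, Unchecked": also drops runtime checks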

~~~
mpweiher
I ran similar tests with optimization and saw similar numbers. Without
optimization it was significantly worse.

~~~
solarexplorer
I don't get it. "Similar numbers" but "significantly worse"? You measure
performance and compile without optimization?!

If you have better numbers, please share them!

~~~
mpweiher
> "Similar numbers" but "significantly worse"?

Yes. Similar numbers to the ones posted when compiling _with_ optimization.
Significantly _worse_ when compiling _without_ optimization.

> You measure performance and compile without optimization?!

No, I tend to measure _with_ optimization. In fact, I personally don't do
Debug builds at all. However, when the numbers were _so_ bad, I did a Debug
build to cross-check, and lo and behold, those numbers were even worse.

That clear things up?

~~~
solarexplorer
Yep. Thank you.

------
StefanKarpinski
Benchmarking things like "assignment" in compiled languages is pretty
meaningless. The compiler may or may not eliminate the code altogether,
depending on whether it determines that it's dead code. Assignment may not
even actually do anything. Even with more complicated code that definitely
does _something_, you have to be a bit careful to make sure that the whole
computation isn't deemed to be dead. It's not unlikely that Swift's current
LLVM optimization passes aren't doing as much as Objective-C's, even with
optimizations turned on. This won't have as much effect on benchmarks that do
some actual work, even simple ones.
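
A minimal sketch of the failure mode (current Swift syntax):

    // Under -O this whole loop can be eliminated: `total` is never read,
    // so the computation is dead and the "benchmark" measures nothing.
    var total = 0
    for i in 0..<1_000_000 {
        total += i
    }
    print(total)  // observing the result forces the compiler to keep the
                  // work - or constant-fold it, which misleads differently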

------
sriku
I think the optimizations for both need to be looked into carefully. For the
following micro benchmark, Swift came in 1.5x slower than the C version. Both
were built in Release mode with "-O3", and both with "Debug" checked OFF in
their respective schemes. This factor is way smaller than the orders of
magnitude being reported by the OP. I can't seem to find the OP's benchmark
source either.

Swift -

    
    
        var j: Int = 0
        var k: Int = 0
        var i: Int = 0
        for (i = 0; i < 1000; i += 1) {
            k = 0
            for (j = 0; j < 1000000; j += 1) {
                k += j % 7
            }
        }
    

C -

    
    
        int j = 0, k = 0;
        int i = 0;
        for (i = 0; i < 1000; ++i) {
            k = 0;
            for (j = 0; j < 1000000; ++j) {
                k += j % 7;
            }
        }
    

update: If I use "fastest unchecked" mode when building Swift instead of the
"fastest" mode, the Swift version is 8% _faster_ than the C version.

machine: 1.7GHz Core i5 MacBook Air (mid-2011), 4GB RAM.

edit: For clarity, "1.5x slower" means "Swift took 1.5x the time C took" and
"8% faster" means "C took 1.08x the time Swift took".

edit: To OP - Please put a link to your benchmark source and projects
somewhere, or show your project settings as a snapshot or _something_. For
one, based on my own trials I don't believe any of your results.

edit: Changed ++i and ++j to i += 1 and j += 1. This is the version that is
8% faster than C. The OP seems right in that i += 1 seems faster than ++i.

edit: (It's becoming impractical to keep this in sync with my project.) When
using Int/int, I get Swift 8% faster, and when using Int64/int64_t, I get C
to be 17% faster.

~~~
m_mueller
Finally some actual data, thank you! Do you know the functionality behind
'fastest unchecked'? Does it only leave out memory bounds checks, or
arithmetic overflow checks, or all of them? Depending on how it works, Swift
could be quite interesting for HPC programming in the future. My dream
workflow would be something like:

create working and correct program with all the comfort of modern languages /
IDEs

\-->

switch to unchecked-but-warn (or whatever it's called), run some test suites,
and get rid of all warnings.

\-->

switch on highest optimizations and switch off checking, verify results.

\-->

port to multicore / multinode / accelerators using some directive based
approach.

Note that you won't have much safety on today's accelerators anyway, so
removing that safety net while still in an easily debuggable environment is
going to make life easier when debugging the accelerated code - it's all
about being able to exclude error classes.

~~~
sriku
> Does it only leave out memory bounds checks, or arithmetic overflow checks,
> or all of them?

My guess is only arithmetic checks and not array bounds checks. If array
bounds checking were removed, the language wouldn't be any safer
(security-wise) than C, hence I think that's kept throughout.
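
For what it's worth, Swift also exposes explicit masking operators that skip
the overflow trap, which fits that reading:

    let x = Int.max
    // let y = x + 1  // checked add: traps at runtime on overflow
    let z = x &+ 1    // masking add: wraps around, no overflow check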

Btw "Int" and "Int64" in Swift are actually "struct"s and not a primitive type
like in C. So the compiler is indeed able to keep performance for these small
structs on par with primitives. This is promising for swift's claimed speed I
think. These structs have extension methods like uncheckedAdd(),
uncheckedDivide() and so on, which lends some evidence to my "unchecked refers
only to arithmetic" guess.

edit: .. that Int/Int64 are structs means that + and += are overloaded
operators, compared to the raw operators used in C. Given that, the compiler,
I must say, is impressive.
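
As a rough illustration of what the compiler has to inline through (a toy
type, not the real Int):

    struct MyInt {
        var value: Int
    }

    // `+` on MyInt is an ordinary overloaded operator function...
    func + (a: MyInt, b: MyInt) -> MyInt {
        return MyInt(value: a.value + b.value)
    }

    // ...yet after inlining, this should compile down to a plain machine
    // add, with no call overhead and no heap traffic.
    let c = MyInt(value: 1) + MyInt(value: 2)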

~~~
m_mueller
Interesting. When you say 'So the compiler is indeed able to keep performance
for these small structs on par with primitives', is this based on your
experiments or on some theoretical analysis? If the former, that's indeed
impressive - I would guess that behind the curtains it uses some kind of
macro-based approach rather than actually implementing it as a struct, where
you're always going to have some additional integer ops to calculate
addresses. If the latter, I don't quite get how you arrive at that conclusion
- could you expand?

~~~
sriku
A lot of this is in the category of "not-entirely-random guess" :) .. I need
to dig deeper to verify this.

Here is why I said what I did (and I did mean the former) --

My micro-benchmark is actually comparing the struct-based code in Swift with
primitive-based code in C. If "k += j % 7" is to be interpreted literally in
Swift sans optimization, it would be two function calls and two stack word
copies (structs go on the stack and by copy, whereas classes go on the heap
and by reference). k and j are both structs and so operator overloading is
needed to write this expression. That the Swift compiler generates comparable-
to-C performance in these cases is good news I think.

One major caveat with these benchmarks is that the same LLVM backend is being
used for both. I'd like to be able to compare Swift with C/ObjC as compiled by
GCC as well for a truer picture of "faster than C".

For practical purposes, I think this is nicely close. I haven't tried the ObjC
bridge yet though.

------
ori_b
For the empty loops, I'd be willing to bet that the objc compiler is
optimizing them away entirely. I suspect that the Swift compiler may not be
that smart
yet. There are plenty of times I've wanted to benchmark some C operation,
looked at the generated assembly to figure out what is going on, and found
that it was empty.

Without source, disassembly, and something that you can test for yourself,
this benchmark is rather uninformative.
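
For reference, both compilers will hand you the generated assembly directly
(standard flags, nothing exotic):

    clang -O2 -S bench.c -o bench.s       # C / Objective-C
    swiftc -O -emit-assembly bench.swift  # Swift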

------
SoftwareMaven
So we are comparing a language with 10+ years of optimizations against a
language that just barely released in beta? We have no idea what settings the
author chose (I'm not implying the author is attempting to deceive) and,
assuming they are the defaults, it seems safe to assume Apple would choose
very conservative defaults for the beta, especially the first release (it
doesn't matter how fast it goes if it doesn't do what it's supposed to).

In two months, after people start to understand what's going on, where the
trouble spots are, and what the right way to fiddle the bits is, if we are
still seeing performance like this, I'll be interested.

~~~
na85
Isn't the point of a high-level dynamic language that you shouldn't need to do
any bit-fiddling to write performant code?

Swift seems self-defeating.

~~~
alejgoon
I think that's hardly the case. Or are JavaScript and Python and Ruby
self-defeating as well?

~~~
danbruc
Depends on the way you look at it - on the one hand JavaScript is a terrible
language, on the other hand it is (more or less) the only thing you have
available in a browser.

------
jrajav
Google cache of the post, since the blog 'sploded:
[http://webcache.googleusercontent.com/search?q=cache:www.spl...](http://webcache.googleusercontent.com/search?q=cache:www.splasmata.com/?p=2798)

------
mindstab
OK, so I won't necessarily be writing engine code in Swift. Or servers. But
I'm not sure that's what it's for in the first place.

How about user-facing apps? Haven't we been through this a billion times on
choosing the language for the job? Java, Python, PHP, etc. - each has its
use. For a lot of what Swift probably will (and should) be used for, I don't
see an issue here that hasn't already been beaten to death.

~~~
Davertron
For normal apps I would mostly agree, but for a lot of games performance
matters.

------
PaulRobinson
Benchmarks like this miss the point of what it is to write production
software. The only people who care about this style of benchmark are people
with very specialised use cases, and academics (and their students), the
latter of whom are trying to prove something that is easy to prove but
doesn't really tell us anything of practical use.

For most purposes, a user of your software is not going to feel the difference
between 0.0006 seconds and 0.06 seconds. One is 100x faster, so is "better",
but no user will care.

The really expensive stuff that users care about (3D graphics, etc.) is being
handed off to custom APIs that are highly optimised and hardware-accelerated
anyway.

What's more important for most programmers when choosing a language is not
"how fast is it for this code to run?", but "how quickly can I ship code?"

Code in the app store next quarter is going to beat code you never get
shipped, every single time.

Not long ago I had a project I thought would be perfect for Haskell. I don't
know Haskell but I am a very experienced Ruby-ist. I thought about using the
project as a means to learn Haskell but wanted the code shipped within a
month. I did it in Ruby. It's probably slower, it's perhaps not as elegant as
a Haskell solution. But it's shipped. And that therefore beats the Haskell
solution which does not exist.

Swift appears to have modern features that mean many programmers will feel
more comfortable developing applications using it, many of whom would have
struggled with Obj-C.

In that sense, it's going to beat Objective-C for performance in the only
benchmark that matters commercially: the one that gets ideas into users'
hands the quickest.

If you think the rate of development so far has been fast, well, just watch.
It's about to get mental.

I don't expect an academic to understand that, nor would I expect most
junior/undergrad devs, and I can see that in some special cases you would
want to think about optimisation (in which case turn that little piece of
code into C and link it in), but in the real world, that's how coding works.

------
supercoder
We might be missing something here, and I think ultimately Swift will be
faster than Obj-C, but it does highlight that it's probably not wise to start
blindly converting all your code to Swift just yet, until we get a better
feel for what it is (and is not) good at.

Also, given that Apple says they _will_ be breaking code as time goes on,
I'll be on the Swift sidelines for now.

Edit: Why are people finding this comment so offensive?

~~~
gumby
> I think ultimately Swift will be faster than Obj-C

Is there something about the semantics of Swift that you think would
inherently permit faster code than well-written ObjC?

Edit: I don't know why this comment was downvoted - it's a perfectly
reasonable question that others have responded to. Oddly, the parent comment
_I_ was responding to is apparently also being downvoted. WTF?

~~~
gte910h
Yes, the types.

Strong types are far easier to optimize.

~~~
gumby
Well, Swift does a lot of type inference, it's true, but it doesn't appear to
me to be more strongly typed than ObjC (I could be wrong because I have only
skimmed the ebook). A lot of ObjC code simply returns an id when it could
return a tighter type. They seem semantically equivalent to me.

Typically a language's efficiency comes from its ability to restrict
ambiguity. E.g. a scoped sequence of expressions without gotos allows the
compiler to eliminate variables, reorder statements, unroll loops, etc.,
because it can see 100% of the uses of a variable by examining that scope. At
the other extreme, a language like BASIC is harder to optimize in such a
fashion because the program counter could be set into the middle of a loop
and, because all variables are global, they have to survive the scope.
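
For instance (a trivial sketch), give the compiler full visibility of a scope
and it can remove the "work" entirely:

    // Nothing escapes this scope, so an optimizer is free to unroll the
    // loop and then constant-fold the whole body to `return 499500`.
    func sumToThousand() -> Int {
        var total = 0
        for i in 0..<1000 {
            total += i
        }
        return total
    }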

I am genuinely interested in whether there are features in Swift that aren't
also in ObjC.

A language can be equivalent to another and still be easier to program in
because the cognitive load on the _programmer_ is lower so good code can be
written more quickly. Swift seems to be an interesting attempt at doing
_that_. But that's orthogonal to the question of compiled efficiency.

~~~
debreuil
It is definitely more strongly typed than Objective-C. Swift's type
inferencing is up there with Haskell's, and Objective-C/Xcode's 'find all
uses' is still string-based.

Totally agree with your 'reduce ambiguity' observation, and I think long-term
Swift will do very well there. It will also allow tooling to radically
improve from what it is, and multicore to have a reasonable chance of being
helpful.

IMO it seems a lot of the language decisions they have made are to make it
more compiler/memory-friendly (e.g. the way assignment works with arrays).
That bodes well for speed/memory/battery, I think.

------
TheMagicHorsey
The issue might be that the benchmarks shown during the presentation were for
iOS, and the author here might have built for OS X. The other possibility is
that we may need to tweak the compiler settings.

I certainly don't think that Apple was outright lying in their benchmarks.
That sort of shenanigans is a PR nightmare, and nobody thinks it's worth it
anymore.

~~~
wutbrodo
> nobody thinks it's worth it anymore.

I'm curious which period you're referring to here. I haven't been following
Apple presentations in detail for a while, but I remember the iPad 2
announcement was chock-full of flat-out lies. Do you mean that this is
something that's changed since then (March 2011)? Or do you mean that it's not
worth it for developer-focused presentations vs consumers (who would
presumably be more credulous)?

EDIT w/ source: [http://fortune.com/2011/03/03/steve-jobs-reality-
distortion-...](http://fortune.com/2011/03/03/steve-jobs-reality-distortion-
takes-its-toll-on-truth/)

~~~
EdwardCoffin
> the iPad 2 announcement was chock-full of flat-out lies

Do you have anything concrete to back this up with?

~~~
wutbrodo
Ah of course, that was dumb of me. I was remembering this from a few years ago
but I should've taken the time to dig up a source referring to it. For some
reason I was thinking "everybody must remember that".

Here's an article that's pretty succinct about the inaccuracy and lies in the
iPad 2 announcement: [http://fortune.com/2011/03/03/steve-jobs-reality-
distortion-...](http://fortune.com/2011/03/03/steve-jobs-reality-distortion-
takes-its-toll-on-truth/)

~~~
axman6
Shock news: Marketing puts company's products in positive light!

All companies tweak the truth to make themselves look good (which company was
touting their huge sales numbers when they had barely sold any products to
actual customers, just retailers?). Apple aren't alone, they just get the most
press.

~~~
wutbrodo
I don't know why almost everyone in this thread is acting as if anything even
incidentally negative about Apple is a personal attack on them and their
family; it's really rather pathetic.

As you said, this is common practice, and Apple may or may not be one of the
more egregious offenders [1], but it doesn't matter. I wasn't saying anything
about Apple being shittier than other companies in this regard; I was
responding to the parent commenter's claim that a company wouldn't do
something like that these days. Now how the fuck is your claim of "Every
company does this, leave Apple alone!" not in full support of my point (and
arguing against a point that, as near as I can tell, nobody made)?

[1] IMO the lie about the Samsung quote is a level of dishonesty you don't see
all that often but my point is that differences in degree like that aren't
really relevant

------
bigdubs
I know they said on stage that they've spent a lot of time getting Swift fast,
but aren't we still in beta? Isn't it a bit early to throw Swift under the bus
for performance reasons?

~~~
bsaul
The problem is everyone wants to know whether to skip to Swift right now or
wait a few years until the tech matures. Claiming a speed increase of 2x and
ending up at 0.5x is a 4x difference. We're not talking minor differences
here.

As a side note, I've recently heard Core Data isn't used internally by Apple.
Makes me think I'll skip to Swift once Apple gives me a list of which of
their apps are written in this language.

~~~
bigdubs
Based on the differences between ++i and i += 1 alone, I would think there
are just bugs they need to squash, and that the open beta, and all these eyes
on it, are going to help find the cases where it stumbles.

Also, for many of the tests where there was a 4x or order-of-magnitude knock
in performance, there are a lot of comments in here refuting those results,
so I'm not sure what to think yet.

------
alejgoon
The author seems to argue that these benchmarks disprove that Swift is fast.
While I have no interest in Swift whatsoever, I'm unswayed by a few simple,
tight-loop benchmarks. It would be interesting to compare larger programs
that are part of CPU-bound computations.

~~~
dinkumthinkum
If you watch the keynote, some bold, sort of specific claims are made. There
is a big disparity between what was said at WWDC and what this person found.
They are claiming over 100x speedups, and this person is seeing it be 10x
slower.

------
wting
Maybe "Swift" is a reference to developer speed.

From a PL/compiler standpoint it's pretty obvious that Swift is not going to
match Objective-C on performance. That's why Apple used a micro-benchmark
comparing a static, compiled language against a dynamic, interpreted language
(Python) for marketing purposes.

> We can’t know exactly what’s going on behind the scenes, but my hunch is
> some of what we take for granted in Objective-C – the straight C scalar data
> types – are actually classes in Swift. And the more you rely on classes, the
> more Automatic Reference Counting is in there somewhere, retaining and
> releasing like there’s no tomorrow, often for no good reason.

If they switched from primitive types to classes for built-in types, that
means vtables and an additional memory lookup per element access, weakening
caching and memory locality.

Creating a Swift int would then involve instantiating a class, incrementing
the ARC count, and storing the data, compared to only storing the data for a
primitive type. That would explain why appending Swift's Ints to an array is
so much slower than appending NSNumbers.

On a side note, this isn't as much of a problem with Java—despite everything
being implemented with vtables—because of JIT and better CPU branch
prediction.

 _Disclaimer: I have no Objective C / Swift experience._

~~~
nardi
> From a PL/compiler standpoint it's pretty obvious that Swift is not going
> to match Objective-C on performance.

This shows a deep misunderstanding of Objective-C and Swift. Swift has way
more type information at compile time than Objective-C, and thus gives the
compiler much more room to optimize (in addition to being safer). My guess is
we'll see Swift eventually be faster than Objective-C at just about
everything. For instance, they could unbox class instances into values where
possible, which is something they're probably not doing much of yet.
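
A small sketch of what that extra type information can buy (a hypothetical
class; the win shown here is devirtualization rather than unboxing):

    // No subclass can override greet(), so the compiler can prove the
    // callee, devirtualize the call, and inline it - none of which is
    // possible for an Objective-C message send, which stays dynamic.
    final class Greeter {
        func greet() -> String { return "hi" }
    }

    let g = Greeter()
    print(g.greet())  // can become a direct (or fully inlined) call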

~~~
mpweiher
This applies to naive code, where you let the compiler fix it. In those
cases, Swift can be better for the reasons you mention.

However, if you actually put your mind to it, for the 1-5% of code that
actually matters performance-wise, it is hard to beat the performance, and
more importantly the predictability, of C.

------
eddieroger
Shouldn't we wait until Swift is in production, and we're no longer using a
compiler still in beta, before we come up with accusations like these? Swift
may not end up being the greatest thing since sliced bread, but how about we
get our hands on the production-ready build before we decide?

~~~
_random_
Apple posted a benchmark during the presentation. It has now been checked.
They lied. Maybe they shouldn't boast about performance before releasing?

~~~
eddieroger
The tools they used to benchmark may not be the same ones we all got in the
beta. Maybe our validations are the wrong data point.

------
ctide
It'd be interesting to see how Objective-C would compare to Swift with ARC
enabled. For the majority of us, who use ARC in our projects, these benchmarks
don't mean a whole lot.

------
muaddirac
I never trust Apple keynote benchmarks, but I was really holding out for the
Swift ones to be true.

It probably makes no difference in the majority of common iOS/OSX development,
though.

------
cromwellian
It would be interesting to see something like Octane ported to Swift just so
we can compare it to other languages that are supposedly "slow".

------
gte910h
These appear to be in line with _unoptimized_ Swift. mpweiher, can we have
some gists of the project or something for these numbers?

~~~
mpweiher
Not my numbers, but I've gotten similar results, with optimized builds. Yes,
unoptimized is even worse. See also:

[https://devforums.apple.com/message/974858#974858](https://devforums.apple.com/message/974858#974858)

[https://devforums.apple.com/thread/227905](https://devforums.apple.com/thread/227905)

[https://devforums.apple.com/message/971211#971211](https://devforums.apple.com/message/971211#971211)

------
koenigdavidmj
And this is why C++ can still be fast, even when the common STL
implementations are a steaming pile performance-wise. The naturally developed
style has been one-reference-only, so a lot of this reference-counting work
never has to happen. With Objective-C, anything that isn't C is on the heap
and reference-counted. Even _Java_ can allocate objects on the stack in tight
loops (though this is done by the JIT, not by a change in how variables are
declared).

~~~
TillE
What parts of modern, popular STL implementations are notably slow?

The only issue I've ever had is the blatantly obvious one of reallocating
std::vector as it grows. You really want to reserve() enough space in advance
if at all possible, because copying large chunks of memory is not fast.
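
(The same trap exists outside C++, incidentally - Swift's Array grows the
same way and has the equivalent escape hatch:)

    var samples = [Int]()
    samples.reserveCapacity(1_000_000)  // one allocation up front instead
                                        // of repeated grow-and-copy cycles
    for i in 0..<1_000_000 {
        samples.append(i)
    }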

~~~
yoklov
You don't always get to use a modern, popular STL. Sometimes you need to use
the hardware vendor's crappy, unoptimized STL (because they assume you won't
use it). This is part of the reason the STL is unpopular in game development,
though the story here has been getting better.

To answer your question, in my experience when the standard doesn't mention
the desired performance characteristics of a function or method, even the
popular STLs (libstdc++ and libc++ certainly) optimize for maintainability
over performance. Not sure I can pull up an example of this right now but they
are numerous. A good place to start is the standard containers other than
deque or vector.

A bigger issue IMO is that the existence of some methods in various standard
classes/templates really murders performance, no matter how good the
implementation is. One particularly bad example is the bucket() method,
local_iterator types, etc. in C++11's unordered_map (and probably the rest of
the unordered family too). These force the table to be implemented in such a
way that every insertion requires a heap allocation, and iteration requires
much more pointer chasing than would otherwise be necessary (e.g. compared
with a probed implementation), which is... unfortunate for the cache.

------
octopus
Interesting - did you run your tests on iOS 8, OS X 10.10, or OS X 10.9.3?

Xcode or command line?

------
pessimizer
Brutal. Hopefully Swift makes up for it in developer productivity.

------
mantrax5
Wise people know to avoid micro-benchmarks, but this is not even a
micro-benchmark, it's a nano-benchmark...

Testing isolated things like empty loops or... a single assignment really
doesn't make sense with modern compilers. The dead giveaway is having to
modify your code in weird ways so you don't trigger dead code elimination. If
you trigger dead code elimination, it means your code isn't producing
anything, and you're benchmarking edge cases that won't occur in the real
world.

You need to at least implement a simple algorithm or some small unit of
functionality that makes sense in a real program. Swift's compiler is
designed to aggressively inline and remove reference-counting overhead in
cohesive units of code; however, if you test everything statement by
statement, then whether you enable optimization or not, it can't optimize
much.
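
Even something this small would already be more informative (a sketch; syntax
and timing APIs per current Swift/Foundation):

    import Foundation

    // Benchmark a cohesive unit of work whose result is actually observed.
    var values = (0..<200_000).map { ($0 * 7919) % 1_000_000 }
    let start = Date()
    values.sort()
    let elapsed = Date().timeIntervalSince(start)
    print("min=\(values[0]) max=\(values[values.count - 1]) time=\(elapsed)s")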

That said, I don't mind the article. It might help Apple discover places
where Swift could be made even faster.

