
App Developers on Swift Evolution - ingve
http://curtclifton.net/app-developers-on-swift-evolution
======
pcwalton
Apple's engineers are right on this one. From a performance point of view, the
only way Java (and dynamic languages like JavaScript) get away with having
almost everything be virtual is that they can JIT under a whole-program
analysis, add final automatically in the absence of subclassing, and
deoptimize in the presence of classloaders. In other words, Java
optimistically compiles under the assumption that classes that aren't
initially subclassed won't be, and if that assumption later turns out to be
invalid it deletes the now-incorrect JIT code and recompiles. This isn't an
option for Swift, which is ahead-of-time compiled and can't recompile on the
fly.

If you don't allow an ahead of time compiler to devirtualize anything, you're
going to have worse method call performance than _JavaScript_.

~~~
dilap
Yet Obj-C seems to get by quite fine, and everything is super-dynamic there.

Being able to dynamically hack behavior is a _huge boon_ when coding against a
closed-source, 3rd party platform.

Doesn't matter so much in an open-source context, where you can just modify
code and ship your own version of the library if necessary.

~~~
pcwalton
Depends what you mean by "just fine". Objective-C method dispatch is very
slow. It's significantly slower than that of JavaScript, for example.

~~~
dilap
Just fine in the sense that it's the dominant language on iOS, and iOS sets
the bar, IMO, for snappy experiences on modern platforms.

Stuff implemented in java and javascript, not so much.

Maybe saying objc is cheating a little bit, because it's so easy to drop into
straight C to optimize. But stuff like UITableView (edit: scroll => table) is
all heavily based on objc message sending, and it's easy to get great
scrolling perf using that api.

I don't know the exact number, but you can get a lot of message sends done in
16ms.

(Another example -- GitUp, insanely fast git client, objc.)

(On the other hand, I do remember that C++ based BeOS on 66 MHz PowerPCs back
in the day was _double-insanely_ fast, so maybe Obj-C is slow, on a language
level, and we just don't know it because everything else is so much slower, on
an implementation level. But then again that stuff was all virtual calls. I
just don't buy the argument that at the _system framework_ level dynamic
dispatch is a barrier to good performance.)

~~~
pcwalton
> Just fine in the sense that it's the dominant language on iOS, and iOS sets
> the bar, IMO, for snappy experiences on modern platforms.

> Stuff implemented in java and javascript, not so much.

I agree with the data but not the conclusion. iOS' performance is great in
spite of using Objective-C as its language, not because of it.

The only performance-related features that Objective-C brings to the table are
(a) being able to piggyback on the optimizations implemented in popular open
source C++ compilers; (b) being compatible with C, so you can easily inline C
into Objective-C when Objective-C's slow performance bites you; (c) not
requiring tracing garbage collection. When you're actually in heavy
Objective-C land with method calls in tight loops, performance is pretty bad,
both because message sending is slow and because methods can't be
devirtualized. Apple knows this, and they've basically reached the limit of
what they can do without incompatible changes to the Objective-C semantics.
Hence the performance-related design decisions in Swift.

(As for UIScrollView/UITableView, I agree that they're very fast relative to
Android or current Web browsers for example. I know the reasons for this and
they have nothing to do with the implementation language. Algorithmic and
engineering concerns often trump programming language performance.)

~~~
mpweiher
>iOS' performance is great in spite of using Objective-C as its language, not
because of it.

Nope. Objective-C is a _great_ language for performance. Remember the
97-3 rule: only roughly 3% of your code is responsible for almost all of
the performance, and you don't know in advance which parts those are.

The dynamic features of the language give you awesome productivity to more
quickly get to the point where you find out which 3% to actually optimize
heavily. Doing this optimization is straightforward because you have all the
tools at your disposal, the road to them is smooth and the performance model
is predictable.

I have repeatedly achieved performance that's better (yes!) than equivalent C
programs, and with a little bit of work you can actually maintain nice OO
APIs.

>(b) being compatible with C so you can easily inline C into Objective-C

I really don't understand where this (common) misunderstanding is coming from.
Objective-C _is_ C, or more precisely a superset of C. You don't "inline C
into Objective-C". You use different aspects of Objective-C as appropriate. If
you are not capable of using the full capabilities of the language, that is
your fault, not a problem of Objective-C.

This seems to have been lost, with people (inappropriately) using Objective-C
as a pure object-oriented language. It is not. It is a _hybrid_ object-
oriented language. In fact, the way it was intended to be used is to write
your components using C and use dynamic messaging as flexible packaging or
glue.

> because methods can't be devirtualized

Sure they can. I do it all the time:

    
    
       SEL msgSel = @selector(someMessage:);
       IMP devirtualized = [object methodForSelector:msgSel];
       for (int i = 0; i < LARGE_NUMBER; i++) {
           devirtualized(object, msgSel, arg[i]);
       }
    

Did you mean they can't be automatically devirtualized by the compiler? I hope
you understand that these are two different things. Of course, it would be
nice to have compiler support for this sort of thing, which would have been
miles easier than coming up with a whole new language:

    
    
       for (int i = 0; i < LARGE_NUMBER; i++) {
           [[object cached:&cache] someMessage:arg[i]];
       }
    
    

Or some other mechanism using const.
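For readers more at home in Python than Objective-C, the same manual caching trick exists there too, since `obj.method` is also resolved dynamically on every call. A toy sketch (the `Greeter` class is made up for illustration):

```python
class Greeter:
    def greet(self, name):
        return "hello, " + name

obj = Greeter()

# Normal dynamic dispatch: every iteration re-resolves obj.greet
# through the instance and its class before calling it.
slow = [obj.greet(str(i)) for i in range(3)]

# The IMP-caching equivalent: resolve the bound method once,
# then call through the cached reference inside the loop.
cached_greet = obj.greet
fast = [cached_greet(str(i)) for i in range(3)]

assert slow == fast == ["hello, 0", "hello, 1", "hello, 2"]
```

Same idea as the `methodForSelector:` snippet above: hoist the lookup out of the loop. The call itself is still indirect, which is plorkyeran's point below.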

> reached the limit of what they can do without incompatible changes to the
> Objective-C semantics. > Hence the performance-related design decisions in
> Swift.

Swift performance is significantly worse and waaaayy less predictable than
Objective-C, and that is despite the Swift compiler running a bunch of
mandatory optimizations even at -O0.

~~~
plorkyeran
Calling via a function pointer isn't devirtualization. It's still an
indirect call, and it doesn't allow the function to be inlined, since the
actual target isn't known until runtime. It merely gets you from
something 2-4x as expensive as a C++ virtual call to something roughly as
expensive.

~~~
mpweiher
Last I checked, a C++ virtual function call loads the method pointer via an
indirect memory load. Hard for the CPU to speculate through, so typically a
pipeline stall.

A function pointer that's in a local variable, so loaded into a register is a
completely different beast, as the measurements bear out.

In my measurements, a message-send is ~40% slower than a C++ virtual method
call, whereas an IMP-cached function call is ~40% faster, and slightly faster
than a regular function call.

~~~
plorkyeran
The vtable for an object being actively used will be in L1 cache, and when a
virtual function is called in a loop, Intel's CPUs have been able to predict
the target for many, many years. ARM may not; I've never had reason to deeply
investigate virtual call performance on iOS.

Finding calling via a function pointer to be faster than calling directly
suggests that you were not actually measuring what you thought you were
measuring.

------
klodolph
This is a cultural issue much more than a technical issue. I can relate to
both sides. "I want to be able to patch anything myself," versus "I want to be
able to reliably reason about how my module works."

The rhetoric on both sides can quickly get stupid. This is one of the major
ways in which languages are divided. Ruby folks are used to being able to
monkey patch everything, and Python folks tend to avoid it even though they
can. JavaScript programmers are divided on the issue: should you add a method
to Array.prototype, or is that just asking for trouble? I've certainly seen my
own fair share of bugs and crashes caused by programmers substituting types
where they shouldn't, and seen my fair share of frustrating limitations in
sealed modules that should just expose that one method I need, dammit.
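The JavaScript `Array.prototype` question has a direct Python analogue: classes are open, so a method can be swapped out at runtime, and the patch is visible to every caller in the process. A minimal sketch (the `Request` class here is a stand-in, not a real library class):

```python
class Request:
    """Stand-in for a third-party class we can't edit directly."""
    def get(self, url):
        return ("GET", url)

# Monkey patching: replace the method on the class at runtime.
# Every existing and future instance now sees the new behavior,
# which is exactly why this is both a power tool and a footgun.
_original_get = Request.get

def noisy_get(self, url):
    self.last_url = url          # bolt on extra behavior
    return _original_get(self, url)

Request.get = noisy_get

r = Request()
assert r.get("/users") == ("GET", "/users")
assert r.last_url == "/users"
```

Python culture mostly reserves this for tests, Ruby culture embraces it, and final-by-default Swift rules it out entirely; that spectrum is the cultural divide described above.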

Objective-C leaned towards the "go wild, do anything" approach, which grew
quite naturally out of the minimalistic "opaque pointer + method table"
foundations for object instances. One of the reasons that you make such a
choice in the first place is the ease of implementation, but in 2015, writing
your own compiler is easier than ever. So Apple is quite naturally considering
application reliability as a priority (since they're competing with a platform
that uses a safer language for most development).

Unfortunately, it's a cultural fight where one side or the other must
necessarily lose.

~~~
kstrauser
Because I have no experience with Swift, could someone more informed explain
this to me? How would my subclassing an Apple-provided class and overriding
its methods affect anyone but me? In Python, if I write:

    
    
        class MyRequest(requests.Request):
            def get(self): do_something_stupid()
    

then my coworkers can still use `requests.Request` itself without getting my
bad behavior, and if they find themselves looking at a flaw in my code, they
know not to blame the upstream author. What's different about the Swift
situation?

I'm kind of horrified at the idea of an OOP language that wouldn't easily let
me override a parent class's behavior. If I break something in the subclass,
it's on me to fix it. That never reflects poorly on that parent class.

~~~
winstonewert
Sure, it only affects you. The problem is that changes to the superclass
can now break your code.

For example, perhaps right now the function looks like this:

    
    
        def get(self, url, method='GET'):
            ...
    
        def post(self, url):
            self.get(url, method='POST')
    

You override get, to add some additional functionality. All is fine.

Then somebody realizes that the original code was silly, and rewrites it:

    
    
        def get(self, url, method='GET'):
            if method != 'GET':
                print("DEPRECATED!")
            self.request(url, method=method)
    
        def post(self, url):
            self.request(url, method='POST')
    

It seems to do the same thing, and probably passes all the same tests. But
suddenly, now your code doesn't get called for post requests and your
additional functionality breaks in a mysterious way.

Perhaps it's still your fault for doing a bad job subclassing, but it's
going to look like it's the fault of the person who fixed the parent class.

~~~
dplgk
> changes to superclass can now break your code

The same exact thing happens when using composition.
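A sketch of that failure mode with composition instead of inheritance (hypothetical `ClientV1`, not a real library): the wrapper never subclasses anything, yet it still depends on observable details of the wrapped class, so an upstream change can break it just the same.

```python
class ClientV1:
    """Version 1 of an upstream library class."""
    def get(self, url):
        return {"status": 200, "url": url}   # returns a plain dict

class CachingClient:
    """Composition: wrap a client, cache responses by URL."""
    def __init__(self, inner):
        self._inner = inner
        self._cache = {}

    def get(self, url):
        if url not in self._cache:
            self._cache[url] = self._inner.get(url)
        return self._cache[url]

client = CachingClient(ClientV1())
assert client.get("/a")["status"] == 200     # peeks inside the dict

# If version 2 changes get() to return a Response object instead of
# a dict, this wrapper breaks at the ["status"] lookup -- no
# subclassing involved, same fragile coupling to upstream behavior.
```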

------
devit
Final by default is correct, since otherwise you are effectively exposing and
having to maintain an extra stable API towards subclasses, which is a
nightmare and won't be done properly unless it's intentional.

In fact, having virtual methods at all outside of
interfaces/traits/typeclasses is dubious language design, since it
conflates the concepts of interface and implementation and makes it
inconvenient, impossible, or confusing to name the implementation instead
of the interface.

The issues in the discussion are instead due to Apple's framework code being
closed source and unmodifiable by users and developers, and also buggy
according to the author.

~~~
lpsz
I'm an app developer. This change will absolutely break some of my stuff, and
it's going to suck. Even with that, I do feel OP is taking an overtly
political stance (even using the word "banned".) This change is perfectly
reasonable within the already-strict mindset of Swift. Having a less-strict
language just to work around potentially buggy Apple frameworks would be
setting a bad precedent.

Using "final" also has some performance wins by reducing dynamic dispatch. [1]

[1]
[https://developer.apple.com/swift/blog/?id=27](https://developer.apple.com/swift/blog/?id=27)

~~~
eridius
What will it break? Name one single thing.

Remember, the only change here is the _default_ for classes that don't specify
it. As I stated on the list earlier, I guarantee you 100% that when Apple
starts vending framework classes written in Swift they will make an
intentional decision on every single class as to whether it is final or not.
And the language default won't impact that at all.

~~~
HaloZero
If Swift changes the default, do you think they'll audit all of UIKit and
AppKit to fix it? They're still transitioning things over to Swift piece
by piece. I imagine they'll let the defaults stand unless there is a good
reason not to.

------
adrianm
I find this slow march in "modern" language design toward completely static
compilation models troubling in the extreme. It feels like a significant
historical regression; its proponents speak as if Smalltalk and the
Metaobject Protocol are things to revile and shun, not elegant programming
models that we as programmers should aspire to understand and use in our own
programs.

To elide these features as a matter of principle implies that you believe your
compilation model is perfect, and is able to deduce all information necessary
for optimal compilation of your program statically, perhaps augmented with
profiling information you have obtained from earlier runs. It also makes
upgrading programs for users more difficult since patches must be applied in a
non-portable manner across programs. I shan't mention the fact that they make
iterative, interactive development an ordeal. The Swift REPL is progress
(although REPLs for static languages are nothing new), but it still pales in
comparison to the development and debugging experience in any Smalltalk or
Lisp system.

There is no reason why the typing disciplines Swift is designed to support
should demand the eradication of all dynamism in the runtime and object model.

If you have never heard of the Metaobject Protocol or similar concepts before,
here is the standard reference: [https://mitpress.mit.edu/books/art-
metaobject-protocol](https://mitpress.mit.edu/books/art-metaobject-protocol)

This discussion also reminds me of this essay by Richard P. Gabriel:
[https://www.dreamsongs.com/SeatBelts.html](https://www.dreamsongs.com/SeatBelts.html)

~~~
pcwalton
OK, but Swift is an ahead of time compiled language, unlike Lisp or Smalltalk.
That makes the tradeoffs completely different.

------
munificent
Methods in C# are non-virtual by default and almost every class in the core
libraries is sealed and the world hasn't ended in .NET land.

I have definitely done some hacks to work around bugs in frameworks I've used.
But I've _also_ had to deal with users who broke _my_ libraries or
inadvertently wandered into the weeds because it wasn't clear what things were
and weren't allowed.

This is one of those features where the appeal depends entirely on which
role you imagine yourself playing in the scenarios where the feature comes
into play.

~~~
randomfool
But .NET definitely struggled with cultural issues around making APIs virtual.
Because of Microsoft's strong 'no breaking changes' rule they were extremely
cautious about adding virtuals- in my experience it was not unusual to see it
costed at 1 week dev/test time for a single virtual on an existing method (in
WPF).

C++ is also non-virtual by default and I think it's worked out OK.

~~~
derekdb
Some of the cultural issues around avoiding virtual came from hard lessons
with v1.0. After shipping v1 they realized that there were a large set of
security and compatibility issues with not having framework classes sealed.
No-one I worked with really like the idea of sealing all our classes, but the
alternative was an insane amount of work. It is just too hard to hide
implementation details from subclasses. If you don't hide the details then you
can never change the implementation.

I can't say for Swift, but there were also real security challenges. It is
hard enough to build a secure library, but to also have to guard against
malicious subclasses is enough to make even the most customer friendly dev run
screaming. My team hit this and it cost us a huge amount of extra work that
meant fewer features. vNext we shipped sealed classes and more features and
the customers were happier.

------
msie
Sealing classes by default is troubling. I'm having a bad feeling about the
future of Swift. I also think it's growing too big already.

~~~
pjmlp
The fragile base class problem shows that not sealing is also troubling, much
more than sealing them.

~~~
mpweiher
Hmm...that problem that both Objective-C and Smalltalk don't have...

~~~
pjmlp
All OO languages suffer from it.

~~~
msie
How big of a problem is it really?

~~~
pjmlp
Quite big for library writers.

You can never be sure how changes to the public / protected API of a class
affect classes further down the inheritance tree.

Especially bad is when methods that were never supposed to be overridden
change their semantics, leading to erratic behaviour in classes that have
overridden them.

------
angerman
The whole argument boils down to how developers have been treated by
Apple's libraries so far. The submission/review process is quite
prohibitive, and the core libraries (like almost every piece of software)
have flaws. Together with the opaque Radar bug reporting / bug resolution
system, you had to resort to method swizzling to keep your sanity (I guess
the PSPDFKit guys can speak volumes on that).

Going forward, I hope Apple sticks to the open-source approach they took
with Swift, and that more of the libraries will follow, with Apple
encouraging more community participation.

------
vor_
This is a correct decision. APIs have to be designed to support subclassing
properly. It's also a performance win.

~~~
msie
Interesting, it's like how Swift forces you to think about nulls. Now you have
to think about people subclassing your classes.

------
e28eta
As an iOS developer for years, I have never resorted to a runtime hack,
subclass and re-implement, or similar trick to work around a framework bug.

Our team rarely shipped a new version of the app concurrent with a .0 release
of iOS, so that might be related, but we always found ways to work around
issues while respecting the APIs provided.

I understand other products and other developers have had a different
experience, but I'm not overly concerned about this particular change.

~~~
veidr
OK, but: As a (Mac) OS X developer, in 2003 I implemented a custom NSTextView
subclass to fix two specific bugs that were impossible to work around
otherwise. That subclass was used in everything we shipped for years and
years... on OS X 10.3, 10.4, 10.5, 10.6, and 10.7.

(After that I lost track, but I think one or both bugs were finally fixed.)

I feel like maybe this change will make Apple frameworks more stable in the
long run, but that will take a tech eternity (10+ years).

In the meantime, the overall user experience will be degraded by system
framework bugs that can no longer be worked around. It just sounds more
aspirational than realistic.

"Let's make it impossible for developers to work around our bugs -- that will
_force_ us to write perfect software!"

------
zeckalpha
Apple has been pushing composition over inheritance for years. No surprises.

~~~
bsaul
I'm not sure, so I'm asking: is your comment ironic? I see inheritance
everywhere in UIKit.

