Yup, ObjC is light and scripty. Whenever you don't like what it's doing, you just make it do what you want: categories, casting to id, swizzling, respondsToSelector:, NSSelectorFromString, etc. And when it's too slow, you write those functions in C/C++, which it integrates with well.
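To make that concrete, here's a minimal sketch of a few of those tricks together (Greeter and the Shouting category are made-up names for illustration):

    #import <Foundation/Foundation.h>

    // Category: bolt a method onto a class you don't own.
    @interface NSString (Shouting)
    - (NSString *)shouted;
    @end

    @implementation NSString (Shouting)
    - (NSString *)shouted {
        return [[self uppercaseString] stringByAppendingString:@"!"];
    }
    @end

    @interface Greeter : NSObject
    - (void)greet:(NSString *)name;
    @end

    @implementation Greeter
    - (void)greet:(NSString *)name {
        NSLog(@"Hello, %@", [name shouted]);
    }
    @end

    int main(void) {
        @autoreleasepool {
            id greeter = [Greeter new];  // cast to id: the compiler stops second-guessing you

            // Build the selector from a string at runtime and probe for it.
            SEL sel = NSSelectorFromString(@"greet:");
            if ([greeter respondsToSelector:sel]) {
                // Dynamic dispatch with no compile-time type check.
                [greeter performSelector:sel withObject:@"world"];
            }
        }
        return 0;
    }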
With Swift you're always contorting yourself to express things the way the language wants. It reminds me of writing Java; it worships far too heavily at the altar of type safety, as if type safety were a goal in and of itself.
That seems like the opposite to me: very few things need those dynamic features, and those that do can be special-cased, which makes ObjC the purer but equally more academic language in my view.
It's the difference between a language that trusts you not to be a fool, and a language that assumes no one should ever be allowed to use dynamic runtime features.
I've enjoyed my time with Objective-C very much. For consumer coding, I think it's very close to the safety/expressiveness sweet spot.
Agreed. When I first started out I never could have guessed it would become my favorite language. Apple did a lot of great things during my tenure writing ObjC - GCD and ARC are probably the best. I couldn't believe how easy it was to write performant code, both for powerful desktops and power-starved mobile devices.
It was the first place (pre-ARC) that I learned to manage my own memory, and the first (pre-GCD) that I learned to be as safe as possible in a multi-threaded environment.
If in fact Swift is the death-knell of Objective-C, I will be sad. I've written C and C++ and Python and Go and Scheme and Lisp and on and on (Java, C#...) and it will remain one of my favorites for years to come.
By academic, I meant the academic problems that Swift solves, such as increased type safety...
Not academic research, as in: we did a study of developers who write iOS apps and determined that type safety as implemented in Swift helps great developers write apps faster. (F#, on the other hand, has a great type system that doesn't get in your way.)
It's all academic until things blow up at runtime.
Choosing more type safety over less is a pragmatic concern - one input among many into the cost-benefit analysis we all do when picking languages and platforms. More type-safe languages tend to have more boilerplate and hoops to jump through, but they also tend to be more robust. The latter can be shown theoretically far more efficiently than with empirical research, but it is no less true for it. We are limited by the laws of mathematics, however 'pragmatic' we fancy ourselves.
Having never written a line of Swift or ObjC, I have no opinion on the matter at hand. I also make my money out of the thoroughly unsafe JavaScript. But it's not a good idea to blithely dismiss type safety as an ivory tower concern of academic CS. The benefits are real, and can be measured in the number of panicked pages/emails received at 3am.
None taken. Many great projects, such as the Linux kernel, are written in languages with near-zero type safety, and their tremendously experienced authors continue to this day to advocate choosing languages with poor type safety over those with excellent type safety.
I think it's worth hearing Linus out on the reasons you might want to choose a less type safe language over a type safe one.
OK, there's a difference here between “type correctness” (your binary not, e.g., trying to apply a string function to a floating-point number) and “type safety” (your language using formal methods to ensure type correctness). If you as the programmer are willing and able to ensure type correctness manually, the type safety of your language becomes close to irrelevant. It's just that not everyone is a kernel programmer who needs the extra oomph you get by sacrificing type safety.
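A tiny sketch of that distinction, C-style since that's Linus's world (the names are made up): the code below is type-correct, but only by the programmer's discipline, because the compiler never checks it.

    #include <stdio.h>

    /* A C-style callback: the context pointer is untyped, so the
       compiler cannot check what's behind it. */
    typedef void (*callback_t)(void *context);

    static void print_double(void *context) {
        /* Type-correct only by convention: we promise to always
           pass a pointer to a double here. */
        double *value = context;
        printf("%f\n", *value);
    }

    int main(void) {
        double pi = 3.14159;
        callback_t cb = print_double;
        cb(&pi);        /* correct, but the compiler never verified it */
        /* cb("oops"); would also compile, and reinterpret the string's
           bytes as a double at runtime - garbage, not a compile error */
        return 0;
    }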
C++ can be used in ways that provide more type safety than C. (And people do use it that way - that's part of why not every C++ program is a crumbling ruin.) But people (including Linus) choose not to use that extra safety. They have their reasons, not all of which are stupid.
Type safety is not the only good in a programming language.
Very little research in programming languages is anything like “We tried it out on a bunch of developers and the data support our conclusion that ...”.
PL is not an empirical science in that sense, it's more like mathematics. So your initial intuition about type safety, etc. is right on the money—that's the academic side of it.
Happily, the two coexist well, and increasingly so. Where one is the better choice for a particular job, you can use it. It's just that you now have two tools to choose from.
If you use Swift-only features, your class is inaccessible to ObjC. If you mark your class @objc, you can't use many Swift features.
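A quick sketch of the wall you hit (the names are made up):

    import Foundation

    // Pure Swift: generics and enums with associated values have no
    // ObjC representation, so none of this is visible from ObjC.
    enum LoadState<T> {
        case loading
        case loaded(T)
    }

    class Cache<T> {                    // generic class: cannot be exposed to ObjC
        var state: LoadState<T> = .loading
    }

    // To be visible to ObjC you inherit from NSObject and limit your
    // API to types the ObjC runtime understands.
    @objc class LegacyBridge: NSObject {
        @objc func name() -> String { "bridge" }    // OK: String bridges to NSString
        // @objc func state() -> LoadState<Int> {}  // error: result type cannot be
        //                                          // represented in Objective-C
    }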
The only thing that works well is writing most of your app in Swift and writing ObjC wrappers for C/C++ code, because dealing with C in Swift is atrocious and many libraries simply don't work in Swift because of initialization issues with complex structs.
Also, having two tools to choose from that do practically the same thing is never really a good idea because you waste time trying to figure out whether Swift->ObjC interop is going to fuck you over more than ObjC->Swift interop for this particular class.
Also, once you start mixing code, with your code importing the generated Swift header and your Swift bridging header importing your ObjC classes, you get huge compile times: any time you change an ObjC header, all of your Swift code has to be recompiled, which takes forever because the Swift compiler is dog slow.
My advice: pick one language for your project and stick with it. Until Apple writes its base libraries in Swift, the language I'd advise is ObjC, because if you run into performance issues, C/C++ is going to save your ass far more than Swift will.
And mostly because
let cell = tableView.dequeueReusableCell(withIdentifier: "foo") as! CustomTableViewCell
is MORE typing than
CustomTableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:@"foo"];
And what the heck does that exclamation mark even do? Will the program then crash on NULL (more unsafe than ObjC), or will it just liquefy into Kentucky Bourbon?
Personally, I find the concept of NULL (though toxic) more comprehensible and manageable than all those question & exclamation marks in Swift.
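For what it's worth, a small sketch of what those marks actually mean (the dictionary is a made-up example):

    // Optional is just an enum; ? and ! are sugar for unwrapping it.
    let views: [String: Int] = ["home": 3]

    let maybe: Int? = views["about"]        // ? : the value may be nil

    if let n = maybe {                      // safe unwrap: runs only if non-nil
        print("about: \(n)")
    }

    let home = views["home"]!               // ! : force unwrap, traps if nil

    // as! is the force-cast version: it traps at runtime if the cast
    // fails, which is the crash the parent comment is asking about.
    let anything: Any = "hello"
    let s = anything as! String             // fine
    // let i = anything as! Int             // traps: "Could not cast value..."

    print(home, s)                          // prints: 3 hello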
ObjC is practical, Swift is academic.