That seems like the opposite to me: very few things need those dynamic features, and those that do can be special-cased, which makes ObjC the purer but equally more academic language in my view.
It's the difference between a language that trusts you not to be a fool and a language that assumes no one should ever be allowed to use dynamic runtime features.
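To make "dynamic runtime features" concrete, here is a minimal Swift sketch (Apple platforms only; the class and method names are invented for illustration) of the kind of thing ObjC makes routine and Swift permits only through the NSObject bridge: looking up and invoking a method by name at runtime.

    import Foundation

    // A class exposed to the Objective-C runtime so its methods
    // can be discovered and invoked by name at runtime.
    class Greeter: NSObject {
        @objc func greet() -> NSString { "hello" }
    }

    let greeter = Greeter()
    let selector = NSSelectorFromString("greet")  // method name chosen at runtime

    // responds(to:) and perform(_:) are classic ObjC-runtime dynamism;
    // Swift only allows them on NSObject subclasses.
    if greeter.responds(to: selector) {
        let result = greeter.perform(selector)?.takeUnretainedValue() as? NSString
        print(result ?? "no result")
    }

Whether code like this is an essential tool or a special case to be quarantined is more or less the disagreement upthread.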
I've enjoyed my time with Objective-C very much. For consumer coding, I think it's very close to the safety/expressiveness sweet spot.
Agreed. When I first started out I never could have guessed it would become my favorite language. Apple did a lot of great things during my tenure writing ObjC - GCD and ARC are probably the best. I couldn't believe how easy it was to write performant code, both for powerful desktops and power-starved mobile devices.
It was the first place (pre-ARC) that I learned to manage my own memory, and the first (pre-GCD) that I learned to be as safe as possible in a multi-threaded environment.
If in fact Swift is the death-knell of Objective-C, I will be sad. I've written C and C++ and Python and Go and Scheme and Lisp and on and on (Java, C#...) and it will remain one of my favorites for years to come.
By academic, I meant the academic problems that Swift solves, such as increased type safety...
Not academic research, as in: we did a study of developers who write iOS apps and determined that type safety as implemented in Swift helps great developers write apps faster. (F#, on the other hand, has a great type system that doesn't get in your way.)
It's all academic until things blow up at runtime.
Choosing more type safety over less is a pragmatic concern - one input among many into the cost-benefit analysis we all do when picking languages and platforms. More type-safe languages tend to have more boilerplate and hoops to jump through, but they also tend to be more robust. The latter can be demonstrated more efficiently by theory than by empirical research, but it is no less true for that. We are limited by the laws of mathematics, however 'pragmatic' we fancy ourselves.
Having never written a line of Swift or ObjC, I have no opinion on the matter at hand. I also make my living writing thoroughly unsafe JavaScript. But it's not a good idea to blithely dismiss type safety as an ivory-tower concern of academic CS. The benefits are real, and can be measured in the number of panicked pages/emails received at 3am.
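To make that concrete, here's a minimal Swift sketch (the findUser function is hypothetical) of how a type system turns the classic 3am nil crash into a compile-time obligation:

    // Hypothetical lookup that may fail; the Optional return type
    // records that possibility in the signature.
    func findUser(id: Int) -> String? {
        id == 1 ? "alice" : nil
    }

    // The compiler refuses to let the result be used as a plain String
    // until the nil case is handled; this is the "boilerplate and hoops"
    // that buy the robustness mentioned above.
    if let name = findUser(id: 2) {
        print(name.uppercased())
    } else {
        print("no such user")
    }

The equivalent in a dynamically typed language runs happily until the nil actually comes back, typically in production.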
None taken. Many great projects, such as the Linux kernel, are written in languages with near-zero type safety, and their tremendously experienced authors continue to this day to advocate choosing languages with poor type safety over those with excellent type safety.
I think it's worth hearing Linus out on the reasons you might want to choose a less type-safe language over a type-safe one.
OK, there's a difference here between “type correctness” (your binary not applying, say, a string function to a floating-point number) and “type safety” (your language using formal methods to guarantee type correctness). If you as the programmer are willing and able to ensure type correctness manually, the type safety of your language becomes close to irrelevant. It's just that not everyone is a kernel programmer who needs the extra oomph you get by sacrificing type safety.
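A small Swift sketch of that distinction, with arbitrary values: manual type correctness means the programmer vouches for the types, while type safety means the compiler checks them under formal rules.

    // Manual type correctness: the programmer promises values[0] is an Int.
    // The compiler cannot verify the promise; a wrong one fails at runtime.
    let values: [Any] = [1, 2, 3]
    let first = values[0] as! Int

    // Type safety: the element type is enforced by the compiler,
    // so no cast is needed and no runtime surprise is possible.
    let checked: [Int] = [1, 2, 3]
    let sum = checked.reduce(0, +)
    print(first, sum)

Both programs compute the same thing; the difference is who is on the hook when the assumption about the types is wrong.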
C++ can be used in ways that provide more type safety than C. (And people do - that's part of why not every C++ program is a crumbling ruin.) But people (including Linus) choose not to use that extra safety. They have their reasons, not all of which are stupid.
Type safety is not the only good in a programming language.
Very little research in programming languages is anything like “We tried it out on a bunch of developers and the data support our conclusion that ...”.
PL is not an empirical science in that sense; it's more like mathematics. So your initial intuition about type safety, etc., is right on the money: that's the academic side of it.