By academic, I meant the academic problems that Swift solves, such as increased type safety...
Not academic research as in "we did a study of developers who write iOS apps and determined that type safety as implemented in Swift helps great developers write apps faster." (F#, on the other hand, has a great type system that doesn't get in your way.)
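To make that concrete, here's a toy sketch of my own (not from any study) showing the kind of mistake Swift's type system rules out before the program ever runs:

    let count: Int = 42

    // This line would not compile - Swift won't let an Int stand in for
    // a String, so the mismatch is caught at compile time, not at runtime:
    // let shout = count.uppercased()
    // error: value of type 'Int' has no member 'uppercased'

    // Optionals push "this value might be missing" into the type itself,
    // so forgetting the nil case is a compile error rather than a crash.
    let maybeName: String? = nil
    if let name = maybeName {
        print(name.uppercased())
    } else {
        print("no name provided")
    }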
It's all academic until things blow up at runtime.
Choosing more type safety over less is a pragmatic concern - one input among many into the cost-benefit analysis we all do when picking languages and platforms. More type-safe languages tend to have more boilerplate and hoops to jump through, but they also tend to be more robust. The latter can be proven theoretically far more efficiently than with empirical research, but it is no less true for it. We are limited by the laws of mathematics, however 'pragmatic' we fancy ourselves.
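To illustrate the trade-off with a small sketch (the function and the "PORT" key are hypothetical, purely illustrative): the hoop Swift makes you jump through here is exactly what keeps the failure out of production.

    // Int(_:) returns Int?, so the compiler forces a decision about the
    // failure case right here - that's the boilerplate, and the robustness.
    func port(from environment: [String: String]) -> Int {
        guard let raw = environment["PORT"], let value = Int(raw) else {
            return 8080  // explicit, visible fallback
        }
        return value
    }

    print(port(from: ["PORT": "3000"]))  // 3000
    print(port(from: ["PORT": "oops"]))  // 8080 - bad input handled up front
    print(port(from: [:]))               // 8080 - missing input handled too

In a less type-safe language the equivalent coercion can silently produce garbage that blows up much later, far from the actual bug.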
Having never written a line of Swift or ObjC, I have no opinion on the matter at hand. I also make my money from the thoroughly unsafe JavaScript. But it's not a good idea to blithely dismiss type safety as an ivory-tower concern of academic CS. The benefits are real, and can be measured in the number of panicked pages/emails received at 3am.
None taken. Many great projects, such as the Linux kernel, are written in languages with near-zero type safety, and their hugely experienced authors continue to this day to advocate choosing languages with poor type safety over those with excellent type safety.
I think it's worth hearing Linus out on the reasons you might want to choose a less type-safe language over a more type-safe one.
OK, there's a difference here between "type correctness" (e.g. your binary NOT trying to apply a string function to a floating-point number) and "type safety" (your language using formal methods to ensure type correctness). If you as the programmer are willing and able to ensure type correctness manually, the type safety of your language becomes close to irrelevant. It's just that not everyone is a kernel programmer who needs the extra oomph you get by sacrificing type safety.
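In Swift terms the same distinction looks like this (a sketch of my own, not anyone's production code): the checked cast is type safety at work, while unsafeBitCast hands responsibility for type correctness back to you, which is roughly where a C kernel programmer lives all the time.

    // Type safety: the language checks the cast, and the failure mode is
    // a clean nil, never a misinterpreted value.
    let value: Any = 3.14
    if let d = value as? Double {
        print(d * 2)  // 6.28 - checked and type-correct
    }

    // Opting out: unsafeBitCast reinterprets the same 8 bytes with no
    // check at all. Type correctness is now entirely the programmer's
    // problem; the result is just the raw IEEE 754 bit pattern.
    let bits = unsafeBitCast(3.14, to: UInt64.self)
    print(String(bits, radix: 16))  // 40091eb851eb851f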
C++ can be used in ways that provide more type safety than C. (And people do - that's part of why not every C++ program is a crumbling ruin.) But people (including Linus) choose not to use that extra safety. They have their reasons, not all of which are stupid.
Type safety is not the only good in a programming language.
Very little research in programming languages is anything like “We tried it out on a bunch of developers and the data support our conclusion that ...”.
PL is not an empirical science in that sense; it's more like mathematics. So your initial intuition about type safety, etc. is right on the money - that's the academic side of it.