A Eulogy for Objective-C (realm.io)
125 points by AlexeyBrin on Aug 4, 2015 | 75 comments



I really miss the dynamic features of Objective-C in Swift. Writing a JSON mapper or serializer is really simple once you learn the runtime.

You can't do that in Swift.

Objective-C was a compromised version of Smalltalk. And Swift is a compromised version of a true modern language.

I'd much prefer an uncompromised Smalltalk. It's better for medium-sized mobile apps.

And until the rest of the libraries are in Swift, it doesn't feel very different.

It's like you're writing Python but you're forced to use Java libraries and Java-style APIs.

I hope Apple doesn't ram it down our throats. Their own frameworks depend on its dynamism, and in many ways, in my limited opinion, Objective-C is better for developer happiness.


I initially had the same complaints about moving to Swift, especially with regards to dynamically created objects.

However, I have found benefit in having strict model types that my JSON data must map into — or else cause compiler errors. I far prefer to pick up problems at compile time than learn about them at run time.


Yup, ObjC is light and scripty. Whenever you don't like what it's doing, you just make it do what you want: categories, casting to id, swizzling, respondsToSelector:, NSSelectorFromString, etc. And when it's too slow, you write those functions in C/C++, which it integrates well with.

In Swift you're always contorting yourself to express things the way the language wants. It reminds me of writing Java; it worships far too heavily at the altar of type safety, as if type safety were a goal in and of itself.

ObjC is practical, Swift is academic.


That seems like the opposite to me: very few things need those dynamic features, and those that do can be special-cased, which makes ObjC the purer but equally more academic language in my eyes.


It's the difference between a language that trusts you not to be a fool, and a language that assumes no one should ever be allowed to use dynamic runtime features.

I've enjoyed my time with Objective-C very much. For consumer coding, I think it's very close to the safety/expressiveness sweet spot.


Agreed. When I first started out I never could have guessed it would become my favorite language. Apple did a lot of great things during my tenure writing ObjC - GCD and ARC are probably the best. I couldn't believe how easy it was to write performant code, both for powerful desktops and power-starved mobile devices.

It was the first place (pre-ARC) that I learned to manage my own memory, and the first (pre-GCD) that I learned to be as safe as possible in a multi-threaded environment.

If in fact Swift is the death knell of Objective-C, I will be sad. I've written C and C++ and Python and Go and Scheme and Lisp and on and on (Java, C#...), and Objective-C will remain one of my favorites for years to come.


I assume by "academic" the other poster meant "informed by academic research". Swift is evidently more so than Objective-C.


By academic, I meant academic problems that Swift solves such as increased type safety...

Not academic research, as in: we did a study of developers who write iOS apps and determined that type safety as implemented in Swift helps great developers write apps faster. (F#, on the other hand, has a great type system that doesn't get in your way.)


It's all academic until things blow up at runtime.

Choosing more type safety over less is a pragmatic concern: one input among many into the cost-benefit analysis we all do when picking languages and platforms. More type-safe languages tend to have more boilerplate and hoops to jump through, but they also tend to be more robust. The latter can be proven theoretically more efficiently than with empirical research, but it is no less true for it. We are limited by the laws of mathematics, however 'pragmatic' we fancy ourselves.

Having never written a line of Swift or ObjC, I have no opinion on the matter at hand. I also make my money from the thoroughly unsafe JavaScript. But it's not a good idea to blithely dismiss type safety as an ivory-tower concern of academic CS. The benefits are real, and can be measured in the number of panicked pages/emails received at 3am.


Have you thought about switching to TypeScript to get the benefits of type safety?

http://www.typescriptlang.org/


No offence, calling type safety an “academic problem” shows a tremendous amount of inexperience.


None taken. Many great projects, such as the Linux kernel, are written in languages with near-zero type safety, and their authors, with tremendous experience, continue to this day to advocate choosing languages with poor type safety over those with excellent type safety.

I think it's worth hearing Linus out on the reasons you might want to choose a less type-safe language over a more type-safe one.

http://harmful.cat-v.org/software/c++/linus


OK, there's a difference here between “type correctness” (your binary NOT trying to apply a string function to a floating-point number, e.g.) and “type safety” (your language using formal methods to ensure type correctness). If you as the programmer are willing and able to ensure type correctness manually, the type safety of your language becomes close to irrelevant. It's just that not everyone is a kernel programmer who needs the extra oomph you get if you sacrifice type safety.


Linus is talking about C++, which no one considers safe.


It can be used in ways that provide more type safety than C. (And people do - that's part of why not every C++ program is a crumbling ruin.) But people (including Linus) choose not to use that extra safety. They have their reasons, not all of which are stupid.

Type safety is not the only good in a programming language.


Very little research in programming languages is anything like “We tried it out on a bunch of developers and the data support our conclusion that ...”.

PL is not an empirical science in that sense, it's more like mathematics. So your initial intuition about type safety, etc. is right on the money—that's the academic side of it.


I couldn't agree more! There are a ton of things you can do with the runtime that just don't seem possible in Swift.


Agreed.


Happily, the two co-exist well, and increasingly so. Where one is a better choice for a particular job, you have it. It's just that you now have two tools to choose from.


No they don't.

If you use Swift features then your class is inaccessible to ObjC. If you mark your class @objc then you can't use many Swift features.

The only thing that works well is writing most of your app in Swift and writing ObjC wrappers for C/C++ code, because dealing with C in Swift is atrocious, and many libraries simply don't work in Swift because of initialization issues with complex structs.

Also, having two tools that do practically the same thing is never really a good idea, because you waste time trying to figure out whether Swift->ObjC interop is going to fuck you over more than ObjC->Swift interop for this particular class.

Also, once you start mixing code, with your code importing a Swift header and your Swift bridging header importing your ObjC classes, you start getting huge compile times: any time you change an ObjC header, your entire Swift codebase has to be recompiled, which takes forever because the Swift compiler is dog slow.

My advice: pick one language for your project and stick with it. Until Apple writes its base libraries in Swift, the language I would advise is ObjC, because if you run into performance issues, C/C++ is going to save your ass far more than Swift will.

And mostly because

    var cell = tableView.dequeueReusableCellWithIdentifier("foo") as! CustomTableViewCell

is MORE typing than

    CustomTableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:@"foo"];


And what the heck does that exclamation mark even do? Will the program then crash on NULL (more unsafe than ObjC), or will it just liquefy into Kentucky Bourbon?

Personally, I find the concept of NULL (though toxic) more comprehensible and manageable than all those question and exclamation marks in Swift.


Yup, [nil anyMessageReturnsNil] is probably the best decision ever made in ObjC.


Also the agonisingly long compile times.


"I'd much prefer a non compromised small talk.It's better for medium size mobile apps."

Sounds like RubyMotion.


For anyone who isn't familiar, Aaron is one of the most venerable Objective-C programmers out there. He worked at NeXT in the '90s and has been teaching new Objective-C programmers since OS X was released.


Can I blissfully go on writing OS X and iOS apps in Objective-C (I've spent 15+ years training brain cells in Obj-C)? Aaron H. seems to suggest I can; Apple isn't just going to deprecate the entire language one morning in early June, I hope. I have learned just enough Swift to read it, but I'm not ready to invest the time to learn it well.


You're definitely safe writing your own apps in it well into the foreseeable future — there's not only way too much app code out there for it to be deprecated soon, but the vast majority of the system and bundled apps are written in Objective-C.

I'd expect Objective-C jobs, though, to decline in number over time as Swift jobs rise. If you're doing it professionally, not learning Swift is going to increasingly limit your options, perhaps pretty rapidly — a few months ago, I got turned down for a job in part because of lack of Swift experience. Ended up landing a different job where we're writing a Swift app and step 1 for everyone on the team was "learn Swift".


However, as those Objective-C jobs decline, their pay scale will probably increase as companies desperately attempt to find someone to maintain their legacy apps.


Not too improbable. I can still remember hand-wringing search quests for COBOL programmers.


ObjC is far from dead for me. And it will take Swift a long time (if ever) to deliver what ObjC can do.

Swift is not completely bad per se, but it's far from being as approachable as C. In C, you only need to read all the header files, then you “know everything”. In Swift, there aren't even any files with declarations you _could_ read. But then, a completely obscure language seems like a perfect fit for a walled garden.


FYI for those who prefer to watch videos at faster speeds.

Instead of realm.io, I went to the YouTube URL:

https://www.youtube.com/watch?v=2s7UR-rNFss

The notes say: "THIS VIDEO IS NOT MEANT TO BE WATCHED ON ITS OWN. We went to great pains to synchronize it with the speaker’s slides,"

Well, the embedded YouTube player at realm.io only allows a speed of 1.5x.

The direct YouTube URL has a 2x speed option, so I watched it there. Before watching, though, I spent a few seconds flipping through the slides to get an idea of what Aaron was speaking about.


It's pretty nice though, having the slides synchronised to the video. And on realm.io, below the video, there's a transcript with linked resources :)


The text below was simply excerpts instead of the full transcript. The incomplete excerpts looked like snippets of context to guide people to the middle parts of the video. You actually had to watch/listen to the video to get the full text.

For example, at the beginning, he says the word "tricycle" and when I searched for that word, it wasn't in the text below. That's when I knew it wasn't a full transcript.


The developments that Facebook is working on with React Native are just so much more exciting than another riff on C++. For most "utility" apps, I just don't think performance is such an issue that research should focus on these (comparatively) incredibly low-level concerns.

When I see the bugs in iOS apps, they have to do with models that are broken for what modern programs look like today: things like asynchronicity. Animation is still a mess to wrap your mind around when you have a series of asynchronous events (animations, loads, what have you) that are tied together but can also be canceled halfway through. This is why I can still break many apps just by tapping all over the place (or clicking all over the place: http://tolmasky.com/letmeshowyou/Yosemite/Safari%20Animation... ).

Combining the determinism of React-style immediate-mode drawing with JS async/await is truly exciting for app development.

Now, of course, the elephant in the room is that neither of these matters that much when most of the apps being written are games that will continue using C++ or Unity (if you actually care about speed, these are your best bet), especially with the work Apple is doing on Metal, which will improve these platforms as well.


>I just don't think performance is such an issue that research should focus on these (comparatively) incredibly low-level concerns.

Yes, performance maybe not. But power efficiency is: if your mobile app is bloated and takes more resources than needed to perform its task, then you're wasting the user's battery.


Oh, React, the newest silver bullet.


Objective C: All the blazing speed of Smalltalk with the memory safety of C.


I learned Objective-C from Aaron Hillegass's book many years ago. Pretty funny to hear from someone who still loves the language.


No language is perfect. If a language isn't grounded in solving concrete problems, then we just end up with a new C++.

"Swift is not the ultimate language, it’s not flawless, it is compromised because of Objective-C interoperability."


If I didn't have so much code in Objective-C, I'd welcome the passing. Until then, I've built a project to migrate Objective-C code to Swift en masse. Check out http://objc2swift.io for the HockeyApp download.


Awesome speech! When Apple dumped Objective-C in WebObjects, we at OOPS printed a T-shirt that we showed the crowds at WWDC. I can only repeat now what it read: [objC retain]. Kerusan, OOPS, Sweden


One question: why was IOKit not written in ObjC?


DriverKit was the NeXTSTEP 3 driver environment, and it was written in ObjC.

In old mailing lists posts (https://www.geeklair.net/~dluke/PCIdevice_tutorial.txt), Godfrey van der Linden (one of the IOKit architects) made some comments about the IOKit language choice:

  As IOKit is a child of DriverKit which was implemented in ObjC we 
  wanted to reduce the amount of work necessary to convert from one 
  space to the other, as ObjC is single inherited this was easiest for 
  most of our drivers.  At first we were going to allow multiple 
  inheritance as we ported to C++ but then we tripped over the whole 
  can of worms called 'virtual' inheritance.  We really, really didn't 
  want to turn all of our developers into C++ experts, which you have 
  to be to use virtual inheritance properly.  Another side effect is 
  that we are now reasonable language independent.  At some stage in 
  the future we may be able to move IOKit over to a good programming 
  language.

I suspect that if they had re-written IOKit in 2010, it would have been in Objective-C. My guesses as to why it was done in C++ in 2000:

- programmer familiarity/comfort level: remember that classic Mac development was mostly being done in C/C++ by the mid-'90s, and the Mac developers still around by 2000 didn't need to be given more reasons to jump ship. Carbon was created so that Mac C/C++ applications could be brought to OS X, and having drivers written in C++ was another easing factor.

This also includes Apple's internal development teams: I don't think all the longtime Apple teams were that bullish on NeXT technologies. Remember that the original OS X Finder was a C++ Carbon app built with CodeWarrior.

- ObjC not considered inevitable: in 2000 it was not at all clear that ObjC would still be thriving 15+ years later. The C/C++ Carbon API was one direction for the future, the Cocoa-Java bridge was another, and putting C++-style syntax on top of ObjC (which was never completed) was yet another. C++ wasn't going anywhere, whereas it would have been painful to have driver development handcuffed to ObjC if applications abandoned it.

- performance: it probably would have been a non-issue, but C++ is a safer choice performance-wise.

There might be other interesting tidbits on the darwin-development or darwin-drivers mailing lists if you go back to 2000/2001.


In 2000 or thereabouts there was a strong push towards Java.


ObjC has a pretty beefy (and therefore somewhat slow) runtime which is an issue for kernel stuff.


Or explaining this better:

ObjC objects work very differently from C++ objects.

In Obj-C, for example, there are no "methods" in the C++ sense. In C++, when you put a method in a class (although implementation details may vary), it works like a function pointer in a C struct, and calling that method just executes whatever code that pointer points to.

In Obj-C, by contrast, calling a method on a class means passing a message, which must first be interpreted; the runtime then finds the correct function implementation to run.

For most apps nobody cares about this, but game developers, for example, frequently write iOS and Mac games whose time-critical parts use pure C or even Obj-C++ so they can skip this messaging step (which can sometimes cause significant performance problems).

Example: yesterday I was trying to figure out why an open-source game I like was slowing down. I ran a profiler, and the most-called function, and the one with the most execution time, was a simple GetSomeRandomVar() for spaceships (it is a space-battle game). I looked in the source and saw that every frame, every spaceship calls this at least once for every other spaceship, and if the spaceship has an AI it calls it many, many times. I inlined it and the code became much faster.

I am quite sure that if it were an ObjC call (with messaging) it would be so slow that the original author probably would not even have uploaded it to GitHub. (By the way, the game IS an OS X game, but the original author wisely decided to go with C++ instead of ObjC.)


I think putting it that way exaggerates the differences a bit. The message doesn't really have to be "interpreted" in any meaningful way, AFAIK. A message consists of a selector (essentially a method name) and a receiver, plus whatever other arguments the method takes. The runtime looks up the selector in the receiver's method table, much like with C++ virtual methods, and then jumps to the implementation it finds. There is a little more fanciness (some method caching, and support for nil receivers), but I don't feel it's hugely dissimilar from virtual methods.

For anyone interested, here's a good analysis of Objective-C's message dispatch machinery: http://blog.zhengdong.me/2013/07/18/a-look-under-the-hood-of...


The vtable is a flat structure with known layout in C++. In Objective-C, it's a hash table (with precomputed hashes and lots of other tricks—but it's still fundamentally a hash table). There's a big difference between the two.


There's a big difference between the data structures, yes, but the basic logic for message sends is still pretty similar. They both boil down to a simple lookup in a data structure and a jump. Would you really characterize the difference between a vtable and a hash table as "you actually pass a message, that first must be interpreted" in the case of a hash table?


I think the biggest issue is that with a vtable you jump directly to the method by index, while objc_msgSend looks up a string in a hash table.

Mostly the difference is ints vs. strings in my opinion.

As well in C++ if you don't mark your method virtual you can bypass all the vtable crap.

I love ObjC, but method resolution is far more expensive than in C++, especially compared to non-virtual methods.


Method selectors in Obj-C don't have to be strings, as long as you can turn a string into one occasionally.

You do still need to do a hash lookup to get the method implementation, since the selector table changes at runtime and objects have to safely respond to unhandled selectors and all that.


I meant string as in char*, not NSString; selectors, specifically the SEL datatype, are strings.

They certainly don't have to be; however, they are.

    SEL selector = @selector(applicationDidBecomeActive:);
    printf("%s",(char*)(void*)selector);
    // Prints applicationDidBecomeActive:


> in C++ when you put a method in a class, (although implementation details may vary) it works like a function pointer in a C struct

That's true only for virtual methods.

Also, to add to what you said, the lack of any real way to do stack-allocated objects in ObjC is an issue as well.


Also, since classes are mutable, calling a method in a multithreaded environment may result in lock contention, which I imagine is tricky in a driver.

So using obj-c in the kernel might require giving up some of the language's dynamism. Not sure how this was handled in NeXT.


Yes, but then ObjC is not designed for low-level the way C & C++ are. In fact, the design goals are practically the opposite of what C++ aims at.


Blaine Guest? Typo?


Yeah, I noticed that too. I told them and they fixed that typo.


Read the eulogy when the base libraries are Swift...

When Swift code that interacts with the base libraries is faster, and when Swift code compiles faster.

Swift so far is a language that over-promises and under-delivers...


Yeah... I'm pretty certain the only code in iOS 9 that is written in Swift is the Calculator app, which was also the first app I remember Apple shipping that actually used their public version of UIKit; until iOS 4, when Apple dogfooding their library finally made UIKit "good", Apple was still using the as-yet-much-better private UIKit they had written for iOS 1 that shared very little... it even had its own UIScrollView called UIScroller and its own UITableView called UITable. While I'm sure the language keeps getting better as people provide feedback, the product isn't going to move past that "omg this is amazing" threshold until Apple dogfoods it, and until then Apple is just using everyone as guinea pigs ;P.

FWIW, I think Apple can't ship any base libraries that use Swift until they finish ABI compatibility, as those libraries need to be working off the same base Swift runtime as the apps that would run against them. Given that for the last year people have been shipping apps that link against an old and incompatible Swift runtime that Xcode bundles into the app, and given how the runtime is "intrusive" (putting stuff into the global Objective-C namespace, for example), it isn't even clear yet how they will be able to pull that off without saying "all Swift apps developed before now need to be reshipped or they will stop working on iOS 10" (so this may require more time on their end to build out more infrastructure).


> FWIW, I think Apple can't ship any base libraries that use Swift until they finish ABI compatibility

You hit the nail on the head here. Until the ABI is finalized, Apple can't ship dynamic libraries to developers that are written in Swift. Swift won't be ready to fully replace Obj-C until this happens.


> ... until then Apple is just using everyone as guinea pigs.

Exactly. I'm not being judgemental about that, but I don't see myself wanting to invest time into being a guinea pig. If Apple wants to build a tool that only works in and for their walled garden, I won't donate any work to it.


Oh goodness. Having written some small apps in both Objective-C and Swift, I'm not even sure which language I should jump back into when I want to pick it up again.


Objective-C; it has a clearer path at the moment.


What does that even mean?


Objective-C has been around since the 1980s; it has a long history with NeXT and Apple, and thus is very mature. Code you write in Objective-C now can reasonably be expected to stay very similar going forward. Swift, however, is still very much up in the air: you run into "well, this language feature has not been implemented yet," and with each version that is released your code needs to be migrated or changed to accommodate the changes in the language. The path forward is clear with Objective-C, whereas with Swift you just don't know if some of the issues will even be addressed. I've been writing Objective-C since 2007 and Swift since shortly after it was released. Swift still has a long way to go before I would use it as the starting language for a new app that I expected to have users and to maintain.


Swift is moving fast, as you say, and is clearly receiving Apple's favor and attention. Any app that you start in Objective-C today will end up with a legacy codebase in the next couple of years.


I'm not sure, I've had great fun learning it, and writing it is usually really nice. Sure, there have been pains along the way, but the Apple engineers are doing an amazing job with evolving Swift, intelligently, and very quickly. It is the future, and for many developers I know, it is already part of the present.


It's certainly gotten much better with Swift 2.0; it feels like Apple finally tried to build an app with Swift 2.0 and fixed a lot of the stuff that was horribly broken.


How is it not an Apple spin on, say, Scala? Seems similar.


Hearing one of the core engineers from the LLVM compiler team speak at some length about their aims and approach to Swift, it was clear that they had reflected long and hard on the approaches of many other languages. This engineer had very informed and nuanced views on the strengths and weaknesses of pretty much any other language you asked about. I'm sure Scala was on this list, along with "Objective-C, Rust, Haskell, Ruby, Python, C#, CLU, and far too many others to list" — http://www.nondot.org/sabre/


How is Scala not a Java spin on F#? It also seems similar.

I don't see how any of these comparisons diminish the languages in question.


It's like Scala but without all the unknown performance wrappers and confusing code, and with a highly tuned runtime with a minimal library.


Scala and modern C++ had a baby


It (Swift) feels that way to me too... my first impression was 'Scala'.


I've been working on a medium-sized Swift 1.2 app for the last few months and, while it's come a long way from 1.0, I still don't think I'd recommend it for new work yet. Compile times are agonizingly long, even worse than C++.





