After doing a couple of projects with it I've found it to be a really beautiful language from a code formatting point of view.
The whole 'overly verbose' thing is probably just because you are encouraged to give your methods and arguments good, descriptive names; when you do need short variable names for maths calculations or the like, it's usually inside a method body using good old C style anyway.
I don't know, it's flexible, (arguably) attractive and it runs fast.
Wow - this is fantastic! I'm glad I submitted this now; just reading the comments here is very heartening. I was beginning to think I was the only guy in the world who liked obj-c :) (I work in a C#/Java-predominated environment where most people troll me about obj-c all day!)
I'd always considered myself a C++ guy before picking up obj-C, but looking at the result I get from the two I'm finding I'm just a lot faster with obj-C.
Now I have no desire to incite a language flamewar (to each their own blub!), so I will leave it at that ;)
I like obj-c, and never really found it hard. IMHO, the real challenge with Apple development is not the language but learning all the frameworks and the Apple way of doing things. The learning curve is a bit steep at first, but quite powerful (and intuitive) once you get over the hump.
I think Objective C is awesome, but I suspect many people have a hard time separating the language from the library. Apple has a pretty unique way of developing frameworks and SDKs. Comparing Android and iOS development, the Android SDK makes sense out of the box because it follows typical Java/C#/C++ style framework design.
99% of the people who use it have no choice, so not really.
But it's nice to see a semi-tutorial wrapped in an argument. Arguments are more fun to read than tutorials, so I picked up a few factoids (named args) without having to read Apple's documentation (which is a little dry).
It's important to realize that ObjC (and Smalltalk, Self...) does not have named arguments in the conventional sense (i.e. like Python, or &key arguments in CL) but interleaves parts of the message name with its arguments. In essence, it only looks like named arguments - and on the other hand, this syntax essentially precludes support for real named arguments. For why this is bad, look at all the #with:, #with:with: ... #withValues: methods in Smalltalk's standard library that are only thin wrappers used to simulate true optional or named arguments.
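To make the 'interleaved' point concrete, a minimal sketch using a real NSMutableArray method (myMutableArray is assumed to exist): the whole interleaved name is the selector, so the 'argument names' cannot be reordered or omitted the way Python keyword arguments can.

// The selector is the whole interleaved name - @selector(insertObject:atIndex:) -
// not an "insert" method with keyword arguments, so the parts cannot be reordered or omitted.
[myMutableArray insertObject:@"hello" atIndex:0];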
A four-word name for filter and filtering for equality by using a string formatting DSL are not really on my list of lovely things. I'm sure Objective-C does something well but this guy is not pointing it out.
EDIT: Not sure why the downvote. I'm simply stating that many other languages have far better syntax for filtering an array, so I'm not sure why the author thinks the syntax in Obj-C is all that great, unless they have never done the same thing in any other language.
I don't think these are the real problems with Objective-C.
To quickly address these issues before discussing the real problems: "Ugly" - get over it. "Verbose" - get over it. "Memory Management" - Obj-C memory management is in fact super simple as long as you follow some really simple rules, especially with ARC.
Now the real problems I have:
1. Lots of unnecessary code. With the old runtime, you had to modify your code in four (!) places to introduce a new property in a class. With the new runtime, that has decreased to three; with ARC, to only two, but that's still one more than should really be necessary (@synthesize should go). See the sketch below.
2. The whole header file thing. I know that Obj-C is a descendant and strict superset of C but I still find separating header files from .m files somewhat tedious and unnecessary. This leads to
3. Having to declare methods and properties before you use them. I mean, this is 2011, this shouldn't be necessary.
4. The syntax for having "private" methods and properties is awkward even by Obj-C standards: basically you have to create an anonymous category (class extension) in the implementation file, as in the sketch below.
5. Properties (with ARC) should default to strong for objects, assign for everything else.
6. The [[X alloc] init] way to construct objects; I understand that from a theoretical point of view it's cool to separate allocation from initialisation but I've never ever needed to allocate an object with anything other than the alloc message.
7. The debugger. GDB and Xcode are atrocious. Have a look at the C# debugger in Visual Studio; that's what a debugger should look like.
Overall, while a lot has been improved recently in Obj-C, my main gripes with the language all come from the facts that it's a C superset (which is also a massive advantage) and that it's an almost 30-year-old language; the programming world was very different 30 years ago.
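A rough sketch of points 1, 2 and 4, using a hypothetical Widget class (ARC-era, pre-auto-synthesis Obj-C): even in the best case you touch the header for the public property, the implementation for @synthesize, and a class extension for anything "private".

// Widget.h - the separate header (point 2)
#import <Foundation/Foundation.h>

@interface Widget : NSObject
@property (nonatomic, strong) NSString *title;        // place 1
@end

// Widget.m
#import "Widget.h"

// point 4: an anonymous category (class extension) holding "private" bits
@interface Widget ()
@property (nonatomic, strong) NSMutableArray *internalCache;
- (void)rebuildCache;
@end

@implementation Widget
@synthesize title = _title;                           // place 2 - the part that "should go"
@synthesize internalCache = _internalCache;

- (void)rebuildCache
{
    self.internalCache = [NSMutableArray array];
}
@end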
Very true, all of this would improve the language a lot. I have often thought about writing a language that does these kind of things and compiles to Obj-C.
Also, I would love to be able to do some kind of metaprogramming without resorting to strings.
I'm trying to do just that with a project I started a few days ago called Khaki. It's a bit of a learning exercise and quite early, but you might find it useful in a month or two.
Neither of them would work for MD(), due to the trailing nil.
But for MD(val, key, vals...), it'd give you a slightly better error, complaining about the number of arguments when you do MD().
Otherwise they are the same in this case.
The ## before __VA_ARGS__ (a GCC/Clang extension) removes the preceding comma when vals or __VA_ARGS__ is empty.
e.g.
#define MO_LogDebug(fmt, ...) NSLog((@"DEBUG " fmt), ##__VA_ARGS__)
MO_LogDebug(@"This works as expected %d", resultCode);
// For the call below, ## removes the dangling comma in the expansion; without it,
// the macro would expand to the invalid NSLog((@"DEBUG " @"This works too without args"), );
MO_LogDebug(@"This works too without args");
So, languages that have primitives like this are typically designed to be used by people who don't know much about algorithms, or simply don't want to be hassled with them, either now or ever: they want to write code, they want it to "work", and they want to move on to something else. In essence, we are talking about "scripting languages".
Languages that some look at as "real programming languages", in comparison, tend to not have syntax like this, and the reason why is that you often, either now, or at some point later, are going to care whether the data structure you just allocated is a red-black tree, a hash map, a patricia trie, or even an AVL tree (which I include mostly to make a point: there actually are situations where it is preferred to a red-black tree).
When this suddenly matters, you are in the situation where what you want to be able to do is to make a very small modification to areas of your code where you need to select a different algorithm, in order to get the different result; you don't want to be forced to rewrite half your code to use a different syntax just because it was slow (I mean, if you wanted to do that, you'd have written it in Ruby and then recoded it in C).
Therefore, you find that it is normally the case in languages like Java, C++, and Objective-C, that there are no "built-in container types", as you will never find a container type that is actually correct to use in an even fractional majority of the cases; in fact, most of the time, there isn't even a single obvious choice in these languages for what class to use: you find default implementations of multiple algorithms.
Objective-C, here, is no different from this concept: NSDictionary is just an interface, and can be implemented by numerous backends. Apple has a rather good implementation backing the default version, and even attempts to switch between algorithms as the data structure grows, but your code is always just a few identifiers away from choosing a different subclass in that collection hierarchy.
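A small sketch of the "few identifiers away" point; MyTrieDictionary is hypothetical, standing in for any concrete subclass in the NSDictionary class cluster, and useTrieBackend is assumed. The call sites only ever see the NSDictionary interface.

NSDictionary *lookup;
if (useTrieBackend) {
    lookup = [MyTrieDictionary dictionaryWithObjectsAndKeys:@"1", @"one", nil];
} else {
    lookup = [NSDictionary dictionaryWithObjectsAndKeys:@"1", @"one", nil];
}
NSString *value = [lookup objectForKey:@"one"];       // unchanged whichever backend is behind it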
Languages with macros and (often) monads allow developers to redefine the syntax and types to accomplish these goals. The point I am making here is that there is a reason why language designers make this tradeoff: it isn't at all obvious that languages should have a built-in container syntax, and when you find languages that don't you can (and should) notice a pattern.
(C++11 is actually an interesting thing to analyze regarding this tradeoff, by the way: the new "common initializer" syntax is designed to provide as much of the benefits as possible of a simplified built-in data type syntax without taking on the semantic burden of having it; however, it also does not provide syntax that ends up being entirely devoid of the type of the container.)
> Languages with macros and (often) monads allow developers to redefine the syntax and types to accomplish these goals.
You can't redefine Haskell's syntax unless you're using Template Haskell (which is a language extension, not part of Haskell itself). It also has nothing to do with monads. Likewise with most MLs, or with Erlang. All of them have a literal list syntax.
And if containers have no reason to be special, why would strings be special? They're just sequences of unicode codepoints after all.
Continuing your argument into absurdity, why have literal syntax for most datatypes at all, really? You could just shove a bag of bytes into a constructor when you want integers or floats as well. Now you've got one literal syntax (which isn't even for a datatype): a bunch of bytes.
The way the do notation syntax (used by most people for describing monads) in Haskell translates to function application allows you to do some fairly interesting things with syntax abuse.
As for strings, it is very seldom that you find interesting alternative implementations: the only one I can think of is a rope. Interestingly, C++11 now allows you to override string literals, so you can actually do this.
> The way the do notation syntax (used by most people for describing monads) in Haskell translates to function application allows you to do some fairly interesting things with syntax abuse.
Sure but it's not syntax redefinition.
> Interestingly, C++11 now allows you to override string literals, so you can actually do this.
You still have a literal string notation. Literal notations don't have to impede multiple implementations, and the truth is there is generally a primary representation used for the vast majority of cases (even if that representation is a class cluster and flexible under the interface).
In Cocoa, the primary sequence and map types are NSArray and NSDictionary; what would be the issue with making those literal? And one of your objections is
> When this suddenly matters, you are in the situation where what you want to be able to do is to make a very small modification to areas of your code where you need to select a different algorithm, in order to get the different result; you don't want to be forced to rewrite half your code to use a different syntax just because it was slow
But that makes no sense: as long as all equivalent containers implement the same interface (which they do, or you couldn't swap them anyway), whether an object is created with a literal or with a constructor and a bunch of messages has no influence on the rest of the code; the only thing you need to change is the initialization code in either case.
Hell, a smart enough editor can even swap between the literal and the "constructor" versions of a given collection (IntelliJ can do that for Python dicts, for instance). Not to mention that in many cases the non-literal constructor can just take the literal as a parameter, if the collection with a literal syntax has been well chosen; that way you get to have your cake and eat it too.
Your parent poster never said anything about intelligence in his post. In addition, Haskell does not have syntax for dictionaries/hashes, or really any "container type" other than lists (and list comprehensions).
I believe my key mistake was using the term "real programming languages" as the alternative to "scripting languages": the word "real" is quite harsh; I have softened the statement slightly by changing "we look at" to "some look at".
I will also, though, point out that I write almost as much (if not more) Python as I do Objective-C++ these days: I therefore can be said to certainly not consider Python to be for "stupid people", without including myself in that set. ;P
This is why I do most of my editing of Objective-C in Xcode, while I do most of my editing of Python in vim.
Tab completion of methods is very nice to have, and makes using Xcode as fast as using vim for me. (Now, if I could have vim keybindings with Xcode tab-complete, then I'd rocket through my editing...)
Ugly, but a case can be made that the end justifies the means.
Another option is implementing your own dictionaryWithObjectsAndKeys where you flip the varargs and feed them back into the subclass dictionaryWithObjectsAndKeys.
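One way to sketch that, as a category (the category and method names are made up; it collects the pairs into arrays and hands them to the standard constructor rather than re-feeding a va_list directly):

#import <Foundation/Foundation.h>
#include <stdarg.h>

@interface NSDictionary (KeysAndObjects)
+ (id)dictionaryWithKeysAndObjects:(id)firstKey, ... NS_REQUIRES_NIL_TERMINATION;
@end

@implementation NSDictionary (KeysAndObjects)
+ (id)dictionaryWithKeysAndObjects:(id)firstKey, ...
{
    NSMutableArray *keys = [NSMutableArray array];
    NSMutableArray *objects = [NSMutableArray array];
    va_list args;
    va_start(args, firstKey);
    for (id key = firstKey; key != nil; key = va_arg(args, id)) {
        [keys addObject:key];                          // key first...
        [objects addObject:va_arg(args, id)];          // ...then its value
    }
    va_end(args);
    return [self dictionaryWithObjects:objects forKeys:keys];
}
@end

// Usage: [NSDictionary dictionaryWithKeysAndObjects:@"name", @"Anna", @"city", @"Oslo", nil];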
Step 1) You first allocate an object of type NSArray by passing the message "alloc" to the NSArray class object. Yes, every class in Objective-C is really a class object in the Objective-C runtime. Now this class object may allocate data, or it may not. It may also return you a pointer to a previously allocated object (yes, that is a cool way of implementing the singleton pattern). In any case it will return you a pointer to an allocated memory space, or nil.
Step 2) Now you tell the allocated object how to initialize itself. The allocated object may already have been initialized and might just append the strings. The allocated object might be nil.
What I am trying to point out is that there is a lot of dynamism involved in writing an initialization as a two-step message passing. It almost feels like the objects are alive. My opinion is that this is real object-oriented programming.
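A tiny sketch of the "previously allocated object" trick mentioned in Step 1 (SharedManager is a made-up class; this is the classic pre-GCD singleton idiom, ignoring thread safety and assuming init is idempotent):

#import <Foundation/Foundation.h>

@interface SharedManager : NSObject
@end

@implementation SharedManager
+ (id)allocWithZone:(NSZone *)zone
{
    static SharedManager *shared = nil;
    if (shared == nil) {
        shared = [super allocWithZone:zone];          // allocate exactly once...
    }
    return shared;                                    // ...and hand back the same pointer thereafter
}
@end

// [[SharedManager alloc] init] now always yields the same object.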
The poster seems to be complaining that there is not similar syntax to many scripting languages, where you can do [@"string1", @"string2"], not about two-phase message allocation; he wasn't complaining for "dictionaryWithConstructorIDontWantToType:", he was complaining against it.
In this specific case, you actually can usually use "arrayWithObjects:" (yes, the syntax that the poster didn't like), and it is frankly preferred. If you are doing alloc/init, and (as in your example) storing to a local variable, you should also send an autorelease in the same statement, so as to guarantee exception safety of allocation for subsequent code; "arrayWithObjects:" takes care of all three steps for you.
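For concreteness, the two spellings being compared here (manual reference counting, pre-ARC):

NSArray *names = [[[NSArray alloc] initWithObjects:@"Anna", @"Bob", nil] autorelease];
NSArray *names2 = [NSArray arrayWithObjects:@"Anna", @"Bob", nil];   // alloc + init + autorelease in one call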
Yes, arrayWithObjects: is definitely a better choice in this case. I was just trying to point out what exactly a line like object = [[[MyClass alloc] initWithObjects:@"",...] autorelease] means, and why it is more powerful than writing object = @"",@"",@"".
Although the latter approach can be taken by using macros, I was making an argument for the beauty of the first approach. For example, it would not be possible to allocate a singleton object with a line like that in Python.
I wonder if I'm the only person that thinks that Xcode is the problem, not ObjC. Having experience with C I feel just fine writing code in Objective-C, but only the thought of trying to use Xcode instead of Emacs is painful.
I'd love to know what Eclipse/VS/NetBeans users think about it, maybe it's easier if you're already used to working inside a huge IDE.
Most of the problems I've hit with iOS development are Xcode-isms.
I've recently been sent on a merry hunt trying to keep the built-in git integration from imploding, and the project "groups vs folders" ambiguity has caused mistakes of the "what actual file is this name pointing to again?" variety (and this in turn leads back to the git integration issues when a file is not where you think it is).
It would also be nice if Objective-C++ was a first class citizen with regards to refactoring tools (I understand this might be hard to implement however).
Rarely has it been a problem with the actual language itself.
I believe JetBrains have an Obj-C IDE now. Haven't tried it yet but I know they are pretty good at the whole refactoring thing (I use ReSharper at work all the time).
I'm (somewhat) proficient in many programming languages, some more than others. I've worked in languages like Python, Eiffel, C#, C++, Prolog, Java, OCaml, F#, Ruby and, as of recently, after a long hiatus, Haskell.
I have such a strong dislike for Objective-C, I can't explain it. I mean, C++ is insane in a way, and Haskell is a pure mindbender. But Objective-C's syntax is, to me, so verbose that I'd rather read the EU regulations on "the common organisation of agricultural markets".
I had to create a couple of iPhone apps, and I probably will have to create some more in the future.
Is there any way for me to overcome my unnatural dislike for this language?
And yeah, this article did not work for me, as you would have already guessed.
As someone who likes Objective-C a lot, I have to say this is a pretty bad defense of it. No offense, clearly we are both fans of the language, I just don't think this will convince anyone on the other side. In fact, I find very little defense of it at all in this post. It seems his fundamental argument is "its a matter of taste", which while true, in no way conveys the many "whys" of the choices made in this language. This is sad because most of the things people are initially turned off by actually have very logical reasons behind them, which over time you do grow to love and miss when you leave.
1. The first major point (which is found at the end of this post) in any "defense of Obj-C" should be that this is a very pragmatic language. It makes a lot of wise tradeoffs and rarely strives for "purity" or "religion". This helps to put a lot of the language choices in context, and actually is the strongest reason its a great language in my opinion.
2. "Ugly" - The point is certainly not that you "see through the brackets", that's a terrible argument! The point is to understand why we have brackets, because they allow for named arguments without colliding with existing C syntax. Additionally, they make it clear to the user that what is about to happen is not a traditional method call, it is a message send. This is because you can do both in Obj-C (not to mention Obj-C++).
3. "Verbose" - First off this is a framework decision. It's possible to take Obj-C without Cocoa, and write a framework that is just as non-verbose as Python (and vice versa). But fine, you could make the argument that the named parameters "encourage" verbosity if you want. The key here is not to hand wave this away as a matter of "taste", but to understand the logic behind this: its very easy to read. I can show lots of non-programmers Obj-C code, and it really reads like english. In fact if you just read aloud many Obj-C snippets, it often sounds very close to English. 80% of coding is reading code, and if you work on a big project, its reading other people's code. You never run into a piece of code that has 4 arguments and have no idea what the last "true" is for. Similarly, you never have appendChild(node1, node2) moments where you're not sure which is the node you're appending and which is the one being appended next to. Apple's SDK's are repeatedly praised and a big part of that is that the frameworks are very friendly. I personally find the opposite trend of terseness incredibly strange: why do we focus so much on shrinking variable names.
4. "Memory Management" - This I will admit was more or less fair. The MM story just isn't that great with Obj-C. I have to admit I thought it wasn't a big deal before spending a lot of time in a dynamic language, but it really is annoying coming back to it. And while I'm not 100% sure yet, ARC is not the be all end all. I have to say I think about memory almost just as much with ARC (perhaps simply because I'm not used to it). At least with explicit management I knew exactly what was going on, with ARC I kind of feel that its half magic and half really hard situations. I really hope I'll eat my words about this soon. Now, the counter argument is that this is why Obj-C is so fast. I honestly don't know if that's true. I know enough smart people on both sides of the argument to say that I simply don't know enough about it.
On #4: personally, I believe it is a mistake to attempt to abstract memory management from developers unless you can abstract all resource management (as there is no fundamental difference between memory mappings, file handles, database connections, or minimum-wage bicycle messengers), and in the attempt to fully abstract memory management (as in, with garbage collection) it usually becomes impossible to abstract arbitrary resource allocation (due to non-deterministic finalization; interestingly, this is a similar tradeoff to CAP).
With this realization in mind, Objective-C is actually really good at this, requiring many fewer (if not actually zero) try/catch blocks to get "safe" code than Java seems to require on almost every line (and Java doesn't even have C#'s "using" to help it out, a primitive that causes its own problems due to the lack of contracts), and it has conventions that make it quite simple to determine "locally" (as in, without having to read anything not directly around the area you are analyzing) whether the code you are looking at is handling things correctly.
Therefore, and I kid you not: when I saw "a defense of Objective-C" I was anticipating an article that was going to start with how awesome the memory management is (listing the autorelease-pool paradigm as something that people typically haven't developed into mature C++ memory paradigms), followed by a discussion of the balanced tradeoff between fully dynamic typing and dispatch on one hand and low-level performance-oriented code on the other; instead, I read "You’ll have to prise my garbage collector out of my cold, dead hands!", sigh, and go back to coding. :(
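For reference, the autorelease-pool paradigm mentioned above in its modern spelling (pathList is assumed to exist):

for (NSString *path in pathList) {
    @autoreleasepool {
        NSString *contents = [NSString stringWithContentsOfFile:path
                                                       encoding:NSUTF8StringEncoding
                                                          error:NULL];
        // ... use contents; temporaries are reclaimed deterministically when this
        // iteration's pool drains, with no per-call bookkeeping.
    }
}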
> as there is no fundamental difference between memory mappings, file handles, database connections, or minimum-wage bicycle messengers
The number of programs you can write without file handles, database connections or minimum-wage bicycle messengers is dwarfed by the number of programs that are extremely painful to write without dynamic memory allocation. We can argue all day about how 'fundamental' the difference is but there is a profound practical difference well-recognized in the decades of work that's gone into memory GC. 'It's ok/actually awesome that Obj-C kind of blows at memory management because other languages kind of blow at universal resource management' is a pretty specious argument.
I would love to see a reference for the "profound practical difference well-recognized in the decades of work that's gone into memory GC".
Many languages, including Objective-C (although, arguably/mostly Foundation), manage to provide primitives that make it easy to manage either at once; the only high-level complexity you have to give up is cycle detection, which is a serious problem and "known tradeoff" in many fields, including deadlock detection (hence, why I mention an interesting connection to things like CAP).
Also, I would also love to see a reasonable program that does not have external resources: I find that almost all the work my programs are doing are managing and moving around external resources... from threads to sockets to money, you are probably not doing anything terribly useful unless you are dealing with a non-memory resource.
Regardless, the goal of these statements is "a defense of Objective-C", not "why Objective-C is amazing": the "defense of Java", when you show someone a four-level nested try/finally whose sole purpose is to make non-deterministic finalization of File objects exception-safe, is "but we have garbage collection, which has these nifty properties, including automatic cycle detection".
> I would love to see a reference for the "profound practical difference well-recognized in the decades of work that's gone into memory GC".
I don't think you need a reference - when we discuss algorithms we have a notation to describe an algorithm's behaviour in time and space yet never bicycle messengers. If anything, this suggests these resources are, indeed, somehow (and obviously) more fundamental.
The 'defense of Obj-C' article thing is pretty silly, no argument there.
Unlike other resources, dangling references to memory are catastrophic. They generally go undetected (no layer of indirection to invalidate them) and can arbitrarily corrupt any future object, violate any type safety rules of the platform, and make even bug-free code execute incorrectly. "Is this object really dead?" is the kind of mind-numbing but critically important question that computers demonstrably answer much more reliably than people. That's why even objc uses a simple but expensive form of garbage collection (reference counting) rather than making you declare "not only am I not using this, I hereby bet my reputation that nobody else is either" by calling free().
The idea of type-safety has nothing to do with this: a type-safe language implemented entirely with reference counting, having no ability to write to arbitrary memory locations on purpose or on accident, has this same tradeoff.
Additionally, as I think this is also relevant to your comment, if you take a step back for a second from the notion that an object /is/ memory, and think of memory as being one of the resources that an object is using, things become more clear.
Simply put, we have two common ways of reclaiming objects automatically: garbage collection and reference counting. The tradeoff is that with garbage collection, you get free cycle detection at the cost of not having deterministic finalization.
Each of these options has downsides: neither is fundamentally better than the other, and not having either one causes you to have to scratch your head occasionally, or add extra code to deal with the lack of automation.
(edit:)
Thinking about this overly simple description, I realize that I'm over-simplifying a little too much... a lot of the fundamental problem has to do with an inability to determine the order of finalization of objects in a cycle, which is what causes a lot of mistakes in Java finalizer implementations.
In comparison, the practical problems are that the common implementations involved take extreme positions: if you have a garbage collected language, it normally is not "reference counting with a cycle detector bolted on", which yields a weird property of possibly arbitrary delays on finalization of valuable resources when you aren't under memory pressure.
(on a side note: would be cool if HN had a SO-style notification system which would tell me when someone replies ...)
I think I have a solid understanding of both models and how to work with each, but as far as syntax goes I'm fairly ambivalent (granted, most of my code is in C/assembly).
Hi, thanks very much for your thorough and constructive criticism :)
For me, writing articles like this is as much about learning to be a better communicator as anything, and you've (indirectly) really got me thinking about how I could have written this better.
I'm thinking there should be a single 'theme' to an article, and that every topic/point discussed should be related to that theme. In the case of this blog post, it should have been 'pragmatism' as you identified.
If I had written with that in mind, I could have used every point to illustrate how obj-c really reflects pragmatic principles in so many ways, which would have made a much better article.
Not that I agree with him, but I do understand what he meant by that, and might be able to explain slightly: many people believe that "design patterns" are something that languages like Java (quite in particular, this language is held up during these conversations) require to work around design deficiencies.
Arguably, this is true: many of the patterns simply "fall away" if you have access to features like multi-method dispatch (Lisp+CLOS), message passing with transparent proxying (as in Smalltalk, and to some extent Objective-C), or continuations (Lisp/Scheme).
However, the argument often goes even further into the realm of complaints against "all kinds of patterns, even those that tend to exist in all abstractions, even mathematical ones", deriding things like factories as "overly-complex" (in the same concept of "overly-complex" that make some people choose NoSQL over RDBMS not for scalability, but due to the common lack of schemas).
Factory is actually a perfect example of a pattern that is mostly used to overcome the fact that passing classes around is unnecessarily complex in Java, and thus it's favorable to wrap the code calling the constructor in its own class and pass instances of that around. In "dynamic" languages (including ObjC) you can simply use the class object (and, if the need arises, things that behave like class objects) instead of a factory in most cases.
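A sketch of that in Obj-C (RoundWidget and the surrounding method are made up): the class object itself plays the role the factory class would play in Java.

- (id)makeWidgetOfClass:(Class)widgetClass
{
    return [[widgetClass alloc] init];                // no WidgetFactory class needed
}

// ... elsewhere:
id widget = [self makeWidgetOfClass:[RoundWidget class]];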
Yup, that's pretty much what I meant, plus a bit of what veyron meant when he was talking about obsession with the GoF.
Wow, you guys really are reading a lot into what I've written; keep in mind that this was written as a light-hearted and hopefully factual rebuttal to all the anti-iPhone/objc trolling at my work. Peace :)
> The first major point (which is found at the end of this post) in any "defense of Obj-C" should be that this is a very pragmatic language.
"Pragmatic", much like "practical", is utterly worthless for programming language discussions. It means something different to everybody, and it's basically the tool you'll reach for when you have absolutely nothing good to say about the language you're trying to "defend".
Both "pragmatic" and "practical" should be the godwin points of PL discussion. They're complete bull.
> The point is to understand why we have brackets, because they allow for named arguments without colliding with existing C syntax.
Just because there are reasons it's done that way does not mean it's any less ugly, does it?
> Additionally, they make it clear to the user that what is about to happen is not a traditional method call, it is a message send.
For pretty much all OO languages, the distinction is just cutesy; the end result is that you're synchronously calling a method on an object, and whether you call it a "message" or a "method" has little relevance.
> This is because you can do both in Obj-C
Do you mean that in a "function pointer in a struct" sense? This would use the `->` sigil would it not? And obj-c 2.0 managed to include dotted property names, so that could have been used (obviously, it wasn't because obj-c merges Smalltalk syntax into C and Smalltalk's message-send syntax does conflict with C if unbracketed)
> It's possible to take Obj-C without Cocoa, and write a framework that is just as non-verbose as Python
Not quite, Obj-C has structural deficiencies (in part inherited from C) which mean it has boilerplate you can't do without, such as the duplications between header and implementation files.
For much of its history, it also lacked any form of "blocks" (and those added in obj-c 2 are pretty verbose), meaning your Obj-C code would have a harder time with scoped resources than the equivalent Python code.
Other intrinsic verbosity issues of obj-c: iterations pre-Obj-C-2 (and fast iterators), type annotations, retain/release calls (we'll see how ARC fares), ...
I would say embracing Cocoa's chosen verbosity (although I sometimes find it misplaced) is a much better strategy than claiming Obj-C can be as terse as Python.
They could have added garbage collection if they had wanted to put as much RAM in the iPhone as the Android phones have, but I guess they made their choice long before anyone could develop code for the iPhone.
Arguably far more important than the amount of RAM—which costs in dollars, battery, and sometimes even physical size—is the non-deterministic performance of GC, which directly costs in responsiveness.
> And GC is always less efficient than no GC, even with more memory.
No, it's not. GC allows for more efficiency in many scenarios. Imagine creating and destroying many small objects. With traditional memory management, every malloc incurs a cost to allocate a chunk of memory from the free store. This will involve some work to break the correct-sized piece of memory off some larger block, and some bookkeeping for later restoring that memory to the block and possibly enlarging the block, merging with others, etc.
On the other hand, in a modern generational GC, a malloc typically consists of adjusting a single pointer to the nursery pool by the size requested.
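A toy illustration in plain C (which Obj-C subsumes) of what "adjusting a single pointer" means; a real generational collector additionally needs object headers, a collection trigger, and a copying phase, so this is only the fast path.

#include <stddef.h>

static char  nursery[1 << 20];                        /* a 1 MB nursery */
static char *nursery_top = nursery;

static void *nursery_alloc(size_t size)
{
    size = (size + 15) & ~(size_t)15;                 /* keep allocations 16-byte aligned */
    if (size > (size_t)(nursery + sizeof nursery - nursery_top))
        return NULL;                                  /* a real GC would run a minor collection here */
    void *p = nursery_top;
    nursery_top += size;                              /* the entire fast path is one pointer bump */
    return p;
}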
Depending on the scenario, GC can be just as fast, or even faster. In other scenarios, manual memory management may be faster.
> This will involve some work to break the correct-sized piece of memory off some larger block, and some bookkeeping for later restoring that memory to the block and possibly enlarging the block, merging with others, etc.
This manual work incurs a penalty only for the developer, not for the performance. And you get to fine tune your memory allocations to your program's behavior.
> in a modern generational GC, a malloc typically consists of adjusting a single pointer to the nursery pool by the size requested.
Sure, but by "less efficient" I'm not only referring to allocation speed, but also to memory usage. That nursery pool is auto-managed, and not program specific (with the exception of some smart gc guesses).
No, the work I described is not done by the typical developer. When's the last time you coded your own malloc? Under the covers, though, malloc has to do a lot of expensive bookkeeping of the sort I described that modern GC doesn't have to worry about.
And GC can be more efficient in terms of memory usage as well. Modern GCs compact and so avoid most loss due to fragmentation. Where malloc will generally leave blocks partially unused, GC can allocate objects consecutively with no space between them, even if the objects vary in size (though there may be loss due to alignment, but that loss will occur in any memory management scheme).
Just as a counter to your assertions: the code base I work on at work has its own memory allocator in it. It isn't the same as malloc, because all memory is allocated up front (and by the caller, not the callee). There are no gaps in the memory (aside from those for aligning things nicely), and there is not really any way you could make the memory usage any better in these regards (e.g. it is dense, and there is zero overhead after initialisation).
Garbage collected languages prevent this sort of thing from working, because they force overhead, and even though they might be able to do better than a lazy programmer, a person who cares about what they are doing will do ok.
Not necessarily. Especially not if you can program the cache (no idea if that is possible on an ARM processor).
The thing is that malloc isn't particularly smart about how memory is allocated, so you can end up with various tangles, etc.
On the other hand, if you have enough memory you can do a whole-world copy which, among other things, means that allocating memory is O(1). This is better than malloc if you have short-lived and small objects.
If your available memory is less than five times the working-set size, GC is less efficient than some sort of malloc/free, perhaps with reference counting.
Where did you get this 5x number? As for reference counting, that's rather time-expensive, and does horrible things to the cache in multi-threaded scenarios.
The reference counting would only be used in certain cases where the dynamic extent of the object is unknown. Most objects can be cleaned up with an implicit management scheme like the RAII pattern in C++.
Is it considered poor form around here to submit your own articles? If so, I'm terribly sorry and will remove it; however, I feel strongly that a lot of people write off obj-c based on superficial weaknesses, that it has a lot of strengths that are very underrated, and I wish I'd known a lot of this stuff when I started off in iPhone development.
I don't know anymore, but traditionally it's quite acceptable. That's why self-posts are in gray text and hard to read--if you have a lot of things to say about a subject, post it to your blog and submit the blog here, don't make a long self-post.
> Go and look at some Lisp, then come back – obj-c will suddenly look better :)
> Trust me, it grows on you. You’ll soon learn to see through the brackets – just like the green vertical text in the matrix, it becomes invisible after a while.
It's ironic that a Lisp/Scheme person would say the same thing about why s-expressions are good (the parens become invisible!), so I don't see how Objective-C syntax is supposed to be better when it leans on the same reasoning.
And even if you haven't, you can learn how to read it in ten minutes. Say what you want about Obj-C, but I think its readability is really outstanding.
I really hope Apple promotes MacRuby to first class citizen for both OS X and iOS development. That would make things a whole lot more modern and easier. Who knows, once we have quad core CPUs on even the lowliest of iDevices we could see this happening.
There’s a philosophy here: user happiness is prioritised over developer happiness.
See also the App Store policies. Apple's priorities tend to be its users, Apple itself, then any 3rd-party developers. I don't tend to think this is a bad thing...
This is overly simplistic. In their heyday MS catered to developers first and their users second. You can make some snarky comment about this, but at the time it allowed them to become one of the largest and most powerful tech companies out there.
Google catered to its own engineers first.
The point is there are different ways of building a business; don't blindly choose the Apple path just because they're currently at the top of the heap.
I love it too for its readable and descriptive method names. It is also surprisingly flexible considering its age. (I guess what I am saying is that Apple kept it up-to-date pretty well.)
I love Obj-C message send syntax and I miss it dearly when using any other language. I love that a method implicitly documents itself at the call site, and that the delimiters occur between calls rather than in the middle of them.
And it works great with code completion. You just type '[' and the editor immediately knows what you are doing. Choose a method and the parameters are right in front of you. No documentation required, unless you actually need in-depth information.
I didn't find this very convincing at all. For starters, the code presentation reinforces the verbosity and difficulty working with it. Don't most languages support named parameters and in an easier to understand way? I'm not qualified to discuss memory management but that point also left me confused.
Xerox PARC, GUI, inspired Steve Jobs, all that fun stuff.
Objective-C is a sort of nerfed Smalltalk bolted onto the side of C. Yes, you can see the seams and it's clunky-looking, but it works. Smalltalk semantics means it's much easier to build robust and loosely-coupled programs than in C++, and you don't even incur that much runtime overhead.
I do all my game dev in Objective-C, and I don't even have a Mac that I use actively. In fact I defy people to write significant Objective-C programs that don't use or need the Cocoa or NS libraries. I think they will have an easier and more fun time of it than when working in C++.
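One small example of the loose coupling that Smalltalk-style messaging buys you (the delegate variable and the selector name are illustrative): the receiver only needs to respond to the message, not descend from some compile-time base class.

if ([delegate respondsToSelector:@selector(downloadDidFinish:)]) {
    [delegate performSelector:@selector(downloadDidFinish:) withObject:self];
}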
After having spent a couple years doing embedded OS development in C, Objective C is beautiful. The verbosity makes it much easier to read than typical C code and the memory management is brilliant compared to doing it yourself. It walks the line between static and dynamic types quite well, with most things being static except when dynamic makes things a lot easier. I think Objective C is brilliant considering what they were able to add to C while maintaining great speed and full compatibility with C.
I dislike the NextStep libraries (and all of Apple's subsequent creations) far more than Objective-C itself. The completely unnecessary verboseness is what drives me up the wall.
If the libraries were designed more like C++'s (say what you will about STL, but at least set<> is called set<>), I'd probably have no issues whatsoever.
You may dislike the verbosity, but it's hardly "completely unnecessary." A typical Objective-C program is inherently self-documenting. It's nice knowing what a group of arguments is used for without having to look up the method prototype.
For me the problem isn't Obj-C, it's Cocoa. (Unless things have changed,) not everything in Cocoa is consistent from a language perspective, i.e. not every library/framework is accessible with Obj-C from the start; some are C-only. If I remember correctly, the AddressBook portion is one of the culprits.
It just reads beautifully. Otherwise working with Objective-C was like pulling teeth at times. It had been a long time since I had to write so much code to get such simple things done.
Not a fan of the syntax, but my real gripe is with the header files, and all the DRY that entails. If I were Apple that's the first thing I'd aim to fix, rather than giving us ARC.
Agreed. I've had to use factories in Obj-C plenty. I guess the argument that I would make is that they are much easier to implement in Obj-C and require less boilerplate (especially if you use blocks), not that the pattern doesn't exist in Obj-C.
Yay Objective-C! :)