The eero programming language - a dialect of Objective-C (eerolanguage.org)
131 points by basil on July 7, 2012 | 107 comments



Programmers "liked" Java because they were told to by Sun. Programmers "liked" C# because they were told to by Microsoft. Now, programmers "like" Obj-C because they are told to by Apple. In that regard, this seems like a step in the right direction...

But it's only a half step. If you look at what's happened to the other two languages mentioned above, they have evolved such that their runtimes are now more important than the languages they originally hosted.

What really needs to happen to Obj-C is for people to wake up and realize that the runtime is actually really nice. Combined with LLVM, one could make the case that the Obj-C runtime could compete with the JVM or the CLR.

(Of course, the one huge, massive, glaring omission is any sort of managed memory... Well, at least there was a garbage collector at one point.)


First, I want to counter your use of "like" in quotes, as if programmers couldn't have perfectly valid reasons to like these languages.

I like Java for its low-cost allocations, sane implementation of constructors, and checked exceptions that prevent undeclared control flow behavior. The huge trove of solid-quality libraries and a general community focus on unit testing and documentation (perhaps in no small part driven by Apache) has been quite valuable.

I like C# for its type inference, relational algebra a la LINQ, reactive extensions, adoption of F#, and a general R&D focus on adopting language design from the functional languages.

I like Obj-C for the ease with which I can drop to C and assembly, and the ease of implementation inherent in Apple's GUI frameworks. That's about it -- there's a lot to dislike about ObjC.

All that aside, and given my own significant familiarity with the underlying runtimes, I'm trying to understand what you think is so advantageous about Apple's ObjC runtime. It is effectively an extremely simple dynamic dispatch runtime, reliant on OS memory management combined with reference counting, with zero support for any complex runtime state inspection (say, to support breaking reference cycles), and now with some C-derived glue for closures and compiler-emittable calls to support somewhat-automatic reference counting. The class registration system has hardly changed in 20 years, has no support for namespaces, the type encodings available at runtime are complex-to-decode strings, and the whole thing is, quite honestly, the kind of rickety design you'd expect to emerge from C hackers in the late '80s and early '90s, incorporating none of the advances in the field from either Sun/Java or Microsoft/C#.

ObjC predates the work on Strongtalk that led to the modern JVM, it either predates or fails to inherit from the language R&D outside of C, and predates the CLR runtime work done by Microsoft. It's a DINOSAUR.

I don't really understand where you see the runtime advantage here, and I spend nearly all of my time writing ObjC.

The only advantage I see to the ObjC runtime is that it's required to interface with Apple's native libraries.


I have read your discussion of the Objective-C runtime three times to try and understand it. Your criticism seems to boil down to "The ObjC runtime is insufficiently complicated." (For example, you say several times that it lacks features X and Y, for many values of X and Y.)

There is a school of thought that says simplicity is a feature. That C++ is an example of "language gone wrong". That the C hackers in the 80s and 90s had the right idea.

It is not reasonable to compare Java and ObjC because, in spite of the marketing material, they have very different performance characteristics. For the most part, garbage collectors are not efficient enough to be comfortably used on 5W computers. If you need a non-GC, native-performance language, your choices are C++, C, or ObjC. Clearly you're in the "Features are good, use C++" camp, but the people who are not in that camp are also in pretty good company.


> Clearly you're in the "Features are good, use C++" camp

It appears that since you didn't understand my criticism, you decided to substitute your own strawman that you could respond to. Your reference to C++ -- and my assumed preference for it -- is an odd choice, given that I did not actually mention the language.

I think C++ is broken for reasons of poor language design. Simplicity is good in both the implementation of a language and the use of it; the balance between the two is complex and warrants significant consideration.

I will provide an alternative summary, since I think "The ObjC runtime is insufficiently complicated." is both loaded and incorrect:

The ObjC runtime is poorly architected, shows its age, and is not sufficiently well designed to permit the implementation of features that would grossly simplify the use of the language.

In support of this I mentioned a number of features that simply are not implementable on the current runtime, including automatic handling of reference cycles. Your contrary (and unsupported) assertion is that GC is "for the most part" not efficient enough for use on a 5W computer. Somebody should tell Microsoft and the MonoTouch guys. Even Android's GC is usable, and it's a fairly primitive mark-and-sweep collector.


Can you expand on "shows its age"? Code entropy doesn't increase as a function of time alone. An ANSI C "Hello, World!" from 1990 doesn't show its age in the context of C today, for example.

Is age an issue in its own right, or is it what informs what you perceive as a poor implementation that precludes the implementation of "modern" features? Also, do you see any possibility that progress might occur not by accumulating more features on top of the currently fashionable designs, but by starting from an earlier base and pursuing an alternate path of evolution from that of extant runtimes?

For example, is it possible that there is some innovation which could occur on top of C which would not be possible in the context of some modern language and its modern runtime, and which would make it a more desirable platform than the modern alternatives? As someone who works primarily in C (and is a little tickled by commentary elsewhere on this item about C programmers in the '80s and '90s that suggests everyone moved "up" from C to C++; very Microsoft!) I would say that's certainly the case, because there are already things that make C more appealing than C#, Java, etc. An extension to the type system to do optional, minimalistic reference counting -- rather than having to use GC in a modern language, or the standard library's reference-counted smart pointers (or something home-grown) in C++ -- would be a huge step forward, for example. The locking annotations available with Clang and GCC have removed much of the value-add of using a language which does more hand-holding around synchronization.

If it isn't that age precludes the possibility of progress, then what's the problem with age? Code rot is an illusion.


> Can you expand on "shows its age"? Code entropy doesn't increase as a function of time alone. An ANSI C "Hello, World!" from 1990 doesn't show its age in the context of C today, for example.

I was referring to the particular C programming approaches of the 1990s, rather than any notion of bitrot.

If you were to explore a more complex ANSI C application from 1990, you would be more likely than not to find all state managed through globals, possibly significant use of goto, a slew of poorly documented and difficult-to-trace implementation functions, and an over-reliance on exposed structure definitions and simplistic data models that would be exceedingly difficult to iterate on without breaking the code base.

Additionally, the code almost certainly wouldn't be re-entrant, much less thread-safe. You also wouldn't have found unit tests, although some people would include a commented-out test() function in some of their sources.

> ... and is a little tickled by commentary elsewhere on this item about C programmers in the '80s and '90s that suggests everyone moved "up" from C to C++; very Microsoft!

My own position here is that we learned to write better C, not that we moved "up" to C++.

> An extension to the type system to do optional, minimalistic reference counting rather than having to use GC in a modern language or the STL's reference-counted containers (or something home-grown) in C++ would be a huge step forward for example. The locking annotations available with Clang and GCC have removed much of the value-add for using a language which does more hand-holding around synchronization

It is very difficult to maintain pure backwards compatibility with C while adding any significant functionality. This is what Apple discovered with GC, and when implementing ARC, they had to go the route of outright restricting certain previously supported usage, e.g.:

http://clang.llvm.org/docs/AutomaticReferenceCounting.html#o...

If we pick your idea apart in more detail, how would you represent reference-counted entities in this C environment? You couldn't do so without defining higher-level structures (such as objects, or glib gobjects), at which point you very quickly find yourself heading away from C.

This is effectively what Apple was forced to do to add blocks to C, and is also why it seems quite unlikely that blocks will ever see inclusion in the C specification.
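
To make the "higher-level structures" point concrete, here's a minimal sketch of what hand-rolled reference counting tends to look like in plain C (all names are hypothetical, and this ignores thread safety):

    #include <stdlib.h>

    /* Even a "minimal" refcounted entity forces a higher-level convention:
       every allocation carries a header, and every pointer must go through
       these retain/release functions. Not thread-safe. */
    typedef struct {
        long refcount;
    } rc_header_t;

    static void *rc_alloc(size_t payload_size) {
        rc_header_t *hdr = calloc(1, sizeof(rc_header_t) + payload_size);
        if (hdr == NULL)
            return NULL;
        hdr->refcount = 1;
        return hdr + 1;              /* hand back the payload, not the header */
    }

    static void *rc_retain(void *obj) {
        ((rc_header_t *)obj - 1)->refcount++;
        return obj;
    }

    static void rc_release(void *obj) {
        if (obj == NULL)
            return;
        rc_header_t *hdr = (rc_header_t *)obj - 1;
        if (--hdr->refcount == 0)
            free(hdr);
    }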


> The ObjC runtime is poorly architected, shows its age, and is not sufficiently well designed to permit the implementation of features that would grossly simplify the use of the language.

The first two points are not defined in a sufficient way that would allow us to have a meaningful conversation about them. What does it mean to be poorly-designed? What does it mean to be old, and why is old bad? We cannot make progress here without semantic introspection. On the third topic, however:

> In support of this I mentioned a number of features that simply are non-implementable on the current runtime, including automatic handling of reference cycles

I'd like to examine this particular assertion in substantial detail, because it is apparent to me I have not communicated clearly:

1) First of all, the statement "automatic handling of reference cycles is not possible to implement because of the Objective-C runtime" is false. It is false because it has in fact been implemented, in spite of the Objective-C runtime: https://developer.apple.com/library/mac/#documentation/Cocoa...

2) I've made the general argument that many features are not useful on a 5W computer. Now I will make the specific argument that detecting retain cycles is not useful on a 5W computer:

> We have no heap scans, no whole-app pauses, and no non-deterministic releases. We really think that a smooth, fluid interface is important to our customers, and that great battery life is also important. And with that, GC is deprecated. - Apple, WWDC 2012 session 406, about 34 minutes in

Now extending this two-case line of reasoning to the general list of features: I do not believe (and you have provided no evidence to support) that the Objective-C runtime is the limiting factor for any feature you have listed. I also do not believe (and you have provided no evidence to support) that any feature is useful on a 5W computer. In this particular most-favored case, both of these problems apply, but I believe at least one problem applies to every feature you have listed.

> Somebody should tell Microsoft and the MonoTouch guys. Even Android's GC is usable, and it's a fairly primitive mark-and-sweep collector.

I think they already know:

> The occurrence of this is non-deterministic, and may happen at inopportune times (e.g. in the middle of graphics rendering). If you see this message, you may want to perform an explicit collection elsewhere, or you may want to try to reduce the lifetime of peer objects. http://docs.xamarin.com/android/advanced_topics/garbage_coll...

As I understand it, MonoTouch's optional SGen does a better job, but still uses a stop-the-world algorithm for "large" objects. Given the memory constraints of most iOS devices, lots of things are large relative to the amount of memory you have available. For example, my real-world iPad currently has 90MB in the free pool, which means that a single layer that draws to screen is 10% of the immediately available memory in pure graphics buffer allocations. (It is possible to increase the available pool, and in fact iOS will instruct apps to dump what they can if more is requested, but this is slow and also not good memory citizenship.)


The ObjC runtime is poorly designed in part due to the fact that it fails to cleanly and adequately represent the necessary metadata to support new features, and thus requires repeated ABI breakage as Apple stumbles about trying to implement enhancements. As compared to other runtime designs, and as I've noted elsewhere in comparison to the JVM specification, this is a uniquely poor showing for a runtime.

To address your claim that the runtime allows for GC because GC was implemented poorly for Mac OS: The previous GC implementation had fatal flaws in performance and execution, required substantial runtime changes, and involved breaking the previous ABI and requiring binaries to declare conformance to a new one.

ARC required a lesser amount of runtime changes, but also failed to include basic bookkeeping to support handling of cycles in the future.

As for accepting Apple's assertions about GC at face value -- this is the same organization that proudly trumpeted their poor implementation of GC at WWDCs past, and migrated their entire set of Mac OS frameworks to support it. Apple makes serious mistakes and is often proven wrong.

GC at Apple died because of a poor implementation, political fallout from it, and lack of adequate investment in it. I find it disturbing and sad that all Apple has to do is state 'GC bad' (for internal Apple political reasons!), and suddenly the masses of Apple developers with no significant experience using GC -- much less implementing it -- are parroting the line.

You reference that SGen requires hinting; the fact is, it does work, and the required hinting is nothing compared to dealing with reference cycles in block-heavy code. Regardless of that, Apple should be capable of bringing more computer science firepower to this problem than Mono is. Someone else in this thread has already linked to research into hybrid refcount GCs.


The way this argument is going is a very eloquent "he said, he said". I'd like to push it into a better discussion. Could you drop the rhetoric about "parroting the WWDC line" and "Apple stumbling about", the ad-hominem characterizations of developers who haven't implemented a GC, etc.? I'm going to require you to provide hard facts or examples to support your claims, and not present conclusions as arguments, and not muddy the waters.

> The ObjC runtime is poorly designed in part due to the fact that it fails to cleanly and adequately represent the necessary metadata to support new features, and thus requires repeated ABI breakage as Apple stumbles about trying to implement enhancements.

The ABI was "changed" once. When I say it was "changed", I really mean that the new ABI was released for the first time on new architectures that did not previously have an ABI at all, and the old architectures' ABI did not change. Assuming that your objection to ABI "breakage" was compatibility-based in nature, this objection would be unfounded.

You have also failed to make a claim that any particular feature is both useful and unimplementable, and also failed to support such a claim. Please make and support claims, not conclusions.

> As for accepting Apple's assertions about GC at face value -- this is the same organization that proudly trumpeted their poor implementation of GC at WWDCs past, and migrated their entire set of Mac OS frameworks to support it.

To be fair, garbage collection was released before iOS was released. Apple's position is today, and has always been, that garbage collection is infeasible on iOS. What has changed between 2006 and 2012 is that it now makes a lot of sense from both a developer mindshare point of view and an Apple engineer cost/benefit point of view to make Mac OS be as close to iOS as possible, because that's where the money is. There are very good reasons why (e.g. non-deterministic finalization) it is impossible to share code and illogical to share engineers between ARC and GC codebases.

> GC at Apple died because of a poor implementation, political fallout from it, and lack of adequate investment in it.

This contradiction is baseless (i.e. unsupported), so on that basis I am not compelled to respond. It also contradicts my discussions with engineers who were involved in the decision, so if you were to provide a basis, I have first-hand accounts with which to refute you.

> Regardless of that, Apple should be capable of bringing more computer science firepower to this problem than Mono is.

And here is the fundamental disconnect: you believe that currently-known GC algorithms are "good enough" for general use on iOS. I do not. Apple does not.

Thinking it through logically, there are three possible root causes for this disagreement:

1. You and I could have a different conception of the hardware capabilities of iOS devices

2. You and I could have a different conception of the performance characteristics for currently-known GC algorithms

3. You and I could have a different threshold for user pain, so you might characterize a certain performance penalty as "acceptable" that I would characterize as "unacceptable."

We've danced a little bit around #1 and #2, and we can talk some more about those. But I think the real issue might be #3.

For example, the native screen refresh rate is 60fps. I would consider it unacceptable to skip five consecutive frames, and unacceptable to skip more than 15 frames as a sustained average over many seconds. Mono documents "major collections" to take a second, which fails both criteria. [1]

I would also consider it unacceptable that, upon returning to the run loop (or equivalent common synchronization point), more than 15% of garbage bytes remain uncollected. There's just not that much memory to go around.

At this point, I have laid out in very specific detail what I mean by "performs acceptably". Now, you can either A) tell me that my demands are insane, in which case we simply have a very different idea of what "responsive user interface" means, and there's really nothing more to be said to convince the other. Or, B) demonstrate that there exists some GC implementation which fulfills these requirements, and additionally demonstrate that it cannot be implemented with the current ObjC runtime.

I think this is the healthiest way forward for the discussion, because it leaves open the possibility that we may learn something. If instead we return to the things I listed in the first paragraph, I think we both have better uses of our time.

[1] http://android.xamarin.com/index.php?title=Documentation/GC


Not sure what you mean by "zero support for runtime state inspection", can you elaborate? Objective-C supports inspection of instance variables and messages (name plus argument/return types), so once you are dealing with live objects you can find out pretty much anything you want.

There's even some support for rudimentary stack inspection, but that's more limited.
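
For anyone who hasn't poked at it, a quick sketch of that kind of inspection using the public <objc/runtime.h> API (pass in whatever class you like):

    #include <stdio.h>
    #import <Foundation/Foundation.h>
    #import <objc/runtime.h>

    static void dumpClass(Class cls) {
        unsigned int count = 0;

        // Instance variables: names and type encodings
        Ivar *ivars = class_copyIvarList(cls, &count);
        for (unsigned int i = 0; i < count; i++)
            printf("ivar   %s : %s\n",
                   ivar_getName(ivars[i]), ivar_getTypeEncoding(ivars[i]));
        free(ivars);

        // Methods: selector names plus argument/return type encodings
        Method *methods = class_copyMethodList(cls, &count);
        for (unsigned int i = 0; i < count; i++)
            printf("method %s : %s\n",
                   sel_getName(method_getName(methods[i])),
                   method_getTypeEncoding(methods[i]));
        free(methods);
    }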

Yes, it's a dinosaur, but a mighty cool one :-)


Specifically "complex runtime state inspection (say, to support breaking reference cycles)".

There's no safe way to walk the object graph to break reference cycles, despite Apple implementing both ivar indirection (to solve the fragile base class problem), and implementing what should be self-describable block structures.

Information such as whether a particular captured variable/ivar is a weak or strong reference would have been very simple and cheap to include in the runtime metadata, but was -- like many other things -- not included.

In another comment in this thread I described the difference between Apple's runtime/language design and everyone else's:

This sort of hackish minimalism is the polar opposite of the careful, considered, and long-term-focused design that Microsoft (and even Sun) applied to the core of their platform. Compare the completeness (and longevity) of the JVM specification to the ObjC runtime. Over roughly the same amount of time, Apple has had to introduce complete, ABI-breaking, fundamental changes to their runtime to implement extremely basic language enhancements. In contrast, the JVM specification has not, to my recollection, been broken once over the course of 20 years. Additions have always been forward-compatible.


The weakIvarLayout field seems to do exactly what you're saying for ivars, though there is no equivalent for blocks. (Whether you can [safely] do it anyway depends, I suppose, on whether you're willing to inspect machine code; it's not that hard, though the suggestion does nothing to disprove your claim of hackish minimalism.)


Unfortunately, as I recall, the meaning of weakIvarLayout is different depending on whether the code is compiled for ARC, non-ARC, or non-ARC GC/GC-optional. I don't remember the specifics off-hand.

Also, yes, runtime machine code inspection is a ... possibility ... :) It's not that 'hard', but it's certainly harder than clean metadata, and it means maintaining individual introspection code for all supported architectures.


Perhaps a matter of taste, but it seems everything you dislike about the Obj-C runtime is exactly what I enjoy:

> It is effectively an extremely simple dynamic dispatch runtime

Yup! Just the way I like it. It's practically lisp-like in its minimalism. Just the minimum you'd need to get the job done.

> reliant on the OS memory management

This, actually, is one of my favorite points. Letting the JVM grab a giant allocation at startup is fine if all you're going to do with your machine is run JVM programs (i.e. a server), but it sucks when you've got multiple apps all trying to play nice together. The OS has a memory management system for a reason. I'd much prefer my runtime use the existing system rather than re-invent the wheel (as it were).

> with reference counting

Right, well...I did mention that the lack of a GC kinda sucks. I suppose it's a trade-off that one must make if you are going to rely on the OS to manage memory. Still, I can't help but feel that some sort of opt-in system could be made to work at runtime.

> zero support for any complex runtime state inspection (say, to support breaking reference cycles)

Of course, the flip-side to this is that runtime performance is more predictable, since the runtime isn't attempting to do anything overly complex. I can see how you might argue this either way: more runtime introspection eases the burden on language implementors, but it also is a sunk cost whether you want it or not.

> The class registration system has hardly changed in 20 years

"It seems that perfection is reached not when there is nothing left to add, but when there is nothing left to take away"

> has no support for namespaces

I'm not completely convinced this is a problem domain that I want my runtime solving for me. The addition of two-level namespacing to Mach-O addresses the only time I can think that I definitely want runtime managed namespacing. If I want namespacing to be part of my object model, let me determine how it should work. (It's always mildly annoyed me that Java's reverse-DNS-style bleeds into almost every JVM-based language.)

> the type encodings available at runtime are complex-to-decode-strings

The type encodings are simple C strings!
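
For reference, a sketch of what those strings look like (the outputs in the comments are illustrative and vary by architecture and OS version):

    #import <Foundation/Foundation.h>
    #import <objc/runtime.h>

    static void logSomeEncodings(void) {
        NSLog(@"%s", @encode(int));      // "i"
        NSLog(@"%s", @encode(id));       // "@"
        NSLog(@"%s", @encode(NSRange));  // e.g. "{_NSRange=QQ}" on 64-bit

        // A whole method signature, e.g. "c24@0:8@16" on x86_64: BOOL return,
        // then self, _cmd, and one object argument, with frame offsets baked in.
        Method m = class_getInstanceMethod([NSObject class], @selector(isEqual:));
        NSLog(@"%s", method_getTypeEncoding(m));
    }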

> the whole thing is, quite honestly, the kind of rickety design you'd expect to emerge from C hackers in the late 80s early 90s

If by that you mean that the central message dispatch routine is hand-optimized assembly for each platform the runtime is available on... I'll have some more of that, please!

Edit: Oh, and about the "like" part...I was being mildly facetious. There are aspects of all 3 languages that I enjoy, but I doubt they'd have become as popular as they have on their merits alone. (Well, ok, given the landscape at the time, Java's popularity could be said to be fairly won...)


> It's practically lisp-like in its minimalism.

So why not go the extra yard and make it properly homoiconic and reap all the benefits of such a rigid, simplistic syntax? The current syntax is the worst of both worlds - painful to read but disallowing macros.

I don't want Apple to turn Obj-C into Perl but Obj-C is badly in need of a little syntactic sugar. Thankfully some of this is coming in the new collection literals and operator overloads.


Have you seen Nu? http://programming.nu/index


No but it looks interesting. Thanks for the link.


> This, actually, is one of my favorite points. Letting the JVM grab a giant allocation at startup is fine if all you're going to do with your machine is run JVM programs (i.e. a server), but it sucks when you've got multiple apps all trying to play nice together. The OS has a memory management system for a reason. I'd much prefer my runtime use the existing system, than re-invent the wheel (as it were).

The OS memory management system is as general-purpose as possible. At the kernel/process interaction level, the APIs are thin abstractions on the underlying VM page mapping system. Below that, at the malloc level, malloc is designed to be as general-purpose an allocator as possible, without introducing complexity unsupportable in C, such as migrating allocations across generations.

The JVM's inability to return pages to the OS is a failing of Sun's particular implementation, and is not strictly inherent in the design of generational collectors.

Your argument dismantles Sun's architectural choice when used on mobile hardware, but does not address the relative merits of alternative allocation/collection schemes.

> In regards to the class registration system: "It seems that perfection is reached not when there is nothing left to add, but when there is nothing left to take away"

That response has little substance. It's not perfect, or even great; it's merely functional. I would be enthralled if Mach-O two-level namespacing worked with ObjC classes, but it cannot.

> The type encodings are simple C strings!

They're C strings, but not particularly simple ones. Try decoding (with lackluster documentation) a complex structure return value. Modern runtimes use much more cleanly structured, cross-referenced data here.

> If by that you mean that the central message dispatch routine is hand-optimized assembly for each platform the runtime is available on... I'll have some more of that, please!

No, that's not what I mean. You'd find just as much complex hand-optimized assembly code in other runtimes, likely more.

I'm referring to design choices such as the use of unnecessarily complex data structures with minimal if not outright missing API to access them, poorly abstracted implementation details (such as exposing C string type signatures as the highest-level representation of a type encoding), minimal/missing/poorly-defined metadata.

An example is the failure to encode type data in blockrefs in the first public release of blocks for ObjC, which made it impossible to implement imp_implementationWithBlock(), due to the need to differentiate between stret and non-stret return types and select the appropriate trampoline. This required changes to both the compiler and the runtime, and meant that prior to those changes' introduction by Apple, it would have been impossible for an external entity to implement similar functionality.
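
For reference, this is the sort of thing imp_implementationWithBlock() enables now that the compiler/runtime support exists -- a sketch, with a hypothetical selector name:

    #import <Foundation/Foundation.h>
    #import <objc/runtime.h>

    static void addGreetingMethod(void) {
        // The block's first parameter stands in for self; _cmd is not passed.
        IMP imp = imp_implementationWithBlock(^NSString *(id me) {
            return @"added at runtime";
        });
        // "@@:" = returns an object, takes self and _cmd.
        class_addMethod([NSObject class],
                        sel_registerName("hypotheticalGreeting"),
                        imp, "@@:");
    }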

This sort of hackish minimalism is the polar opposite of the careful, considered, and long-term-focused design that Microsoft (and even Sun) applied to the core of their platform. Compare the completeness (and longevity) of the JVM specification to the ObjC runtime. Over roughly the same amount of time, Apple has had to introduce complete, ABI-breaking, fundamental changes to their runtime to implement extremely basic language enhancements. In contrast, the JVM specification has not, to my recollection, been broken once over the course of 20 years. Additions have always been forward-compatible.

An example of the above would be the addition of non-fragile base classes through the use of minimal ivar access indirection. This required a wholesale breakage of the entire language, and thus could only be introduced on 64-bit Mac OS X and on iOS.

Apple's language and runtime design is hackish at best.


I don't buy the comparison. It's a lot easier to avoid breaking things when you're not compiling to native code, but compiling to native code is one of Objective-C's biggest advantages.


In contrast, I don't buy that dichotomy. The Objective-C runtime is runtime-heavy, generates a slew of runtime-interpretable data, and yet is compiled "natively" insofar as the body of functions-nee-methods is native.

Instance variable access from native code is done through indirection with metadata maintained on their type and offset, dispatch is done through indirection from native code using class metadata, instantiation is done through indirection using the global registration of class metadata.
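
Concretely, that indirection is visible through the public runtime API -- a sketch, with a hypothetical Person class defined only for illustration:

    #import <Foundation/Foundation.h>
    #import <objc/runtime.h>

    // Hypothetical class used only for illustration.
    @interface Person : NSObject { id _name; }
    @end
    @implementation Person
    @end

    static id lookUpName(Person *somePerson) {
        // Access goes through runtime metadata rather than a hard-coded offset,
        // which is what makes non-fragile instance variables possible.
        Ivar ivar = class_getInstanceVariable([Person class], "_name");
        ptrdiff_t offset = ivar_getOffset(ivar);  // offset fixed up at load time
        (void)offset;
        return object_getIvar(somePerson, ivar);
    }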

The problem has consistently been in the non-forward-looking design of that structured data, not in the fact that they generate method bodies as native machine code.


Since I've been tired the last few days, here's a late reply:

Direct pointer offsetting of fragile base classes and statically generated copy/destroy code for blocks in lieu of type information would be fairly unthinkable in a managed language, whose bytecode probably wouldn't be low level enough to even express those designs; it would still have been a good idea to think ahead and use indirection for ivars and include full type information in blocks, but managed language designers never really had the chance to make such mistakes in the first place. For other Objective-C changes, like garbage collection, the changes to code generation would have been unreasonable to make "in advance", but it's hard to imagine an analogous change in a managed language that would require an ABI break. Though Objective-C could have done better, it's a bit unfair to compare it to something like the JVM.


Is your criticism the same for the newer 64-bit runtime? https://developer.apple.com/library/mac/#releasenotes/Cocoa/...


> Now, programmers "like" Obj-C because they are told to by Apple.

Not accurate in my case. I tried Objective-C because I was told to by Apple. In fact, as a Smalltalker of 15 years, I found the addition of the header files and everything from C and the square brackets quite distasteful at first.

I now find that Objective-C is a powerful and elegant language on its own merits.

If eero is to Objective-C as Coffeescript is to Javascript, then I suspect I'm going to love eero.


I can't agree on your first point, really.

I liked Java because it had a fairly large catalog of reusable libraries, and was decently easy to deploy back when I used it.

I loved C# because the language is really beautiful and fairly robust.

What makes you assume people can't pick tools by themselves?

On your second point, I think people (see http://rubymotion.com/) already realize that the runtime is really nice!


I was probably being unfair to Java. For C# I have no mercy (it's personal). I would argue you're the exception that proves the rule. For what it's worth, I was a huge Obj-C fan starting in 2002. The way that Obj-C's popularity matches the profitability of writing for the iOS platform in lock-step tells me that some people can pick tools by themselves, but most just do what they're told...

Oh, and did I mention that I'm on the MacRuby core team? ;-)


I know many other people who love C# actually!

So I think it's more a matter of "personal sphere".

Note that I don't use it anymore at all personally either.

Thanks for your work on MacRuby! Did I mention I love Ruby? ;-) (really - all of my apps are Ruby/CoffeeScript based).


> The way that Obj-C's popularity matches the profitability of writing for the iOS platform in lock-step tells me that some people can pick tools by themselves, but most just do what they're told...

Same exact thing happened with Ruby, and the herd followed. Is there any doubt?


Nowhere near the same. When Ruby was just becoming popular, any Rails developer could have said "This language is crap, I'm going to go write my web app in PHP/Perl/Java/Python".

Now try the same thing as Obj-C was just becoming popular: "This language is crap I'm going to go write my iOS app in..."

Well?


> Now try the same thing as Obj-C was just becoming popular: "This language is crap I'm going to go write my iOS app in..."

Java wasn't the only way to write apps, but Sun pushed it. And people followed.

C# wasn't the only way to write apps, but MS pushed it. And people followed.

Ruby wasn't an option until after 37 Signals put it out there and said "This is how you write web apps." And people followed.

> Now try the same thing as Obj-C was just becoming popular: "This language is crap I'm going to go write my iOS app in..."

You have Titanium, which uses JavaScript, and you have PhoneGap, which leverages existing web technologies. You have Adobe Air or Flash or whatever, which you can use to build apps with.

You can use Ruby.

The point is, these languages didn't become popular on their own. Maybe Ruby is the one most unlike the others, but still, I don't think anyone could seriously make the case that, had Rails not appeared, Ruby would still be what it is today.


I don't know about you, but I appreciate not having a garbage collector when coding mobile apps (I haven't done any for OS X yet). You'll know what I mean if you work with large images (bitmaps) in Android; you'd wish you had finer control over releasing memory (instead of calling recycle()).


I am wondering when someone will implement a good Ulterior Reference Count or an Age-Oriented GC for iOS/OS X.

These are both hybrids of GC and reference counting that combine the strengths of both.

Basically, new objects tend to die quickly, so the overhead of reference counting is generally redundant, and GC techniques like copying collectors tend to excel. Old objects tend to stick around, so reference counting overhead is low. By using a hybrid of both techniques where they are strong, you avoid a lot of overhead: pointless ref-count changing, and pointless scanning of object graphs. The result is something with the high throughput of generational GC with only a fraction of the maximum pause time.

https://researchers.anu.edu.au/publications/29505

http://www.cs.technion.ac.il/~erez/Papers/ao-cc.pdf


I think anyone who thinks that GC/Memory Management is a "solved" problem hasn't been paying attention!

Oh, and thanks for the links!


Cannot really agree with your first point, as I came to Objective-C through Brad Cox's excellent book[1]. Apple has added quite a lot to the language and Cocoa is pretty nice.

F-Script is a nice language in the Smalltalk tradition built on the Objective-C runtime. If I was going to pursue something like this, I would probably work at making a clang-like front end for it.

[1] Object-Oriented Programming: An Evolutionary Approach http://www.amazon.com/Object-Oriented-Programming-An-Evoluti...


What we found even back in the NeXT days was that programmers who had never tried Objective-C thought it really sucked, but most who actually tried it really, really liked it after a short while.

The runtime really is great, and the fact that it is, essentially, the "C" runtime (~= native platform ABI) + one additional function (objc_msgSend()) is wonderfully minimal, and that this is all you really need to have a highly dynamic object oriented language is something even academia is only slowly figuring out.


While it looks interesting and probably has some use cases, the syntax puts me off a little. YMMV. Yes, Obj-C has many brackets, but I find they support reading by grouping relevant elements together. I assume if you dislike lisp-like languages, eero might help in this regard.

What I particularly don't like are the trailing return types.

I also assume Apple is actively working to reduce Obj-C's verbosity to some extent.

Regarding the website, I miss a "get started" link. How do I take it for a quick test-drive?

EDIT: It says: "Eero is a fully binary- and header-compatible dialect of Objective-C". Does this mean I can write a module in Eero and have it derive the correct header files for me? Or do I need to re-write the header file to be consumed by (legacy) Obj-C?


I've never understood why people like frontal return types. It seems to me that the most important piece of a function is its name, which 50% of the time will make it obvious what its return type is anyway. Also, certain languages such as C++ (and occasionally Java) have a habit of involving incredibly long return types that push the actual function name far off to the side, if not onto a second line.


In C++11 you can push the return type to the right, like this:

    auto foo(int x) -> bool;


Ewww. WTF?!

http://www.cprogramming.com/c++11/c++11-auto-decltype-return... provides a rationale: so you don't have to type in ClassName:: twice. Is that really worth this monstrosity? The function name is still not at the start of the statement. '->' is being horribly misused. Anybody know if there's a better reason for this? Did the committee really have nothing better to think about?


Yes there are much better reasons: http://www2.research.att.com/~bs/C++0xFAQ.html#suffix-return

    template<class T, class U>
    auto mul(T x, U y) -> decltype(x*y)
    {
        return x*y;
    }


Ah, that use case makes sense. I wish the syntax was congruent with C++, but that complaint has been done to death.


I agree regarding the syntax. Most of the differences compared to ObjC seem to be focused on simply removing characters, increasing ambiguity for the reader, without adequate justification for their removal.

The ugliness of ObjC has little to do with brackets and semicolons, and a lot to do with the lack of higher-level functional constructs and type-system features.

A language that was ObjC-compatible and yet could succinctly express LINQ/Rx-level type/api complexity would be a genuinely interesting successor to ObjC.

Syntax aside, however, the emergence of ObjC-compatible languages is contributing to the pool of knowledge on how to produce one, and provides a set of possibly re-usable code for doing so.

[edit] Dug into the code. It looks like the author is actually forking clang outright. This is an interesting approach, as it allows you to swap in a new compiler that supports both your language, and objc, possibly interchangeably.

The way that clang is designed (and as I understand it, I haven't looked in great detail), it's pretty much impossible to use it as a library on which to build your own front-end parser/lexer -- you have to fork clang itself to inject your own code.

I'm undecided as to whether tying yourself to clang (instead of the underlying llvm) is a net win for an alternative language implementation. Thoughts?


I think Apple might want to switch to Ruby in the future. Some Apple employees are still actively working on MacRuby and it is actually possible now to create iOS apps with Ruby through RubyMotion.

http://www.rubymotion.com/


I won't decry having options, but I am not sure what Apple would stand to gain by switching the first-class language of their platform to Ruby. If you look at any RubyMotion code, it ends up being an almost line-per-line copy of the equivalent Objective-C code.

Given that the languages come from the same lineage, it is not even a big jump to switch between them from a developer's point of view. The thing that really stood out when I was learning Objective-C is that it essentially was Ruby, just with some C thrown in. I often wonder if people go in thinking Obj-C is some kind of worse version of C++ and then miss what the language really has to offer. While it is certainly not perfect, I'm constantly amazed at how elegant the design of the language really is.

Ruby could shine if Apple were to create a whole new set of APIs based around the language, but that would mean throwing away nearly 25 years of work. It would be a tremendous undertaking for what could be a positive gain, but is just as likely to introduce a whole new world of problems, especially in the early years.

Official support for Ruby would certainly be welcome, but I don't see benefits in outright switching; not without also removing the Objective-C APIs from the equation and creating a whole new platform that centres around Ruby.


Actually, all of the Apple employees working on MacRuby have since left. Apple will not be moving to Ruby in the near future unless something drastic happens. That said, so long as the Obj-C runtime is documented and well designed (and it is!), then anyone that cares to can interoperate with Obj-C with a bit of work, a la RubyMotion.


I think it's more like you want Apple to want to switch to Ruby.

But it ain't going to happen. They need the drop-down to C without the runaround.


I also find the syntax a bit odd, including the return types. Plus the brackets for multiple arguments that appear in the interface but not in the implementation are inexplicably inconsistent. (Update: See below.)

I would also be very curious how well eero adapts to pure C code. Many of Apple's performance-critical APIs are C, as are many third-party libraries. So being able to effortlessly invoke C within Objective-C is essential.

eero's design seems to neglect this. For one, goto is outright banned, eliminating a rather effective tool for elegant C-based error handling. (http://stackoverflow.com/questions/788903/valid-use-of-goto-..., but please let's avoid a long tangent on the proper use of goto)
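
For anyone unfamiliar with the idiom in question, a minimal sketch of the classic goto-based cleanup pattern (the function is hypothetical):

    #include <stdio.h>
    #include <stdlib.h>

    /* Classic C cleanup pattern: one exit path, unwinding in reverse order. */
    static int process_file(const char *path) {
        int err = -1;
        char *buffer = NULL;
        FILE *fp = fopen(path, "rb");
        if (fp == NULL)
            goto out;

        buffer = malloc(4096);
        if (buffer == NULL)
            goto out_close;

        if (fread(buffer, 1, 4096, fp) == 0)
            goto out_free;

        err = 0;                     /* success */

    out_free:
        free(buffer);
    out_close:
        fclose(fp);
    out:
        return err;
    }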

Update: My mistake, the brackets apparently define an argument as optional, which might be a convenient shortcut in some cases.


I thought 95% of the syntax was a dramatic improvement over straight obj-c, which I actually don't mind at all.

I admit, the one time I paused was when I saw the trailing return types. It was the only part that felt unintuitive.

By mere convention it felt odd, but I'm willing to explore a break with convention. Seeing how well-designed the other 95% of the syntax is, I'm giving the architect the benefit of the doubt, and hoping that this trailing return syntax ultimately proves MORE intuitive, despite my initial recoil.

I'm really curious now about the practical issues of putting Eero into real-world use (which, since we're talking obj-c here, means OSX or iOS development).


Why would you assume Apple is working to reduce Obj-C verbosity? The language has been around for decades and they seem to be doing pretty well with it as-is.


We're not assuming anything; they're actually doing it. For instance:

http://joris.kluivers.nl/blog/2012/03/13/new-objectivec-lite...

The language may be decades old, but clearly Apple is continuing to advance the compiler to improve the language's syntax and verbosity without breaking the runtime.


Blocks and ARC go a LONG way.


The language is almost 30 years old with remarkably few changes, but we've seen a (relative) burst of improvement in the past 5 years. Properties, fast enumeration, and automatic refcounting all reduce the verbosity of Objective-C code.


Also, the literal syntax for arrays, dictionaries, and numbers, and the automatic property synthesis added in LLVM 4.0 -- most welcome additions.


Every year they add new features to the language, like blocks, and a lot more that is currently under NDA, so I can't say.

Why do you assume that Apple isn't working on this language? They are heavily developing it!


I'm not assuming anything... "assume" was used in the post I responded to, and I asked why. I don't use ObjC myself, though I would say that "add new features" does not imply "reduce verbosity" in any sense that I can see.


Objective-C is hands down one of my favorite programming languages:

- It's compiled, not interpreted. Even Java and C# are compiled into bytecode for a massive VM to interpret. Objective-C has no VM, it's pure binary + a shared library to implement the runtime. It also uses the very best compiler, clang, which gives incredibly helpful error messages, warnings and suggestions.

- It's a strict superset of C. Even C++ does not meet this criterion. This means any valid C is valid Obj-C and behaves exactly the same way, in any Obj-C file. You can drop down levels of abstraction for performance, and can even write assembly if that's what it takes.

- It has an amazing and fully supported debugger in the form of LLDB.

- It's fully dynamic, allows introspection, duck typing and even monkey patching (with a little effort; method swizzling). Everything is an object, except for native C types.

- It takes the right approach to memory management. Realizes that garbage collectors are abominations, and that managing memory is really the job of either the developer or the compiler.

- It has the most fantastic concurrency framework I've ever used, in the form of `libdispatch` (see the sketch after this list). Now, to be fair, it's a C library that ought to work anywhere, but practically it only works well on Apple's platforms and using clang.

- Apple is moving in the right direction, cleaning up the language, removing annoyances and making syntax more succinct.

- Header files. I think I'm on my own in liking this—but I think headers are amazing. They're a succinct and standalone version of a documentation file that is extremely useful to both developers and the compiler. Use them to describe well the public interface to your class, and you don't even really have to write documentation anymore.

- Solid design patterns and convention-driven. These are getting better by the day, with the recent addition of closures to the language.
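
As referenced in the libdispatch point above, a minimal sketch of the sort of thing it makes trivial (the helper functions are hypothetical):

    #import <Foundation/Foundation.h>
    #import <dispatch/dispatch.h>

    // Hypothetical helpers, declared only for illustration.
    NSData *expensiveWork(void);
    void updateUIWithData(NSData *data);

    static void refreshAsynchronously(void) {
        // Push the slow part onto a background queue, then hop back to the
        // main queue for anything that touches UI state.
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
            NSData *data = expensiveWork();
            dispatch_async(dispatch_get_main_queue(), ^{
                updateUIWithData(data);
            });
        });
    }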

There are two major annoyances:

- It's often needlessly verbose. Not in the syntax, mind you, which is just fine, but in the naming conventions. For example, `NSArray` has a method called `enumerateObjectsUsingBlock:` instead of simply `each:`. Add that up to every single name, and you get pretty ugly code—and good luck writing it with anything other than Xcode's context-aware autocompletion.

- It's tied to Apple's platforms, and will not run on anything else. Now, I'm fine with Apple-specific frameworks and GUI frameworks (like QuickTime or UIKit) being Apple-only, but I'd really love to use the language to write a backend service, for which I'd only need Foundation & libdispatch, for example.


Your list reminds me quite a bit of the effusive language used by Apple engineers at WWDC, but does not reflect the larger context and objectivity of the industry outside of Apple.

If we inspect your point about compilation vs interpretation, we ought to note that this is a technical decision, not an implementation constraint. Mono compiles C#/F# into native code when targeting iOS.

If we look at lldb, I think your assessment is overly generous. lldb is a new, buggy, but promising debugger. Most languages -- including C#, Java, Haskell, OCaml -- have a standard and viable debugger, and this does not particularly set Objective-C apart.

As for libdispatch, it is simply a closure-based thread-pool executor / IO framework, like you'd find in any other language. It's better than pthreads and serves as a nice interface on top of kqueue, but it's certainly not so novel as to be the "best concurrency framework ever".

For example, compare against Microsoft's Reactive Extensions; Rx allows one to implement concurrency declaratively, rather than imperatively with chained closures as is done with libdispatch.

http://rxwiki.wikidot.com/101samples#toc3

Lastly, I wish to address header files. Everything in a header file could be derived automatically from the original code. This is done on almost every other modern language platform, and there's no reason it shouldn't be done here.

Furthermore, headers do not obviate the need for documentation; Objective-C's type system is not nearly expressive enough to fully articulate the constraints and invariants of an API, and undocumented APIs are inescapably and destructively ambiguous APIs.

Apple even sets a good example here by providing complete documentation for every supported API in the system.


"It's compiled, not interpreted."

The Objective-C calls are not really compiled. They are dynamically dispatched at runtime, in a manner very similar to an interpreter or to reflection in Java. You can read about it here:

http://www.mulle-kybernetik.com/artikel/Optimization/opti-3....

Quite far from the compiled C calls also covered in the linked article.


The presence of a runtime doesn't negate the fact that code is compiled to native machine code, and all the benefits that come with that. Execution speed is quite a bit faster than bytecode-interpreted languages such as those on the JVM, which in turn are orders of magnitude faster than fully interpreted languages such as Ruby. The code is compiled directly for each architecture the binary supports (choices these days being i386, armv6, and armv7).

C code in an Obj-C method is compiled to the exact same machine code as it would be were it in a C function. Constructs such as ifs, loops, return statements, etc. are no different from their C counterparts, and are just as fast. Obj-C method calls are simply compiled into calls to the ultra-fast runtime C function `objc_msgSend`.
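
To be precise about what "compiled into calls to objc_msgSend" means, a message send is lowered to something conceptually equivalent to the call below (the cast is how you'd spell it by hand):

    #import <Foundation/Foundation.h>
    #import <objc/message.h>

    static id lookUp(NSDictionary *dict, id key) {
        // The compiled form of:  return [dict objectForKey:key];
        return ((id (*)(id, SEL, id))objc_msgSend)(dict, @selector(objectForKey:), key);
    }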


The JVM compiles the bytecode into machine code, and it compiles the hotspots of an application into optimized native machine code.

Java execution is split into two pieces. First is the interpreter, which is used to execute bytecode prior to it being compiled. The second is the compiler, which implements runtime-optimized compilation of hotspots into native machine code, including the ability to make assumptions regarding types and then uncompile hotspots if those assumptions prove false.

Before you say "ah ha, an interpreter!", it's also notable that the JVM's interpreter is not an interpreter in the traditional sense (a la Ruby). Rather, the JVM's interpreter is an architecture-specific interpreter that decodes JVM bytecode and spits out unoptimized direct machine code based on a standard set of machine code templates. This is fast, but there's no optimizing compiler involved, and so it's not nearly as fast as the result of the optimizing compiler that is run over the hotspots.

All that said, JIT vs AOT is a technical decision with mixed advantages. In theory, there are optimizations that can only be performed via JIT, based on runtime analysis. The simplicity of AOT can often provide runtime performance gains simply by avoiding the runtime costs of evaluation and compilation. JIT of bytecode allows for binary/library portability across different machines without worrying about rebuilding or shipping multiple binaries. AOT is somewhat more difficult to decompile compared to unoptimized byte code.

Mono compiles AOT for iOS; in theory you could do the same with Java. GCC did so with their optimizing AOT gcj Java compiler.


Java hasn't been bytecode-interpreted in 10 years. The objc_msgSend function is not fast. Measure it and see.


To be fair, there still is a difference between Objective-C and C# and Java (which the OP was trying to contrast). Dynamic dispatch (i.e. a table look-up or pointer dereference) doesn't change the fact that the callee (the method to be executed after being found by the runtime) and the caller (the code sending the message, and perhaps all the runtime code doing the lookup) are both compiled into executable code. This is surely different from how Java and C# are compiled/executed (in most cases) -- which I think was the intended point :).


Java and C# are compiled into machine code before execution. Also, Objective-C dispatch isn't a table lookup or pointer dereference; it is more akin to a (cached) reflective lookup and dispatch.


You're right that method invocations aren't simply a table lookup or pointer dereference, but in the end it boils down to a jump to the address of some already compiled (machine) code, which is what I was trying to make a point of.

For C#, the execution process is C# -> compiler -> CIL bytecode at compile time. Then at runtime CIL bytecode -> JIT -> machine code.

So sure, C# (and I believe Java is similar to C# in this regard) eventually gets compiled to machine code, but it's vastly different from what I believe to be the accepted usage of "compiled language." The exception to this being if you AOT compile your code, but in that case you sacrifice some language functionality.


Most developers are just interested in the compile-time type verification.


In which case it is no different from Java or C#: both also give you compile-time checks.


My comment is more about the programmers than it is about the languages.


The compiler and static analyzer nowadays can do a lot more than that, it can catch typos, dead ends, nonsensical statements, memory management issues (such as leaks or usage of a dangling pointer), unguarded implicit casts or assignments in if clauses, and a lot more. It's saved my ass countless times, catching bugs before they ever hit production.


> It has an amazing and fully supported debugger in the form of LLDB.

I can't agree with this at all. Most objects show up as opaque with no members. You can't even browse the contents of collection classes. Compared to a managed language debugger it's positively stone age.


I think he means the LLDB console. You can inspect anything, however deeply you want. Create new objects. Assign them to variables. Run code snippets against your existing state at the breakpoint. Learn new APIs interactively. Reduce the need to compile/run as often. It's the closest I can get to a Smalltalk Workspace/Transcript. I agree that the GUI debugger is limited, and after living in the console for some time, next to useless.
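
To give a flavor of that console workflow, a few representative commands (the objects here are placeholders from a typical iOS app):

    (lldb) po [self view]
    (lldb) p (CGRect)[[self view] frame]
    (lldb) expr id $label = [[UILabel alloc] init]
    (lldb) po $label
    (lldb) expr (void)[[self tableView] reloadData]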


I use lldb from Xcode, but I don't use the GUI part, because it's worse than useless. The command line is better, but not by THAT much. It seems to regularly get confused by debug info, so if it fails to tell you "no member info found due to forward declaration of struct XXX" (or something similar) when you're trying to print a local value of type XXX, you'll just get gibberish. Then sometimes it just dies when stepping over function calls.

Something I found today: "p/x $rip" will show you the PC, but "break set -a $rip" won't set a breakpoint there, because lldb can't figure out what $rip means.

The online help is also not very helpful.


> It takes the right approach to memory management. Realizes that garbage collectors are abominations, and that managing memory is really the job of either the developer or the compiler.

Please elaborate why Objective-C's approach to memory management is Right, and why GC is Wrong.


Because memory management is something that's known at compile time, and doesn't need to be evaluated at runtime. It's the same reason a compiler will substitute the expression `2 + 2` with simply `4` and save the extra CPU cycles. Most GCs do a barely acceptable job of managing memory, and often lead to hard-to-diagnose bugs. With manually managed memory, you know exactly what's going on, and can easily inspect it by diving into a debugger. You're also not wasting memory letting the runtime hold onto things you don't need anymore, or wasting CPU cycles trying to figure out what's still needed.

Additionally, and importantly, without manual memory management, you cannot safely use pointers and memory directly. The whole idea of playing with memory directly by doing `char *buffer = malloc(100)`, and being able to do pointer arithmetic such as `*(buffer + 99) = '\0'`, goes out the window.

Lastly, Automatic Reference Counting is awesome in that it brings you all the benefits of manual memory management without any of the drawbacks (you don't have to worry about making mistakes, and you save yourself a little typing).


After covering reference counting in my first reply, I want to tackle the pointer arithmetic question here.

Pointer arithmetic specifically, or direct memory access more generally, is bug-prone and dangerous. It's valuable primarily in terms of performance, and is a terrible idea in terms of correctness and security.

In runtime-managed languages, efficient access to memory buffers is achieved by exposing an array type and performing basic bounds checking on access. This is efficient enough for most purposes; it's essentially what you get by using NSData, and it's generally implemented in a way that is vastly more efficient than NSData.
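A contrived contrast (the variable names are invented; getBytes:range: is the real NSData API):

    #import <Foundation/Foundation.h>

    char raw[100] = {0};
    // raw[150] = 'x';   // unchecked C access: silently corrupts memory

    NSData *data = [NSData dataWithBytes:raw length:sizeof(raw)];
    char byte;
    // checked access: an out-of-bounds range raises NSRangeException
    // instead of scribbling on random memory
    [data getBytes:&byte range:NSMakeRange(150, 1)];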

Direct memory access in straight C will be faster, but it's also enormously error prone, and something I avoid if at all possible when writing ObjC or even straight C.

While there's no discounting the performance value of being able to operate at this level (or write straight assembly), it's rarely a useful thing to do when writing most application-level code, especially when weighed against the propensity for failure.

All managed languages allow for extension through C for the cases where low-level performance is required. ObjC has the easiest means of working with C, but that should be balanced against the fact that nearly all of ObjC's ugly warts and pain points derive from being a strict C superset.


Reference counting is a form of garbage collection. It does spend CPU cycles trying to figure out what isn't needed, so your claim that it doesn't "waste CPU cycles" is not true. Its advantages over tracing GC are that memory can be reclaimed promptly, that it does a pretty good job of spreading out the CPU load in an incremental sense, and that it's simple to implement and reason about. The big downside is cycles; lots of large reference-counted codebases constantly struggle with cycles (for example, Firefox). Adding cycle detection to a reference counted system is not easy to do in a performant way and almost nobody that I'm aware of but Firefox even attempts to do it.
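A minimal sketch of the kind of cycle plain reference counting can't reclaim (class name invented):

    @interface Node : NSObject
    @property (nonatomic, strong) Node *next;
    @end
    @implementation Node
    @end

    Node *a = [Node new];
    Node *b = [Node new];
    a.next = b;
    b.next = a;   // each node retains the other, so neither is ever
                  // deallocated unless one link is made weak or broken by hand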

Without unsafe memory management, you can still do pointer arithmetic. See Cyclone's fat pointers, Go's slices, Rust's slices, etc.


I'd just like to touch on GC vs ARC/RefCounting, and leave the buffer mangling for another comment.

There are advantages and drawbacks to GC vs reference counting (whether manual or automated), but the disadvantages of GC certainly do not include doing "barely acceptable job of managing memory, and often lead to hard-to-diagnose bugs." I'd provide a more complete rebuttal, but lacking any specific examples, I can only definitively state that GC significantly reduces the likelihood of having to debug memory related issues, especially tracking down leaks due to reference cycles.

In comparison to GC, the main advantage of reference counting systems is that they are deterministic.

However, they do a lot of book-keeping that is expensive in aggregate and may not even be necessary in a garbage collected system. There are GC designs that mix GC and reference counting to achieve the best of both -- another poster in this thread provided a link to some papers on the subject.

The major downside to reference counting -- other than the expensive-in-aggregate bookkeeping mentioned above -- is that it can't handle reference cycles.

On iOS it's terribly easy to create cycles, and prior to the introduction of __weak, it was almost impossible to build thread-safe code involving a cyclic reference, because there was no way to invalidate another thread's reference to your object (short of hacking together your own zeroing weak reference implementation, which is what people were forced to do).

This ease of creating cycles, especially with blocks and GCD, leads to a proliferation of careful, manual application of __weak qualifiers on self-references. Instead of simply using an instance variable from a block, you must be certain to access it only through a __weak-marked reference.

Additionally, you now have to worry about NULL-dereferencing! Example:

  __weak MyObject *weakself = self;
  _block = ^{
      // Whoops! weakself could be nil here, and dereferencing a nil pointer
      // with -> crashes. Should have used a property, which would have just
      // passed a nil argument to runWithTX.
      [opManager runWithTX: weakself->_txid];

      // Also, weakself could be deallocated *after* the above call succeeds,
      // so what we really needed to do as a first step in the block was
      // capture a -strong- reference from weakself, and return immediately
      // if we'd already been deallocated.
      MyObject *strongSelf = weakself;
      if (strongSelf == nil)
          return;

      // Whoops, accidentally referenced self via the bare _txid below,
      // causing a retain cycle.
      NSLog(@"Running with _txid=%@", _txid);
  };
This is awful. If the language used a hybrid of ARC-style refcounting and cycle collection, that code would simply read:

  _block = ^{
      [opManager runWithTX: _txid];
      NSLog(@"Running with _txid=%@", _txid);
  };


I think you are a little confused as to how exactly ARC works. ARC is mostly a compile-time technology: the compiler adds the memory management code automatically at compile time, so the runtime overhead compared to a GC is much lower.
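Roughly, for a sketch of what that means (a hand-written approximation; the variable names are invented, and the calls ARC actually emits are runtime entry points like objc_retain/objc_release, many of which the optimizer removes):

    // What you write under ARC:
    NSString *title = [item name];
    self.windowTitle = title;

    // Approximately what the compiler generates, in pre-ARC terms:
    NSString *title = [[item name] retain];
    [self setWindowTitle:title];
    [title release];   // inserted where 'title' goes out of scope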

http://stackoverflow.com/questions/7874342/what-is-the-diffe...

Regarding reference cycles, it is not a difficult job at all for any decent coder to work them out. If you think it's hard, that's a design issue in your application.

If you use protocols and delegates correctly, you can manage it quite simply with the rule of storing delegates as weak pointers.
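e.g. the usual shape of that pattern (protocol and class names invented):

    @protocol DownloaderDelegate <NSObject>
    - (void)downloadDidFinish;
    @end

    @interface Downloader : NSObject
    // weak: the delegate typically owns the downloader, so a strong
    // back-reference here would create a retain cycle
    @property (nonatomic, weak) id<DownloaderDelegate> delegate;
    @end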

If you use blocks, you just have to avoid the case of both storing a block in an object and referencing the object from the block.

I had not used a Mac until about 6 months ago, and in the last 6 months I led the development of an enterprise-scale app for iOS, dealing with the above issues quite easily by having a good architecture. I previously worked professionally for around 4 years with Qt/C++ and found it extremely easy to make the switch to iOS.


I'm not confused as to how ARC works. Reference counting systems require a high level of bookkeeping for short lived objects as compared to GC systems.

Dealing with cycles is not complex, but it is error prone, verbose, and difficult in aggregate as compared to having cycles correctly and automatically handled.


> It's compiled, not interpreted. Even Java and C# are compiled into bytecode for a massive VM to interpret.

For Java you have interpreters, JIT and native compilers.

C# is always compiled (either JIT or AOT), and both Mono and Microsoft Research have native compilers that can produce CLR-free binary distributions.


Agree with a lot of this; it's also my favorite, and not because any big company told me to like it. In fact, I found Objective-C even before NeXT, let alone Apple, picked it up.

One nit: the language is actually not tied to the Apple platform (although Apple bought the name at one point and is definitely the biggest user). There are several runtime/compiler/Foundation combos, the most prominent of which are GNUStep and Cocotron. I have used both and run my (non-GUI) code on Windows, Linux, Solaris, heck, even AIX.


My limited experience with GNUStep has been disappointing. It's missing too many important libraries to be practical. I haven't used Cocotron personally, though I imagine they're related. So while what I said is technically untrue, I believe that practically it holds.


"I'd only need Foundation & libdispatch."

"It's missing too many useful libraries to be useful."

libdispatch can be lived without or worked around. What else in your list is missing?


I completely agree about header files. Having all of the type signatures and argument names associated with a library in one place is great.
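For example, a small (invented) header shows a class's entire public surface without opening the implementation:

    // FileHelper.h
    #import <Foundation/Foundation.h>

    @interface FileHelper : NSObject
    - (NSFileHandle *)openFile:(NSString *)fileName;
    - (void)closeFile:(NSFileHandle *)handle;
    @end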


Header files are a relic of the 70s that was replaced by sane module systems in the 80s.

Only stone-age languages insist on using header files.


When using an IDE, that seems like the type of thing that should be automatically derived from the code rather than written and maintained by hand.

i.e. more suited to some kind of read-only overview browser than just another finicky text file.


Absolutely. I recently started working on an Android port of an iOS app of mine and it's a big time saver letting the tools take care of these details. Not to mention how much more powerful refactoring tools can be when they don't have to worry about a preprocessor.


Visual Studio will generate class declarations for CLR classes from the associated metadata, if you do a `go to declaration' on a system class.

Something similar could be done with Objective-C, judging by the output of the `class-dump' tool (see http://www.codethecode.com/projects/class-dump/). If you run it on an app's private frameworks - I hear that Xcode 4 might demonstrate the principle - you'll get back a surprisingly complete set of interface, method and ivar declarations. This suggests that if the will were there, we could do away with header files, even for libraries shipped as compiled objects. It works well for the CLR.
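For a rough idea of what it recovers from a compiled binary (class, method, and ivar names invented here; exact formatting varies by class-dump version):

    @interface XYZAccountController : NSObject
    {
        NSString *_accountName;
        BOOL _dirty;
    }
    - (id)initWithAccountName:(id)arg1;
    - (BOOL)isDirty;
    - (void)save;
    @end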


Most of the comments here are way off base, focusing on the relative worth of the Objective-C language rather than the value of Eero for developers who have no choice but to use Objective-C, regardless of how good they think it is or isn't.

If you haven't already, I highly recommend following the link. RubyMotion was somewhat interesting, but it really didn't feel like a massive improvement over plain old Objective-C, especially once you factor in all the Cocoa APIs.

However, Eero looks amazing. The syntax represents a vast improvement over straight Objective-C or RubyMotion. Take a look - it's genuinely exciting.


The problem, as RubyMotion and Monotouch have both demonstrated, is that no matter how much you change the language your code is still dominated by calls into the Cocoa APIs, and that's where a lot of the verbosity and ugliness lies (IMO).


You are entitled to your opinion, but I will disagree.

The Cocoa/Cocoa Touch APIs are probably the best I have ever used. Rarely do I have to dive into the documentation thanks to the self documenting method names. The consistently and logically applied conventions (paired with autocomplete) mean I can usually use intuition to "feel" my way around. If ever I need to read the documentation, it is also some of the best around.
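For example, even without opening the docs, a typical Foundation call reads almost like a sentence (variable names invented; the APIs are standard NSString/NSFileManager):

    NSString *path = [documentsDirectory stringByAppendingPathComponent:@"notes.txt"];
    BOOL ok = [[NSFileManager defaultManager] createFileAtPath:path
                                                      contents:noteData
                                                    attributes:nil];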


You get used to them, but they do have a tendency to look like someone from another planet wrote them, don't they?


I can definitely see myself using this over Objective-C if it proves stable enough for production usage.


Agreed. Much nicer on my eyeballs.


The examples struck me as looking a lot like Go and I found a few similarities:

- Local type inferencing (i := 100) is identical.

- No parentheses for control structures.

- Lack of semicolons.

- Ranges in array enumeration (albeit with different syntax).


Pretty rad that eero forbids variable shadowing. It would be neat if more languages start doing that in the future...

http://eerolanguage.org/documentation/index.html#noshadowing


Err, it sounds like a weird thing to do, and as they mentioned, it's more likely just a limitation of their inference engine. Preventing variable shadowing leaks implementation details from inside functions/blocks, and most languages do just fine with appropriate warnings and so on.


Interesting, especially given how I was listening to John Siracusa's two Hypercritical talks from 2011 where he points out that replacing Obj-C (or any language) with bridges is insufficient. I hope he might weigh in with his thoughts on eero in the future.


It looks nice, but I can't help but wonder why not go all the way to Smalltalk instead?

    helper := FileHelper new.  "declare variable 'helper' via type inference"

    files := []  "empty array literal implies mutable"
    files addObject: (helper openFile: 'readme.txt').  "can group message in parens"

    files do: [ :handle |  "handle is a FileHandle; all objects are pointers, so no '*' needed"
        self log: 'File descriptor is %@', (Number)(handle fileDescriptor).
        handle closeFile ].

    ^ 0
Of course, you'd still have to solve the lack of syntax for defining classes and methods, but there are a couple solutions to that problem out there already.


A Smalltalk compiler for the Objective-C runtime exists here:

http://etoileos.com/dev/docs/languages/smalltalk/


Cool. Does it work with Cocoa on OS X?


Looking through the docs, it does look incredible, perhaps the Perfect® language for my tastes.

But the old man in me says it'll never stick.


Finally!


> Python-like indentation

Oh god, please no.


I had the opposite reaction.

I was initially thinking "another overly syntactic language", and then I went to the website and thought "Wow! This looks very nice!".


The syntax reminds me less of Python and more of CoffeeScript. I loathe Python, but there's a certain... je ne sais quoi with many of the operators and much of the syntactic sugar removed. Almost like natural language!


That's the Smalltalk heritage of Objective-C shining through. Smalltalk was actually intentionally designed using Human Interface notions to be friendly -- even to the point of being used by grade school children.


This is something I completely agree with. However, at this point, having used things like gofmt, there should already be tools that enforce correct formatting. I don't use Python very much... perhaps there already are.


Oh, but yes, yes! :)



