
The eero programming language - a dialect of Objective-C - basil
http://eerolanguage.org/
======
SeoxyS
Objective-C is hands down one of my favorite programming languages:

\- It's compiled, not interpreted. Even Java and C# are compiled into bytecode
for a massive VM to interpret. Objective-C has no VM, it's pure binary + a
shared library to implement the runtime. It also uses the very best compiler,
clang, which gives incredibly helpful error messages, warnings and
suggestions.

\- It's a strict superset of C. Even C++ does not meet this criterion. This
means any valid C is valid Obj-C and behaves exactly the same way, in any
Obj-C file. You can drop down levels of abstraction for performance, and can
even write assembly if that's what it takes.

\- It has an amazing and fully supported debugger in the form of LLDB.

\- It's fully dynamic, allows introspection, duck typing and even monkey
patching (with a little effort, via method swizzling; a short sketch follows
this list). Everything is an object, except for native C types.

\- It takes the right approach to memory management: it recognizes that
garbage collectors are abominations, and that managing memory is really the
job of either the developer or the compiler.

\- It has the most fantastic concurrency framework I've ever used, in the form
of `libdispatch`. Now, to be fair, it's a C library that ought to work
anywhere, but in practice it only works well on Apple's platforms with clang.

\- Apple is moving in the right direction, cleaning up the language, removing
annoyances and making syntax more succinct.

\- Header files. I may be on my own in liking these, but I think headers are
amazing. They're a succinct, standalone version of a documentation file that
is extremely useful to both developers and the compiler. Use them to describe
your class's public interface well, and you don't even really have to write
documentation anymore.

\- Solid, convention-driven design patterns. These are getting better by the
day, with the recent addition of closures to the language.
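
For anyone curious what the monkey patching mentioned above looks like in
practice, here is a minimal method-swizzling sketch. The `Widget` class and
its `-describe` / `-patched_describe` methods are made-up names for
illustration; a hardened version would also handle the case where the
original method lives on a superclass.

    #import <objc/runtime.h>
    #import <Foundation/Foundation.h>

    @interface Widget : NSObject
    - (void)describe;
    @end

    @implementation Widget
    - (void)describe { NSLog(@"original"); }
    @end

    @implementation Widget (Patch)
    - (void)patched_describe {
        NSLog(@"patched");
        [self patched_describe];  // after the swap below, this calls the original -describe
    }
    @end

    int main(void) {
        // Swap the two implementations in the runtime's method table.
        Method orig  = class_getInstanceMethod([Widget class], @selector(describe));
        Method patch = class_getInstanceMethod([Widget class], @selector(patched_describe));
        method_exchangeImplementations(orig, patch);

        [[Widget new] describe];  // logs "patched", then "original"
        return 0;
    }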

There are two major annoyances:

\- It's often needlessly verbose. Not in the syntax, mind you, which is just
fine, but in the naming conventions. For example, `NSArray` has a method
called `enumerateObjectsUsingBlock:` instead of simply `each:`. Multiply that
across every single name, and you get pretty ugly code, and good luck writing
it with anything other than Xcode's context-aware autocompletion (see the
sketch after this list).

\- It's tied to Apple's platforms, and will not run on anything else. Now, I'm
fine with Apple-specific frameworks and GUI frameworks (like QuickTime or
UIKit) being Apple-only, but I'd really love to use the language to write a
backend service, for which I'd only need Foundation & libdispatch, for
example.
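
To make the verbosity point concrete, here is the same loop written against
the stock Foundation API and against a hypothetical `each:` shorthand. The
category below is not part of Foundation; it's just a sketch of what the
terser name could look like.

    #import <Foundation/Foundation.h>

    // Hypothetical shorthand, added as a category on NSArray.
    @interface NSArray (Each)
    - (void)each:(void (^)(id obj))block;
    @end

    @implementation NSArray (Each)
    - (void)each:(void (^)(id obj))block {
        [self enumerateObjectsUsingBlock:^(id obj, NSUInteger idx, BOOL *stop) {
            block(obj);
        }];
    }
    @end

    int main(void) {
        NSArray *names = @[@"Ada", @"Grace"];

        // The stock spelling:
        [names enumerateObjectsUsingBlock:^(id name, NSUInteger idx, BOOL *stop) {
            NSLog(@"%@", name);
        }];

        // The spelling the comment above wishes for:
        [names each:^(id name) { NSLog(@"%@", name); }];
        return 0;
    }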

~~~
spullara
"It's compiled, not interpreted."

The Objective-C calls are not really compiled. They are dynamically dispatched
at runtime, much as an interpreter or Java's reflection would do it. You can
read about it here:

<http://www.mulle-kybernetik.com/artikel/Optimization/opti-3.html>

Quite far from the compiled C calls also covered in the linked article.

~~~
SeoxyS
The presence of a runtime doesn't negate the fact that code is compiled to
native machine code, and all the benefits that come with that. Execution speed
is quite a bit faster than bytecode-interpreted languages such as Java on the
JVM, which in turn is orders of magnitude faster than fully interpreted
languages such as Ruby. The code is compiled directly for each architecture
the binary supports (choices these days being i386, armv6, and armv7).

C code in an Obj-C method is compiled to the exact same machine code as it
would be, were it in a C function. Constructs such as ifs, loops, and return
statements are no different from their C counterparts, and are just as fast.
Obj-C method calls are simply _compiled_ into calls to the ultra-fast runtime
C function `objc_msgSend`.
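
As a rough illustration of that last point, here is what the lowering looks
like, with a made-up `Greeter` class. (In reality the compiler emits a cached
selector reference rather than calling `sel_registerName` on every call.)

    #import <objc/message.h>
    #import <objc/runtime.h>
    #import <Foundation/Foundation.h>

    @interface Greeter : NSObject
    - (void)greet:(NSString *)name;
    @end

    @implementation Greeter
    - (void)greet:(NSString *)name { NSLog(@"hello, %@", name); }
    @end

    int main(void) {
        Greeter *g = [Greeter new];

        // What you write:
        [g greet:@"world"];

        // Roughly what the compiler emits: a plain C call into the runtime.
        SEL sel = sel_registerName("greet:");
        ((void (*)(id, SEL, NSString *))objc_msgSend)(g, sel, @"world");
        return 0;
    }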

~~~
flatline3
The JVM compiles the bytecode into machine code, and it compiles the hotspots
of an application into _optimized_ native machine code.

Java execution is split into two pieces. First is the interpreter, which is
used to execute bytecode prior to it being compiled. The second is the
compiler, which implements runtime optimized compilation of hotspots into
native machine code, including the ability to make assumptions regarding types
and then _uncompile_ hotspots if those assumptions prove false.

Before you say "ah ha, an interpreter!", it's also notable that the JVM's
interpreter is _not_ an interpreter in the traditional sense (a la Ruby).
Rather, the JVM's interpreter is an architecture-specific interpreter that
decodes JVM bytecode and spits out unoptimized direct machine code based on a
standard set of machine code templates. This is fast, but there is no
optimization involved, and so it's not nearly as fast as the result of the
optimizing compiler that is run over the hotspots.

All that said, JIT vs AOT is a technical decision with mixed advantages. In
theory, there are optimizations that can only be performed via JIT, based on
runtime analysis. The simplicity of AOT can often provide runtime performance
gains simply by avoiding the runtime costs of evaluation and compilation. JIT
of bytecode allows for binary/library portability across different machines
without worrying about rebuilding or shipping multiple binaries. AOT is
somewhat more difficult to decompile compared to unoptimized byte code.

Mono compiles AOT for iOS; in theory you could do the same with Java. GCC did
so with its optimizing AOT gcj Java compiler.

------
jballanc
Programmers "liked" Java because they were told to by Sun. Programmers "liked"
C# because they were told to by Microsoft. Now, programmers "like" Obj-C
because they are told to by Apple. In that regard, this seems like a step in
the right direction...

But it's only a half step. If you look at what's happened to the other two
languages mentioned above, they have evolved such that their runtimes are now
more important than the languages they originally hosted.

What really needs to happen to Obj-C is for people to wake up and realize that
the runtime is actually really nice. Combined with LLVM, one could make the
case that the Obj-C runtime could compete with the JVM or the CLR.

(Of course, the one huge, massive, glaring omission is any sort of managed
memory... Well, at least there _was_ a garbage collector at one point.)

~~~
flatline3
First, I want to counter your use of "like" in quotes, as if programmers
couldn't have perfectly valid reasons to like these languages.

I like Java for its low-cost allocations, sane implementation of constructors,
and checked exceptions that prevent undeclared control flow behavior. The huge
trove of solid-quality libraries and a general community focus on unit testing
and documentation (perhaps in no small part driven by Apache) has been quite
valuable.

I like C# for its type inference, relational algebra ala LINQ, reactive
extensions, adoption of F#, and a general R&D focus on adopting language
design from the functional languages.

I like Obj-C for the ease with which I can drop to C and assembly, and the
ease-of-implementation inherent in Apple's GUI frameworks. That's about it --
there's a lot to dislike about ObjC.

All that aside, and given my own significant familiarity with the underlying
runtimes, I'm trying to understand what you think is so advantageous about
Apple's ObjC runtime. It is effectively an extremely simple dynamic dispatch
runtime, reliant on the OS memory management combined with reference counting,
zero support for any complex runtime state inspection (say, to support
breaking reference cycles), and now with some C-derived glue for closures and
compiler-emitable calls to support somewhat-automatic reference counting. The
class registration system has hardly changed in 20 years, has no support for
namespaces, the type encodings available at runtime are complex-to-decode-
strings, and the whole thing is, quite honestly, the kind of rickety design
you'd expect to emerge from C hackers in the late 80s early 90s, incorporating
none of the advances in the field from either Sun/Java or Microsoft/C#.

ObjC predates the work on Strongtalk that led to the modern JVM, it either
predates or fails to inherit from the language R&D outside of C, and predates
the CLR runtime work done by Microsoft. It's a _DINOSAUR_.

I don't really understand where you see the runtime advantage here, and I
spend nearly all of my time writing ObjC.

The only advantage I see to the ObjC runtime is that it's required to
interface with their native libraries.

~~~
jballanc
Perhaps a matter of taste, but it seems everything you dislike about the Obj-C
runtime is something I enjoy:

> It is effectively an extremely simple dynamic dispatch runtime

Yup! Just the way I like it. It's practically lisp-like in its minimalism.
Just the minimum you'd need to get the job done.

> reliant on the OS memory management

This, actually, is one of my favorite points. Letting the JVM grab a giant
allocation at startup is fine if all you're going to do with your machine is
run JVM programs (i.e. a server), but it sucks when you've got multiple apps
all trying to play nice together. The OS has a memory management system for a
reason. I'd much prefer my runtime use the existing system, than re-invent the
wheel (as it were).

> with reference counting

Right, well...I did mention that the lack of a GC kinda sucks. I suppose it's
a trade-off that one must make if you are going to rely on the OS to manage
memory. Still, I can't help but feel that some sort of opt-in system could be
made to work at runtime.

> zero support for any complex runtime state inspection (say, to support
> breaking reference cycles)

Of course, the flip-side to this is that runtime performance is more
predictable, since the runtime isn't attempting to do anything overly complex.
I can see how you might argue this either way: more runtime introspection
eases the burden on language implementors, but it also is a sunk cost whether
you want it or not.

> The class registration system has hardly changed in 20 years

 _"It seems that perfection is reached not when there is nothing left to add,
but when there is nothing left to take away"_

> has no support for namespaces

I'm not completely convinced this is a problem domain that I want my runtime
solving for me. The addition of two-level namespacing to Mach-O addresses the
only time I can think that I definitely want runtime managed namespacing. If I
want namespacing to be part of my object model, let me determine how it should
work. (It's always mildly annoyed me that Java's reverse-DNS-style bleeds into
almost every JVM-based language.)

> the type encodings available at runtime are complex-to-decode-strings

The type encodings are simple C strings!
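
For what it's worth, here is what they look like in practice (a minimal
sketch; the encodings shown in the comments follow Apple's documented
`@encode` scheme, and exact offsets vary by platform):

    #import <objc/runtime.h>
    #import <Foundation/Foundation.h>

    int main(void) {
        // @encode() produces the runtime's type string for any C / Obj-C type.
        NSLog(@"%s", @encode(int));      // "i"
        NSLog(@"%s", @encode(NSRange));  // "{_NSRange=QQ}" on 64-bit

        // The same strings describe entire method signatures.
        Method m = class_getInstanceMethod([NSString class], @selector(length));
        NSLog(@"%s", method_getTypeEncoding(m));  // e.g. "Q16@0:8" (return, self, _cmd)
        return 0;
    }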

> the whole thing is, quite honestly, the kind of rickety design you'd expect
> to emerge from C hackers in the late 80s early 90s

If by that you mean that the central message dispatch routine is hand-
optimized assembly for each platform the runtime is available on... I'll have
some more of that, please!

 _Edit_ : Oh, and about the "like" part...I was being mildly facetious. There
are aspects of all 3 languages that I enjoy, but I doubt they'd have become as
popular as they have on their merits alone. (Well, ok, given the landscape at
the time, Java's popularity could be said to be fairly won...)

~~~
flatline3
> _This, actually, is one of my favorite points. Letting the JVM grab a giant
> allocation at startup is fine if all you're going to do with your machine is
> run JVM programs (i.e. a server), but it sucks when you've got multiple apps
> all trying to play nice together. The OS has a memory management system for
> a reason. I'd much prefer my runtime use the existing system, than re-invent
> the wheel (as it were)._

The OS memory management system is as general-purpose as possible. At the
kernel/process interaction level, the APIs are thin abstractions on the
underlying VM page mapping system. Below that, at the malloc level, malloc is
designed to be as general-purpose an allocator as possible, without introducing
complexity unsupportable in C, such as migrating allocations across
generations.

The JVM's inability to return pages to the OS is a failing of Sun's particular
implementation, and is not strictly inherent in the design of generational
collectors.

Your argument dismantles Sun's architectural choice when used on mobile
hardware, but does not address the relative merits of alternative
allocation/collection schemes.

> In regards to the class registration system: _"It seems that perfection is
> reached not when there is nothing left to add, but when there is nothing
> left to take away"_

That response has little substance. It's not perfect, or even great; it's
merely functional. I would be enthralled if Mach-O two-level namespacing
worked with ObjC classes, but it cannot.

> _The type encodings are simple C strings!_

They're C strings, not particularly simple. Try decoding (with lackluster
documentation) a complex structure return value. Modern runtimes use much more
cleanly structured, cross-referenced data here.

> _If by that you mean that the central message dispatch routine is hand-
> optimized assembly for each platform the runtime is available on... I'll
> have some more of that, please!_

No, that's not what I mean. You'd find just as much complex hand-optimized
assembly code in other runtimes, likely more.

I'm referring to design choices such as the use of unnecessarily complex data
structures with minimal if not outright missing API to access them, poorly
abstracted implementation details (such as exposing C string type signatures
as the highest-level representation of a type encoding), and
minimal/missing/poorly-defined metadata.

An example is the failure to encode type data in blockrefs in the first public
release of blocks for ObjC, which made it impossible to implement
imp_implementationWithBlock(), due to the need to differentiate between stret
and non-stret return types and select the appropriate trampoline. This
required changes to both the compiler and the runtime, and meant that prior to
those changes' introduction by Apple, it would have been impossible for an
external entity to implement similar functionality.
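
For context, this is the sort of thing imp_implementationWithBlock() makes
possible once the compiler and runtime carry that block metadata. A minimal
sketch; `Counter` and `-bump` are hypothetical names.

    #import <objc/runtime.h>
    #import <Foundation/Foundation.h>

    @interface Counter : NSObject
    @end
    @implementation Counter
    @end

    // Declared so the call site compiles; the IMP is attached at runtime.
    @interface Counter (Dynamic)
    - (void)bump;
    @end

    int main(void) {
        // Wrap a block in an IMP and add it as a brand-new method. The runtime
        // has to pick the right dispatch trampoline (stret vs. non-stret) from
        // the block's metadata, which is exactly the gap described above.
        IMP imp = imp_implementationWithBlock(^(id receiver) {
            NSLog(@"added at runtime");
        });
        class_addMethod([Counter class], @selector(bump), imp, "v@:");

        [[Counter new] bump];  // logs "added at runtime"
        return 0;
    }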

This sort of hackish minimalism is the polar opposite of the carefully
considered, long-term-focused design that Microsoft (and even Sun) applied to
the core of their platforms. Compare the completeness (and longevity) of the
Java Virtual Machine specification to the ObjC runtime. Over roughly the same
amount of time, Apple has had to introduce complete, ABI-breaking fundamental
changes to their runtime to implement extremely basic language enhancements.
In contrast, the JVM specification has not, to my recollection,
been broken once over the course of 20 years. Additions have always been
forward-compatible.

An example of the above would be the addition of non-fragile base classes
through the use of minimal ivar access indirection. This required a wholesale
breakage of the entire language, and thus could only be introduced on 64-bit
Mac OS X and on iOS.

Apple's language and runtime design is hackish at best.

~~~
comex
I don't buy the comparison. It's a _lot_ easier to avoid breaking things when
you're not compiling to native code, but compiling to native code is one of
Objective-C's biggest advantages.

~~~
flatline3
In contrast, I don't buy that dichotomy. The Objective-C runtime is runtime-
heavy, generates a slew of runtime-interpretable data, and yet is compiled
"natively" insofar as the body of functions-nee-methods is native.

Instance variable access from native code is done through indirection with
metadata maintained on their type and offset, dispatch is done through
indirection from native code using class metadata, instantiation is done
through indirection using the global registration of class metadata.
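
All of that metadata is reachable through the plain C runtime API. A minimal
sketch; `Account` and its `_balance` ivar are made-up names:

    #import <objc/runtime.h>
    #import <Foundation/Foundation.h>

    @interface Account : NSObject {
        double _balance;
    }
    @end

    @implementation Account
    @end

    int main(void) {
        // Instantiation, dispatch, and ivar access all consult this registered metadata.
        Class cls = objc_getClass("Account");
        Ivar ivar = class_getInstanceVariable(cls, "_balance");

        NSLog(@"ivar %s: type %s, offset %td",
              ivar_getName(ivar),
              ivar_getTypeEncoding(ivar),  // "d" for double
              ivar_getOffset(ivar));       // the indirected offset compiled code reads through
        return 0;
    }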

The problem has consistently been in the non-forward-looking design of that
structured data, not in the fact that they generate method bodies as native
machine code.

~~~
comex
Since I've been tired the last few days, here's a late reply:

Direct pointer offsetting of fragile base classes and statically generated
copy/destroy code for blocks in lieu of type information would be fairly
unthinkable in a managed language, whose bytecode probably wouldn't be low
level enough to even express those designs; it would still have been a _good
idea_ to think ahead and use indirection for ivars and include full type
information in blocks, but managed language designers never really had the
chance to make such mistakes in the first place. For other Objective-C
changes, like garbage collection, the changes to code generation would have
been unreasonable to make "in advance", but it's hard to imagine an analogous
change in a managed language that would require an ABI break. Though
Objective-C could have done better, it's a bit unfair to compare it to
something like the JVM.

------
angerman
While it looks interesting and probably has some use cases, the syntax puts me
off a little. YMMV. Yes, Obj-C has many brackets, but I find they support
reading by grouping relevant elements together. If you dislike Lisp-like
languages, I assume eero might help in this regard.

What I particularly don't like are the trailing return types.

I also assume Apple is actively working to reduce Obj-C's verbosity to some
extent.

Regarding the website, I miss a "get started" link. How do I take it for a
quick test-drive?

EDIT: It says: "Eero is a fully binary- and header-compatible dialect of
Objective-C". Does this mean I can write a module in Eero and have it derive
the correct header files for me? Or do I need to re-write the header file to
be consumed by (legacy) Obj-C?

~~~
wsc981
I think Apple might want to switch to Ruby in the future. Some Apple employees
are still actively working on MacRuby and it is actually possible now to
create iOS apps with Ruby through RubyMotion.

<http://www.rubymotion.com/>

~~~
randomdata
I won't decry having options, but I am not sure what Apple would stand to gain
by switching the first-class language of their platform to Ruby. If you look
at any RubyMotion code, it ends up being an almost line-per-line copy of the
equivalent Objective-C code.

Given that the languages come from the same lineage, it is not even a big jump
to switch between them from a developer's point of view. The thing that really
stood out when I was learning Objective-C is that it essentially was Ruby,
just with some C thrown in. I often wonder if people go in thinking Obj-C is
some kind of worse version of C++ and then miss what the language really has
to offer. While it is certainly not perfect, I'm constantly amazed at how
elegant the design of the language really is.

Ruby could shine if Apple were to create a whole new set of APIs based around
the language, but that would mean throwing away nearly 25 years of work. It
would be a tremendous undertaking for what could be a positive gain, but is just
as likely to introduce a whole new world of problems, especially in the early
years.

Official support for Ruby would certainly be welcome, but I don't see benefits
in outright switching; not without also removing the Objective-C APIs from the
equation and creating a whole new platform that centres around Ruby.

------
introspectif
Most of the comments here are way off base, focusing on the relative worth of
the objective-c language, rather than the value of Eero for developers who
have no choice but to use objective-c, regardless of how good they think it is
or isn't.

If you haven't already - I highly recommend following the link. RubyMotion was
somewhat interesting, but really didn't feel like a massive improvement over
plain old objective-c, especially once you factor in all the Cocoa APIs.

However, Eero looks amazing. The syntax used represents a vast improvement
over straight objective-c or RubyMotion. Take a look - it's genuinely
exciting.

------
basil
The examples struck me as looking a lot like Go and I found a few
similarities:

\- Local type inferencing (i := 100) is identical.

\- No parentheses for control structures.

\- Lack of semicolons.

\- Ranges in array enumeration (albeit with different syntax).

------
cageface
The problem, as RubyMotion and Monotouch have both demonstrated, is that no
matter how much you change the language your code is still dominated by calls
into the Cocoa APIs, and that's where a lot of the verbosity and ugliness lies
(IMO).

~~~
pirateking
You are entitled to your opinion, but I will disagree.

The Cocoa/Cocoa Touch APIs are probably the best I have ever used. Rarely do I
have to dive into the documentation thanks to the self-documenting method
names. The consistently and logically applied conventions (paired with
autocomplete) mean I can usually use intuition to "feel" my way around. If
ever I need to read the documentation, it is also some of the best around.

------
huragok
I can definitely see myself using this over objective-c if it proves stable
enough for production usage.

~~~
breckenedge
Agreed. Much nicer on my eyeballs.

------
jashkenas
Pretty rad that eero forbids variable shadowing. It would be neat if more
languages started doing that in the future...

<http://eerolanguage.org/documentation/index.html#noshadowing>

~~~
ufo
Err, sounds like a weird thing to do, and as they mentioned, is more likely
just a limitation of their inference engine. Preventing variable shadowing
leaks implementation details from inside functions/blocks, and most languages
do just fine with appropriate warnings and so on.

------
fusiongyro
It looks nice, but I can't help but wonder why not go all the way to Smalltalk
instead?

    
    
        helper := FileHelper new.  "declare variable 'helper' via type inference"
    
        files := []  "empty array literal implies mutable"
        files addObject: (helper openFile: 'readme.txt').  "can group message in parens"
    
        files do: [ :handle |  "block argument; all objects are pointers, so no '*' needed"
          self log: 'File descriptor is %@', (Number)(handle fileDescriptor).
          handle closeFile ].
    
        ^ 0
    

Of course, you'd still have to solve the lack of syntax for defining classes
and methods, but there are a couple solutions to that problem out there
already.

~~~
tbe
A Smalltalk compiler for the Objective-C runtime exists here:

<http://etoileos.com/dev/docs/languages/smalltalk/>

~~~
fusiongyro
Cool. Does it work with Cocoa on OS X?

------
Apocryphon
Interesting, especially given how I was listening to John Siracusa's two
Hypercritical talks from 2011 where he points out that replacing Obj-C (or any
language) with bridges is insufficient. I hope he might weigh in on his
thoughts of eero in the future.

------
akaru
Looking through the docs, it does look incredible, perhaps the Perfect®
language for my tastes.

But the old man in me says it'll never stick.

------
gothy
Finally!

------
franzus
> Python-like indentation

Oh god, please no.

~~~
huragok
The syntax reminds me less of python and more of coffeescript. I loathe python
but there's a certain... je ne sais quoi with much of the operators and
syntactic sugar removed. Almost like natural language!

~~~
stcredzero
That's the Smalltalk heritage of Objective-C shining through. Smalltalk was
actually intentionally designed using Human Interface notions to be friendly
-- even to the point of being used by grade school children.

