Further, Jose Valim (the language creator) is an incredibly talented developer; he has iterated on the language rapidly and responds quickly to suggestions. The mailing list is also picking up momentum, with folks contributing ideas for how to make the language great.
Yuri (yrashk) has spearheaded the efforts around a package manager - http://expm.co/ - which supports all Erlang Rebar packages as well as Elixir packages. Still missing is "binary" package support, but that's been a gripe about Erlang for a long time.
I believe in Erlang/OTP and I really believe Elixir has a bright future. In a few years there will be folks who build Elixir apps and don't bother to learn Erlang. This could really freak out the Erlang "old guard" - many are friends of mine - but they don't really care that much: Erlang has always been a niche musical instrument, and Elixir is Erlang's ticket to Carnegie Hall.
If you're a student, check out our project ideas here: https://github.com/beamcommunity/beamcommunity.github.com/wi...
3) How is Elixir's first-class definition approach more useful than multi-stage programming (such as MetaOCaml: http://www.metaocaml.org/), principled syntax extension (such as CamlP4: http://pauillac.inria.fr/camlp4/), or second-class higher-order modules (such as OCaml's functors), each of which is inherently more amenable to automatic analysis?
(P.S. All these points seem to apply equally well to Joe Armstrong's "erl2".)
This complaint makes little sense, because macros are expanded entirely at compile-time. Reasoning about a macro is no harder than reasoning about a higher-order function: you reason about the macro function (which is a simple transformation on some tree data structure), and the generated function. Because a macro is always expanded at compile-time, you can always expand it and inspect the generated function.
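To make that concrete, here's a minimal sketch in Elixir (the module and macro names are invented): a macro is just a function from AST to AST, and you can expand it yourself to inspect the generated code.

```elixir
defmodule MyMacros do
  # A macro is an ordinary function operating on quoted expressions
  # (Elixir's tree data structure for code).
  defmacro unless_(condition, do: block) do
    quote do
      if unquote(condition), do: nil, else: unquote(block)
    end
  end
end

defmodule Demo do
  require MyMacros

  def show_expansion do
    ast = quote do: MyMacros.unless_(x > 0, do: :nonpositive)
    # Expansion happens entirely at compile time, so we can perform
    # one step of it by hand and pretty-print the generated code.
    ast |> Macro.expand_once(__ENV__) |> Macro.to_string() |> IO.puts()
  end
end
```

`Macro.expand_once/2` and `Macro.to_string/1` are the standard-library tools for exactly this kind of inspection.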
Compare: what I might do in Common Lisp by writing a set of macros (and checking the expansions--just a hotkey away in my IDE) I'd do in Python by adding fields and methods to objects at runtime.
Except that C-style macro functions aren't "simple transformations on some tree data structure". They are textual transformations. Hygienic macros (à la Racket) I agree are useful and easy to reason about.
Elixir uses the latter; am I missing something?
2) Dialyzer works fine on Elixir beam files. You can specify type & spec attributes just like in Erlang.
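For anyone curious what those attributes look like, a small sketch (the module is invented):

```elixir
defmodule Geometry do
  # @type and @spec compile down to the same annotations Dialyzer
  # reads from Erlang code, so the analysis carries over unchanged.
  @type shape :: {:circle, number} | {:rect, number, number}

  @spec area(shape) :: number
  def area({:circle, r}), do: 3.14159 * r * r
  def area({:rect, w, h}), do: w * h
end
```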
If macros worry you unduly, then Elixir is not for you. I personally prefer using macros to typing lots of boilerplate code, but yes, there are some downsides.
edit: Actually I should have said reading and typing - reading boilerplate is much worse than writing it. Judicious use of macros can clarify intent for someone reading the code.
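As a sketch of that trade-off (all names here are hypothetical), one macro can stand in for a pile of near-identical definitions, so a reader sees the intent at a glance:

```elixir
defmodule Accessors do
  # Instead of reading the same one-line getter over and over,
  # the reader sees a single declaration of intent per field.
  defmacro defgetter(field) do
    quote do
      def unquote(field)(map), do: Map.fetch!(map, unquote(field))
    end
  end
end

defmodule Point do
  import Accessors
  defgetter :x
  defgetter :y
end
```

The downside, as noted, is that the reader now has to know what `defgetter` expands to before the module's interface is obvious.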
2) Dialyzer is a complete, but not sound, program analyzer: it will never produce a false positive, but it may allow incorrect programs through. By making definitions first-class, Elixir's module language becomes Turing-complete, moving a whole new class of bugs into Dialyzer's "false negative" category.
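To illustrate what "first-class definitions" means here (a hedged sketch; the module is invented): an Elixir module body is ordinary code run at compile time, so definitions can be produced by arbitrary - hence Turing-complete - computation.

```elixir
defmodule HttpStatus do
  # The comprehension below is ordinary code executed while the
  # module compiles; each iteration emits a function clause.
  for {name, code} <- [ok: 200, not_found: 404, server_error: 500] do
    def code(unquote(name)), do: unquote(code)
  end
end
```

No analyzer can, in general, predict which definitions such a loop will produce without running it.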
This problem doesn't exist with second-class definition systems such as OCaml's, because the definition language is not Turing-complete: all definitions and modules may be checked at compile time.
3) I'm not sure where macros come into play? Are you using "macros" to mean "higher-order programming tools"? If so, you're misunderstanding me -- I use metaprogramming tools all the time; I simply prefer ones which aren't Turing-complete and are therefore amenable to analysis.
It sounds like you are more interested in "CS friendly" languages, rather than "get stuff done" languages. Erlang and Elixir tend toward the latter category.
I'm generalizing a lot, and that's not to say a language can't be both, but Erlang was born of a practical need, and I think that Elixir is similar: it's not meant to be the most beautiful language out there, it's meant to be a bit more user-friendly way of utilizing the power of Erlang and OTP.
In terms of a specification, have a look at Elixir's test suite. Given that it's still being hacked on, it's probably not set in stone.
It sounds like you think there's no overlap (despite your next paragraph).
Additionally: "CS friendliness" is more pragmatic than you think. Automatic refactoring and automatic code analysis are two huge pragmatic wins (think: saving developer time) that are made possible only by paying heed to analyzability of code.
> Erlang and Elixir tend toward the latter category.
Erlang is my current favorite language. You may not realize it, but it is one of the most "CS friendly" mainstream languages that exist. The fact that it is almost entirely defined by "functional programming" + "actor model" is a big part of this. The existence of Dialyzer is a testament to its amenability to automatic analysis.
> In terms of a specification, have a look at Elixir's test suite.
A test suite isn't a specification (though it's a lot closer than a tutorial): all computer languages contain infinities (e.g. expressions can be arbitrarily large); test suites cannot express infinities.
And a test suite certainly isn't a reference document. A good language reference document says "here are all the forms of the language, here is what they all mean". The Erlang Reference Manual is a very good example of such a document.
(I'm not trying to be a bear; "There is no language documentation" is a perfectly acceptable answer for me; it just means there is no way for me to discuss the specific benefits Elixir may have.)
Erlang has had quite a bit longer to produce such a document, though, hasn't it?
Why bother doing that when you're still fiddling with the language?
In terms of CS friendliness, I get the impression that Haskell is far more popular with that crowd, since it has more things of interest to people doing language hacking. Erlang has "functional" and "actors", but not a lot of the other stuff that seems to be of interest to the CS crowd, like types.
> "There is no language documentation" is a perfectly acceptable answer for me; it just means there is no way for me to discuss the specific benefits Elixir may have
No way at all? I think you can tell a lot about a language without some kind of formal documentation.
To me, that sounds entirely backward :) I've never implemented a language I haven't already designed "on paper". (Why bother? With a good reference document, you can write and test programs in your head. Implementation is tedious; I only want to do that once.)
> In terms of CS friendliness, I get the impression that Haskell is far more popular […]
It is. I don't really know why. Haskell has a lot of libraries and a good compiler, but it's otherwise pretty boring. Its type system is nothing special (OCaml and Mercury have very similar but more interesting systems, notably w/r/t subtyping), monadic I/O is one of the silliest contortions I've seen (uniqueness types in Mercury and Clean are much easier to work with), and laziness isn't all that useful (it is useful, but I've never missed it when working without it).
(I say this with all respect to the Haskell designers: it's a very good language, and they successfully met their self-imposed goal of being fully functional; I just happen to think that that goal led to Haskell being ultimately boring.)
> Erlang has "functional" and "actors", but not a lot of other stuff that seems to be of interest to the CS crowd, like types.
That's exactly it. It doesn't have a lot of stuff. The language is incredibly simple, EVERYTHING is explicit, it doesn't encourage first-class metaprogramming, and it ships with its own parser and pretty-printer. That's a CS researcher's dream. That's why Dialyzer (which is better than many, if not most, built-in type systems) can exist. You don't need a built-in type system if your language is simple and analyzable enough to admit a 3rd-party one.
> No way at all? I think you can tell a lot about a language without some kind of formal documentation.
The answer in this case seems likely to be: Jose started fiddling around with it, and at the same time was learning what he could and could not do on Erlang's VM.
I guess I take a favorable view of experimentation and a dimmer view of being able to plan everything ahead of time. At some point in time, things need to be stabilized, but at the beginning, some contact and pushback from the real world and real usage may well impact the design of a system.
As far as I can see they're all basically the same thing. You enforce sequence by creating a chain of notional "real world" values. I can't say I've used Clean or Mercury extensively, but as far as I can see they just make you pass around that value explicitly. That doesn't really seem like it would be "easier to work with".
1. You don't need monadic versions of all non-monadic functions. You don't need fmap; you can just use map.
2. It means there is a well-defined order of execution (i.e. the order does not "emerge" as in Haskell; instead it is defined and verified by the type system).
It is much, much easier to work with. In Haskell you feel forced to separate IO from transformation as much as possible. Often you end up either repeating yourself a lot (writing both monadic and non-monadic versions) or venturing out into monad transformers, which IMHO are the "goto" of Haskell (how to turn your code into unreadable, unmaintainable spaghetti). The cognitive overhead is never worth it.
PS: Uniqueness types are not just about IO - Rust uses them as well. They are about sharing. Haskell needs an IO () type because without a monad it has no defined order of execution, and it can't prevent you from "forking the world" when working with outside references.
If Haskell could prevent you (by using uniqueness types), I doubt they would have used monads for IO. (Although monads can be useful in other areas.)
Really? It seems interesting, but I'd simply choose not to use macros in my own code. Seems like they can be avoided in most situations. There's even a heading in the introduction that says 'Don't write macros': http://elixir-lang.org/getting_started/5.html
* unprincipled metaprogramming: any metaprogramming facility whereby module-level elements (e.g. declarations, code, modules themselves) may partake in Turing-complete transformations (hence precluding complete analysis), may generate unsound code (i.e. what would otherwise be a compile-time error), or may modify code at run-time (i.e. through dynamic scope or global mutation).
* principled metaprogramming: not the above; i.e., all code transformations terminate (allowing automatic analysis), all generated code is syntactically sound (mostly this means not having to worry about escaping stuff), and code transformations don't modify existing code.
There are probably other things and some exceptions, but that's the gist of it.
eval(), C++ templates, C macros, and prototypes are all examples of unprincipled metaprogramming.
Parametric modules (OCaml), hygienic macros (Racket), and multistage programming (MetaOCaml almost) are all examples of principled metaprogramming. (I say "almost" for MetaOCaml because code generation expressions may not terminate; but this is neither required nor encouraged, and it is easy to guard against.)
I think only OCaml functors really qualify as what you call "principled" - but then, it's not the most powerful kind of metaprogramming I've seen.
For example, in terms of this macro system, you can (and could for a long time) define the simplest possible non-terminating macro - one that expands to an invocation of itself - and attempting to expand it eventually reports:

    Macro expansion has taken a suspiciously large number of steps.
    Click Stop to stop macro expansion and see the steps taken so far,
    or click Continue to let it run a bit longer.
The language is documented extensively under the Getting Started section of the website. It's a pretty easy read and really helps to understand the pieces of the language that make Elixir so powerful.
Why? Once it's no longer syntactically clear what's written in the host language and what's written in the DSL, it requires a very thorough understanding of both languages and their interactions to comprehend any given piece of code. And in a language where definitions can be modified at run time or have dynamic scope (it's not clear which of these two methods Elixir uses), it's often hard to tell what is or is not DSL code in the first place.
(Note: I write DSLs for a living. Except, they live apart from any host language, and thus don't allow arbitrary syntactic and semantic mixing - which makes them highly amenable to automatic analysis, one of the primary benefits (if not the primary benefit) of a DSL.)
Is it wise/advisable to start with Elixir without knowing Erlang, or is it better to learn Erlang first?
I've been mainly doing web programming with Ruby and want to understand how Elixir is useful and why I would choose it over something else.
If you want something that's vaguely familiar, you might have a look at Chicago Boss. It doesn't do as much as Rails, but it's fairly easy to get up and running, and the guy who built it is very friendly and helpful: http://www.chicagoboss.org/
Big runtime, VM, no mutable state (really), no direct access to memory, pattern matching, exceptions, no side effects other than sending messages, no destructive assignment, single assignment variables - that's an incomplete and subjective list of Erlang features. How much of it is even comparable to what Go offers?
Sure it does: they're both programming languages that people use to get stuff done.
Erlang is far more robust than Go at this point in time, from what I can see, and much more mature - its history goes back some 20+ years. It has a bunch of stuff for distributed, concurrent, fault-tolerant computing that is, in some ways, the best in the business. However, it also carries an accretion of warts accumulated over the years, starting with a syntax that most people do not find easy. Go is very new, still being worked on, and, as klibertp says, is quite a different beast in many ways.
My thinking is along these lines: if you want something rock solid, use Erlang. If you want something that's "pretty good", and getting better, Go might work out well for you.
I would be interested in knowing the downsides of using Elixir and how it compares with Go.
It's a web framework developed and maintained by the Elixir core team, and I've had fun playing with it recently. It already supports a lot of the most common tasks when doing web development including request routing, parsing of parameters and cookies, and pre/post-request hooks.
I get the impression that the underlying VM is fairly Erlang specific, so that it's going to be difficult to do a language that doesn't come out looking at least something like Erlang, but I've never looked closely.