Given the extent to which the Erlang VM is optimized for immutable data, network transparency, and message passing, I can see why Clojure on the JVM and CLR would not have had the same success with the actor model as Erlang has.
For example, Elixir is a proper BEAM language that targets the BEAM directly in a compile step... but Luerl is not: Luerl is basically a Lua interpreter written in Erlang. I would expect different trade-offs from Elixir vs. Luerl as a result. For example, I have a project where I need a limited, embedded scripting environment inside an otherwise-Elixir project. With a BEAM language it seems difficult to impose the desired limitations on such a scripting environment, but with Luerl it seems I can do just that... limiting the scripting to a less-than-system-wide scope. Of course... I expect a fairly substantial performance penalty for that... but trade-offs :-).
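For the curious, here's roughly what that looks like from Elixir. `:luerl` is Luerl's Erlang API; the exact return shape shown (a newer-Luerl `{:ok, results, state}` triple) is an assumption to verify against the Luerl version you depend on:

```elixir
# Sketch: sandboxed Lua evaluation from Elixir via Luerl (add :luerl to deps).
state = :luerl.init()

# Evaluate a Lua chunk. The script only sees what the Luerl state exposes,
# not the surrounding BEAM node - that's the "less-than-system-wide scope".
{:ok, [result], _new_state} = :luerl.do("return 1 + 2", state)
# result now holds the Lua value produced by the chunk
```

Since the interpreter state is just an Erlang term, each script run can get its own fresh, isolated state.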
To be pedantic, btw, Elixir compiles to Erlang Abstract Format, not the BEAM itself.
As a bit of a shameless plug, I have an entry for your list: https://otpcl.github.io/ (GitHub repo: https://github.com/otpcl/otpcl)
Luckily, computational parallelization is not a big challenge with newer libraries.
However, immutability might still become a challenge in terms of resources/performance. Rust is often used to patch that via Erlang NIFs.
You are correct in describing the raw performance of BEAM, but I'm not sure it's relevant for a typical IO-bound Erlang/Elixir app. Also, most large real-life apps would use NIFs written in a language capable of generating native code, i.e. C/C++ or Rust.
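As a hedged sketch of what the Rust-NIF route usually looks like on the Elixir side (using the community Rustler library; the module, OTP app, crate, and function names here are all hypothetical):

```elixir
defmodule MyApp.Native do
  # Rustler compiles the named Rust crate and loads it as a NIF when the
  # module is loaded. :my_app / "myapp_native" are placeholder names.
  use Rustler, otp_app: :my_app, crate: "myapp_native"

  # This stub is replaced by the native implementation at load time;
  # the body only runs if the NIF failed to load.
  def sum(_list), do: :erlang.nif_error(:nif_not_loaded)
end
```

The Rust side is a plain function annotated for Rustler; the key operational caveat is the one discussed above: long-running NIFs block a scheduler thread, so hot loops belong there but unbounded work does not.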
It's pretty much the only system I know of that provides for massive numbers of fairly scheduled, preemptive green threads. To do so it makes trade-offs that are not generally attractive, but I'd be surprised if it didn't beat the JVM here.
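A minimal illustration of that scale, in plain Elixir (the process count is arbitrary; each BEAM process costs on the order of a few hundred words of memory):

```elixir
parent = self()
n = 100_000

# Each spawn creates a cheap, preemptively scheduled BEAM process.
Enum.each(1..n, fn i -> spawn(fn -> send(parent, {:done, i}) end) end)

# Collect exactly one message per spawned process.
count =
  Enum.reduce(1..n, 0, fn _, acc ->
    receive do
      {:done, _i} -> acc + 1
    end
  end)

IO.inspect(count)
```

This runs comfortably on a laptop; the equivalent with 100,000 OS threads would not.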
Green threads are another matter, as they're not a feature that currently exists on the Java platform (there are somewhat similar features in Java-platform languages, like Kotlin's coroutines), but there is currently a project (which I'm leading) to add fibers (AKA user-mode threads, AKA lightweight threads, AKA green threads) to the JVM: https://wiki.openjdk.java.net/display/loom/
Perhaps because it wasn't intended to be used for everything —
"What sort of problems is Erlang not particularly suitable for?
Most (all?) large systems developed using Erlang make heavy use of C for low-level code, leaving Erlang to manage the parts which tend to be complex in other languages, like controlling systems spread across several machines and implementing complex protocol logic."
> Most (all?) large systems developed using Erlang make heavy use of C for low-level code
But they use C... because Erlang is too slow. Erlang isn't slow because they use C.
This is actually an argument for the need to make Erlang faster - we've got proof it's not fast enough for what they want to do!
And the worst thing is that when you start to use C code it's then a self-fulfilling prophecy - you can't optimise your Erlang code because it's now lots of calls to native code. I wrote my PhD on this problem.
> But they use C... because
They use C because they think it's a good tool for some things.
Erlang can and does focus on solving problems like distribution, concurrency, and managing high-level issues in distributed concurrent systems because speed within a single sequential task is an adequately solved problem, and the existing solution can be leveraged in systems using Erlang for higher-level concerns.
So, no, it's not wrong to say Erlang is slow because of the use of C for speed in Erlang systems.
A great deal of it is certainly just resource poverty compared to the JVM or the various JS runtimes. It may have some nice high-powered resources dedicated to it, but I'm still very comfortable with pron's assessment that the JVM has literally 100x more resources poured into it.
There are also some decisions that I would consider suboptimal for performance in the original Erlang specification though:
1. It extensively uses linked lists as a data structure. That was very fashionable in functional programming at the time, but the performance impact has, in relative terms, gotten worse as CPUs continue to speed up relative to RAM. Erlang does recover some of this vs. other functional programming languages in that the process model tends to keep the linked-list components closer together in RAM, because they'll stay in the process' arena rather than being spread arbitrarily over RAM, so walking a linked list a couple of times is at least very likely to fit well into L1, but this is still going to be a pervasive loss of performance.
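For the unfamiliar, Erlang/Elixir lists really are chains of cons cells, one heap cell per element, which is where the pointer-chasing comes from:

```elixir
# [1, 2, 3] is sugar for nested cons cells; each cell is a separate
# heap allocation, so traversal chases pointers rather than scanning an array.
list = [1 | [2 | [3 | []]]]
IO.inspect(list == [1, 2, 3])   # prints true

# Prepending is O(1): one new cell pointing at the existing list.
longer = [0 | list]

# Random access is O(n): Enum.at/2 walks the chain from the head.
IO.inspect(Enum.at(longer, 3))  # prints 3
```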
2. Haskell has done a lot of work in how to reconcile immutability with performance, and I'd still say there's a penalty there. Erlang hasn't, and a lot of it wouldn't apply (Erlang is strict), so you're still getting ~1980s/1990s+whatever optimizations we could add performance on a lot of the immutable stuff. It does do some of the obvious optimizations like rewriting obvious recursive algorithms to use mutability internally, but in general you're still going to pay some penalty here.
3. This one may be a bit controversial, so let me first say I deeply respect Erlang as a design, consider it to have been very far ahead of its time, and that given the general understandings of programming language theory at the time, that Erlang is a staggering accomplishment. That said, with the benefit of decades of hindsight on the design, the Erlang type system is deeply suboptimal. From what I can see, the primary purpose of the design is to ensure that you can't pass references between processes so that there's no way to modify a process' memory from another. The way it accomplished this is with a type system that has no references in it. But because this matter wasn't as well understood then as it is now, it also overcompensated with making everything immutable, removing all ability to have custom user types, and yet at the same time, having a fully dynamic type system in which there is only one type: "Erlang term". It's not necessary to kill all custom user types; you can just kill the things that make them unable to be passed across the network, which isn't everything. You don't need to make things immutable, you just need to ensure that references can't be passed. You don't need to make everything dynamically typed so that you don't have to synchronize definitions of those types; there are other ways that this can be dealt with. There's just a lot of these little things where one could finesse the result but the Erlang design hits it with a sledgehammer.
There's a case to be made for that, too. A lot of the finessing would be a lot more complicated (e.g., Rust does a much better job of managing mutability than Erlang, but look at the complication in its type system as a result; it doesn't come for free). As language design goes, there's a lot of room for debate.
But in terms of the performance impact, Erlang ends up with the worst aspects of dynamic typing on performance, and with not all that many of the benefits of dynamic typing. (You get some. There's more dynamism than initially meets the eye in Erlang. For instance, where you say lists:append(L1, L2), lists and append are just atoms. You can say L = lists, A = append, and then run L:A(L1, L2), and do conditionals on that, etc. It took me a long time to learn that. But you still miss out on a lot of the dynamism of dynamic languages, and even what there is is often harder to manage. And also, you pay in the performance for this, too.)
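The same dynamic-dispatch trick, written in Elixir for anyone following along there, goes through `apply/3`:

```elixir
# Module and function names are ordinary atoms, resolved at call time,
# so both can be chosen dynamically.
m = :lists
f = :append
result = apply(m, f, [[1, 2], [3]])
IO.inspect(result)  # prints [1, 2, 3]
```

It's exactly this late resolution that the comment above says you pay for in performance.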
An Erlang written for the modern era, but tuned a lot more for performance, isn't Go. But Go is probably close enough to inhibit success for any such effort; I observe that languages tend to inhibit the creation of things very close to where they are, but not the same. (Bizarrely, it seems to be easier to create something that is essentially a clone of a current language from a programming language theory perspective with an opinionated syntax gloss on it than to create a language that is mostly like another but with a couple of important PLT changes. I don't think I fully understand why this is, but the observation seems pretty solid. So, "Go + full process isolation" is unlikely to attract enough support to succeed, even though I'd personally love to see it.)
What are your thoughts on Pony (https://www.ponylang.io/)?
My sentiments as well.
> I think it's too opinionated to gain much attention in this climate
Possibly. Rust is pretty opinionated but it seems to be doing well. My feeling is that Pony's probably not going to "win" (i.e. be the main replacement for C/C++) but I'm hoping that, like Erlang, it's able to carve out a stable enough niche.
Moreover, from what I understand, unlike BEAM, the JVM prioritizes throughput rather than low, consistent latency.
I just feel they're designed for very different purposes. BEAM is most definitely not a general computing platform and would not perform well as such. In contrast, I'd imagine it'd be painful to develop concurrent stateful distributed systems on top of the JVM, regardless of the language.
Concurrent stateful distributed systems on the Java platform probably outnumber those on BEAM 100 to 1, but I agree that we absolutely love Erlang's relevant constructs, which is why we're bringing them to the Java platform.
The rationale for why Rich Hickey didn't go for the actor model is actually documented here: https://clojure.org/about/state
I agree with you that the JVM would probably also have made it more difficult to incorporate and leverage, though Scala seems to have done that pretty successfully, and Fantom has successfully gone that route on the JVM as well (though the language is niche). So I'm not totally sure either.
I don't think introducing the BEAM is the hard part, I think it's all about the language barrier.
Using the language for data transformations and such is a breeze but learning the concurrency primitives, message passing, OTP tooling and behaviors is where the real work is in my opinion, when building distributed systems on top of BEAM.
I've been able to introduce the BEAM in a large company (where I work), but the barrier has been that they don't know it; they know the JVM very well, and they don't care about the language as long as it can produce a runnable jar.
I would really use the JVM more if I could program it in Elixir.
I'm a shallow Clojure user; I like Clojure, but every time I look into a 'modern' Clojure code base I see a lot of stuartsierra/component. I know it's a high-quality library, but it reads like a sign of compromise - 'we still need stateful components eventually; although we already have 5 different ways to deal with state, we need another one'.
Despite what Rich said about actors, state in Erlang/OTP is better modeled. State (the process mechanism) not only plays well with immutable functional languages but is also much more robust than in OO languages. And it greatly simplifies the mental model - it removes the need for atoms/agents/vars/objects, etc.
Another neat thing, of course, no awkward loop/recur anymore.
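A sketch of what "state as a process" means, in plain Elixir with no OTP machinery (`Counter` is a made-up name; GenServer wraps this same pattern):

```elixir
defmodule Counter do
  # The state lives in the recursive loop's argument. "Mutation" is just
  # recursing with a new value; all access happens via message passing,
  # so no other process can touch the state directly.
  def start(initial), do: spawn(fn -> loop(initial) end)

  defp loop(state) do
    receive do
      {:add, n} ->
        loop(state + n)

      {:get, caller} ->
        send(caller, {:value, state})
        loop(state)
    end
  end
end

pid = Counter.start(0)
send(pid, {:add, 5})
send(pid, {:get, self()})

value =
  receive do
    {:value, v} -> v
  end

IO.inspect(value)  # prints 5
```

One mechanism covers what atoms, agents, and refs each handle separately in Clojure.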
> The points he makes are of course very good. For example, when no state is shared between processes there is some communication overhead, but this isolation is also an advantage under a lot of circumstances. He also mentions here that building for the distributed case (a.k.a processes and message passing) is more complex and not always necessary, so he decided to optimise for the non-distributed case and add distribution to the parts of the system that need it. Rich Hickey calls Erlang "quite impressive", so my interpretation of these writings is that they are more about exposing the rationale behind the decisions and the trade-offs he made when designing Clojure (on the JVM), than about disregarding the actor model.
That would be my understanding as well. Clojure and Erlang are ideologically quite close, just made with slightly different purposes: Erlang, to be distributed from the ground up; Clojure, to handle mutability at a somewhat lower level and to be on the JVM, where mutability is the norm.
I personally like LFE a bit better because of the more-direct mappings to BEAM (as you mentioned). Lisp Flavoured Erlang is a very honest name; it feels like you're writing Erlang, just with a nice and consistent Lisp syntax.
That said, I think Clojerl is pretty neat. I tend to find Clojure's macro syntax a bit cleaner than LFE's Common-Lisp style (I still get tripped up on commas). Also, Lisp-1 semantics are a lot more reasonable...I still hate putting `funcall`s everywhere.
Any good recommendations for a Clojure book for a seasoned programmer but with little exposure to Lisps?
I really want to love Elixir but dynamic typing is a NO for me.
I maintain a library for converting Erlang Dialyzer messages to Elixir (Erlex), which I hope to one day be able to retire in favor of stronger Elixir/Erlang communication (AST instead of string communication, e.g., and direct diffs between types).
> Clojure is a Lisp and as such comes with all the goodies Lisps provide. Apart from these Clojure also introduces powerful abstractions such as protocols, multimethods and seqs, to name a few. [...] It is fair to say that combining the power of the Erlang VM with the expressiveness of Clojure could provide an interesting, useful result to make the lives of many programmers simpler and make the world a happier place.
Also, at this point Clojure is becoming a kind of nice foundational language with multiple host targets. So if you know Clojure, you can leverage the JVM, various JS runtimes, the CLR, and now also the BEAM, etc. Basically, to someone who already knows Clojure, it's a nice way to get access to more platforms.
Btw, I use the iOS version a lot to learn.
So I'm not sure what you were saying about Elixir.
Though my opinion on the matter is deeply colored and probably unreliable because I'm one of those weirdos that thinks that Erlang syntax is actually quite nice for the most part.
Also, I'm in no position to complain since currently my technical life (what little is left not making slide decks) now revolves around Rust, Agda, and TLA+ (god help me).
Also, I remember trying to do something with the pipe operator that I couldn't get working due to the syntax of the language, but which would've been trivial in Clojure using one of the threading macros.
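My guess (the example values below are my own) is that the snag was that Elixir's `|>` only pipes into the first argument, whereas Clojure's `->>`/`as->` can thread into any position; `then/2` (Elixir >= 1.12) is the usual workaround:

```elixir
# |> inserts the left-hand value as the FIRST argument of the call:
doubled = [1, 2, 3] |> Enum.map(fn x -> x * 2 end)
IO.inspect(doubled)  # prints [2, 4, 6]

# To thread into any other position (what Clojure's ->> or as-> allow),
# wrap the call with then/2:
scaled = 10 |> then(fn n -> Enum.map([1, 2, 3], fn x -> x * n end) end)
IO.inspect(scaled)   # prints [10, 20, 30]
```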
You might have wanted to say something else but this is how you sounded to me. Am I wrong?
Yes, there is a community around Clojure that would feel more at home with the BEAM if they could use Clojure there... and this is likely the answer to my first sentence... but once you get to a Lisp, it seems like the leap to another Lisp is less far, and LFE, I would think, is better established.
So I guess the real question would be, for those outside the Clojerl project: are the benefits of Clojure specifically on the BEAM worth the cost of foregoing a (possibly/likely) more mature Lisp that already exists for the BEAM?
They've done a wonderful job with Phoenix and Elixir. I think LiveView will be developed to be as good as the presence feature, and it will eventually be good enough.
Anyway, I still don't see why I would need it with Clojure if I can have a single codebase and language for backend and frontend and render whatever I want at either end.
Otherwise, none of the Python semantics make any sense on the BEAM, so I'm not sure that's that helpful, but who knows.