"The primary motivator for Lumen's development was the ability to compile Elixir applications that could target WebAssembly, enabling use of Elixir as a language for frontend development."
I had to dig to learn what BEAM is :)
"Bogumil Hausman's next machine was called BEAM (Bogdan's Erlang Abstract Machine). It was a hybrid machine that could execute both native code and threaded code with an interpreter. That allowed customers to compile their time-critical modules to native code and all other modules to threaded BEAM code. The threaded BEAM in itself was faster than JAM code."
I now feel a little less bad that I couldn't figure out the acronym :)
I'll be honest: I didn't think it would be possible to implement something like the BEAM's preemptive scheduler in WebAssembly until I saw the presentation. Turns out it's very doable with continuations. The timing should be right in the next couple of years as both Lumen and some of the bigger components of WebAssembly mature. Similar high-level language/runtime -> WebAssembly projects like Blazor (C#) are also pending features like more native control over the DOM without going through JS interfaces (see here: https://github.com/WebAssembly/interface-types/blob/master/p... for more technical details).
Even if client-side UI frameworks don't pan out for Lumen, I can see Lumen being valuable for edge-computation especially where concurrency and fault-tolerance are important.
As Lumen continues to develop we'll continue to experiment with ideas and see what works best. I actually have a dbmon demo built, but Hans and Luke were hand-compiling the source and the DOM AST was just too much, which is why the presentation had a very simple div/p tag. Once the compiler is ready for use I hope to write more about the dbmon demo and show off some crashing table rows. Should be cool!
It's amazing so much has been done already considering the team only started last February. Writing all those BIFs can't be easy! Lumen is definitely a compelling challenge.
The tl;dr is that with Web Workers as the threading model that browsers are forcing WASM to use, the memory footprint for BEAM processes is going to be larger. This is of course a concern for us. We've been talking about distributing many processes across a few Web Workers, but ultimately we'd like to see the vendors give up on Web Workers as being the implementation of the WASM threading spec.
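A rough sketch of that "many processes across a few workers" idea, in Python, with OS threads standing in for Web Workers and generators standing in for BEAM processes (all names here are hypothetical, not Lumen's actual design):

```python
import queue
import threading

def spawn_counter(name, reductions, results):
    """A 'process': a generator that yields between reductions."""
    def proc():
        for _ in range(reductions):
            yield                    # hand control back to the scheduler
        results.append(name)         # runs when the process finishes
    return proc()

def worker(run_queue):
    # Round-robin scheduling: step each process once, then requeue it.
    while True:
        try:
            proc = run_queue.get_nowait()
        except queue.Empty:
            return                   # nothing left to schedule
        try:
            next(proc)
            run_queue.put(proc)      # not finished: schedule it again
        except StopIteration:
            pass                     # process exited normally

results = []
run_queue = queue.Queue()
for n in range(10):                  # 10 "processes", only 2 "workers"
    run_queue.put(spawn_counter(f"p{n}", 5, results))

workers = [threading.Thread(target=worker, args=(run_queue,))
           for _ in range(2)]
for w in workers:
    w.start()
for w in workers:
    w.join()
```

All ten processes finish even though only two workers exist, which is the essence of an N:M scheduler; the per-worker overhead (one Web Worker, one linear memory) is paid only a few times instead of per process.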
WASM is a VM and JS is a language.
Are you asking about the pro and con of a transpiler over a vm solution?
I'm talking about compiling to WASM vs. compiling to JS. Calling something a VM or a language is mostly a matter of convention. x86 machine code can also be viewed as a language, and compilers certainly don't care. A compiler translates code in one language into code in another (there is no precise definition for "transpile", which is why compiler people hate that word; colloquially "transpile" just means that the compilation's target language is considered to be at a similar abstraction level to the source).
Why would anybody want to implement it in WASM? If it's so that it could run in the browser, then compiling BEAM bytecode to JS would achieve the same goal, be easier to write and would likely result in a smaller and faster deliverable.
> Also BEAM GC isn't some generic GC, there's tons of crazy work going on with it to work with the Actor concurrency model.
There's nothing crazy going on in BEAM. As far as VMs go, it's near the bottom in terms of sophistication.
BEAM GC isn't very crazy; it (like a lot of other parts of BEAM) is delightfully simple. Each process gets its own stack and heap, and because of immutability, any references are always to older things, so BEAM GC is based on copy collectors: copy everything that's reachable from the current stack frame, adjusting the references as you go; when you're done, throw away the old heap/stack. It's a smidge more complicated because BEAM uses two generations for the collection, and has reference-counted binaries that are shared among all processes and require cleanup; but all of that is pretty straightforward too. Because of the language design, there's no need for advanced GC like in other languages; but if you transpiled to other languages, their GC would work fine.
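The copying scheme described above fits in a few lines. A minimal sketch in Python (the real collector is in C, generational, and uses an iterative Cheney-style scan rather than recursion; the heap layout here is invented for illustration):

```python
def gc(heap, roots):
    """Semispace copy: move everything reachable from roots to a new heap."""
    new_heap = []
    forward = {}                     # old index -> new index (forwarding)

    def copy(idx):
        if idx in forward:           # already moved: reuse forwarding pointer
            return forward[idx]
        tag, *fields = heap[idx]
        new_idx = len(new_heap)
        forward[idx] = new_idx
        new_heap.append(None)        # reserve the slot before recursing
        if tag == "cons":            # copy children and adjust references
            new_heap[new_idx] = ("cons", copy(fields[0]), copy(fields[1]))
        else:
            new_heap[new_idx] = (tag, fields[0])
        return new_idx

    new_roots = [copy(r) for r in roots]
    return new_heap, new_roots       # the old heap is simply discarded

# Heap for the Erlang list [1, 2], plus one unreachable (garbage) cell:
heap = [
    ("int", 1),      # 0
    ("int", 2),      # 1
    ("int", 99),     # 2  <- garbage, no root reaches it
    ("nil", None),   # 3
    ("cons", 1, 3),  # 4  [2]
    ("cons", 0, 4),  # 5  [1, 2]
]
new_heap, new_roots = gc(heap, [5])
```

After collection the new heap holds five cells; the unreachable 99 is gone, and nothing was ever "freed" individually, which is why the scheme is so cheap for short-lived processes.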
If you want to push towards "transcode", that's probably the most likely alternative. It's shorter, too.
The GC aspect is a real issue though.
One major factor is the special semantics of execution in Erlang VM compared to most other languages. Of note are:
* Guaranteed tail calls. There is no way of writing a loop in Erlang other than recursion, so this point is really important. Although I can certainly imagine making this work with a JS target, we would have to perform certain code transformations that would most likely make the JS JIT a lot less effective.
* Preemption of Processes. Processes are the name of the lightweight actors used in the Erlang execution model. Each process is only allowed to run for a certain timeslice before execution has to be preempted. Although, again, I could imagine implementing this in a similar way as I would implement guaranteed tail calls, this would fully break up the control flow of a function, further disadvantaging the JS JIT.
* The semantics of the language term system. While this is probably one of the smallest issues for a JS target, it would still require us to implement and frequently call into relatively complex term operations that are implemented on top of the JS object model. I can't imagine this interacting well with the above two points in terms of performance.
* Concurrency/message passing model. One of our goals is exploring writing more fully featured applications in Erlang and then running them in the browser. As such, having the same concurrency model as the BEAM is critical. Using web workers and utilizing shared memory through SharedArrayBuffer (re-enabled in Chrome; let's hope Firefox gets somewhere with this soon too), we believe we can implement things in a way that behaves very similarly to the BEAM.
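The first two points above (tail calls and preemption) are often solved together. A toy illustration in Python of one standard shape (a trampoline plus a reduction budget; the constants and names are hypothetical, not Lumen's codegen):

```python
REDUCTIONS = 1000      # hypothetical timeslice (BEAM preempts after ~2000)

def count_down(n, acc):
    # A tail-recursive loop: instead of calling itself, the function
    # returns a ("call", fn, args) thunk, keeping the stack flat.
    if n == 0:
        return ("done", acc)
    return ("call", count_down, (n - 1, acc + n))

def run(fn, args):
    """Run one process until it finishes or exhausts its timeslice."""
    for _ in range(REDUCTIONS):
        result = fn(*args)
        if result[0] == "done":
            return result
        _, fn, args = result              # follow the tail call
    return ("preempted", fn, args)        # timeslice spent: reschedule

# Summing 1..5000 needs several timeslices but constant stack depth.
state = ("preempted", count_down, (5000, 0))
slices = 0
while state[0] == "preempted":
    state = run(state[1], state[2])
    slices += 1
```

The scheduler can requeue a preempted `(fn, args)` pair and run some other process in between, which is exactly the control-flow breakup the comment says would disadvantage a JS JIT: every call boundary becomes a dynamic dispatch through the loop.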
We want to invest heavily in the compiler in order to generate the best possible WASM code, which, given we play our cards right in codegen, should be further optimizable by the WASM engine. This involves inferring types and generating specialized code. Since most of Erlang's BIFs (built-in functions) accept a much narrower range of types than what a lot of other dynamic languages do, we can draw relatively good type information from that.
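A toy version of drawing type facts from narrow BIF signatures (Python; the signature table and instruction format are invented for illustration, not the compiler's IR):

```python
# Because a BIF like '+' only succeeds on numbers, a successful call
# constrains its arguments: later uses can skip dynamic type checks.
BIF_SIGNATURES = {                 # hypothetical, heavily simplified
    "+":      (("number", "number"), "number"),
    "length": (("list",),            "integer"),
}

def infer(instructions):
    """instructions: list of (dest_var, bif, arg_vars) -> {var: type}."""
    types = {}
    for dest, bif, args in instructions:
        arg_types, ret = BIF_SIGNATURES[bif]
        for var, t in zip(args, arg_types):
            types.setdefault(var, t)   # the call constrains its arguments
        types[dest] = ret              # and determines its result type
    return types

types = infer([
    ("n", "length", ("xs",)),   # xs must be a list; n is an integer
    ("m", "+",      ("n", "y")),# y must be a number; m is a number
])
```

Even this trivial forward pass recovers concrete types for every variable in the example, which is the kind of information that lets codegen emit unboxed WASM arithmetic instead of generic term operations.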
This above is all an answer specific to the WASM/JS question, but doesn't even touch on the further goals of the project. While WASM is our primary focus at the moment, we also support native targets.
As for compilation to JS, ElixirScript (https://github.com/elixirscript/elixirscript) is prior art here. I am not involved in this project, so I can't really answer to what design decisions they made, but it is interesting to compare with regardless.
Again, time will obviously tell how well all of this will work out, but from what we have seen thus far, our approach is fairly solid.
Native targets are another matter, of course.
Anyway, it would be interesting to see results. Beating BEAM's performance is not hard.
Thank you for your answer.
Interesting observation. The hype I've heard around Elixir and Phoenix makes it sound like BEAM's performance is amazing. But maybe that's only in comparison to Ruby (specifically the C Ruby interpreter).
The JIT enhancements and the new language features that have shipped in various JS engines in the last 3 years are commendable.
We've seen ever-growing complexity in building, maintaining, extending, and debugging client-side applications. There are several reasons for this, but in our opinion how JS handles concurrency (the event loop) is a primary factor. Erlang's concurrency model is superior (also our opinion) in how a developer reasons about concurrency. We want to bring this power to building client-side web apps.
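For contrast with the single shared event loop, a minimal sketch of the Erlang-style model in Python (threads and a `Queue` standing in for processes and mailboxes; names are illustrative only):

```python
import queue
import threading

class Process:
    """Each 'process' owns a private mailbox; the only way to interact
    with it is to send it a message (ideally immutable, no shared state)."""
    def __init__(self):
        self.mailbox = queue.Queue()

    def send(self, msg):
        self.mailbox.put(msg)

    def receive(self):
        return self.mailbox.get()    # blocks until a message arrives

received = []
logger = Process()

def logger_loop():
    # The process is a plain sequential loop over its own mailbox;
    # no callbacks, no shared event loop to reason about.
    while True:
        msg = logger.receive()
        if msg == "stop":
            return
        received.append(msg)

t = threading.Thread(target=logger_loop)
t.start()
for i in range(3):
    logger.send(("log", i))
logger.send("stop")
t.join()
```

The appeal is that each process reads as straight-line code over its mailbox, while in event-loop JS the same logic is scattered across callbacks or promise chains.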
2. I'm afraid of [it] being over-engineered, and hard to contribute to.
Lumen itself is just the beginning of what we hope is an ecosystem of tooling and potential framework pieces for building web applications. When building Elixir or Erlang applications, developers are not expected to contribute back to the BEAM or even to understand how it works. Lumen will be in a similar position. When we finally get to `mix install lumen` you will likely only be including Lumen as a dependency in your apps. Higher-level functionality will emerge that uses Lumen to compile to WASM. However, we highly encourage people with experience in Rust, compiler design, runtime design, Erlang/BEAM, or just general curiosity to get involved in developing Lumen :)
3. even Chris McCord said that in all his years of consulting, he's only gotten a handful of requests for these types of apps
Chris works here at DockYard, and this is because we don't put Chris on our PWA/SPA projects. While there is an obvious demand for LiveView to relieve people from needing a client side framework there are most definitely many clients and plenty of projects where client side apps make sense.
4. an alternative BEAM implementation that behaves like a regular VM rather than BEAM?
Not entirely, although some concessions need to be made when working within the security model of the web. I would very much like it if the browser vendors would give up the argument that Web Workers fulfill the threading requirement of WASM, as I personally feel that is disingenuous. I'm hoping that as Lumen develops and (hopefully) picks up steam/influence we can start to see some of these features land for real.
5. Due to size issues I probably wouldn't want to ship the entire BEAM to my users
Correct, and we aren't advocating for this. Lumen is currently at a very early stage of development and we haven't even implemented compilation optimizations like dead code elimination. There are many other WASM-specific optimizations that could cut the footprint size by 50%, and this is before we even gzip, which WASM is designed to respond well to despite being a binary format.
I'll keep a running list of responses in this thread if more questions come up.
Some people also just don't like JS so there is that.
Another shallow comparison is that LiveView isn't meant to compete with modern progressive SPAs. While it can update the DOM with data from the server and is very good at doing that, it has limitations such as no offline support. Chris (the speaker in the videos) also notes that if your clients require very good latency or a very complex UI (think Google Sheets) then a JS framework will probably better suit your needs. However, if you're building the majority of websites (CRUD-type pages) that a lot of us web developers do, my personal opinion is that LiveView will be able to do everything needed.
 Announcing LiveView @ ElixirConf 2018 - https://www.youtube.com/watch?v=Z2DU0qLfPIY
 LiveView Update @ ElixirConf 2019 - https://www.youtube.com/watch?v=XhNv1ikZNLs
I'm not trying to challenge you, I just work on VMs and I don't understand the thought process behind such an unusual decision, but that could be because I don't know much about WASM. What am I missing?
I really just don't get the "why", even after watching the ElixirConf presentation  about it. Like what is the use-case for this, specifically?
I'm sure they have their reasons for taking on such a project but as an outsider I would have liked to see more announcements that apply to what the majority of people are using Elixir for today (web apps). There's Nerves too which I'm not into personally, but there were things at the conf for that so I'm sure that crowd was happy.
For example, telemetry was announced over a year ago with a hint that it was being worked on next and would be implemented into Phoenix as a web dashboard in the future, but there was no hint of that mentioned this year -- at least not in any of the Phoenix-related talks or Chris' keynote. I also didn't see it in any of the other talks' titles. But telemetry is one of those things where, if it existed in the form they talked about last year, it would be one of the biggest killer features of any web framework available today (not a single one does what they proposed).
You can learn more from Arkadiusz Gil's talk at ElixirConf EU (https://www.youtube.com/watch?v=cOuyOmcDV1U). I also talked about it in my keynote (someone posted a video below), and Marlus also gave a demo at this ElixirConf of how we plan to use Telemetry in Broadway and how a dashboard might look (https://www.youtube.com/watch?v=tPu-P97-cbE).
At the same time, please remember that whatever we announce is not a promise, and the best way to make it happen is to get involved. :) For example, Telemetry is out there, Phoenix, Ecto and Plug already use it, so there is nothing stopping anyone from implementing the dashboard right now!
That is very good to hear.
Is it still planned for Phoenix 1.x to have that /metrics dashboard as something that comes out of the box with little to no configuration or having to set up external tools?
The one mentioned in the 2018 talk you linked here: https://youtu.be/cOuyOmcDV1U?t=2147
Currently there is the EEF Observability WG, which is working on integrating OpenTelemetry, so it will not be integrated into Phoenix, but will be a more general, cross-language solution for monitoring.
Source: I am part of that WG.
And the end goal was to have a Phoenix web UI that would come out of the box, that you could go to and see a bunch of really useful things about your server's health and the BEAM, but more importantly app-specific things like DB query times and all of those interesting stats you would want to see. The beauty of it was you would get all of this for free / nearly free in terms of the work you had to do as an app developer.
Did all of that get scratched for OpenTelemetry? Do you happen to know of a timeline when end users could expect to see the benefits of this new standard in Elixir apps?
The official website at https://opentelemetry.io/ doesn't show Elixir as being a supported language.
Erlang `telemetry` is something different from OpenTelemetry.
> Phoenix (and other libraries like Ecto, etc.) would use it
And this is a fact: Plug, Phoenix, and Ecto are all using Erlang's Telemetry library.
> And the end goal was to have a Phoenix web UI that would come out of the box that you could goto and see a bunch of really useful things about your server's health
This will probably be implemented as an external library in the form of OpenTelemetry, as I said.
> Did all of that get scratched for OpenTelemetry?
No, OpenTelemetry and Erlang's Telemetry have different goals. Erlang's Telemetry is meant to be a lightweight metrics-logging library with an API for defining exposed metrics. OpenTelemetry is an OTP application that will ingest these metrics and expose them to external services like DataDog or Prometheus.
> The official website at https://opentelemetry.io/ doesn't show Elixir as being a supported language.
[Not all languages are listed on the website](https://github.com/open-telemetry/opentelemetry-erlang).
It seems the momentum is currently on LiveView.
I wasn't at the event, and I wasn't able to find Jose's keynote on: https://www.youtube.com/channel/UC0l2QTnO1P2iph-86HHilMQ/vid...
Seems they uploaded every video from the conference except his (at least at the time of writing this comment).
I asked him on IRC and he mentioned having video trouble during the talk. I really hope it was recorded and is salvageable!
Found it in my history, it seems they delisted it.
I guess it was unlisted because of all of the slide flickering, but to anyone who is interested in watching it, it's very watchable and you can make everything out.
Being able to ship a single binary (like with Go or Rust) sounds handy, but shipping a tarball hasn't been that much of a challenge either. Being able to run BEAM code in the browser also sounds handy, but due to size issues I probably wouldn't want to ship the entire BEAM to my users either, right?
Can someone with a more thorough understanding of Lumen share some thoughts on this?
However, if a BEAM in the browser is viable, we could do interesting things: a React Fiber equivalent could just be processes.
Anyone that wants to reduce the influence that Google and Facebook have on the Web should look into the Erlang/Elixir ecosystem.