Lumen: An alternative BEAM implementation, designed for WebAssembly (github.com)
218 points by seeekr 50 days ago | 73 comments



This is the gist of it:

"The primary motivator for Lumen's development was the ability to compile Elixir applications that could target WebAssembly, enabling use of Elixir as a language for frontend development."

I had to dig to learn what BEAM is :)

"Bogumil Hausman's next machine was called BEAM (Bogdan's Erlang Abstract Machine). It was a hybrid machine that could execute both native code and threaded code with an interpreter. That allowed customers to compile their time-critical modules to native code and all other modules to threaded BEAM code. The threaded BEAM in itself was faster than JAM code."

http://blog.erlang.org/beam-compiler-history/

I now feel a little less bad that I couldn't figure out the acronym :)


Well, if you've never worked with Erlang or Elixir, it's quite normal not to know that.


The readme should have a one-sentence explanation of BEAM, or at least mention "Erlang VM" in parentheses.


My favorite part of the presentation was the correlation between the DOM tree and a supervision tree. If you've looked into the Scenic project (https://github.com/boydm/scenic) you know how this Process & Supervision model shows real promise for client-side UI concurrency. As your states are changing you don't want one bad state to crash and cascade elsewhere - supervision trees let you contain and recover from failures gracefully.
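A minimal sketch of that containment idea in plain JavaScript (the supervisor, restart policy, and counter child here are all invented for illustration, not how Lumen or Scenic actually work):

```javascript
// A supervisor wraps a child render/update function and restarts it
// from a known-good initial state when it crashes, instead of letting
// the failure cascade to the rest of the UI.
function supervise(makeChild, maxRestarts = 3) {
  let restarts = 0;
  let child = makeChild();
  return function (input) {
    try {
      return child(input);
    } catch (err) {
      if (++restarts > maxRestarts) throw err; // give up and escalate
      child = makeChild(); // restart with a clean initial state
      return undefined;    // the crashing input is dropped
    }
  };
}

// A stateful child that crashes on bad input.
const makeCounter = () => {
  let count = 0;
  return (x) => {
    if (x < 0) throw new Error("bad state");
    return (count += x);
  };
};

const counter = supervise(makeCounter);
console.log(counter(5));  // 5
console.log(counter(-1)); // undefined: the child crashed and was restarted
console.log(counter(3));  // 3, not 8 -- the restarted child has fresh state
```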

I'll be honest, I didn't think it would be possible to implement something like the BEAM's preemptive scheduler in WebAssembly until I saw the presentation. Turns out it's very doable with continuations. The timing should be right in the next couple of years as both Lumen and some of the bigger components of WebAssembly mature. Similar high-level language/runtime -> WebAssembly projects like Blazor (C#) are also pending features like more native control over the DOM without going through JS interfaces (see here: https://github.com/WebAssembly/interface-types/blob/master/p... for more technical details).

Even if client-side UI frameworks don't pan out for Lumen, I can see Lumen being valuable for edge-computation especially where concurrency and fault-tolerance are important.


Thanks, we are not entirely convinced that a 1:1 relationship of process to DOM element is going to be where we land ultimately, but it most definitely is a fun and interesting experiment. And it does align well with many of the OTP concepts. It really comes down to performance numbers and further research to know what direction a rendering engine should take. Personally, unless a rendering engine in Lumen is doing something OTPish (i.e. supervision trees) I don't see why people would choose a Lumen solution over existing works. I think I mentioned it in the keynote, but the performance is secondary to the architectural perspective that Lumen can bring to client-side development.

As Lumen continues to develop we'll continue to experiment with ideas and see what works best. I actually have a dbmon demo built, but Hans and Luke were hand-compiling the source and the DOM AST was just too much, which is why the presentation had a very simple div/p tag. Once the compiler is ready for use I hope to write more about the dbmon demo and show off some crashing table rows. Should be cool!


Yeah, at the very least I expect to see Processes used like LiveView when you need containers/reactors for state and/or run-time control of messages between components. I wonder if an approach like Svelte (https://github.com/sveltejs/svelte), which compiles into DOM-updating code, could be enhanced with the concurrency the BEAM provides.

It's amazing so much has been done already considering the team only started last February. Writing all those BIFs can't be easy! Lumen is definitely a compelling challenge.


How many bytes does each process consume when mapped to a single DOM element?


We've done some benchmarking to help understand the differences: https://github.com/lumen/threading-benchmarks

The tl;dr is that with Web Workers as the threading model browsers are forcing WASM to use, the memory footprint for BEAM processes is going to be larger. This is of course a concern for us. We've been talking about distributing many processes across a few Web Workers, but ultimately we'd like to see the vendors give up on Web Workers as the threading implementation for the WASM spec.


While WASM makes for a convenient compilation target for languages like C/C++/Rust/Zig, isn't JS a better target for languages that can benefit from a JIT and/or a GC? You'd get better performance for less effort and a smaller deliverable. Why compile to WASM when JS seems, at least to me, an obviously superior target? What great advantage does WASM have that it makes it worthwhile to lose on three important fronts?


Wait, what?

WASM is a vm and JS is a language.

Are you asking about the pros and cons of a transpiler over a VM solution?

For one thing BEAM is the Erlang VM which Elixir and many other languages run on. Why would anybody want to implement it in Javascript? BEAM does a lot of crazy things, including a preemptive scheduler. Also BEAM GC isn't some generic GC, there's tons of crazy work going on with it to work with the Actor concurrency model.


> WASM is a vm and JS is a language.

I'm talking about compiling to WASM vs. compiling to JS. Calling something a VM or a language is mostly a matter of convention. x86 machine code can also be viewed as a language, and compilers certainly don't care. A compiler translates code in one language into code in another (there is no precise definition for "transpile", which is why compiler people hate that word; colloquially "transpile" just means that the compilation's target language is considered to be at a similar abstraction level to the source).

> Why would anybody want to implement it in Javascript?

Why would anybody want to implement it in WASM? If it's so that it could run in the browser, then compiling BEAM bytecode to JS would achieve the same goal, be easier to write and would likely result in a smaller and faster deliverable.

> Also BEAM GC isn't some generic GC, there's tons of crazy work going on with it to work with the Actor concurrency model.

There's nothing crazy going on in BEAM. As far as VMs go, it's near the bottom in terms of sophistication.


Exactly true! C, for example, is a VM!


> Also BEAM GC isn't some generic GC, there's tons of crazy work going on with it to work with the Actor concurrency model.

BEAM GC isn't very crazy; it (like a lot of other parts of BEAM) is delightfully simple. Each process gets its own stack and heap, and because of immutability, references always point to older things, so BEAM GC is based on copying collectors: copy everything that's reachable from the current stack frame, then copy and adjust the references; when you're done, throw away the old heap/stack. It's a smidge more complicated because BEAM uses two generations for the collection, and has reference-counted binaries that are shared among all processes and require cleanup; but all of that is pretty straightforward too. Because of the language design, there's no need for advanced GC like in other languages; but if you transpiled to other languages, their GC would work fine.
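A toy semispace copying collector in JavaScript, in the spirit of the description above (the heap layout and names are invented for illustration; a real BEAM heap is a flat array of tagged words, not JS objects):

```javascript
// Copy everything reachable from the roots into a fresh to-space,
// then discard from-space. Each object is { fields: [...] }, where a
// field is a number or a pointer { ref: index } into the heap array.
function collect(fromSpace, roots) {
  const toSpace = [];
  const forward = new Map(); // from-space index -> to-space index

  function copy(idx) {
    if (forward.has(idx)) return forward.get(idx); // already moved
    const newIdx = toSpace.length;
    forward.set(idx, newIdx);
    // Shallow copy first; the scan loop below fixes up pointers.
    toSpace.push({ fields: fromSpace[idx].fields.slice() });
    return newIdx;
  }

  const newRoots = roots.map(copy);

  // Cheney-style scan: walk objects already copied, copying their
  // referents as we encounter them (which grows toSpace).
  for (let scan = 0; scan < toSpace.length; scan++) {
    const obj = toSpace[scan];
    obj.fields = obj.fields.map((f) =>
      f && typeof f === "object" ? { ref: copy(f.ref) } : f
    );
  }
  return { heap: toSpace, roots: newRoots };
}

// Objects 0 and 2 are reachable from the root; 1 is garbage.
const heap = [
  { fields: [{ ref: 2 }, 10] }, // 0: points to 2
  { fields: [99] },             // 1: unreachable
  { fields: [42] },             // 2
];
const { heap: newHeap } = collect(heap, [0]);
console.log(newHeap.length); // 2 -- the garbage object was never copied
```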


Right, but can we stop saying "transpile"? :)


Sorry, language compression demands it.

If you want to push towards transcode, that's probably the most likely alternative. It's shorter too.


Or compile.


The browser's JIT engine is tuned for idiomatic javascript. It doesn't work quite as well when JS is used as a target language.

The GC aspect is a real issue though.


Well, it would be interesting to see some benchmarks, but it would need to work pretty terribly to lose to AOT compilation of an untyped bytecode to WASM, unless it's a really good compiler with some excellent type inference.


The longer explanation of what I was trying to say is that a JIT engine isn't a magic sauce that you can sprinkle on top of a language implementation to make it go very fast. As of today, making a good "cross language" JIT is still very much an open research question.

For example, suppose we want to make an implementation of Elixir that runs in the browser. The most straightforward way to do this in a semantics-preserving way is to implement a BEAM virtual machine in Javascript, and use it to run untyped BEAM bytecode. The problem is that if we do this, the Javascript JIT is only able to see the control flow and the variables of the BEAM interpreter loop itself, and it isn't able to peek into the "meta level" to reason about the control flow and the variables of the Elixir program. As a result, we should expect that the Javascript JIT will be no faster than an AOT-compiled bytecode interpreter. In order to produce good code in this case we would need to use a JIT engine that understands and expects the code pattern of an interpreter loop, such as the one used in the PyPy JIT for Python.
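As a concrete (and entirely made-up) illustration, here is a tiny stack-machine interpreter in JavaScript. However hot the guest program gets, the JIT only ever compiles this one dispatch loop; it never sees the guest program's own control flow. The opcodes and layout are invented, not real BEAM bytecode:

```javascript
// Toy bytecode interpreter: the JS JIT sees only this switch loop.
const PUSH = 0, ADD = 1, JMP_IF_ZERO = 2, HALT = 3;

function run(code) {
  const stack = [];
  let pc = 0;
  while (true) {
    const op = code[pc++];
    switch (op) {
      case PUSH: stack.push(code[pc++]); break;
      case ADD:  stack.push(stack.pop() + stack.pop()); break;
      case JMP_IF_ZERO: {
        const target = code[pc++];
        if (stack.pop() === 0) pc = target; // guest-level branch
        break;
      }
      case HALT: return stack.pop();
    }
  }
}

// Guest program: 2 + 3
console.log(run([PUSH, 2, PUSH, 3, ADD, HALT])); // 5
```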

The other approach you could take would be to compile the Elixir code more directly into Javascript, so that Elixir functions become Javascript functions, Elixir variables become Javascript variables, Elixir loops become Javascript loops, and Elixir objects become Javascript objects. This way, the Javascript JIT will have a much better chance of generating good code. But the problem you run into now is how to do this compilation while preserving the semantics of the original language. If the original language isn't similar to Javascript then performing such a translation can be quite challenging, and the end result might be some weird-looking Javascript that won't necessarily run very fast either.
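To make that concrete, here is a hypothetical hand-translation of a two-clause Elixir-style function into semantics-preserving JavaScript. The clause-head pattern checks and the function_clause fallback are what make the output "weird-looking" compared to idiomatic JS; this is a sketch, not any real compiler's output:

```javascript
// Hand-translated sketch of an Elixir-style function:
//
//   def sum([]), do: 0
//   def sum([h | t]), do: h + sum(t)
//
// Clause heads become runtime pattern checks; a failed match throws,
// mirroring Erlang's function_clause error.
function sum(list) {
  // clause 1: sum([])
  if (Array.isArray(list) && list.length === 0) return 0;
  // clause 2: sum([h | t])
  if (Array.isArray(list) && list.length > 0) {
    const h = list[0];
    const t = list.slice(1);
    return h + sum(t);
  }
  throw new Error("function_clause: no clause matching sum/1");
}

console.log(sum([1, 2, 3])); // 6
```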


You don't need it to be good, just to beat AOT compilation to WASM, which would likely not be very efficient. I don't see why JS is not similar enough to BEAM bytecode to achieve that.


If the solution is based around BEAM bytecode then WASM should beat Javascript. A WASM version of the bytecode interpreter is going to be at least as fast as a JS version of the bytecode interpreter, and the AOT version will have faster startup than the JIT version too.

The caveat is that this reasoning is only thinking in terms of raw speed of executing the bytecodes. Both the WASM and JS versions will still run the elixir bytecodes in "dynamically typed speed", as the JS JIT is not able to optimize at the "elixir level". Things get a bit more complicated if you add the garbage collector to the mix though because the WASM version can't reuse the highly-efficient Javascript GC.


Right, but I'm not talking about an interpreter but about compiling BEAM bytecode to JS.


Any proof of this? Are you talking about specific JIT engines, or all of them?


In my mind this is a multifaceted issue.

One major factor is the special semantics of execution in Erlang VM compared to most other languages. Of note are:

* Guaranteed tail calls. There is no way of doing a loop in Erlang other than recursing, so this point is really important. Although I can certainly imagine making this work with a JS target, we would have to perform certain code transformations that would most likely make the JS JIT a lot less effective.

* Preemption of Processes. Processes are the name of the lightweight actors used in the Erlang execution model. Each process is only allowed to run for a certain timeslice before execution has to be preempted. Although, again, I could imagine implementing this in a similar way as I would implement guaranteed tail calls, this would fully break up the control flow of a function, further disadvantaging the JS JIT.

* The semantics of the language term system. While this is probably one of the smallest issues for a JS target, it would still require us to implement and frequently call into relatively complex term operations that are implemented on top of the JS object model. I can't imagine this interacting well with the above two points in terms of performance.

* Concurrency/message passing model. One of our goals is exploring writing more fully featured applications in Erlang and then running them in the browser. As such, having the same concurrency model as the BEAM is critical. Using web workers and shared memory through SharedArrayBuffer (re-enabled in Chrome; let's hope Firefox gets somewhere with this soon too), we believe it is possible to implement things in a way that behaves very similarly to the BEAM.
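A sketch of how the first two bullets interact with a JS target (everything here is illustrative, not Lumen's actual codegen): tail calls can be emulated with a trampoline, and preemption with a reduction budget, but both replace direct calls with a dispatch loop of exactly the kind a JS JIT optimizes less well:

```javascript
// Tail calls: instead of calling itself, a function returns a thunk,
// and a driver loop ("trampoline") invokes thunks until a value comes
// back. The JS call stack never grows, but direct control flow is gone.
function trampoline(step, budget = 2000) {
  let reductions = 0;
  while (typeof step === "function") {
    if (++reductions >= budget) {
      // Preemption point: a real scheduler would park this process
      // here and run another one, resuming `step` later.
      reductions = 0;
    }
    step = step();
  }
  return step;
}

// Erlang-style loop via recursion: count down, accumulating.
function count(n, acc) {
  return n === 0 ? acc : () => count(n - 1, acc + 1);
}

console.log(trampoline(count(1000000, 0))); // 1000000, no stack overflow
```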

We want to invest heavily in the compiler in order to generate the best possible WASM code, which, given we play our cards right in codegen, should be further optimizable by the WASM engine. This involves inferring types and generating specialized code. Since most of Erlang's BIFs (built-in functions) accept a much narrower array of types than what a lot of other dynamic languages do, we can draw relatively good type information from that.

The above is all an answer specific to the WASM/JS question, but doesn't even touch on the further goals of the project. While WASM is our primary focus at the moment, we also support native targets.

As for compilation to JS, ElixirScript (https://github.com/elixirscript/elixirscript) is prior art here. I am not involved in this project, so I can't really answer to what design decisions they made, but it is interesting to compare with regardless.

Again, time will obviously tell how well all of this will work out, but from what we have seen thus far, our approach is fairly solid.


With the possible exception of tail calls (which would require a bit of work), I don't see how any of those issues won't be better served by JS than by WASM, even if you did work against the JIT, and for less effort and with more features. It's just really hard to beat decent JITs with AOT compilation for an untyped language -- and V8's JIT is more than just decent -- even when the JIT is having a bad day, not without some very powerful whole-program analysis, and that costs a lot of time and money. But V8 is especially well optimized for "broken" or "preempted" control flow, as that's just how JS is written, with or without async/await.

Native targets are another matter, of course.

Anyway, it would be interesting to see results. Beating BEAM's performance is not hard.

Thank you for your answer.


> Beating BEAM's performance is not hard.

Interesting observation. The hype I've heard around Elixir and Phoenix makes it sound like BEAM's performance is amazing. But maybe that's only in comparison to Ruby (specifically the C Ruby interpreter).


Beam’s performance is amazing if you are interested in context switching micro processes.


As long as you're not actually doing anything in those processes ;)


My feeling exactly!

The JIT enhancements and new language features that shipped in various JS engines in the last 3 years are commendable.


Hi, I'm Brian, one of the members of the Lumen team. I'll try to address some of the questions I've seen in this thread:

1. Why?

We've seen ever-growing complexity in building, maintaining, extending, and debugging client-side applications. There are several reasons why this is, but in our opinion how JS handles concurrency (the Event Loop) is a primary factor. Erlang's concurrency model is superior (also our opinion) in how a developer reasons about concurrency. We want to bring this power to building client-side web apps.

2. I'm afraid of [it] being over-engineered, and hard to contribute to.

Lumen itself is just the beginning of what we hope is an ecosystem of tooling and potential framework pieces for building web applications. When building Elixir or Erlang applications, developers are not expected to contribute back to the BEAM or even to understand how it works. Lumen will be in a similar position. When we finally get to `mix install lumen` you will likely only be including Lumen as a dependency in your apps. Higher-level functionality will emerge that uses Lumen to compile to WASM. However, we highly encourage people with experience in Rust, compiler design, runtime design, Erlang/BEAM, or just general curiosity to get involved in developing Lumen :)

3. even Chris McCord said that in all his years of consulting, he's only gotten a handful of requests for these types of apps

Chris works here at DockYard, and this is because we don't put Chris on our PWA/SPA projects. While there is an obvious demand for LiveView to relieve people from needing a client-side framework, there are most definitely many clients and plenty of projects where client-side apps make sense.

4. An alternative BEAM implementation that behaves like a regular VM rather than BEAM?

Not entirely, although some concessions need to be made when working within the security model of the web. I would very much like it if the browser vendors would give up the argument that Web Workers fulfill the threading requirement of WASM, as I personally feel that is disingenuous. I'm hoping that as Lumen develops and (hopefully) picks up steam/influence we can start to see some of these features land for real.

5. Due to size issues I probably wouldn't want to ship the entire BEAM to my users

Correct, and we aren't advocating for this. Lumen is currently in a very early stage of development and we haven't even implemented compilation optimizations like dead code elimination. There are many other WASM-specific optimizations that could cut the footprint by 50%, and this is before we even gzip, which WASM is designed to respond well to despite being a binary format.

I'll keep a running list of responses in this thread if more questions come up.


Something that I forgot to include in the keynote is that we are very adamant that Lumen not be viewed as a competing compiler/runtime for the BEAM. We don't want that responsibility and we most certainly don't want any drama that could come about from the perception that we are trying to fork a community. We are 100% committed to the BEAM being the primary implementation for Erlang/Elixir, and we will follow it as we target platforms that are currently unsupported by the BEAM. I can probably spell this out in the README at some point, but I would consider Lumen broken if it ever deviated from the BEAM. Code that runs in Lumen should always run on the BEAM.


I'd like to also add that Lumen as a concept for bringing the functionality and development environment of the BEAM to building web apps is very much still an experiment. We are still months away from having something that people can develop with, let alone proving that this is a viable idea. We think it is and we hope as Lumen continues to be developed that our concepts/ideas begin to land.


On another note, it seems this tooling could be used for the native BEAM as well. Perhaps C/C++ NIFs compiled to WASM for extra safety. Maybe as an alternative to HiPE, since it appears to be somewhat dead, and using WASM JITs might make sense. Exciting experiment!


Compiling to native in some form like WASI is on our roadmap. We are shying away from promoting this too much at the moment (but I don't want to be dodgy about it) as we want the current focus to be on WebAssembly. We know there will be interest and excitement about the potential of building native Erlang apps without the complexity of an underlying OS, but as it stands we cannot spread our resources too thin, so we've opted to tackle WASM first and then "go native".


Excellent to hear. It makes sense to keep your focus on the front-end, though the general idea of a WASM style "native" code would be very interesting for many of us as you say. There's one group running BEAM on bare metal using just an RTOS [1] but even just WASM code for hot paths would be handy instead of writing NIFs. It'll be fun to see how the project goes, best of luck!

1: https://www.grisp.org/


I would love to know a deep comparison of Elixir(LiveView or other libraries) Vs AngularJS or VueJS or ReactJS for building modern progressive SPA and how much productivity is achieved.


LiveView isn't meant to be a replacement for Angular/Vue/React. It is meant to address the many companies/projects that were once doing some light client-side functionality with jQuery or Prototype and have since gone full framework, and are now finding themselves with a much longer time to implement what was once simple. The overhead that frameworks impose is meant to manage complexity, but if the interactions are simple then that overhead is a liability.

Some people also just don't like JS so there is that.


Thanks for the reply.


I can't give a deep comparison since I haven't done anything too deep with LiveView (yet!) but I would encourage you to watch the keynotes on LiveView[1][2] if you're really interested. I have spent the last few years working with React / Redux on a big SPA and I believe the entire thing could easily be supported by LiveView; we don't do anything fancy, it's basically just a bunch of CRUD-based pages.

Another shallow comparison is that LiveView isn't meant to compete with modern progressive SPAs. While it can update the DOM with data from the server and is very good at doing that, it has limitations such as no offline support. Chris (the speaker in the videos) also notes that if your clients require very good latency or a very complex UI (think Google Sheets) then a JS framework will probably better suit your needs. However, if you're building the majority of websites (CRUD-type pages) that a lot of us web developers do, my personal opinion is that LiveView will be able to do everything needed.

[1] Announcing LiveView @ ElixirConf 2018 - https://www.youtube.com/watch?v=Z2DU0qLfPIY [2] LiveView Update @ ElixirConf 2019 - https://www.youtube.com/watch?v=XhNv1ikZNLs


If you want a shallow comparison: I'm a senior dev, and I hired a junior who went to a bootcamp to solve the problem of "bootstrap a react project" for me (honestly, I'm too old to figure out how to do that -- once the project is up I'm pretty productive in React). This took him about a week. I also gave him an instruction to "bootstrap a Phoenix LiveView" for me, and even though he's less good at Elixir (which he learned on the job) than JS, he got something up and running in two days. He then later taught me how to stand up Phoenix LiveView, which amounted to "pointing me to the docs", and I built a Phoenix LiveView app from the docs (normally I have to watch a video).


I could be missing something important about WASM (or BEAM), but why are you compiling to WASM rather than JS? Seems to me that's losing on most fronts: performance, effort and code size. WASM is a reasonable compilation target for C; not so much for untyped languages with a GC, while JS is a decent compilation target exactly for such languages. What's the gain with WASM?


I think you're looking at this as a transpiler, it is not. We are implementing a runtime which most definitely wouldn't be performant or of a reasonable footprint size in JS.


Beating V8's GC and JIT would be extremely hard given they're of pretty good quality, and I don't even know if you could write a JIT in WASM (can you?). Do you think implementing an interpreter and a GC in WASM could be faster/smaller than exploiting one of the best JITs out there and easier/smaller than taking advantage of a built-in GC? Moreover, you seem to be giving up on dynamic code loading in exchange for worse performance.

I'm not trying to challenge you, I just work on VMs and I don't understand the thought process behind such an unusual decision, but that could be because I don't know much about WASM. What am I missing?


This question is better fielded by Paul, Luke, or Hans. I gave them the week off after the conference so I cannot promise they'll reply here on HN. If you want to contact me over email (brian at dockyard.com) I can try to have them follow up on this.


Thanks!


This isn't a complete answer to your question, but one note: they aren't building an interpreter in WASM. They are compiling Elixir to native WASM. Elixir code wouldn't be shipped to the client.


Of course, but compiling to JS seems like it could be faster, smaller and easier, and support more features.


Cool idea, cool technology, cool project.

I really just don't get the "why", even after watching the ElixirConf presentation [1] about it. Like what is the use-case for this, specifically?

[1] https://www.youtube.com/watch?v=uMgTIlgYB-U


Same. There was a quick slide joking about how this solves the fully offline client side browser app problem but even Chris McCord said that in all his years of consulting, he's only gotten a handful of requests for these types of apps.

I'm sure they have their reasons for taking on such a project but as an outsider I would have liked to see more announcements that apply to what the majority of people are using Elixir for today (web apps). There's Nerves too which I'm not into personally, but there were things at the conf for that so I'm sure that crowd was happy.

For example, telemetry was announced over a year ago with a hint that it was being worked on next and will be implemented into Phoenix as a web dashboard in the future but there was no hint of that mentioned this year -- at least not in any of the Phoenix related talks and Chris' keynote. I also didn't see it in any of the other talk's titles. But telemetry is one of those things where if it existed in the form they talked about last year it would be one of the biggest killer features of any web framework available today (not a single one does what they proposed).


Telemetry's core has been done for a while and it has been announced at previous events. That's why we didn't put much focus on it this year.

You can learn more from Arkadiusz Gil's talk at ElixirConf EU (https://www.youtube.com/watch?v=cOuyOmcDV1U). I also talked about it in my keynote (someone posted a video below), and Marlus also gave a demo at this ElixirConf of how we plan to use Telemetry in Broadway and how a dashboard may look (https://www.youtube.com/watch?v=tPu-P97-cbE).

At the same time, please remember that whatever we announce is not a promise, and the best way to make it happen is to get involved. :) For example, Telemetry is out there, Phoenix, Ecto and Plug already use it, so there is nothing stopping anyone from implementing the dashboard right now!


> For example, Telemetry is out there, Phoenix, Ecto and Plug already use it, so there is nothing stopping anyone from implementing the dashboard right now!

That is very good to hear.

Is it still planned for Phoenix 1.x to have that /metrics dashboard as something that comes out of the box with little to no configuration or having to set up external tools?

The one mentioned in the 2018 talk you linked here: https://youtu.be/cOuyOmcDV1U?t=2147


> hint that it was being worked on next and will be implemented into Phoenix as a web dashboard in the future but there was no hint of that mentioned this year

Currently there is an EEF Observability WG that is working on integrating OpenTelemetry, so it will not be integrated into Phoenix, but will be a more general, cross-language solution for monitoring.

Source: I am part of that WG.


Previously it was talked about that Elixir would create a telemetry library and then Phoenix (and other libraries like Ecto, etc.) would use it.

And the end goal was to have a Phoenix web UI that would come out of the box, that you could go to and see a bunch of really useful things about your server's health and the BEAM, but more importantly app-specific things like DB query times and all of those interesting stats you would want to see. The beauty of it was you would get all of this for free / nearly free in terms of the work you had to do as an app developer.

Did all of that get scratched for OpenTelemetry? Do you happen to know of a timeline when end users could expect to see the benefits of this new standard in Elixir apps?

The official website at https://opentelemetry.io/ doesn't show Elixir as being a supported language.


> Elixir would create a telemetry

Erlang `telemetry` is something different from OpenTelemetry.

> Phoenix (and other libraries like Ecto, etc.) would use it

And this is a fact. Plug, Phoenix, and Ecto are all using Erlang's Telemetry library.

> And the end goal was to have a Phoenix web UI that would come out of the box that you could goto and see a bunch of really useful things about your server's health

This will probably be implemented as an external library in the form of OpenTelemetry, as I said.

> Did all of that get scratched for OpenTelemetry?

No, OpenTelemetry and Erlang's Telemetry have different goals. Erlang's Telemetry is meant to be a lightweight metrics-logging library with an API for defining exposed metrics. OpenTelemetry is an OTP application that will ingest these metrics and expose them to external services like DataDog or Prometheus.

> The official website at https://opentelemetry.io/ doesn't show Elixir as being a supported language.

[Not all languages are listed on the website](https://github.com/open-telemetry/opentelemetry-erlang).


Current momentum is in liveview. I'm actually eagerly waiting for that to get up to a good version after the early adopters.


Yeah, LV is definitely worthy. I'm excited for it too. Once LV drops with debounce / file upload support that will be enough to get me to incorporate it into an existing project.


I think José Valim said in his talk that it was still targeted for Phoenix 1.5 but I'm not sure about the dashboard part.

It seems the momentum is currently on LiveView.


Oh nice.

I wasn't at the event, but I couldn't find Jose's keynote on: https://www.youtube.com/channel/UC0l2QTnO1P2iph-86HHilMQ/vid...

Seems they uploaded every video from the conference except his (at least at the time of writing this comment).

I asked him on IRC and he mentioned having video trouble during the talk. I really hope it was recorded and is salvageable!


https://youtu.be/oUZC1s1N42Q

Found it in my history, it seems they delisted it.


Great find. Yeah that clears up a lot about Telemetry.

I guess it was unlisted because of all of the slide flickering, but to anyone who is interested in watching it, it's very watchable and you can make everything out.


There was a technical issue during Jose's keynote. The video kept dropping out from his laptop. I don't know for certain but I suspect that is the reason why his video isn't public. Maybe they're going to re-record it but I cannot say for certain.


I watched that and appreciate the engineering effort put into the project. I'm afraid of it being over-engineered (maybe not), and hard to contribute to. It's complex enough that we'd probably be better off writing WebAssembly code by hand.


When used properly this might give the option to develop in Erlang for both client and server side.


I watched the ElixirConf 2019 keynote [1] as soon as it came up. While I was left excited about Lumen in general, I wasn't entirely sure what kinds of things Lumen would be most useful for.

Being able to ship a single binary (like with Go or Rust) sounds handy, but shipping a tarball hasn't been that much of a challenge either. Being able to run BEAM code in the browser also sounds handy, but due to size issues I probably wouldn't want to ship the entire BEAM to my users either, right?

Can someone with a more thorough understanding of Lumen share some thoughts on this?

[1] https://www.youtube.com/watch?v=uMgTIlgYB-U


I think they mentioned this in the keynote, but bytecode size is on their radar. To allow dead code elimination, they are not planning to support hot upgrades. This means that you wouldn't be shipping the whole OTP - only the parts that you're using.


What would hot upgrades entail in the context of Web and for example Lumen? What would its benefits or drawbacks be? Would it be useful? Would it raise security concerns? Just want to play with the idea. :)


If WASM does become as pervasive as the JVM, i.e. a universal VM of sorts, there will be plenty of IoT equipment using it, since the trend now for anything "embedded" that doesn't demand ultra-low power consumption is to use a full Linux system à la Raspberry Pi. A company optimizing for a tight memory/performance budget, and whose purpose does not require high uptime across the entire fleet, can simply use traditional blue-green deployments or something.


You won't be shipping the entirety of BEAM. That's the point of this announcement and what makes this so exciting and plausible.


Interesting project.

Although it's viable to transpile Erlang or Elixir to JavaScript, without the BEAM runtime everything seems pointless.

However, if a BEAM in the browser is viable, we could do interesting things: a React Fiber equivalent could just be processes.


So, an alternative BEAM implementation that behaves like a regular VM rather than BEAM? Not saying it’s useless, but it’s sure idiosyncratic.


This is some of the best news this summer!

Anyone that wants to reduce the influence that Google and Facebook have on the Web should look into the Erlang/Elixir ecosystem.


I'm excited about this project too, but how does it "reduce the influence that Google and Facebook have on the Web"? It seems pretty orthogonal if you ask me.


Considering how much sway Google has over the WASM spec group I would say that Google still has considerable influence upon this project.



