
Also, the BEAM bytecode compiler and runtime are incredibly slow and unoptimized. That doesn't matter much for process handling and IO-dominant workloads, but you would not want to run ordinary compute-bound tasks on it.

An AOT compiler with better optimizations will run circles around BEAM on benchmarks.
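For a concrete sketch of the kind of workload where that gap shows up (a hypothetical microbenchmark, not a measurement): naive doubly-recursive Fibonacci is pure function calls and arithmetic, with none of the process handling or IO that BEAM is tuned for.

    -module(fib).
    -export([bench/0]).

    %% Naive doubly-recursive Fibonacci: nothing but call and
    %% arithmetic overhead, i.e. exactly the kind of compute-bound
    %% work BEAM is not tuned for.
    fib(N) when N < 2 -> N;
    fib(N) -> fib(N - 1) + fib(N - 2).

    bench() ->
        {Micros, _Result} = timer:tc(fun() -> fib(30) end),
        io:format("fib(30): ~p us~n", [Micros]).

Compile the equivalent function with any decent AOT compiler and compare; the absolute numbers will vary by machine, but the gap is usually large.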




Thankfully it is not slow! Last I measured, its immutable data structures beat Scala's immutable data structures in benchmarks.


Scala is pretty slow, though :-).


> Also the BEAM bytecode compiler and runtime are incredibly slow and unoptimized

Can you provide any citations for the BEAM runtime being unoptimised? In my experience it has been very carefully optimised over many years, generally prioritising latency over throughput.
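For instance, the per-process preemption is a latency choice: a process is suspended once it burns through its reduction budget, so even a hot loop can't stall its scheduler. A minimal sketch you can try in a shell (start with erl +S 1 to pin everything to a single scheduler so the effect is unambiguous):

    %% The spinning process is preempted once it exhausts its
    %% reduction budget, so the shell stays responsive even on
    %% a single scheduler.
    1> spawn(fun Loop() -> Loop() end).
    2> io:format("still responsive~n").

That kind of fairness costs raw throughput on compute-heavy code, which is exactly the trade-off being discussed here.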


Citations? I've looked at it myself, having maintained several fast and slow VMs.

I don't see careful optimizations; it's rather sloppy. More like Perl and Python, unlike Lua, PHP, or V8.


It's widely known that it's IO-focused and unoptimised for, say, maths-heavy use cases, since it was built to be a zero-maintenance telecoms platform.


> zero maintenance telecoms platform

More like zero-downtime maintenance.


I'm more of an observer, since I'm not actively using Elixir, or Erlang, right now. I read that BEAM now supports JIT compilation. Doesn't this solve the performance issues for the most part?

EDIT: Apparently it's not an LLVM JIT, but that's beside the point.
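For what it's worth, on OTP 24+ you can ask the runtime which flavour it's running; as far as I know this is the relevant system_info key:

    %% Returns 'jit' when BeamAsm is active, 'emu' when the VM
    %% is running the classic interpreter (OTP 24+).
    1> erlang:system_info(emu_flavor).
    jit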


LLVM? Pretty sure they scrapped that for being slow; BeamAsm is a JIT written from scratch.

Edit: It actually uses part of AsmJit, not quite from scratch, my mistake.


Updated my comment


BEAM does have a JIT on some platforms (IIRC, amd64 and aarch64), but it's not an optimizing JIT like you might be familiar with from Java's HotSpot and similar systems.

In BeamAsm, the design is for the whole VM to be either interpreted (the status quo) or native (JIT). In JIT mode, all loaded modules are translated to native code as they're loaded; this needs to be fast or startup times suffer. IIRC there is an optimization pass, but it's simple. There's no later reoptimization of hot code paths either; it's a one-time process.

The main benefit of this process is removing a specific part of the interpretation overhead: instruction unpacking and dispatch. This can be significant for some applications and negligible for others, but it's really the main target; any other optimizations that happen are a bonus.
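If you want to see roughly what that dispatch overhead costs, OTP 24+ ships both flavours, and (if I'm remembering the switch correctly) -emu_flavor lets you pick one at startup, so you can time the same pure-compute fun under each:

    $ erl -emu_flavor emu
    1> timer:tc(fun() -> lists:foldl(fun(X, A) -> A + X * X end, 0,
                                     lists:seq(1, 1000000)) end).

    $ erl -emu_flavor jit
    1> timer:tc(fun() -> lists:foldl(fun(X, A) -> A + X * X end, 0,
                                     lists:seq(1, 1000000)) end).

The delta between those two timings is mostly the removed unpacking and dispatch work.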


It does JIT with AsmJit (not LLVM).


Updated my comment



