
The BEAM Has Spoiled Me - clessg
https://gvaughn.github.io/2020/08/08/beam_spoiled_me.html
======
QuadrupleA
Article piqued my interest, but also left me scratching my head a bit -
everything mentioned is kinda vague / abstract.

Elixir/BEAM are "better", but better than what? Curious about the author's
prior language experience.

What's he developing with it exactly? Native apps? Web stuff? Or just learning
/ experimenting?

Didn't really understand how the Elixir / BEAM approach is different from
typical threads, OS processes, or async programming models - maybe asking too
much from a short post :). But still would be interesting to have some
specific examples of how Elixir/BEAM is better.

~~~
ggv
Hello, OP here. Yes, I was intentionally abstract because I thought I was
writing to a smaller audience of people who had spent some time with
Elixir/BEAM. I never expected the attention this has received!

In my 25 year career, I've been paid to work in Lotus Notes, Java, C#,
Javascript, Ruby, Clojure, and now Elixir. My employer's system is basically
an integration platform. We perform background tasks on behalf of our
customers that access 3rd party APIs, our database, and then decide to push
some data to other systems. We do have a web interface that is mostly the
status of this background processing, plus a UI for when the users need a
manual action.

There are other great replies touching on your technical questions, but here's
my brief angle on it. BEAM processes are as isolated as OS processes, but they
are very quick to spawn and need very little memory. A typical developer-
caliber laptop can probably support on the order of a quarter million
processes. Their isolated nature is supported at all levels of the design of
the BEAM.
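
As a rough illustration (numbers vary by machine, and the quarter-million figure is the upper end; this sketch spawns 100,000 to stay well under the default process limit):

```elixir
# Spawn 100,000 isolated BEAM processes. Each has its own heap and can
# only interact with the others via message passing.
parent = self()

pids =
  for i <- 1..100_000 do
    spawn(fn ->
      receive do
        :ping -> send(parent, {:pong, i})
      end
    end)
  end

# A crash in any one of these would not affect the rest.
send(hd(pids), :ping)

receive do
  {:pong, i} -> IO.puts("process #{i} replied")
end
```

On a typical laptop the whole loop completes in well under a second.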

~~~
QuadrupleA
Thanks - appreciate the reply (and the other helpful replies here too). I've
done a fair bit of threaded / async stuff in Python / C / JS - sounds like
BEAM processes are more baked into the language & runtime and encouraged as an
architectural style, rather than an added complexity you bargain with when you
hit a single-threaded performance wall. Interested to do a deeper dive on it
sometime.

------
siscia
I used to make the same mistake as the author. It is not the BEAM; it is the
architecture that the BEAM forces upon you that spoils you (the author).

There are deeper points in what the author says. Even if you let a process
crash in the BEAM, if you don't leave the system in a consistent, recoverable
state, the BEAM is not going to help you.

Once you realize this, and have a way to apply back pressure, you can get a
lot of the benefits of the BEAM in any language. (Not all; being a VM
carefully tuned and designed for this pattern still means something, but still.)

When you realise that it is so much simpler to leave the system in a
consistent state and program very aggressively, following only the happy
path without worrying about errors, you can apply the same pattern
everywhere. Of course, within reason: performance can be impacted.

My takeaway is that the BEAM (and the community around it) forces upon you a
very particular style of programming, and most of the benefits come from this
style, not from the BEAM itself. You can apply the same style in any
programming language.
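
A minimal sketch of that style in Elixir (module and data shapes are made up for illustration): assert the input shape you expect, write only the happy path, and rely on supervision plus consistent state for everything else.

```elixir
defmodule HappyPath do
  # Happy-path style: pattern-match the shape you expect and let anything
  # else crash. The enclosing process is assumed to be supervised, and any
  # state it touches is assumed to be updated atomically, so a crash always
  # leaves the system consistent and recoverable.
  def handle(payload) do
    # Crashes (MatchError) on malformed input; no defensive checks.
    %{"id" => id, "amount" => amount} = payload
    {:ok, id, amount * 2}
  end
end

# Well-formed input flows straight through:
{:ok, 1, 20} = HappyPath.handle(%{"id" => 1, "amount" => 10})

# Malformed input crashes; under a supervisor the process would simply be
# restarted in a known-good state.
{:error, %MatchError{}} =
  try do
    HappyPath.handle(%{"bogus" => true})
  rescue
    e -> {:error, e}
  end
```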

~~~
busterarm
The problem there is that your coworkers working in those languages who don't
have Erlang/Elixir experience have not internalized those lessons yet and
they're not going to understand why you're doing what you're doing when they
review your code.

Edit: I remember an interview that I had once with a team that was trying to
bring on a senior to improve their PHP code and they were really unused to the
code style I used to solve their problem. Their main issue was that the way
they wrote their code duplicated most of the (very large) request data
unnecessarily many times and forced PHP to allocate more chunks of RAM several
times as it handled the request. My code cut all of that fat, but because they didn't understand
that's what their problem was they wrote it off as "this guy doesn't know how
to code". They didn't even ask and I was glad to not move forward, ultimately.

~~~
tluyben2
This is a problem with 'theoretical' (books/uni 'experience') and 'IBM
thinking' (people are more expensive than adding more machines, so always
optimize for using less people time; just add machines... says the hardware
(and per-core software license) vendor. Amazon AWS agrees!) people: they lack
the real-life experience where things do not fit the 'best practices' (as in
design patterns) or green-field architectures from books/tutorials, if you
don't want to scale up to 1000 nodes for 100,000 users. Optimizing this (so
you can run those 100,000 on 1 node, but let's say 2 for failover) might not
result in code which people find 'understandable' from their point of view. A
lot of people, even with years of experience, work in a kind of 'process' way:
stacking up code the way they have learned and handling every case as the same
case with 'somewhat different logic'. They might not understand your code, or
they might understand it but think it's not adhering to proper best practices
and standards. Their loss.

~~~
busterarm
I also think that while these engineers all went to schools where they got
both high level and low level programming under their belts, not enough of
them really made the connection to understand what their code actually makes
the computer do, aside from in the theoretical sense (Big O).

We're not really training enough truly full-stack engineers.

~~~
hinkley
I walked out of college having made a game of one-upmanship with a friend
(easy CS homework days, we’d compete against each other over runtime or
algorithms), onto a project whose UI was so slow you could watch the screen
paint itself. It was so bad I was ashamed to be associated with the project,
but I moved a long way so I stuck it out for a bit, until it was respectable.
I think I learned as much about performance tuning in the next six months as
most people do in their entire careers. A couple jobs later and I could write
a few chapters of an advanced book on performance analysis (in particular perf
logs tend to hide critical info in plain sight).

We mostly just don’t care, and too many of us have honed or at least
mislearned excuses for our behavior that business folks have no way to talk us
out of. Most vexing for me is when customers threaten to leave over
performance and the senior staff pull up some graphs that say we have done all
we can. It’s fairly straightforward to climb the ranks on such a project by
proving them wrong. These sorts often leave 2-3x (that I know how to find; who
knows what the real number is) on the table, which is often enough to satisfy
the customer. It’s often hard work but there’s nothing magic about it.

------
sergiotapia
Been programming in Elixir for years now, I'd say 19 out of 20 engineers I've
spoken to don't really leverage BEAM.

There's a significant gap between "I write Elixir" and "I leverage BEAM". I'm
in the former camp _still_. I just can't find the time/opportunity to dive in
and use it to my advantage.

Testament to how performant and easy it is to get things up and running in
Phoenix, I guess. I believe once this secret sauce is spread far and wide,
we'll see uptake in Elixir the likes of which the world has never seen.

~~~
dmitriid
> I'd say 19 out of 20 engineers I've spoken to don't really leverage BEAM.

That's because most tasks we as engineers write rarely have the need for
distribution or parallelism. It's mostly "get data A, get data from B, mix,
save resulting data C". And when you need to do that in parallel, you throw
it at Kubernetes, or Apache Beam.

It's even more pronounced in web/microservices situations because the web
server takes care of all of that for you.

~~~
tluyben2
Apache Beam I see, but what has Kubernetes to do with getting and mixing data
in parallel?

~~~
dmitriid
You can automatically scale instances up until there are enough to process
incoming requests.

------
grantjpowell
The BEAM is great, but the Elixir language itself is really what spoiled me.
After having written ruby for years, I really enjoy what the Elixir language
has to offer. Pattern matching[0] and the pipe operator[1] eliminate a ton of
intermediate variables. Functional programming took a while to get used to,
but I'm really starting to like thinking immutably. Now that I'm more advanced
in the language, I'm starting to get into the Lisp style hygienic (and
unhygienic macros)[2]. The Elixir standard library is really well done,
everything is consistent and well documented. Elixir language level built in
libraries like ExUnit[3] and Embedded Elixir (EEx)[4] are very high quality.
For me the BEAM is just the cherry on top, I really like the Elixir language
itself!

[0] [https://elixir-lang.org/getting-started/pattern-matching.html](https://elixir-lang.org/getting-started/pattern-matching.html)

[1] [https://elixirschool.com/en/lessons/basics/pipe-operator/](https://elixirschool.com/en/lessons/basics/pipe-operator/)

[2] [https://elixir-lang.org/getting-started/meta/macros.html](https://elixir-lang.org/getting-started/meta/macros.html)

[3] [https://hexdocs.pm/ex_unit/master/ExUnit.html](https://hexdocs.pm/ex_unit/master/ExUnit.html)

[4] [https://hexdocs.pm/eex/EEx.html](https://hexdocs.pm/eex/EEx.html)
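
As a tiny illustration of pattern matching and the pipe operator together (the word-count example is mine, not from the linked pages):

```elixir
defmodule Words do
  # The pipe threads the text through each transformation without naming
  # intermediate variables; the caller destructures the result in place.
  def top_word(text) do
    text
    |> String.downcase()
    |> String.split(~r/\W+/, trim: true)
    |> Enum.frequencies()
    |> Enum.max_by(fn {_word, count} -> count end)
  end
end

# Pattern matching pulls the word and its count apart directly:
{"the", 3} = Words.top_word("The cat saw the dog chase the cat")
```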

~~~
jefflombardjr
Can you recommend any learning materials that were particularly useful to you?

~~~
AlchemistCamp
For me personally, _Programming Elixir_ and _Programming Phoenix_, from the
Pragmatic Bookshelf, were both very useful.

Quite a few of my students have said great things about _Phoenix in Action_,
from Manning, and it looks to me like a fantastic resource for beginners.
_Elixir in Action_ is also top-notch and will show you what's special about
the BEAM.

~~~
bigbassroller
To those out there learning from “Programming Phoenix”: although it is the
best resource for a lay of the land of Phoenix, the content is getting
outdated with the rise of the auth generator, and the context model has also
changed a little. It takes diligence in following along and referencing the
documentation to not get stuck.

Great book though.

~~~
AlchemistCamp
Anything written after 1.3 is almost completely applicable today.

The auth generator is just a new feature. Learning how the pieces work from
the book is still worthwhile.

My advice if you want to avoid confusion as you're learning is to just use the
same version of Phoenix the book does! (1.4 in this case)

After you go through it, you can follow the official upgrade guides in just a
few minutes.

~~~
mercer
I'll add that learning how to roll your own auth with the book and /then/
using the auth generator is a great learning experience.

------
gojomo
For those like me who may have been wondering, 'BEAM' is just the name of the
Erlang VM: "Bogdan/Björn's Erlang Abstract Machine"

------
dynamite-ready
I've written a decent amount of Elixir for a personal project, and have come
to a few conclusions.

For a start, it's very different. That will become self-evident if you try to
learn either Erlang or Elixir coming from a language like Python/C++/etc.
(and though Elixir looks a lot like Ruby, it's still more like Erlang, tbh).
So that can be classed as a negative.

Secondly, fault tolerance in Erlang/Elixir/OTP is not quite as simple as
commonly billed. Isolated processes and supervision trees can give you a lot
of mileage, but the model is not a silver bullet. I found I still needed to
create an error boundary to make an application 'Do the right thing' in very
specific error conditions (for example, not killing a running process, when
making a single, accidental RPC with the wrong arguments).
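
One way to sketch such a boundary (module and API names are hypothetical): rescue inside the server callback, so a malformed call is reported back to the caller instead of killing the long-lived, stateful process.

```elixir
defmodule SafeServer do
  use GenServer

  def start_link(state),
    do: GenServer.start_link(__MODULE__, state, name: __MODULE__)

  # Any exception raised while handling the request is caught and returned
  # as {:error, e}, so the server process (and its state) survives an
  # accidental call with bad arguments.
  def run(fun), do: GenServer.call(__MODULE__, {:run, fun})

  @impl true
  def init(state), do: {:ok, state}

  @impl true
  def handle_call({:run, fun}, _from, state) do
    result =
      try do
        {:ok, fun.(state)}
      rescue
        e -> {:error, e}
      end

    {:reply, result, state}
  end
end

{:ok, _pid} = SafeServer.start_link(%{count: 41})
{:ok, 42} = SafeServer.run(fn state -> state.count + 1 end)
# A bad call returns an error instead of crashing the server:
{:error, %ArithmeticError{}} = SafeServer.run(fn _ -> 1 / 0 end)
```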

Apart from that though, I can't think of much else negative to say about
Elixir.

The tools for introspection are amazing. I recently debugged a production
issue by opening a console connected to a running system, and calling the
functions I suspected to be failing to see what they were returning directly.
Not sure if any other language has that facility.

Then there's all the stuff built into Elixir, like the test suite, the build
tool, and even an official linter / code formatter.

Erlang interoperability is also a brilliant design decision. Anytime I need to
use a library for a task, I look through the Elixir community first. But if I
can't find something suitable, there's often Erlang code out there with years
of Dev time behind it.

This is just a fraction of all I want to say about the language. And most of
it is very positive.

~~~
gotts
>I recently debugged a production issue by opening a console connected to a
running system, and calling the functions I suspected to be failing to see
what they were returning directly. Not sure if any other language has that
facility.

Clojure has it.

~~~
dynamite-ready
It does get a little more involved in Erlang/Elixir though. There is a
built-in GUI program called 'Observer' that can even show a real-time
visualisation of your applications and process tree. Amongst many other things.
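
For reference, Observer ships with Erlang/OTP and the same kind of introspection is available programmatically (the snippet below is a generic illustration, not tied to any particular app):

```elixir
# Launch the Observer GUI from any IEx session (commented out here since
# it opens a window):
# :observer.start()

# Programmatic equivalent: the five processes currently using the most
# memory on this node.
top =
  Process.list()
  |> Enum.map(&{&1, Process.info(&1, :memory)})
  # A process may exit between list and info; drop those.
  |> Enum.reject(fn {_pid, info} -> is_nil(info) end)
  |> Enum.sort_by(fn {_pid, {:memory, bytes}} -> -bytes end)
  |> Enum.take(5)
```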

------
princevegeta89
I still write Elixir only, and don't really leverage BEAM (not yet), but the
sheer performance of the system amazes me. Phoenix scales very well, and makes
some really clever decisions in terms of architecture, unlike Rails/Django.

The learning curve has been a little steep and there is a paradigm shift but
I've encountered situations where it really ended up helping me navigate
through code with precision and clarity. I think Phoenix is the best candidate
for a Web Framework as of today, if one desires rapid development. The
standard library is rich, documentation is solid, and features like async
tests etc are an added bonus.

I'm using Phoenix for our side project and it is swift as a rocket.

~~~
bostonvaulter2
> I still write Elixir only, and don't really leverage BEAM (not yet)

Since you're using Phoenix you are already reaping the benefits of the BEAM
because Phoenix helps you use good patterns with regard to process isolation.
In many web apps you won't really need to deal directly with OTP for quite a
while (unless you're doing something rather specialized).

~~~
princevegeta89
Well, yeah, I'm aware of the fact that I'm leveraging BEAM. What I rather
meant was I haven't gotten creative with using the OTP ecosystem for my
project just yet.

Sorry I should have framed it better.

------
ridiculous_fish
How does BEAM handle system calls? If I have a million BEAM processes and each
one calls stat(), what happens?

~~~
cuddlecake
What do you expect to happen in other languages, when your program makes a
million calls to `stat()`? Is that something that happens often? What kind of
application needs that? Is there a language that solves this neatly?

(I'm genuinely curious, independent of my newbie elixir evangelism)

~~~
ridiculous_fish
This problem doesn't arise in a 1-1 language like C, because the thread
spawning will either fail or have obviously terrible performance. These
languages push you into using work queues, etc. immediately.

But in languages with green threads (like Go), you CAN spawn a million threads
and performance will be fine, until you make a syscall.

An example of how this can happen: I once wrote a Go tool that walks the
filesystem. I spawned a new thread for every directory, thinking that Go only
has ~N kernel threads so performance will be fine. I was shocked to see that,
in this scenario, it spawns a kernel thread for every green thread!
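
For what it's worth, my understanding of the BEAM side (worth verifying against the Erlang docs) is that file operations are handed off to a small fixed pool of dirty I/O schedulers (10 by default, tunable with `+SDio`), so a million processes calling stat() queue on that pool rather than each taking a kernel thread. You can also make the bound explicit at the application level; a sketch:

```elixir
# Bounded concurrency for filesystem calls: however many processes feed
# this stream, at most `max_concurrency` stat() calls are in flight at
# once, and back pressure is applied upstream.
stats =
  File.ls!(".")
  |> Task.async_stream(&File.stat!/1, max_concurrency: System.schedulers_online())
  |> Enum.map(fn {:ok, stat} -> stat.size end)
```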

------
yawboakye
100% in agreement.

One nit: OTP instead of BEAM. Supervised processes, and modules as
applications, are due to the OTP framework. What that means is that it's
possible to have it in other programming languages as well. Or not, and manage
the application with some OS process manager.

------
amgreg
> Some teams are happy to use Elixir to write web applications, to fit it into
> their pre-existing microservice deployment model. I urge developers to reach
> for those bigger thoughts that fault tolerant lightweight processes enable.
> You can consider each process a logical microservice, but you don’t need the
> friction of deploying it separately, and serializing to/from json, and
> containers, and orchestration, and distributed logging, et.al. You can
> deploy an entire system of millions of interacting logical microservices in
> a single BEAM release.

This resonates with me instinctually, but I’d love to learn more. What
_disadvantages_ would there be in jettisoning real microservices in favor of
Erlang’s “logical microservices”?

~~~
dodobirdlord
Release independence? If one team has a regression and needs to roll back
their week's release, everyone gets rolled back too.

~~~
sho
From my experience with microservices, this is kind of a utopia that never
happens. What actually would happen is that everyone else's service is
dependent on the new feature that the problematic service delivered, so
they're all going to be rolled back anyway.

Maybe in very large, mature companies this happy land of backwards compatible,
fully independent microservices, deployed totally independently, is possible.
But in startups tearing forward at full speed with limited crew it's a pipe
dream. The better solution is proper source control management, so
problematic changes that are actually independent can be reverted effectively,
leaving everything else alone.

~~~
dodobirdlord
It's certainly true that microservices can be set up in such a way that
they're really a distributed monolith. But you don't have to be a large,
mature company to apply microservice best practices. A foundational best
practice of microservice development is that each service has a staging
environment that uses the prod endpoint for every service it depends on, and
this is where you run acceptance tests prior to release. This way the staging
behavior before release aligns with the prod behavior after release.

The week N release for microservice A won't have a dependency on a new feature
in the week N release for microservice B, because the staging environment for
microservice A (at release N) calls into the prod environment of microservice
B (at release N-1), so the release-blocking tests wouldn't have passed.

It is a common and critical antipattern to have a single staging environment
across multiple services, where each service calls the staging environment of
each other service. This is a disaster waiting to happen for just the reason
you describe.

------
m463
I'm reading all this and wondering what BEAM is.

~~~
OkayPhysicist
The BEAM is the virtual machine that runs Erlang, Elixir, and a handful of
other languages. It's basically a system for managing a massive number of
small, independent processes that communicate through message passing.

------
glaive123
Is this a jekyll blog? Looks very sharp.

~~~
faizshah
It’s the Architect Github Pages/Jekyll theme: [https://pages-themes.github.io/architect/](https://pages-themes.github.io/architect/)

------
whynotwhynot
Great coincidence since I wanted to attract attention of HN crowd earlier
today to another BEAM language: Lisp Flavored Erlang (LFE)

"[LFE] is the oldest sibling of the many BEAM languages that have been
released since its inception (e.g., Elixir, Joxa, Luerl, Erlog, Haskerl,
Clojerl, Hamler, etc.)"

I find this little example really cool:

[https://lfe.io/books/tutorial/concurrent/dist.html](https://lfe.io/books/tutorial/concurrent/dist.html)

I drank the Lisp kool-aid when I had to learn Clojure, but LFE is so much
more powerful for building highly concurrent and distributed systems, due to
it being based on the BEAM and having zero interop overhead with Erlang, while
being a feature-complete Lisp.

