Languages on BEAM, the Erlang virtual machine (github.com)
225 points by nkurz on Jan 20, 2017 | 153 comments

In case you are wondering what to use as your next programming language, please consider Elixir!

It has a lot of advantages:

- it is an easy-to-use, well-designed functional programming language that happens to boost productivity and be (a lot of!) fun at the same time

- it is modern, with modern tooling around the language (e.g. mix, edeliver)

- it has Phoenix, a modular web framework, ideal both for APIs and for traditional MVC

- it is rock-solid for a relatively young language

- stack traces are clean, readable, and point you in the right direction most times

- performance is decent on a single-core, but since it's basically Erlang, it scales better on multicore than most solutions out there

- it has a world-class concurrency model powered by OTP, but more approachable to the average programmer, lots of high-level concurrency abstractions built-in

- Umbrella projects for those who need or want a truly modular application without going all-in with microservices

- it is the only language besides Erlang proper that runs on the Erlang VM and seems to be getting a lot of traction (or at least a lot more compared to the alternatives)

- the community is very welcoming and open, led by José Valim and Chris McCord, who are smart, pragmatic and very nice people
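For a taste of the high-level concurrency abstractions mentioned above, here is a minimal sketch using `Task`, one of the simplest built-ins (the computations are arbitrary placeholders):

```elixir
# Run two computations in separate BEAM processes and wait for both.
task1 = Task.async(fn -> Enum.sum(1..1_000) end)
task2 = Task.async(fn -> Enum.sum(1..2_000) end)

# await/1 blocks until the spawned process sends its result back.
sum1 = Task.await(task1)  # 500_500
sum2 = Task.await(task2)  # 2_001_000
```

Each `Task.async` spawns a lightweight process linked to the caller; crashes propagate rather than getting silently lost.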

I love Elixir as much as the next guy, probably more, but I want to temper your enthusiasm a bit:

> it is modern, with modern tooling around the language (e.g. mix, edeliver)

Sort of. Mix is a nice package manager. Edeliver is a series of hacks on deliver, a five-year-old Capistrano clone. But don't expect Elixir to fit into 2015-class deployment systems just yet.

> stack traces are clean, readable, and point you in the right direction most times

Other times, the error happened in another process, all you see is a timeout, and tracking it down takes some spelunking. Not uncommon in my experience.

But I'll also add a couple of points:

- the docs system is fantastic, in particular the automatic generation of handsome docs websites, and the `doctest` system which allows example code in your docs to double as test code

- I'm not sure there is anything better than supervisors for isolating and recovering from errors. Defensive programming is one thing, but sometimes your RAM stick gets hit by a cosmic ray. Shit happens; have a plan.
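The `doctest` system mentioned above lets the `iex>` examples in your docs run as tests. A minimal sketch (the `Greeter` module is hypothetical):

```elixir
defmodule Greeter do
  @doc """
  Returns a friendly greeting.

      iex> Greeter.hello("world")
      "Hello, world!"
  """
  def hello(name), do: "Hello, #{name}!"
end
```

Adding `doctest Greeter` to an ExUnit test module then executes the `iex>` example above and fails if the documented output drifts from what the code actually returns.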

I am not sure how you define 2015-class, but building a release with distillery, and running docker build on a Dockerfile that copies the tarball (which includes everything needed to run your app: ERTS, app code, NIFs, etc.) makes a docker container that you can deploy as you'd expect. If you want to use clustering it's a bit more setup, but I was able to get a simple Phoenix app running clustered on Google Container Engine (Kubernetes) in about 4-5 hours, starting with zero knowledge about Kubernetes or Elixir deployment. I pushed my code to github[1], and I did a talk on it at our local Elixir meetup[2].
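A rough sketch of the Dockerfile approach described above (the app name, version, and paths are placeholders, not taken from the linked repo):

```dockerfile
FROM debian:jessie-slim

# The distillery tarball bundles ERTS, compiled BEAM files, and NIFs,
# so the image needs no Erlang or Elixir installation of its own.
COPY _build/prod/rel/myapp/releases/0.1.0/myapp.tar.gz /opt/myapp/
RUN cd /opt/myapp && tar xzf myapp.tar.gz && rm myapp.tar.gz

CMD ["/opt/myapp/bin/myapp", "foreground"]
```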

Now that I have it figured out it's totally painless to upgrade, especially because of the Kubernetes tools. Simply make a new release, build a new container, push it to your docker registry, and then do a rolling update with Kubernetes using the new version of the container. You can also scale up or down on the fly; libcluster handles it all.



It fits into industry-standard deployment systems, yes, but not without sacrificing some inherent benefits of the Erlang platform. In particular, hot upgrades and downgrades are no longer there, as Docker images try pretty hard to be immutable. There are also the effects of flushing all server-local caches and/or memoization on every deploy. Probably a bunch of other stuff I can't think of right now, too.

So far I've preferred to stick to the Erlang way of doing things because it lets me be the laziest and it hasn't failed me, but someday I would like to see tools that allow the Erlang way and, e.g., the K8s way to work together.

On the other hand, most people seem never to touch hot upgrades, even in Erlang, preferring the traditional rolling update across a cluster of machines. A lot of libraries aren't built for it either.

Yes, hot upgrades sound incredible, but the other advantages of Docker seem to outweigh them. There is no reason you can't write a script to download the release and do the hot upgrade, but then it feels like you are treating your containers as mutable, which is frowned upon.

Hey, I liked the video. Did you upload your slides anywhere?

I haven't used these seriously yet, but when I looked at the options for deploying Elixir web applications (on a single host), I liked Gatling (https://github.com/hashrocket/gatling), ansible-elixir-stack (https://github.com/HashNuke/ansible-elixir-stack) and, with a few more reservations, Dokku (https://github.com/dokku/dokku) with the appropriate buildpacks.

On a side note, if you work with Ansible, HashNuke's project is worth a closer look even if you do not end up using it. It's well-structured and shows how an Ansible role can do both server configuration management and deployment. The pattern works really well; the only downside to this particular implementation is that it has the server pull your code from a Git repository, meaning the server must have access to it, instead of having your development machine copy the code to the server. I'd prefer the latter for security reasons and to match how Ansible works. If that's relevant to your interests, I wrote a hacky Ansible task that shows what that might look like with the same deployment pattern: https://github.com/adhokku/adhokku/blob/master/tasks/deploy-....

Edit: Added a note about the single host.

>> Defensive programming is one thing, but sometimes your RAM stick gets hit by a cosmic ray. Shit happens; have a plan.

This is sort of my big reason for constantly wanting to sit down with Erlang/OTP/BEAM. No matter how good your Ada is, sometimes you want software that can fall over elegantly.

> But don't expect Elixir to fit into 2015-class deployment systems just yet.

With what you've described -- a package manager that works -- it would seem that it fits pretty well. By 2015-class I'm thinking those systems, like buildpacks, that leverage dependency metadata to build a deployable tarball. What are the parts that seem missing to you?

I explain myself a lot better here: https://news.ycombinator.com/item?id=13447734

What kind of web developer needs to care about cosmic rays and bit flips? Genuinely curious here.

Any web developer whose software runs on computer hardware. Otherwise, you should be fine.

I still don't see why the parent commenter decided to include recovery from cosmic rays as an upside to using Elixir. I'm sure web devs everywhere are throwing out their current tools for this huge advantage! In any case, commodity server hardware already takes care of single-event upsets (SEUs) through ECC memory.

Memory corruption from cosmic rays may be more common than your intuition suggests. This article cites a 2009 study of a large cluster that found single-bit ECC corrections occurring about 1 per minute per TB, and uncorrectable double-bit errors at a rate of about 1 per year per TB:


Whether this is frequent enough to warrant consideration depends on your application, but there are definitely people here who care enough about how their systems recover after errors of this sort that it's not an absurd suggestion.

I use cosmic rays as an example of "errors outside the programmer's control", that's all. My point is that those errors will always exist, and it's desirable to have a backup plan ready.

At least those like NASA who use space-grade (radiation-hardened?) microchips

That's not what I'm talking about at all.

My issue is that the original commenter listed "resistance" to bit errors as an advantage to using Elixir vs. other languages for web dev.

The question then becomes: who the hell needs bit flip resistance when building a web app?

I'd say it's a metaphor for more likely bugs, the gamma-ray bit flip being the most meta of them all. What do you think ECC RAM is for?

Exactly. ECC takes care of 99.99% of cases, so I don't see what using Elixir has to do with any of this.

Elixir looks interesting, but coming from many years of using Ruby, another dynamically duck-typed language is the last thing I need, and (in my humble opinion!) the last thing I think anyone should need in this day and age.

I have been using Go almost exclusively on the backend for the last 18 months or so, and it's a huge safety net. Go is nothing new, of course (it's basically Borland Pascal/Delphi -- which I also used for many years -- with GC), but it brings to the table a strictness that is very much needed (not to mention very good single-core performance).

Almost all the problems I encounter in large-scale Ruby programs have to do with wrong types or wrong shapes of data being passed around. It's an antipattern, but we have a lot of code lying around that uses Ruby hashes (dictionaries) instead of classes, because it's easy to do; but even if you create classes to wrap your data and validate the type of every single piece of data you pass around, the surface area is too large to cover perfectly. Not to mention that Ruby suffers from its own icky form of the null pointer problem, the nil value.

And of course, static typing has many benefits -- not least allowing programmatic introspection (autocompletion, automatic refactoring etc.) in ways dynamic languages will never permit.

I know Elixir mitigates the lack of static typing with pattern matching, guards and the possibility of using @spec together with Dialyzer, but I'm not at all convinced that this is sufficient.
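To make those mitigations concrete, here is roughly what pattern matching, guards, and `@spec` look like together (the `SafeMath` module is made up for illustration); Dialyzer can then check callers against the spec:

```elixir
defmodule SafeMath do
  @spec divide(number, number) :: {:ok, float} | {:error, :division_by_zero}
  # First clause: the guard catches both integer 0 and float 0.0.
  def divide(_a, b) when b == 0, do: {:error, :division_by_zero}
  # Second clause: guards reject non-numeric arguments at runtime.
  def divide(a, b) when is_number(a) and is_number(b), do: {:ok, a / b}
end
```

A call like `SafeMath.divide(1, "2")` fails loudly at runtime with a FunctionClauseError rather than propagating garbage, and Dialyzer flags spec violations it can prove, but none of this is a compile-time guarantee in the static-typing sense.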

How about Crystal, then? It's largely Ruby syntax with most of its OOP model, minus some of the metaprogramming. But it's statically typed with a good type inference system, and it's compiled.

That way, you get the expressiveness of Ruby (mostly) with the benefits of static compilation. It also has macros to make up for some of the missing metaprogramming.

I've been building an app on Crystal and really enjoy it, but have noticed some of the growing pains as it ramps up to 1.0.

It's great, but may not quite be ready for production unless you don't mind fixing the occasional breaking change. It's supposed to hit 1.0 this year.

Yes, Crystal's type inference looks really promising. I hope it will gain enough momentum.

But Crystal doesn't run on BEAM, which (I assume!) is something OP was looking for.

Right, but my post was in response to the one above about how we don't need another dynamic language because a statically typed language like Go is so much better at catching errors. Even though you have to write more code while being restricted in what can be expressed. Thus, Crystal makes sense, best of both worlds, and what not.

Of course, if you're looking for functional on BEAM, then neither Go nor Crystal is a good comparison.

More importantly, a lot of this matters less in Elixir because of supervisors. It's OK to just let code crash and reset in small chunks in Elixir. Code in Elixir should be structured to take advantage of the ability to crash and recover gracefully.
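A minimal sketch of that crash-and-recover structure (the `Counter` module is hypothetical): a stateful worker under a one-for-one supervisor, killed and restarted with clean state.

```elixir
defmodule Counter do
  use Agent

  def start_link(_opts), do: Agent.start_link(fn -> 0 end, name: __MODULE__)
  def value, do: Agent.get(__MODULE__, & &1)
end

# Start the worker under a supervisor; a crash triggers a restart.
{:ok, _sup} = Supervisor.start_link([Counter], strategy: :one_for_one)

# Kill the worker process abruptly.
Process.exit(Process.whereis(Counter), :kill)
Process.sleep(50)

# The supervisor has already restarted it with fresh state.
restarted = Counter.value()  # 0
```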

It's good to have error recovery built-in, but that does not mean one should take less care to write correct code.

I believe static typing is just a must for current high-reliability software and going with a dynamic language just seems short-sighted.

The remaining languages are mostly JavaScript and Python; the former survives because JavaScript is mostly used for UI code and people expect less quality from GUIs, and the latter because it is the status quo for scientific computing and a Swiss Army knife, with so many libraries across so many domains.

Dynamic languages are excellent for prototyping (where you keep your constraints in your head) and where errors are not critical. There once was an argument for dynamic languages in that they allowed for succinct code, but nowadays modern languages can be both correct and succinct, as we have learned to exploit theorems about computation and type theory.

[/static hat off]

We are replacing our whole Python backend with Go, and indeed static typing is a good way to structure your data well. Regarding reliability, I would prefer Elixir for three reasons: #1 Functional programming. In Go it's really easy to make errors by sharing object references over channels. And what is funny is that it's not so easy to make a deep copy of an object. #2 Goroutine leaks; supervision is really difficult to follow. How do you manage it in production? Thanks to Erlang, Elixir provides all you need to monitor your green threads. You can even hot-fix issues. #3 BEAM preemption is clearly a strong winning point.

Bonus point: Go's scalability is limited to your server. In Elixir you can remove server boundaries.

For a new product, I will definitely bet on Elixir.

No, static typing just gives a false sense of reliability to people who are not experts in reliability. There is even some research showing no correlation of static typing vs dynamic typing to bugs [1]. People tend to ignore it though.

[1] http://web.cs.ucdavis.edu/~filkov/papers/lang_github.pdf

From the conclusion in that paper: "The data indicates functional languages are better than procedural languages; it suggests that strong typing is better than weak typing; that static typing is better than dynamic; and that managed memory usage is better than unmanaged. Further, that the defect proneness of languages in general is not associated with software domains. Also, languages are more related to individual bug categories than bugs overall."

Yeah, that was a weird conclusion as their data suggests otherwise.

> No, static typing just gives a false sense of reliability

That is not true; there are real benefits, and classes of bugs that can be avoided before runtime. Historically there was the drawback that inferring these bugs was computationally infeasible for some programs, but nowadays the situation is much better.

IMHO the only positive aspects of dynamic languages are their syntactic beauty and their existing large ecosystems.

I admire Elixir's and Erlang's approach to concurrency and think this model (do not share data and engineer for failures that will happen in distributed systems) is great, but could be combined with static typing.

Dynamic languages tend to let you express yourself more easily and cleanly with less code. Paul Graham wrote years ago about how Lisp lets you write more maintainable code by being able to express yourself close to the problem domain. The power of DSLs is that it's easy to read and write code for that particular sort of task.

Another way to state the advantage of expressive languages is that the less code you have to write, the fewer bugs you will have. And the less code written, the less code needs to be maintained, and the less needs to be read to understand what's going on.

Of course this all depends on writing clean code, not obfuscated hacks or undocumented messes with needless complexity. But that goes for all programming.

There have been some advances in language theory since Lisp, though.

In particular, modern type inference allows you to practically eliminate type declarations and approximate the terseness and expressiveness of dynamic languages. Recent languages such as Crystal and Nim have demonstrated that this is entirely feasible. (I mentioned Go earlier, but Go is, if anything, regressive when it comes to language theory, and not a good example of how to do things right.)

The Lisp example is only convincing if you've not seen what a really impressive type system can do; Haskell and the ML family come to mind, and Rust and Swift both seem promising in that regard, as does Crystal.

"Less code means less bugs" is a powerful meme, but it's a misleading metric in the context of type systems. Dynamic languages necessarily require less code since they eliminate information that otherwise allows you to fully reason at compile time about the program's future execution, thereby eliminating the ability to guarantee its safety; and the opposite is also true: "less information means more bugs".

Ultimately, it's less logic we want: fewer decisions that can go wrong. In Lisp, or any other dynamically typed language, you can pass a variable of the wrong type to a function and won't know it's wrong until you run the program; how did fewer lines of code help you? The equivalent program in a statically typed language might be slightly longer (though it could also be the same size), but the additional type declarations have made your program less buggy, not more.

> dynamic languages

Dynamically typed languages.

> In particular, modern type inference allows you to practically eliminate type declarations and approximate the terseness and expressiveness of dynamic languages.

Not of dynamic languages.

> require less code since they eliminate information that otherwise allows you to fully reason

The idea of Lisp is to have a lot of information at runtime/development time. One develops running software, not a program as text. The running software gives the feedback.

These discussions are thirty years old now. In every era, people have seen the type systems of the day as advanced and modern. Haskell is actually also quite old now...

Lisp and Haskell are simply used in completely different ways to develop software. I would also bet that there are much larger Lisp programs than Haskell programs in use. A large Lisp program can be several million lines of code, like some Lisp-based CAD applications.

> won't know it's wrong until you run the program

That's a big problem, indeed, if your goal is to ship code that was never executed.

> if your goal is to ship code that was never executed.

That's cute, but no. There's a lot of code that only gets executed once in a blue moon, and if my experience[1] is anything to go by, this is where types really shine. (Before you say "tests": the number of code bases I've seen in dynamically checked languages with no meaningful test coverage is shocking. That, and tests can only show the presence of bugs.)

Types are also hugely valuable as compiler-checked API documentation, especially if side effects can be documented as in e.g. Haskell or Idris.

(I'm obviously assuming non-trivial type inference as in e.g. Haskell or OCaml. If we're talking anemic type systems like Java or C#, then the trade-off becomes a lot harder to justify, at least for me. In those cases going dynamic does remove a non-trivial amount of "ceremonial" code, and the type systems provide very few useful guarantees like non-nullness anyway.)

[1] We're all trading anecdotes here, let's be honest about that.

But the whole point of type systems is that you can theoretically verify your code at a high level (though very few computer languages are able to approximate anything like a mathematical proof).

I wrote about this in another comment, in the context of writing tests.

In theory, practice is the same as theory. In practice, it isn't. I've seen only toy examples of some non-trivial program properties being encoded in type systems. It nowhere constituted proof of an entire program. Nobody actually does that. A lot can remain wrong in code which passes type checks. There is no "if it compiles, ship it". Not to mention that compilers can have bugs, and that hidden performance and resource problems can lie latent in high level languages.

How about the property 'this variable is never null'? You might call this trivial but it seems to be useful given that real-world programmers get it wrong frequently: http://stackoverflow.com/search?tab=newest&q=NullPointerExce...

Or the property 'this variable is never used after a free' in a language with manual memory management (Rust encodes this) or 'this program is free from data races' (also Rust)?

No one is arguing that strong static type systems make it impossible to write programs with bugs, but eliminating whole classes of bugs at compile time certainly makes it easier.

"less information means less bugs" is not the same thing as "less code means less bugs". Your entire argument there is a non-sequitur, no one who says "less code means less bugs" is talking about the mechanical aspect of typing out code.

They're talking about a combination of using battle tested code and writing at a higher level of abstraction.

I wasn't talking about "the mechanical aspect". And I did not claim anything like what your first sentence implies.

You treated the phrase literally.

I've never understood what people actually mean when they say things like "dynamic languages let you express yourself more easily."

I tend to find it significantly harder in dynamic languages. Yeah, there is way more flexibility in small decisions (like right now, is it easier to return a string or an integer or whatever) but it's not free. Your little decision interacts, directly or otherwise, with potentially the entire rest of the program that might ever exist. There is probably a best choice, and it's hard not to try to find it every single time. Doing the easiest thing right now might mean you need to go and adjust many other things. In order to know what is overall easier you need to keep the rest of the program in mind.

I find dynamic languages kind of exhausting to write anything but one-off scripts in. I can't make good choices without holding far more of the program in my head than I would in a static language. Sometimes I don't even realize I'm making such a decision because I don't have a correct or full view of the rest of the program. To make matters worse those errors won't even show up as being a problem at the decision-site.

I feel like I express myself much easier in a language with a strong static type system. I can write down clearly exactly what the model is, and then the compiler or types let me know when I try to stray from it. From there I can decide if it's better to adjust what I'm doing in the small or the large.

Because languages like Lisp, Smalltalk, Ruby give you greater control over bending the language to more closely match the problem domain, instead of jumping through extra hoops to express what's needed, and the result is a more readable program, because it's expressed in terms of what the program's trying to accomplish.

The entire history of computing is abstraction away from the machine, and offloading as much of the work to the machine, so that humans can focus on solving problems instead of worrying about lower level details. Dynamic languages tend to be better at that.

Now if Haskell is the comparison, then maybe not. But Haskell has an advanced type system with excellent composability, so it's quite capable of expressing high level abstractions and domain specific code.

That's the general idea, but clearly not everyone agrees. Or perhaps, other concerns are considered more important.

F#'s got the types and the beauty factor IMO. Lacks lisp macros, but otherwise is pretty fantastic.

4.1 will officially support .NET Core as well. I hope to be able to use it for work some day.

This does not match my experience.

It's also commonly accepted — and, in my opinion, true — that dynamically typed languages move the burden of verifying your program to tests. I waste a lot more time in Ruby tests throwing bad data at my code than in Go, whereas in Go I can concentrate on real use cases, because static typing eliminates a whole class of abuse. In other words, the language forces me to do my own type validation at runtime and in tests.

Moreover, one big reason why static languages are more robust is that static typing changes one's entire approach to handling data. For example, in Ruby it's common to just do JSON.parse(raw_string) and then pluck out whatever it parsed (and again, the burden is on the programmer to verify that the parsed data structure is the correct one). The equivalent in Go is to declare a schema in the form of a compile-time struct annotated with the field names that you expect in the input; the unmarshaler will catch type errors and blow up if something goes wrong. In other words, there's a completely different methodology: When Go has parsed your data, you know all the types are satisfied, whereas Ruby is stuck with duck typing. Maybe someone clever has written a strict JSON library for Ruby, but I don't know of one. Ruby libraries tend to be very un-strict.

The dark side of duck typing is that anything you don't explicitly recognize can get ignored. If you have a method with "def foo(options = {})", and someone does "foo(non_existent: 42)", nobody will typically complain. This sort of thing is why ActiveSupport has "assert_valid_keys", and why Ruby now has proper keyword arguments (which only solve the problem of valid keys, not valid data). This stuff is hard to catch and leads to code rot, which is a big maintenance problem exacerbated by the fact that if you change or remove an argument like this, you won't know if it fails until you run the code, and "grep" is the only tool you can use to find out who actually uses the code.

I understand where you are coming from; there are certainly cases where there is too much flexibility, and there is a general need for a better quality culture: always writing unit tests with good test coverage, etc. Not all languages do it well, and many could benefit from some extra optional compiler strictness. However, you still have comparable bug rates per line for very expressive dynamic high-level languages and lower-level ones, which makes quite a big difference in the total number of bugs for problems of the same complexity. Basically, having to write, say, 3x less code in something like Perl than in Go to solve the same problem leaves you with 3x fewer bugs, and takes a lot less time.

Well, there is https://github.com/ruby-json-schema/json-schema

Pretty convenient for input validation at runtime or output validation in tests.

Well, Ruby will also crash on malformed data; exception handling makes it trivial to recover and continue.

Using supervisors to recover by retrying allows you to correct for instability (network faults, oversaturation, locking and such) where a retry might actually fix things. What I am describing is something that cannot be retried in the first place; the next attempt would try to feed the exact same piece of wrongly-typed data back into the supervised process that would then fail again, and so on.

Ruby doesn't have anything like OTP, but these are two different problem spaces. Elixir, as far as I can tell, is open to malformed data/types at runtime in exactly the same way that Ruby is.

I like failing fast in the exceptional case where things are out of my control, and OTP is a great platform for doing so, but most of the crashes I get in Erlang/Elixir are purely programmer error, and would be trivial to catch statically.

> Using supervisors to recover by retrying allows you to correct for instability (network faults, oversaturation, locking and such) [...]

Using supervisors to retry operations is going to make your system unstable. A supervision tree is not about handling semi-expected faults; it's about starting things at boot time and restarting them in a predictable manner (along with any dependencies) should any of them crash unexpectedly.

I agree. It was the parent comment that advocated supervisor trees as a way to mitigate runtime type errors.

What? No, not at all. Typing errors are programmer errors and are not recoverable. A supervisor is useful for restarting processes that have intermittent failure, and type-based errors are not an example of that.

"another dynamically duck-typed language"

Elixir actually has things like behaviours and protocols instead of Ruby-style reliance on duck typing.
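A small sketch of a protocol, which dispatches on the data's type rather than relying on duck typing (this `Size` protocol is a standard illustrative example, not a built-in):

```elixir
defprotocol Size do
  @doc "Returns the size of a data structure."
  def size(data)
end

defimpl Size, for: Map do
  def size(map), do: map_size(map)
end

defimpl Size, for: BitString do
  def size(string), do: byte_size(string)
end
```

Calling `Size.size/1` on a type with no implementation raises `Protocol.UndefinedError` instead of quietly quacking along.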

While I really love static typing, saying that Go's type system helps here is... plain false.

Go's type system, like those of its predecessors Java, C and company, does not help with that. These are not sound type systems.

At best they help the compiler decide how to allocate memory, or take a stab at polymorphism.

Another major point for me is maintainability. When you are trying to figure out the type of the object that was passed in to many layers of functions across files in a legacy codebase, it's a nightmare.

> stack traces are clean, readable, and point you in the right direction most times

I really have not found this to be the case. You'll get obscure unmatched function clause errors coming from the opposite side of your program that can eat up most of the day trying to track down, ending up having to carefully look at the function guards deep in library code to attempt to figure out where things broke. Not to mention the parse errors, which are completely unhelpful if you're coming from languages like Elm and Rust. And then there is the huge overuse of metaprogramming (just use functions, dammit!), and how you have to go trawling through the source code to figure out what `__using__` is going to dump into your module. And cargo-culted Ruby syntax that's pretty hostile to FP...
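For readers who haven't hit this, a minimal illustration of the error class in question (the `Parser` module is hypothetical); the raise names the module and function, but not where the bad value originated:

```elixir
defmodule Parser do
  # Only binaries are accepted; anything else that leaked in upstream
  # blows up here, possibly far from where the wrong value was produced.
  def parse(input) when is_binary(input), do: String.trim(input)
end

Parser.parse("  ok  ")   # "ok"
# Parser.parse(:oops)    # ** (FunctionClauseError)
#                        #    no function clause matching in Parser.parse/1
```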

I am curious whether you or any of the other Elixirists (not sure what the community is called) spent time learning Erlang first or jumped right into Elixir? If yes, would you recommend spending time with Erlang first, or is it not necessary? I guess I don't know if there is interop similar to Clojure and Java.

Time with Erlang is not necessary beforehand, but do go in with the knowledge that there is a lot of interop, given you can call Erlang code in much the same way as Elixir code.

Elixir made a deliberate decision early on not to wrap Erlang libraries that were perfectly sane as they stood. For example, to get a simple queue construct in Elixir you just call the Erlang queue module.
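The queue example looks like this in practice: Erlang's `:queue` module is called directly, no Elixir wrapper involved:

```elixir
# Erlang modules are plain atoms from Elixir's point of view.
q = :queue.new()
q = :queue.in(1, q)
q = :queue.in(2, q)

# FIFO order: the first item in is the first one out.
{{:value, first}, _rest} = :queue.out(q)  # first == 1
```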

Once you're up and running with Elixir then there is nothing stopping you delving deeper into Erlang and OTP.

Thanks, I've been completely fascinated/mystified by Erlang and OTP, so this might be a good stepping stone with its own rewards. Cheers.

I learned at least the basics of Erlang first, mostly because I knew of Erlang's existence before I knew of Elixir's. I'd recommend it if only to help shake the "this feels like Ruby and I'm going to write my Elixir code like Ruby code" mentality.

Also, I personally vote for the term for "Elixir user/enthusiast" to be "alchemist".

They call us alchemists!

Ah good to know. I like it!

You talk a lot about how easy and approachable it is to an average programmer. Can you say anything to an experienced above average programmer as to why Elixir should be my next language?

P.S.: If you're learning a new language as a deliberate practice exercise, learning that language should be hard for you, or it won't be as effective in making you better.

First: functional programming, and strangely enough the data structures used for it (Clojure has collections, Lisp has, well, lists, and Elixir is a mix of both). Second: actors and the various paradigms around them. Third: real (working) macros. Fourth: Erlang/BEAM. Fifth: no parentheses discomfort, which could be a thing for you. No excuse not to have fun. Sixth: the Erlang/Elixir tooling (Dialyzer for static checks, mix, exrm, distillery, edeliver... and you even have a type system with typespecs).

This is not a hard language to learn. That's not its purpose. It's practical if you want to adhere to the actor system, functional everywhere and so on.

What would be the advantages over just going straight into Erlang? I've got experience with Clojure and F#/OCaml, but I've never used the actor model, and I'd like to have a BEAM VM language under my belt. I can't choose between Erlang, Elixir or LFE.

Does Elixir have step through debugging (breakpoints, etc) with accurate referral back to source code? Once one gets used to a modern, civilized debugging environment it is very difficult to go back into the wild.

The debugger is pretty good when using `iex`. However, you don't get "step through" debugging like you do with Python's `ipdb`, only breakpoints.

Yes, see iex (Elixir's interactive shell) and its pry function. If you're familiar with Ruby's pry or Python's pdb, it's very similar.
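A sketch of what dropping a breakpoint looks like (the `Checkout` module is hypothetical); run the app under `iex -S mix` and execution pauses at the pry:

```elixir
defmodule Checkout do
  require IEx

  def total(items) do
    sum = Enum.sum(items)
    # When this line is hit in an `iex` session, you get a shell with
    # `sum` and `items` bound, right at this point in the code.
    IEx.pry()
    sum
  end
end
```

Outside an IEx session the pry is a no-op (with a warning), so the same code can run in production.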

Yes, the whole VM comes with a debugger, dynamic tracing, and more by default. All of this also works distributed by default.

You forgot:

- good marketing

And that's not a bad thing -- it means the core team and users of the language are excited about it, which can only help.

There are no big companies behind Elixir so the marketing is pretty much a community thing done by people who like the language.

Discord is using Elixir, and is pretty large.

In another thread, their CTO claimed they have more users than Slack.

There are big companies using Elixir; I think what qaq meant is that Elixir isn't itself developed by a big company.

I'm curious about the performance of Elixir code compared to Erlang. I guess what I'm really asking is: how different is Elixir from Erlang in regards to the paradigm, and what performance costs does it pay for it?

There is no performance overhead; Elixir is equivalent to Erlang in that they both compile down to BEAM bytecode.

This is the answer I was hoping I wouldn't get.

I understand that it runs on BEAM, much like I understand that IronPython runs on the CLR.

However, IronPython's performance is slower than C#'s due to the dynamic nature of the language, even with the special support that MS built in.

What I was asking is whether or not Elixir's performance characteristics were significantly different from Erlang's.

The semantics of Erlang and Elixir are very similar, as they're both dynamically typed functional languages, so the performance difference should be negligible.

They are not only similar; they are in fact the same. Elixir compiles down to Erlang, and the Elixir libraries are built using Erlang and OTP "underneath". Also, the BEAM, the Erlang VM, is designed to run Erlang, so it is difficult to make an efficient implementation of a language with different semantics on top of it. It is truly Erlang all the way down.
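"Erlang all the way down" shows up in interop, too: Erlang modules are plain atoms from Elixir's side, with no FFI or marshalling cost (the `Geometry` module below is a made-up example):

```elixir
# Calling Erlang's stdlib `lists` module directly from Elixir:
reversed = :lists.reverse([1, 2, 3])
# reversed == [3, 2, 1]

# An Elixir module compiles to an ordinary .beam file named
# 'Elixir.Geometry', callable from Erlang as 'Elixir.Geometry':area(...).
defmodule Geometry do
  def area({:rect, w, h}), do: w * h
end

Geometry.area({:rect, 3, 4})  # => 12
```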

alright, thanks :)

No, they are the same, there is no basic difference at all.

> I understand that it runs on BEAM, much like I understand that IronPython runs on the CLR.

Those are different cases. Your question is more like "what is faster: C or Fortran?" than "what is faster: C or Python?".

> However, IronPython's performance is slower than C#'s performance due to the dynamic nature of the language

No. It's slower because IronPython is an interpreter, while C# compiles to bytecode -- not because one is statically typed and the other is dynamically typed.

And what does it have to do with the fact that Python is interpreted by IronPython, which obviously slows down code execution?

The DLR was specifically created to avoid that approach; IronPython et al. haven't used such an approach since the DLR was released.

It dynamically generates IL which is then JIT'd.

It's great to see this list. As an Erlanger there's one thing that's started to bother me of late, and that's how subdued and declining the community feels. There are a handful of passionate and capable people making great strides to improve things (rebar3 is excellent for instance), but the momentum seems to be all behind Elixir these days.

Most of the action around Erlang (conferences, etc.) seems rooted in nostalgia not in finding ways to connect to new users or new use cases. Looking at the speaker list for Erlang Factory 2017 I'm trying to figure out why there's nobody from Whatsapp, MZ, Facebook, Amazon, Visa, Goldman Sachs, etc., and why so many of the speakers are repeats of years past (in some cases multiple times over multiple years past).

Despite having a ~15 year head start, and some really high-profile use cases, it seems like the Erlang powers-that-be haven't figured out how to capitalize on much of that, and it's a real bummer for people in my situation (and I'd argue the industry as a whole is worse off for it too). I'd really like to have my team use Erlang, but I'm feeling increasingly irresponsible suggesting they invest in it instead of Elixir.

Erlang Just Works(tm) for me, I have a problem and I write code to solve it, no whiz-bang new features or frameworks needed. That said I do partially agree, in particular advancing the state-of-the art in regards to inspecting and debugging would be great. Rebar3 is indeed excellent!

I totally agree. It just works for me too. That's why I prefer it.

But it needs new users and new use cases to stay relevant, and to be defensible as an investment of time, money, and people in companies and enterprises that are willing to depart from the traditional strongholds of Java or, unfortunately, whatever the latest thing out of Google is. To say nothing of the more risk-averse institutions, which are vast and numerous.

Given how great it is, I'm often at a loss for why its ecosystem feels stagnant and why much of its evangelism (what little there is to begin with) appears unable to build much excitement. :-(

Anyway, back to Erlanging and hoping more people discover how pleasant, simple, and robust it can be.

The programming style encouraged by Erlang/BEAM is a good multi-computer style as well as a good multi-core style. Everything is passed by value -- there are no pointer or reference types to put in messages. If you want a shared value then it has to be a key into a database.

If a company gets its major apps onto BEAM one could imagine a fairly straightforward auto-scaling story. As the number of scheduled BEAM processes grows for a given app, the need for CPU and memory could be expected to grow more or less linearly with it.
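The copy-on-send semantics described above can be sketched in a few lines (a minimal example, not tied to any particular app):

```elixir
# Messages are deep-copied between per-process heaps, so the receiver
# gets its own copy -- there is no shared mutable state to race on.
parent = self()

spawn(fn ->
  send(parent, {:result, %{count: 1}})
end)

receive do
  {:result, map} -> IO.inspect(map)  # the receiver's independent copy
after
  1_000 -> :timeout
end
```

The same `send`/`receive` code works unchanged whether the two processes are on one core, many cores, or different machines in a cluster, which is what makes the auto-scaling story plausible.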

I'm really sad about the fate of Reia. It could have been the OO-like language that migrated millions of developers to concurrent functional languages -- potentially much more effective at that than Elixir, which looks like Ruby at a distance but is very different once one sits at the keyboard.


I read that processor makers are betting everything on multicore. More and more cores.

Languages with great support for these hardware features will be the languages of the future and today.

Elixir is definitely a great bet, you'll max all your cores and not break a mental sweat about it.

Really compelling stuff and great developer UX.

It's there now!

I'm diving into Elixir now to compare it with Golang for a new web app backend I'm working on. I really enjoy the functional paradigm but from what I've seen deployment can be a bit annoying. Whereas with Go you can compile and drop it easily into production.

Does anyone have experience with this?

A non-autoscaling deployment is quite easy to accomplish. Build releases with Distillery[1], and deploy them with Edeliver[2]. You get zero-downtime upgrades and rollbacks for free, as well as nearly automatic server clustering. I gave a talk about this last year in New York[3].

A deployment that can autoscale is harder to accomplish, but still not impossible. AFAICT, most people doing this are building releases with Distillery, wrapping those releases in a Docker image, then using the deployment and orchestration tools of their choice. Monica Hirst gave a talk about doing this with AWS CodeDeploy last year at Empire City Elixir as well[4].

[1] https://github.com/bitwalker/distillery

[2] https://github.com/boldpoker/edeliver

[3] https://www.youtube.com/watch?v=H686MDn4Lo8

[4] https://www.youtube.com/watch?v=mRspOj_sRgo
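Assuming a standard Distillery + edeliver setup like the one in those talks (app and host names live in `.deliver/config`, so everything below is a placeholder-level sketch), the flow looks roughly like:

```shell
# On a build box matching production:
MIX_ENV=prod mix deps.get --only prod
MIX_ENV=prod mix release --env=prod        # Distillery builds the tarball

# edeliver drives the build/deploy lifecycle over SSH:
mix edeliver build release
mix edeliver deploy release to production
mix edeliver start production
```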

Deploying releases has been fairly painless in my experience, but I also have had my head in Elixir and releases in general for a long time, so I may not be a great sample. You get a tarball containing everything the app needs to run, and all you do is extract it and run the boot script -- other than the entirely self-contained binaries Go produces, I don't think it gets much better. Deploying without releases is probably a lot more painful though, due to the number of dependencies required, particularly for web applications (Erlang, Elixir, and Node). I recommend anyone deploying Elixir applications look into releases - it makes life much easier (imo).

I'd mostly agree, and the packages you've written (ExRM and Distillery) make releases super easy to build, start, and stop.

The main benefit of Go, I think, is that it can cross-compile builds. Erlang and Elixir don't have any way to do that, so if you develop on Mac and deploy on Linux you'll need to build your release in a VM or on a build server running Linux.

You actually don't have to bundle the host's runtime; you can instead specify an "alternate ERTS", i.e. one that has been compiled for the target architecture, and bundle that one. This is how you "cross-compile" releases.

You can include any ERTS in your target system, so you can take one built on Linux, copy it to your Mac, and use it only when you construct the release tarball.
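In Distillery terms, bundling a target-platform ERTS looks something like this in `rel/config.exs` (the path is a placeholder for wherever you've copied the Linux-built runtime):

```elixir
# rel/config.exs (Distillery) -- bundle an ERTS compiled for the *target*
# platform instead of the build host's:
environment :prod do
  set include_erts: "/path/to/linux/erlang/erts-8.2"
  set include_src: false
end
```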

I thought BEAM was a bytecode VM so why do you need to cross compile anyway? Is the bytecode different on different architectures?

Some libraries contain native dependencies, and those have to be cross-compiled. You also need a cross-compiled runtime if you are bundling it and not using one already installed on the target.

Releases include the Erlang byte code and the VM as well which needs to be compiled for the platform it will run on.

But why would you want to recompile the VM when you've just changed your program?

And it looks like you can cross-compile BEAM anyway: http://erlang.org/faq/implementations.html. It's just a C program, isn't it, so why wouldn't you be able to?

The ERTS runtime (VM + libraries) is taken from the host and is not built. That said, it is possible to build the runtime for other platforms via cross-compilation [1][2]. I've done this for ARM, but have not tried to generate a release using the cross-platform build.

The .beam files themselves are cross-platform bytecode, but in practice they are not distributed on their own (except for patching). Typically you'll ship them as part of the release package, which includes ERTS, your application, and all required dependencies (system libraries, other apps/libraries, and any native dependencies you've introduced). Anything you don't declare as necessary is excluded from the final release package.

In practice, people will usually use either the official release tooling or third party tooling to create platform-specific releases.

[1] http://erlang.org/doc/installation_guide/INSTALL-CROSS.html [2] http://www2.erlangcentral.org/wiki/?title=Cross_compiling

You're not recompiling the VM, you're just including a copy of it in the release tarball, so that you can copy the tarball to any machine with the same architecture and run it regardless of whether Erlang or Elixir is installed globally on the machine.

The goal of a release is to bundle together everything you need -- your program, the VM that runs it, your config files, and any instructions necessary to upgrade/downgrade from another version of your code.

He means: why not just include the appropriate VM as a binary for any given arch?

But why build the VM for every release then? What is it that needs to be cross compiled if you're just copying a VM build into a tarball? Why not copy an existing build of it in?

The VM is not built for every release, it is copied into the release. The fact that it isn't built for each release is why cross-compilation of releases is not possible.

I must be an idiot because I still don't get it. If the VM is just copied into the release, not built, why can't you do that on macOS for deployment on Linux by copying a Linux version in? Why do you need to do it in a VM or Linux build server?

It's probably only that no one's written the tooling for that yet, because it's easy enough to set up a Linux VM or build server.

The tools (relx, exrm) can do that.

I don't think it's common to build and deploy from your laptop once you hit a certain team size -- with builds going through Jenkins et al. -- but maybe this really does make a difference for smaller teams.

Deploying to production is a pain in the ass to set up the first time and has a lot of gotchas. I know, I've done it.

It is getting better but right now the few things I can recommend the most are:

• learn what an "OTP release" actually is and what it means. This alone will save you days of your life.

• do rolling restarts, doing live upgrades is difficult and not necessary for the vast majority of applications.

• read the prod.exs config; there's a commented-out setting you'll need to enable. I forget the name of it, but it's on my GitHub "tehprofessor" in the repo "flying with Phoenix" (it's probably outdated in many ways, but that setting is essential).

• remember the application is compiled and then run, so you really won't get any runtime configuration (like Rails' environments).

• Docker can alleviate build pain a bit, as it stabilizes your environment-- I don't use it myself but it makes building a bit more pleasant from what others have told me.

• you'll very likely need to include all the applications, including ones listed in your dependencies (it's a pain in the ass), in your mix.exs applications list if you're using exrm.

... it's been a couple months as I've been on a coding hiatus for health reasons (and a broken arm) but I believe most of the above should still be very true.

That said and even with all the bitching I've had about it, I still love Elixir and Erlang. Though I've been moving more to pure Erlang as time goes on (FWIW).

Edit: clarity and note about loving me some elixir/erlang.

edit: my experience has been with Erlang deployments, I don't yet know what the landscape is like for Elixir - at least the first part of my comment below can be disregarded. I would assume it's similar to deploying an erlang application, but I don't know for sure.

I think the second part is still possible, since if it can be done for Erlang it can be done for Elixir.

-- Original post --

I haven't found production deployments to be particularly painful, I'd be interested to hear what kind of troubles you've run into.

Runtime configuration is definitely possible - you can deploy configuration files with your application and determine which one you want to load. It's not built-in, but it's a few lines of code to do it manually. You can also always use environment variables, which is arguably a better approach.
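One common pattern from this era, for example, lets a Phoenix endpoint read its port from an environment variable when it boots rather than at build time (app and module names here are illustrative):

```elixir
# config/prod.exs -- the {:system, var} tuple is a Phoenix endpoint
# convention: the PORT variable is read when the endpoint starts,
# not when the release is compiled.
config :my_app, MyApp.Endpoint,
  http: [port: {:system, "PORT"}]
```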

I do not agree that it is a pain in the ass. It's no more or less complex than in many other languages. Also, the tooling has improved a lot in recent months, and edeliver + distillery work great, even without much previous experience.

We have Elixir/Phoenix in production on Heroku (with Heroku Pipelines) using a custom buildpack [0]. It's been great. We'll probably look at Docker. We use CircleCI, and this setup script [1] was a great starting point.

0: https://github.com/HashNuke/heroku-buildpack-elixir 1: https://gist.github.com/joakimk/48ed80f1a7adb5f5ea27

Distillery is highly recommended https://github.com/bitwalker/distillery

I totally agree. It takes a bit of trial-and-error for the first time, but works perfectly fine once you have figured it out. I recommend going with edeliver + distillery for the best experience, plus it is the recommended way of deploying Phoenix as of today.

Where do you plan to deploy? It's easy enough to deploy to Heroku, and I imagine dockerizing an Elixir app is also a straightforward process. If, on the other hand, you are taking a more traditional approach and just deploying to a Linux box/VPS, there are solutions for that too (i.e. Chef, Capistrano).

It should be pretty far down in the list of things that influence your choice, imo, since it's basically a solved problem.

My company's first choice is always Azure since outside of my team we're C# head to toe. Though after skimming through a few deployment articles I might push for a Heroku deploy.

I can honestly say I've not been this excited about a language in quite a while. Working with Go is pretty straightforward... but it's a complete bore. The more I dig into Elixir the more I desire to use it full-time.

I think that with proper deploy automation (e.g. Ansible) and the right approach (avoid hot-code updates unless you REALLY need them) you are going to get a pretty solid deploy experience, no matter what your target deploy platform is. Just throw in an HAProxy for rolling deploys and you are ready to go!

If you need any specific help, feel free to contact me!

You might want to look into edeliver https://github.com/boldpoker/edeliver

Yes, deployment in Elixir is painful. We use distillery and edeliver, but plan to try moving to Docker.

I build releases with distillery and deploy them with Docker, the combination has worked really nicely for my projects.

Disclaimer: I'm the author of exrm and distillery, so I'm probably biased.

You are not biased, just experienced with the tools. Huge thanks for distillery.

Thank you for distillery, it's a very nice tool

I don't see why it should be painful. What were the issues you were facing?

I found that besides hot-code updates, there is not as much magic as one could think, and once you "grasp" the idea of a build machine, it should be pretty straightforward. Throw in an HAProxy for rolling deploys and you have a sweet zero-downtime solution!

Feel free to contact me if you need someone to help out (from some free general tips and tricks to supporting your team as a consultant).

As far as deployment goes, I've had great success with flynn.io and an Elixir buildpack.

Flynn is amazingly easy to deploy. It provisions DO/AWS/Azure then bootstraps a single node or cluster literally at the push of a button. Then you just add it to your repo and push (provided the buildpack is already set up)

Even without exrm (Distillery nowadays), I've had quite a bit of success with using Docker, complete with hot patching. I come from a Linux administration background, though, so I'm a bit more open to abusing Docker's semantics in horrific ways.

I run erlang in production; our deploy is push code, compile, l(foo) from debug shell. (push code and compile driven by Makefile). It doesn't follow the OTP deployment model, but it's easy.

Hi, here's a talk I gave called "how beams are made".


Follow the commits on this repo from the beginning to see how a fairly complete language is built.


So besides Elixir and Erlang, is anyone using any other language in this family in production?

I think LFE is at least not unheard of, but anyway, that's not really the point here probably ;)

I of course prefer either LFE or Erlang; they are much simpler. Elixir has a bit too much fluff for my liking.

I've experimented with LFE, but I've generally found that if I want a Lisp, I'm better off with a more traditional Lisp. I also very much prefer using a Lisp-1 (like Scheme).

Joxa and CSCM are on my to-do list now (especially the latter, since it's supposedly a proper R7RS on BEAM) for that reason, but I haven't gotten around to giving them a serious go yet.

Hi Omri! No.

There are other languages, just read the reference.

Heh. So, I'm doing smart grid stuff, and looking to see just what tools the sector really, really needs.

For some things: Rust. For other things: BEAM/OTP

If you're interested in getting chocolate in your peanut butter, check out Rustler: https://github.com/hansihe/Rustler

OOh. That does cover it. Where I need safety assurance: rust. Where I need resilience for crashes: BEAM. This does fill in the gaps.

Snarly! Suspected as much.

I'll just leave this here, especially for Elixir fans:


Erlang community is really bad.

I went to a couple of meetups at OpenX. It was mostly older programmers.

The topic of the day was how to attract new programmers to Erlang.

I suggested that perhaps we needed to emulate Ruby and have some killer framework, such as Rails -- their community is very vibrant.

The response was no, stay the course, we don't need to do anything. What's the point of talking about attracting new programmers to Erlang if the consensus is to stay the course and do nothing?

This was way back before Elixir was on the radar.

I stopped going to that Erlang meetup; the programmers were out of touch.

I am mighty glad that Elixir and Phoenix are taking off. It shows I was right. The Erlang community can either change its mindset or stay the same.

I wanted to ask if any of you have found a market for Elixir devs and/or what you think the state of it might be.

There was a lot of chatter in a post a few weeks ago about Ruby shops adding Elixir to their toolkit. Erlang remains a niche, sadly, but Elixir looks poised to be mainstream.

I feel like Erlang is already mainstream; it's just mainstream in a different setting than Elixir (namely, Elixir seems to be targeting web development, while Erlang has traditionally targeted telecom and related fields). Erlang feels like a niche in comparison because there are a lot more companies that need websites nowadays than there are companies that need to build their own nine-nines-capable telecom systems.

It's developing. I've seen a number of large companies starting adoption on a small scale and searching for devs: GoPro, Adobe, Pinterest, Rakuten. Bleacher Report is a major user and has open positions. I am sure this is not a complete list.

Three days later, 146 comments, and no love for Lua on the BEAM?

yes! and it's awesome! (=
