An Elixir adoption success story (thegreatcodeadventure.com)
268 points by lucis on June 29, 2021 | 200 comments

We use Elixir 24/7 on all projects. None of the new programmers who ever worked with us knew Elixir beforehand.

And all of them picked it up in a couple of weeks to a level where they could start making changes to code.

I think we are overestimating the amount of time it takes to learn a new language.

The hardest thing to grok in FP is immutable data.

Once you get past that, you're rolling.
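For anyone unfamiliar, a minimal sketch (my own illustration, not from the thread) of what immutability looks like in practice in Elixir: every "update" returns a new value, and the original is untouched.

```elixir
# Immutable data: "updating" a map or list returns a new value;
# the original binding still refers to the old one.
map = %{count: 1}
updated = Map.put(map, :count, 2)

map.count      # => 1  (original untouched)
updated.count  # => 2

list = [1, 2, 3]
longer = [0 | list]   # prepending builds a new list, sharing the old tail
length(list)          # => 3
length(longer)        # => 4
```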

But the speed and concurrency are no laughing matter. Miles and miles ahead of Ruby, Python, etc. in that regard.


Spin off a background process from a web request when you don't need anything back.
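A minimal sketch of that fire-and-forget pattern, assuming a plain `Task.start/1` (in a real Phoenix app you would more likely use a `Task.Supervisor`):

```elixir
# Spin off a lightweight BEAM process and don't wait for a result.
# If it crashes, it doesn't take the request process down with it.
{:ok, pid} = Task.start(fn ->
  # slow work you don't need an answer from: audit log, email, webhook...
  Process.sleep(10)
end)

# The caller carries on immediately.
is_pid(pid)  # => true
```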

Basically eliminate Redis or caching.

Just need PostgreSQL/MySQL.
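The "eliminate Redis" point usually refers to ETS, the in-memory key-value store that ships with the BEAM. A minimal sketch (table and key names are my own):

```elixir
# ETS: in-memory key/value storage inside the VM -- no external cache server.
table = :ets.new(:cache, [:set, :public])
:ets.insert(table, {:greeting, "hello"})

[{:greeting, value}] = :ets.lookup(table, :greeting)
value                          # => "hello"
:ets.lookup(table, :missing)   # => [] (cache miss)
```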

If you're wild-eyed you can use Mnesia without a database at all.

Run jobs across a cluster sanely with a good job library that only needs PostgreSQL.

The story goes on and on. Unless you have tons and tons invested into what you're doing right now, it makes a lot of sense to start to spin up things on the edge of your monolith or SOA with Elixir.

New projects should be started with Elixir.

The idea that it's "hard to find programmers" does not really stand up, because anyone who can't grok a new programming language in a short time is not really a good programmer.

> The idea that it's "hard to find programmers" does not really stand up, because anyone who can't grok a new programming language in a short time is not really a good programmer.

This this THIS! I have a profound philosophical disagreement with CS professors (usually professors who retired from industry into academia) who think that part of the job of the CS curriculum is to equip students with a language and think that learning multiple languages will dilute their focus or some bull crap like that. Learning new languages is not hard! It is much better to learn how to learn new languages, than it is to learn one language really really well. Learning multiple languages will give you the mental tools to use other languages to their fullest extent.

We had an assignment in our junior/senior networking class in college that was to be done in Python.

Someone raised their hand and said we've never had a class about Python, how are we expected to do the assignment? The professor said "If you can't figure this out, you really shouldn't be able to graduate". The dude groaned, but I really agreed.

Now the same thing happens in job postings all the time. Companies looking for specific things rather than smart people. It happens on the job seeker side too. I understand not being interested in something, but say a job was in Elixir. I wouldn't hesitate to apply, but you'll see tons of responses on twitter that are like "oh I'm a JS guy" or something. Who cares! You'll figure it out.

Well, I kind of agree but you also need a home base language, one that you wrote a lot of code in to drill those ideas into your head. Without writing a lot of code and coming across various types of computing problems, you won't be able to go past syntax in other new languages.

If I know how closures work, then I can pick them up in Lua or Go or JS very easily, all I need is to figure out the syntax, I already know the computational pattern. On the other hand to get to know how closures work I need to write a lot of them and use them in many scenarios.
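In Elixir, for instance, the computational pattern is identical; only the syntax differs. A minimal sketch:

```elixir
# A closure: the inner function captures `n` from its defining scope.
make_adder = fn n ->
  fn x -> x + n end
end

add5 = make_adder.(5)
add5.(10)            # => 15
make_adder.(2).(5)   # => 7
```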

You kinda need that 'one' language to get the point where you can learn multiple languages well. My cs courses kinda sucked, but one good thing they did was to teach me C, C++ and Java pretty extensively.

At some point, that 'one' language can just be the one you work with at your job. It's not required that you like it. My main language is TypeScript on the front end. I hate it with a passion (tbh, it's the JavaScript part and the terrible ecosystem; TS itself is pretty smartly designed) and it will never be a requirement (on my side) for my next company to use TypeScript. Even working on the front end is not a requirement, although I mostly did only that for years.

And what made me a better TypeScript programmer was hacking on some F#, C#, Elixir, Rust, Python… Not even real projects, just little katas and a lot of documentation reading.

In my experience, becoming a specialist in a language can even be dangerous if your work doesn't consist of "framework code", because the more you know about the machinery, the more you are tempted to write code that nobody else will understand. Of course, that's a trend I've seen, not a general rule.

> learn how to learn new languages

That's unfortunately not taught at university. People who are already quite bright will pick this skill up naturally or by osmosis. People who don't know how to learn often struggle.

At no stage in the K-12 curriculum is learning how to learn taught systematically.

This is true.

I have had Java professors who never explained subtyping or interfaces properly.

Without understanding the basic concepts that underlie programming languages, mechanically teaching syntax (with language-specific OOP buzzwords) is guaranteed to confuse students when they go on to learn new languages.

Rather, teach them how polymorphism works, and what a virtual function and a function pointer are; any student worth their salt will understand why things are the way they are.

> Without understanding basic concepts

it's even more meta than that.

Learning to learn is not easy, but can be taught. It requires some introspection - what it is that you don't understand about a subject, as well as _why_ you'd be learning it.

In university classes, the professor's aim is to teach the material (despite most of them not really wanting to teach; it's merely a requirement of being a professor), so a lot of them just teach the material directly (i.e., present facts).

This sort of teaching style only works if the student is already a sponge and can remember the material. For students who aren't interested but whose course requires it, learning is stunted because there's no context for the student to grab hold of.

And then, the order in which the material is presented makes a lot of difference. Teaching OOP, in my experience, tends to start with inheritance, how the language (Java, say) treats inheritance, and so on, introducing more of the Java language as the course goes along.

But they don't teach the surrounding context, like how it compares to C++, or haskell, or LISP. Learning like this is like learning how to walk on stilts.

I can't say I fully agree. Maybe for scripting languages. Some projects require knowing the intimate details of a language and its sharp edges, and that knowledge can take years to accrue.

Wholly agree.

Really, after you know one language well, you know them all instinctively. Picking up Elixir coming from Python was very smooth for me.

> after you know one language well, you know them all instinctively

This isn't strictly true. Elixir, while functional, isn't completely pure and has a dynamic type system which definitely lowers the learning curve.

Developers who know one language well are likely equipped to pick up a language like Haskell faster than even a mathematician, but I can't imagine that a Java one-trick would pick up Haskell (or APL, or Scheme, etc.) "instinctively".

I don’t really understand how a type system in and of itself raises the learning curve unless the language commonly uses esoteric types. You still have to understand your data structures in dynamically typed languages; does it really add much cognitive overhead to declare what they are?

A common example I use is an Abstract Syntax Tree, which is essentially a mutually recursive polymorphic tree. In Elixir you might model this with a tuple labelled by an atom. With Haskell you'd use ADTs and pattern matching, which are easy to pick up because they're first class in the language but they're still something to learn. In Java, your understanding has to be a bit deeper; you'd have to learn about abstract classes. In C, you'd need to understand forward declaration and tagged unions.
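To make the Elixir side of that comparison concrete, a minimal sketch of an arithmetic AST as atom-tagged tuples, evaluated by pattern matching (the node shapes here are my own illustration):

```elixir
defmodule Eval do
  # Nodes are tuples labelled by an atom: {:num, n}, {:add, l, r}, {:mul, l, r}.
  def eval({:num, n}), do: n
  def eval({:add, l, r}), do: eval(l) + eval(r)
  def eval({:mul, l, r}), do: eval(l) * eval(r)
end

# (1 + 2) * 4
ast = {:mul, {:add, {:num, 1}, {:num, 2}}, {:num, 4}}
Eval.eval(ast)  # => 12
```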

Honestly I can't think of an example of anything which is harder to model in Haskell's type system, but I think the Java and C examples demonstrate that you do have to learn different type systems to model more complex examples.

I totally agree that type systems end up reducing cognitive overhead in the long run though.

A static type system often requires a deeper understanding. It won't compile if there's a small type error; by contrast, the same error in a dynamically typed language either doesn't exist or doesn't show up until runtime.
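In Elixir terms, a sketch of what "doesn't show up until runtime" looks like:

```elixir
# Nothing complains about this function when it's defined;
# the type error is latent in the code.
add = fn a, b -> a + b end

add.(1, 2)   # => 3

# This only fails when the bad call actually executes:
# add.(1, "2")  # ** (ArithmeticError) bad argument in arithmetic expression
```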

I honestly struggle to understand what a "small" type error can be. I would be happy to see an example.

By definition, this can only happen if you use the wrong type in the wrong place: it can only be a huge bug, or there is some dark magic I don't understand.

I know a lot of languages manage to let you use strings as numbers and vice versa, but I don't see another case where you would voluntarily pass a wrong type. And this is not incompatible with strict typing as long as the language provides auto-casting.

There are a lot of intricacies that can come up. Let's say you specify an int with the wrong precision.

This can lead to a runtime error, or it can make no difference whatsoever. Enforcing that consistency in the compiler is powerful but is also another obstacle to someone who's just trying to get a simple example working.

Just because there is a type error by no means qualifies it as a huge bug. That's why JavaScript and the web stack in general are successful: they fail gracefully. Instead of rendering the website completely unusable, an error perhaps only renders a button in the wrong place.

Contrast this with Scala, where a subtle type error refuses to build at all. If you're green, hunting down the source of that error can be very arduous.

Failing at compile time can be very powerful but it can also add another obstacle to overcome.

I understand what you mean because I come from there (started with PHP and Python more than a decade ago). So maybe it’s just a preference bias on my side but I never thought that a compiler preventing errors added difficulty. On the contrary, I remember that PHP was painful to debug when you sometimes had nothing shown in the browser, no errors at all (tbh, it’s old like maybe PHP 3 or 4) then the same thing happened in JS.

It never happened again once I switched to strictly typed languages and, tbh, you rarely even hit type errors at compile time if you have an IDE that shows you in bright red that you are wrong.

The few times in recent years I had to write some Python, I felt really uncomfortable; it was really hard not to make mistakes and not to just trust my IDE's auto-completion.

My brain is now wired for "if I write something that is not proposed, it must be red".

But again, I understand your point and maybe it’s just me.

Imho, the type system should either be really strict and strong like in Scala or it should very loose like in TypeScript. Anything in between is just a nuisance.

Btw, there are tools other than compilers that can analyze your code for errors. Try PyCharm.

Indeed. Common wisdom says that types REDUCE the cognitive overhead.

It depends a lot on the level of information you are aiming to encode in your type system. It is quite obvious that some basic types reduce cognitive overhead significantly, by helping you remember the exact data structures that you are encoding.

But once you start adding lots of details, such as trying to specify the exact type of a stable sort operation, the overhead starts increasing dramatically. Another example I really love in this area is trying to code a type-safe MxN matrix multiplication that keeps track and validates measurement units for each element in each matrix (that is, it allows a cell in the result matrix to be 0.3m^2*s, but not 0.3m^2 + 0.1m). You'll find that few libraries, even in Haskell or F#, attempt such a thing, just because the types become infuriating and obscure the linear algebra.

I find it an immense reduction in cognitive overhead. You don't need to scan and understand everything surrounding a line of code to understand what's happening if you know the types.

I'm not saying you're going to be productive immediately, but the fundamentals are there and (with effort) you'll be able to get there.

I don't really think this is true. For example, I had quite a hard time picking up kdb+/q, and none of the tens of languages I had previously encountered really helped much.

You don't suppose there's something specific about kdb+/q that makes its difficulty an outlier?

Excel is easy, but if you were restricted to only using the keyboard to use excel, it wouldn't be so easy anymore.

Erlang/Elixir and kdb+/q full-time developer here.

I didn't have difficulty learning kdb+/q. But that's maybe because I had early exposure to APL (I read a chapter on it in a book as a kid).

Also I had a previous work experience in HPC, data-parallel computations and vectorization/SIMD using the array-programming languages such as Matlab, Octave, R, Julia, etc. And GPU Compute using CUDA/OpenCL.

The really hard-to-learn part of kdb is the proprietary frameworks around it. You need them in order to develop production-grade systems.

I guess that tends to happen at lousy universities.

Most of the ones I would consider worthwhile attending do have a polyglot curriculum.

Wow, am I grateful for my CS curriculum now; I didn't know such professors existed. Our first class focused on learning multiple languages: we did Rex, ISCAL, Java, and Prolog. It was intentionally designed to give you a feel for the scope of programming-language paradigms. The follow-up course was data structures in C++, and after that every class pretty much used whatever language the professor felt like. The department's official stance was that what mattered was concepts, not languages, and that any major could get proficient enough in any given language that, while they might not be able to develop good code in the large, they could learn the core concepts the class was pushing.

Except C++, fuck C++. My favorite line from a professor was "Any course taught using C++ always turns into a course ON C++."

So many people talk about Erlang/Elixir's insane concurrency… but I'm struggling to reconcile the talk with what I find in stress-testing and benchmark results.

Can you comment on why so many benchmarks show Erlang/Elixir performing significantly worse than other languages like Go?




EDIT: why the downvotes? I'd rather you post a comment to me so we can have a healthy dialogue.

Just woke up, so multiple things.

First of all, some context. The BEAM community does not engage heavily with these benchmarks compared to a lot of other communities, which means this code is definitely not optimised. On top of that, benchmarking makes it hard to do what the BEAM community loves above all else, which is to put the thing in prod and then tweak performance based on what you discover there. The reason they do it so much is that the BEAM comes equipped for exactly that.

Second point: as noted elsewhere in the thread, the BEAM optimises heavily for latency and graceful degradation. One important thing: if you look at the latency and its standard deviation here, they are super low. What this means is that the BEAM system will still be highly reactive, and interactive, under load. But it also means we are nowhere near the max load it can handle in these benchmarks!

If you wanted to compare properly, you would need to overload the machine so that this latency climbs. A latency this low means the machine is not overloaded. It may be at 100% CPU, but it is totally coping. The BEAM is basically better at exploiting the machine to its limits.

The reasons are multiple, but basically you should consider that these benchmarks are not making the BEAM sweat. This load is just a normal day for a BEAM setup, not one that would make you auto-scale.

In that light, it is a far different picture, no? Happy to discuss.

This comment lower down points to this too, in a slightly different way that I deeply agree with: https://news.ycombinator.com/item?id=27683999

Someone else alluded to this, but Erlang as a runtime prioritizes -latency-. Most languages prioritize -throughput-. These are fundamentally at odds with each other (oversimplified, but: to maximize throughput you want things queuing up; to minimize latency you don't). This also isn't to say different versions of libraries and things don't muddy it some too, but look at the latency tail.

From those techempower benchmarks, look at elixir-plug-ecto. 2.8 ms max latency, with an average of 1.0ms latency. That average latency is very respectable (but not the lowest), but that max? That max IS the lowest.

What do you want from a web server? Something super fast for 99% of requests, and then takes orders of magnitude longer for the worst case, or something that is somewhat slower on average, but stays predictable?

And as someone still else alluded to, those have to do with the runtime characteristics. How easy is it to write reasonably performant, correct, concurrent code? Erlang (and Elixir) makes it very easy; I would argue easier than any other language.

The Erlang ecosystem is centered around creating tools for dealing with concurrency. Concurrent programming concerns itself with trying to represent a certain set of semantics in which a program must deal with multiple requests that overlap in time. A concurrent program could ultimately be executed on a single logical processor via some form of time sharing. It just has to be able to deal with multiple requests which can overlap in time rather than neatly coming one after the other. derefr's comment that is a sibling to yours is a good example of pointing out Erlang's strengths when it comes to expressing concurrent semantics (notice nowhere in derefr's response is there any talk of performance).

However, Erlang, as you have correctly pointed out, is not very good if you only care about parallelism. Parallelism is not concerned with semantics, but performance; it concerns itself with trying to make a given computation faster by using multiple physical hardware resources but maintaining the same semantics as a hypothetical non-parallel version.

Your argument is a cogent one for why Erlang is not great if you only care about parallelism. Being able to scale a program to use 20 cores is no good if the same program written in another language could beat the pants off that program using just one core. However, that is an independent concern from concurrency.

Would you say, then, that Erlang has low latency and a low standard deviation in how fast it will respond to any request under any amount of load?

It might not have the highest transactions per second, but for those transactions it completes - it’ll do it fast and with the same low latency.

There are other languages that can perform more transactions per second, but as load increases, their latency and standard deviation grow exponentially.

(This is seen in the stressgrid benchmarks).

I'm not the best person to ask about Erlang's performance characteristics. I know them in broad strokes from light reading about the BEAM VM but I've never written any Erlang that has made it out to production, only for the smallest of hobby projects. So I don't know, e.g. what the P99 latency of a given Erlang program might be.

That being said, my point was that concurrency is an independent topic from runtime performance (including latency), which is the domain of parallelism. Erlang's big selling point is it makes it possible to write concurrent programs whose functionality (not performance) would require oodles and oodles of code and discipline in some other languages.

EDIT: On reflection I do think if what you're talking is catastrophic latency overruns caused by cascading queuing failures then Erlang can indeed help make it easier to avoid those pitfalls.

> Would you say, then, that Erlang has low latency and a low standard deviation in how fast it will respond to any request under any amount of load?

Yes, that's exactly correct. Erlang/Elixir server apps aren't the fastest around, but their latency is very predictable and they remain responsive under load, unlike programs written in most other languages.

That's the main selling point of the BEAM VM.

I'm curious how weird that's going to get with the JIT, as processing time becomes more variable.

It's likely there will be some more wildly varying latency figures until all hot paths are JIT-ted, kind of like it is with Java.

All code is compiled to its final machine-code form at start-up, so there's no warm-up. It's closer to an AOT compiler than most JITs.

Thanks for the correction, I was operating under a false assumption.

Good to know!

> Your argument is a cogent one for why Erlang is not great if you only care about parallelism. Being able to scale a program to use 20 cores is no good if the same program written in another language could beat the pants off that program using just one core. However, that is an independent concern from concurrency.

I think this sells it short. Serious software has to cross machine boundaries at some point. There are pieces of code and styles of writing them that work really well on one core. They may also do okay on one machine. But once you have to go to two machines they become a huge burden. A liability.

Part of the way people like IBM used to make tons of money was off of people who had software that could not cross the one machine threshold. They sold really, really big machines. They sold unobtanium that let you continue to scale your one machine vertically. Like $20,000 hard drives (1990's dollars) that were essentially battery backed RAM, meant for things like putting your WAL on to speed up transaction commits per second.

Speed doesn't mean much if it's taking you in the wrong direction.

The stressgrid benchmarks look bad because they forgot to turn off the "spin your CPUs" setting. That optimization is really important for some lower-performing platforms of yesteryear (which the Erlang VM does need to continue supporting) but basically irrelevant for modern platforms, and actually bad if you're getting charged CPU credits. However, it's a command-line switch away. I don't know if the default has been changed in more recent BEAM versions (I feel like it was, around the time they were changing defaults to favor cloud deploys, like setting up concurrency to detect cgroup vcores instead of machine vcores).
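For anyone searching for it, the switch being described is, to my understanding, the BEAM's scheduler busy-wait threshold. A sketch of turning it off in a release's vm.args (the dirty-scheduler variants are included on the assumption you want them off too):

```
## vm.args -- disable scheduler busy waiting
## (normal schedulers, dirty CPU schedulers, dirty IO schedulers)
+sbwt none
+sbwtdcpu none
+sbwtdio none
```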

The stressgrid folks did an RCA and posted the explanation in a subsequent article, but most people rushing to shoot down Erlang/Elixir don't get around to reading it.

Extracted comments from the article:

> we discovered that Elixir had much higher CPU usage than Go, and yet its responsiveness remained excellent. Some of our readers suggested that busy waiting may be responsible for this behavior.

> c5.9xlarge and c5.4xlarge instances show similar results in responsiveness, and no meaningful difference with respect to busy wait settings.


What is the "spin your CPUs" setting you're referring to, exactly? It would be good to share for others to know.

Also, note that the stressgrid link you shared was published BEFORE the benchmark links I shared from stressgrid. As such, wouldn't those benchmarks already take your explanation into account?

Busy wait spins your CPUs.

I don't know for sure, but the benchmarks you linked do some more RCA, which involves looking into why cowboy2 is not great (tl;dr HTTP2 support is hard)

Also, here is how stressgrid got a POC (possibly too dangerous for prod?) to 100k cps, which is on par with what Go can do in the article you linked: you could recompile the kernel and configure Ranch differently; it would probably need to be proven out more:


Your comment:

> "the stressgrid benchmarks look bad because they forgot to turn off the "spin your cpus" setting"

stressgrid blog post:

> "When running HTTP workloads with Cowboy on dedicated hardware, it would make sense to leave the default—busy waiting enabled—in place. When running BEAM on an OS kernel shared with other software, it makes sense to turn off busy waiting, to avoid stealing time from non-BEAM processes."

stressgrid seems to recommend the opposite of what you mention, taken from the blog post you shared [0].

Am I misunderstanding?

[0] https://stressgrid.com/blog/beam_cpu_usage/

You are. I'm saying turn it off unless you're on legacy hardware that is known to need it (most people are in the cloud). Unless it's already off by default (I don't know off the top of my head)

The perf looks bad because they are also looking at CPU usage.

Getting to 100k rps is unrelated to busy wait.

I feel like you're making an orthogonal statement related to cpu usage and the wait/spin settings for "cloud hardware".

CPU usage has no bearing on transactions per second, so why are you bringing it up?

Spin/wait settings: how is "cloud hardware" different from "legacy hardware"? Both are servers located in a data center. Furthermore, the blog post goes on to say that if you aren't running any other non-Erlang services (your box is dedicated to Erlang), you should KEEP the default spin/wait setting, since it won't cannibalize other services running on that box... which is exactly how these benchmarks were tested.

The fact of the matter is, Erlang had lower transactions per second than most of the languages benchmarked. My insight from this HN discussion is that where Erlang shines is having consistently low latency/response time. It's not the fastest, but it's the most consistent even under load.

That's still really understating Erlang's advantages, though. Erlang is a lot easier to write working, custom concurrent and distributed systems in because of the design of its primitives and runtime. Erlang has a bunch of different advantages, all related to controlling disparate pieces of software under one roof, so it's not easy to summarize why I love working with it in one comment.

In general, I think blog posts like this do a pretty bad job of explaining it. At some point I'll publish my take and hopefully it'll reach the front page here.

Elixir does not have a fantastic computational story. That's why it has NIFs, to bring in things like C or Rust to deal with the math stuff.

Languages like Go, C, Rust will always beat Elixir/Erlang in the computationally intensive benchmarks.

You would choose Elixir/Phoenix/Erlang for the concurrency and networking story.

But note, all of the benchmarks I posted in my parent post (especially the first two) are concurrency workloads, not numeric. And Erlang still performed noticeably worse than other languages.

> Erlang still performed noticeably worse than other languages

I think you need to define "worse" here... unpredictable spikes in latency will give you plenty of headaches when trying to guess how much hardware you should throw at a service. Erlang's consistent latency here is what I would choose above everything that benchmark shows, for almost every problem I've ever solved.

Going fast at all costs is not a desirable trait for my software and I suspect it isn't for most peoples software. I want predictable behavior that operates gracefully under extreme circumstances.

In the 200Xs, the BEAM VM had a clear performance advantage in dealing with high-concurrency network loads. It has never had a raw performance advantage in terms of the bytecode that it implemented, that is, the Erlang/Elixir layer (it was generally faster than Python/Ruby, but that wasn't saying much, especially back then, but it had clear performance disadvantages vs. C/C++/Java), but it had a superior internal runtime that could make up for that in performance benchmarks, as long as you didn't try to run too much BEAM bytecode. Much like how NumPy is very fast, as long as you don't try to run too much pure Python with it.

However, since BEAM doesn't have access to unique CPU instructions that nobody else has or anything else, and since a lot of focus across a lot of languages has been put on that problem, that particular advantage has waned, and Erlang to my eyes has indeed been outright passed on this front by multiple languages. In the 200Xs, I did not see people talking much about NIFs as a solution for performance; that talk has started as an effort to keep up with things like Go and other languages that have taken advantage of BEAM's lessons and explorations of the space.

Personally, while I think a lot of the hoopla surrounding Erlang/Elixir isn't wrong per se, I do think a lot of it is outdated. They'll say "We do X and nobody else does!" but while that may have been true 10-15 years ago, it isn't anymore. There's no performance reason to pick Erlang/Elixir over Go, for instance, and if you take the models of memory access back to Go there isn't a huge organizational reason either. What Erlang/Elixir force you to do, you can voluntarily do in other languages too. And I think that's become true across a lot of other of the putative "advantages"; it isn't that Erlang/Elixir aren't nice in some ways, but I do wonder how much of the recent push is stemming from people who are experiencing some of these capabilities for the first time and trusting the Erlang/Elixir storyline that they're unique, when in fact they are increasingly just table stakes for a new language nowadays rather than special characteristics unique to the BEAM family of languages.

> What Erlang/Elixir force you to do, you can voluntarily do in other languages too.

This argument is older than dirt. I don't need Java, I can do all of this stuff in C (voluntarily). I don't need Y, I can do it in C++ voluntarily.

You don't control your team. It doesn't matter what you are willing to volunteer if it doesn't work unless the whole team does it. If it only works if everyone has to do it, that's not voluntary now, is it?

Every boundary that is by agreement only will constantly be pushed and pushed. Every time someone wants to leave early, or there's a production issue or a customer deadline, or they just don't want to. It's why size and speed are a constant fight at some places. Everything you fix is counteracted by ten other people who just don't care.

Either the system has to enforce the rule or your coworkers become enforcers. That's a shit job to begin with, and doubly so for introverts, who will either do too little or too much in the face of boundary testers.

'What Erlang/Elixir force you to do, you can voluntarily do in other languages too'.

Like immutability! :P

Jokes aside, I agree for basic concurrency you can get pretty far with other modern languages. I think it's what brings people's attention to Erlang/Elixir, but I don't think it's the most important differentiator. It also isn't the one that Erlang's community (I can't speak to Elixir) really touts, except as one that is easily understood by those outside of it.

The real benefit is fault tolerance. Everything about Erlang, the concurrency and distribution stories included, is built around fault tolerance. You need concurrency and distribution to be fault tolerant (can't have one bad process choking out others; can't have one bad machine taking the service down, etc). The immutability, the supervision tree, those also are about fault tolerance.
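To ground that, a minimal supervision-tree sketch in Elixir (my own toy example, not from the thread): a supervisor restarts a crashed worker from a known-good initial state.

```elixir
defmodule Worker do
  use Agent

  def start_link(_opts) do
    # Known-good initial state: 0
    Agent.start_link(fn -> 0 end, name: __MODULE__)
  end
end

# one_for_one: if a child dies, restart just that child.
{:ok, _sup} = Supervisor.start_link([Worker], strategy: :one_for_one)

pid_before = Process.whereis(Worker)
Process.exit(pid_before, :kill)   # simulate a crash
Process.sleep(100)                # give the supervisor a moment to restart it

pid_after = Process.whereis(Worker)
pid_before != pid_after           # a fresh process, registered under the same name
```

The key point is that the restart logic lives in the runtime, not in application code: nobody on the team has to remember to handle the crash.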

I've written production systems in Go. It scaled better with way less tweaking than the JVM based stuff we'd written previously required. But it wasn't nearly as resilient to failure, or as predictable, as the Erlang stuff I've run in prod was.

The funny thing is that precisely what got me into Go was replacing an Erlang system that was constantly falling over despite quite considerable efforts with a Go system that ran on a fraction of the resources, ran much more quickly, and by comparison was rock-solid. I just ported the essence of supervision trees in to Go and was off to the races.

This is part of what I mean; Erlang doesn't have access to special CPU instructions that make supervision trees possible. They're just code. There's no reason you can't write them in other languages. It's not even particularly hard, unless you insist on exactly matching all the accidental details of the way Erlang implements them instead of implementing their essential details in a manner idiomatic to the base language.
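A bare-bones illustration of that essence (my own hypothetical sketch, not the actual system described above) is just a restart-on-panic loop:

```go
package main

import "fmt"

// supervise runs worker in a goroutine and restarts it whenever it panics,
// up to maxRestarts restarts. A sketch of the "restart on failure" essence
// of a supervisor; a real one would add backoff, restart-intensity windows,
// and ordered shutdown of siblings.
func supervise(worker func(), maxRestarts int) {
	for restarts := 0; ; restarts++ {
		crashed := make(chan bool, 1)
		go func() {
			// recover() in a deferred func turns a panic into a "crash" signal
			defer func() { crashed <- recover() != nil }()
			worker()
		}()
		if !<-crashed || restarts >= maxRestarts {
			return // clean exit, or restart budget exhausted
		}
	}
}

func main() {
	attempts := 0
	supervise(func() {
		attempts++
		if attempts < 3 {
			panic("transient failure")
		}
	}, 5)
	fmt.Println("worker succeeded on attempt", attempts)
}
```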

"It's not even particularly hard"

Define hard here? Because there is a lot of bookkeeping involved, and, yes, to get some of the effects you have in Erlang that are necessary for reliability, you'd basically have to create your own runtime atop Go. I.e., yes, if you just want to "if this process fails, restart it", you can do that trivially in another language, but "if this process fails -kill these others and restart them from a known good state-" is devilishly hard, given that Go has no way to kill a running goroutine. You can send a message along a channel of "hey, you should stop", but if code execution in that goroutine never gets there, you have no guarantees.

And while the CPU doesn't have special instructions, the VM -does-. Exit signals are guaranteed in the language spec; a colocated supervisor is guaranteed to be able to both detect a failed process, and to be able to kill others. Go offers no such tooling, let alone such guarantees. I'd be quite interested if you say you did that; I suspect it was, as mentioned, just "hey, if an error comes out of this response channel, restart the goroutine". Possibly also a "here's a channel we can send 'kill' commands on, downstream goroutines should check it occasionally to see if they should terminate". A lot of bookkeeping, no guarantees.

I think you're making a mistake a lot of Erlang/BEAM/etc. (let me call it just Erlang after this) advocates make, which is to conflate Erlang's solutions to problems with the only solutions to problems. Almost no software is written in a language that has the ability to actively kill running threads externally. This is not a catastrophic problem that causes the rest of us to routinely break down in tears; it is a thing that occasionally causes bugs, merely one on a long list of such things. On the scale of problems I have, this isn't even in my top 50. When systems are written with that understanding, it's only a minor roadbump. So the fact that I don't have Erlang's exact solution to that problem isn't even remotely worth me switching (back) to Erlang for.

Is it a problem? Yes, absolutely. Is it a problem worth spending extremely valuable language design budget on? Heck no, and the fact that Erlang does is a negative to me, because what they gave up to get that capability is way more important to me.

Does my solution exactly match Erlang? No. Of course not. But it gets me 90% of what I care about for 10% of the effort, and in the meantime I get the other things that directly impact my job on a minute-by-minute basis, like an even halfway decent type system (it's not like Go's is some sort of masterpiece here, but it's much better than BEAM's), which Erlang sacrificed as part of its original plan. I understand why they have the type system they have, and what they got out of it, and I'd rather have a decent static type system and solve those problems another way, which happens to be the conclusion pretty much everyone else has come to as well. Again, genius thinking in the 1990s, way ahead of everyone else, don't let my current assessment of Erlang diminish the fact I deeply respect what it did in its time... but not a solution I have much interest in in 2021.

The idiomatic solution in Go would be to use Kubernetes, but that comes with an increase in operational complexity.

> The funny thing is that precisely what got me into Go was replacing an Erlang system that was constantly falling over despite quite considerable efforts

As the meme goes, Will Ferrell takes a deep puff from his cigarette and says "I don't believe you".

I agree that Go has a better type system than Erlang/Elixir/BEAM languages in general. Absolutely. But I think you're showing bias and were already looking for an excuse not to use Erlang. That's completely fair but I think you are unfairly misrepresenting the exact merits of the decision.

> This is part of what I mean; Erlang doesn't have access to special CPU instructions that make supervision trees possible. They're just code.

Everything is "just code", dude. Yours is no more an argument than technologically advanced aliens visiting us and exclaiming "How come you don't have super-alloys that can withstand atmosphere entry without losing atoms? It's just chemistry!"


I am getting the vibe that you're one of those super-programmers that can tinker with everything and the computers have no secrets from them. That, or you are bragging too much.

And you should know something else -- I am not a huge fan of Elixir these days. I've worked with it for 5 years but it's showing some cracks and a lack of community (and core team) attention in critically important areas for the ecosystem's advancement -- like compiler instrumentation, or tooling to modify code in an automated way -- not to mention the dynamic typing. I get it, it's far from the magic many fans make it out to be.

But you are discounting very real advantages in a very dismissive manner.

For the record, if one Rust runtime gains most of Erlang's OTP capabilities tomorrow then I'll switch to Rust for 100% of my work next week. But nobody has surpassed OTP's capabilities.

Finally, I'll also agree that we don't need 100% of OTP -- that much we are in complete agreement on. But as you yourself pointed out, cancelling running background threads is still mostly an unsolved problem so just scoffing at a technology that has mostly achieved that is a very uncharitable take that makes me question your other arguments and wonder if they are not emotional arguments.

I am by no means a super-programmer. On my non-humble bragging days I'd say that I'm only very slightly above average. But I've tried to duplicate OTP in at least 3 other languages and I failed miserably every time (Java was one of them). So yeah, don't just say "meh, I can invent OTP everywhere else". No. You really can't. If you can, open-source this effort and I'll donate, I promise.

the elixir community is quite credulous. particularly when it comes to circa 200X era wisdom. people talk about the vm being the key to elixir/erlang and talk up things like lightweight green threads, message passing and the garbage collector but the truth is these are all of fairly low quality compared to other competing languages/implementations

the real key for erlang and later elixir's success was pervasive async io. this was a genuine advantage during erlang's peak but languages, runtimes and libraries like go, node and nio have caught up and surpassed the erlang vm

the truth is without that advantage almost everything in the erlang/elixir world is worse than more mainstream alternatives. there's some exceptions -- ecto is pretty good because it has one of the best written connection pooling implementations i've ever seen. i think both languages are pretty good and i still write the occasional thing in erlang for my own satisfaction but the world has moved on and elixir and especially erlang haven't innovated enough to have kept up

> these are all of fairly low quality compared to other competing languages/implementations

Seriously? You say this without providing examples?

I'm not aware of any other programming environment that has BEAM-like processes.

Not Go, not Java, not any other system I know.

almost every language has lightweight cooperative threading (or green threads) available these days. go calls them goroutines, c# and ruby fibres (altho i think ruby removed them, ultimately?), python has stackless, rust has tokio, julia has tasks, the jvm has like 4 competing implementations in akka, kilim, quasar and project loom. windows and linux both have built in os level support for cooperative multitasking (the fibre api and the context api, respectively) that any language can use

these all have slightly different semantics and characteristics, but it's simply not true that beam is doing anything unique here. pervasive links between processes are a somewhat interesting wrinkle of the beam implementation but even that is achievable in other languages with little work

what made green threads work in erlang (and later elixir) was async i/o. in other languages green threads would block on all i/o calls and had no opportunity to yield whereas on the erlang vm all i/o calls would effectively yield while waiting. today nearly every language has async i/o (in libraries if not pervasively) so green threads are much more accessible
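to illustrate with go (a hypothetical sketch, with sleep standing in for a blocking read): a goroutine parked on i/o is descheduled by the runtime and doesn't stall its peers, which is the same effect erlang got from pervasive async i/o

```go
package main

import (
	"fmt"
	"time"
)

// race starts a goroutine "blocked on i/o" (simulated with Sleep) next to a
// fast one, and returns their completion order. The blocked goroutine is
// parked by the runtime scheduler, so it doesn't hold up the fast one.
func race() []string {
	done := make(chan string, 2)
	go func() {
		time.Sleep(50 * time.Millisecond) // pretend this is a slow read
		done <- "slow i/o"
	}()
	go func() {
		done <- "fast work"
	}()
	return []string{<-done, <-done}
}

func main() {
	fmt.Println(race()) // the fast goroutine finishes first
}
```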

i don't say this as an elixir hater or whatever. i genuinely respect erlang's place in history as a popularizer of some of these concepts. when i say the elixir/erlang community is credulous and prone to exaggerating the differences between those languages and other more modern languages (particularly when it comes to implementation) i don't say it dismissively as a reason to abandon those languages. i say it because without an impetus to keep improving, elixir and erlang are going to become increasingly irrelevant. it would be a shame if the 'elixir is different and unique' attitude led to complacency and stagnation

I am not going to engage you on the details that you got wrong (Akka not covering 100% of OTP's guarantees is one example) but I'll just point out something else on a higher, more pragmatic level.

Of all the 7-8 programming languages I've worked actively with in my almost 20 years of career, only Elixir apps had predictable and stable latency even under load. So you know, at one point I stopped caring how the BEAM does it or why the other languages/runtimes aren't doing it. I just started going to the technology that gives me this.

Is Go faster? Feck, absolutely yes! But its 95th percentile latency spikes through the roof under load, while a Phoenix/Ecto app raises its median latency by no more than 20-30% (the worst I've seen is +300%, when the app went from a 15ms median to 60ms, and that only in the 99th percentile of requests) even when the hardware is close to toppling over.

I feel that the raw muscle power of languages is vastly overrated. I'd love to have a Rust with OTP's guarantees because in some places it's literally 1000x faster than Elixir, absolutely. But in a world where we have to choose raw power versus predictable performance (even if that performance is lower than what we can get in other languages), I'll choose the latter any day.

And I am not alone in this. Many teams are choosing Elixir for exactly this reason.


One thing I'll agree with is that other languages have taken notice and are working hard to catch up with the OTP. I'd welcome them in the club once they are there because I hate language wars and I gauge technologies based on their merit. But they are still not there, sadly.

i'm not trying to convince you (or anyone, really) to stop using elixir. i am however encouraging you to engage deeper with the beam vm and its actual properties and compare that honestly to what else is out there. what is it exactly about the beam vm and elixir that lets it achieve these latencies and why is this not achievable in other languages? simply saying the beam vm is better than other implementations isn't an answer to that question

Don't get me wrong, I'd LOVE doing that but my work time and employer priorities don't allow it yet (and might never). And I am starting to get really sick of extending my work time into my free time as well.

> c# and ruby fibres (altho i think ruby removed them, ultimately?)

Ruby did not remove Fibers (in fact, they've recently, in 3.0, been enhanced to optionally be nonblocking, that is, automatically yielding on any operation that would block).

Ruby removed continuations from the core (moved them to stdlib) after adopting independent Fibers way back with 1.9; continuations were the previous mechanism for similar lightweight concurrency.

What about functions as a service, like AWS lambda?

If you use Node as an example, your code is JIT compiled to machine code, any single request can fail, and you can scale to any number of requests without thinking about the underlying OS or VM.

Async/await will allow you to do a "blocking receive" like Erlang's processes.

Nah, BEAM is not suited well for that. It has a big startup time. BEAM is designed for long-running daemons, not "wake up, do small amount of work, get shut down".

For something like this I'd choose Rust or OCaml due to their insanely fast cold startup time (if the program is a CLI tool).

Erlang/BEAM is not there yet and it might not soon be.

Wasn't suggesting to use the BEAM.

OP said he did not know any other "BEAM like environments", but AWS and GCP are "BEAM-like" systems in that they allow you to use distributed hardware computers to achieve scale and fault tolerance.

This sounds true on the surface and many people have argued that e.g. Kubernetes is "OTP but for distributed nodes" but I remain skeptical. The devil is always in the details and I haven't heard many people being very pleased with Kubernetes.

Admittedly Google's Cloud Run is very easy and nice to use though. And fairly cheap.

First of all, the other programming environment like it is Pony. Although that's barely a C-list language right now, very young, one to keep an eye on though.

I'd also cite the real technology I'd use today, which is a heterogeneous set of services in whatever languages I'd like, hooked up by a high-quality message bus. This is the real technology that drives the at-scale Internet. You basically get Erlang's reliability out of that setup when used properly, and you don't need Erlang to do it. In fact you can get a touch more than Erlang's reliability because I find in practice 1-or-n delivery to be much more practical than Erlang's 0-or-1 delivery. It's basically the same environment Erlang gives you integrated, except decoupled, and since all the pieces are decoupled, while Erlang has sat on the same effective point in this space that it picked out 25 years ago, all the decoupled components have been iterating and evolving over that time frame and are now better than what Erlang offers in its integrated package.

Second of all, if in 2021, around 25 years after Erlang and easily 15 years after Erlang has been generally known as a B-list language among language designers, almost nobody else has seen fit to copy it... maybe it isn't that great of an idea. Rust does something completely different, and in my opinion, strictly more useful, albeit at the cost to programmer complexity. I moved to Go from Erlang roughly 8 years ago, and I'm happier, because it turns out "general community good practices + channels" is fine, and also means I can go faster, and get a nicer language in the meantime. All the other modern languages are coming with some sort of concurrency story; it's table stakes for any language born in the last 10 years, if not the last 15.

For the mid 1990s, it was sheer genius. For 2021, it's a very brute-force, inelegant solution to the problem that nobody's very interested in copying. While in the 1990s concurrency was a nightmare and Erlang legitimately had a claim to a better solution, in 2021 there are a good 3 or 4 things I'd use before dropping back to Erlang as a solution. Concurrency is much less of a problem than it used to be, through a combination of various things, and the proposition of burning so much of a language's design budget on that problem is a lot less appealing than it was 30 years ago. Erlang really needs to adopt Go-like channels for some of what it's doing (not in replacement for the processes, but for some of the things they're not very good at), the ~10x slowdown for general logic is a real kick in the teeth in 2021, the lack of backpressure in the Erlang message model becomes a big problem at scale, and there are lots of other little problems I'd have if I had to go back to it. (Yes, I've been reading the release notes. If I weren't I'd have a couple more things to add.)

Erlang/Elixir/BEAM isn't leading the pack anymore. They're a cut behind in most ways now, but the community still thinks they are leading, ensuring that none of the lessons learned by other communities can filter back into the Erlang/Elixir/BEAM community.

Even if I disagreed with you on another comment I'll have to say that I find myself much more in agreement with you here.

Erlang / the BEAM did indeed make a lot of good innovations and I can only be angry at myself for being an idiot pressured by employers and never looking beyond it all for something better (until 5+ years ago anyway). But I agree that some of it is starting to show cracks.

In terms of language design, Erlang (and Elixir) aren't anything special. I can't fall in love with syntax anymore because I've literally never seen a language I completely like (LISP included, although it + OCaml are fairly close to ideal languages if you don't stray too much off of the beaten path and venture into their more arcane constructs, of which OCaml sadly has plenty).

To clarify, I believe Elixir is one of the most solid contenders for writing highly available and reasonably performant Web / GraphQL server apps, but the lack of compiler instrumentation tooling, tooling to modify the AST, and a few other things are definitely starting to hurt it. Having standardized introspection in the language helps it reach higher levels, e.g. have tools that can manipulate an existing project a la how TreeSitter and/or SemGrep can modify/query language-specific constructs. Elixir doesn't have that and I am starting to get annoyed with it because of that.

RE: Using an external message bus makes sense, but let me point out something important that seems to often go unsaid in discussions about Erlang / Elixir:

The BEAM gives you a lot of good training wheels and the truth is that at least 90% (if not 98%) of the commercial projects out there don't require much more than that. As shared in the other comment, I was able to get away with not using Redis for a long time and had zero trouble. I only yielded after we needed to share various message queues and events/streams with other apps (not written in a BEAM language).

So I'd say the BEAM ecosystem gives you a lot out of the box, plus the Elixir community is small but fairly dedicated and they have libraries of excellent quality. But, as you alluded to, when you need to throw those training wheels off, other much more dedicated and focused technologies like Redis do exist and we should reach for them after the circumstances change enough.

Would you agree with those assessments?

I'm not disagreeing with your results, but you should be using https://github.com/giltene/wrk2 based benchmarks to avoid coordinated omission errors in measuring latency.

The new JIT does improve the computational story a little, but yes it's a little like Python where you use it for orchestration and then offload the work. That said new systems like NX do make the 'offloading' part significantly cleaner for some applications.

Here’s a good one to get into deeper comparison with Python, Go and Elixir. It’s one of the few that I’ve seen that does a good job of showing more than just straight line speed.


Note that Erlang OTP 24 (the latest release) includes a JIT for the first time. It only runs on x64 but should significantly improve performance on that platform. For some workloads people are reporting as much as a 40% improvement. I would expect to see some improvement in those benchmarks as a result.

OTP 25 will also include JIT support for ARM64

Concurrency vs performance, not concurrency is performance.

> The hardest thing to grok with FP is immutable data.

A lack of understanding how concurrency works can really screw you up in Elixir.

> Task.start(...)

I almost never use task, because I like certainty. I've been bitten too badly by people using it who didn't understand it. I don't like having to explain that `Enum.each(tasks, &Task.start/1)` is going to screw up your order of operations to someone who doesn't get it.

> Basically eliminate Redis or caching.

It's easy to start off thinking that, but then you can end up spending a lot of time maintaining subpar libraries you've written in-house that don't have the nicer features of the thing you replaced. Also, caching can turn into quite the memory hog.

> Run jobs across a cluster sanely with a good job library that only needs Postgresql.

It's all fun & games until you have to undo everything so you can containerize the apps.

> If you're wild eyed you can use Mnesia without the databases.

I've used DETS & hate it quite a bit, I really don't want to be married to Mnesia.

> New projects should be started with Elixir.

I love Elixir, but I wouldn't use it for everything, and it's got a lot of sharp edges that can get you into a lot of trouble.

>> Task.start(...)

> I almost never use task, because I like certainty. I've been bitten too badly by people using it who didn't understand it. I don't like having to explain that `Enum.each(tasks, &Task.start/1)` is going to screw up your order of operations to someone who doesn't get it.

I don't understand this. In any language with any concurrency support there's a point where you spin up a bunch of threads to do something. That's a useful capability and developers should understand it. If you don't know Elixir then you need to learn what Task does, but that's true in any language.

> In any language with any concurrency support there's a point where you spin up a bunch of threads to do something.

No, that's not true. Think javascript (and I hate javascript): you can do concurrency even though you literally don't have a way to spin up a thread in the browser.

You actually can spin up a background thread in the browser now using web workers[0]. Although, like you mentioned, you don't actually need threads for concurrency. The event loop handles concurrency even in a single threaded environment.

[0] https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers...

Erlang was originally designed to have a great concurrency model that ran on a single core CPU! Preemptive scheduling is super cool!

> Task.start(...)

Uhhh, `Task.async_stream` is much superior in almost every way -- you can specify maximum concurrency, a timeout, and whether you want the results in the original order. You only have to append `Enum.to_list()` at the end (or `Stream.run()` if you don't need the results) and boom, you get the parallel work's results handed to you in a variable assignment.

Using naked `Task.start` is only marginally better than spawning raw OS threads and I avoid it like the plague. Thought this was common wisdom but apparently it isn't.

> It's easy to start off thinking that, but then you can end up spending a lot of time maintaining subpar libraries you've written in-house that don't have the nicer features of the thing you replaced. Also, caching can turn into quite the memory hog.

All true, although `cachex` and `ane` are extremely well-done libraries that have carried me a long way. I only gave up on them when we had to integrate the Elixir server app with apps written in other languages, at which point we started (ab)using Redis' streams and normal caching abilities.

> It's all fun & games until you have to undo everything so you can containerize the apps.

Also true, but that depends on your app and DevOps requirements. A lot of businesses don't need auto-scaling of their Elixir apps. They just conservatively buy good-enough static hosting and it serves them just fine. In 5 years working with Elixir I have never seen hosting that was bogged down by Elixir CPU constraints. 98% of the time it waits on I/O, be it disk or network.

> I've used DETS & hate it quite a bit, I really don't want to be married to Mnesia.

Agreed. This was a good idea 20 years ago, nowadays I'll reach for PostgreSQL or sqlite3 without a second thought. No need to reach for a homegrown half-database where scaling and actual querying become a problem the moment you reach out of hobbyist app territory.

> I love Elixir, but I wouldn't use it for everything, and it's got a lot of sharp edges that can get you into a lot of trouble.

Sadly I'll agree with this as well. I love Elixir and it will have a special place in my heart all the way to retirement, but I am seeing its downsides and I am not religiously advocating for it. Nowadays it's much more likely for me to reach for Rust for new projects, especially if the project isn't a web app (and even there `actix_web` 3.X and `rocket` 0.5 are super solid and fast as well).

> But the speed and concurrency is no laughing matter. Miles and miles and miles ahead of ruby, python, etc in that matter.

People always make this comparison, which I feel is a bit weak/unconvincing. It's pretty easy to beat Ruby or Python in concurrency; most language runtimes do it.

Something most people might not realize is that Erlang/BEAM is miles ahead of even Java/the JVM, when it comes to operationalizing its concurrency — that is, building services that are robust in the face of heterogeneous workloads and misbehaving clients.

Ever tried to build a RESTful web service that performs unpredictable-runtime tasks in response to user requests, but which budgets that runtime such that requests are hard-killed (and their resources freed) after a deadline, and/or if the client closes their TCP socket — even if they're in the middle of some CPU-intensive hot loop — and which makes sure to "push down" that failure into resource-handles like DB sockets, such that the DB "sees" the failure and gives up on its related long-running CPU-intensive task as well?

This is a Hard Problem in Java, with a huge number of little considerations involved: lots of calls to Thread.interrupted; async requests; CompletableFutures; moving any parallel streams to their own explicit ForkJoinPools, etc. You can see the Project Loom team slowly giving the Java stdlib a working-over in this style, but even after their work becoming Generally Available, library authors will still need to do all this stuff as well.

In Erlang/Elixir, meanwhile: 1. the accepted TCP connection is a process, 2. the web request runs either inside it or in another process linked to it, such that either one dying kills the other, 3. any concurrent Tasks get linked to the TCP connection process that spawned them as well; 4. any DB calls temporarily link the checked-out DB connection to the request process as well; and 5. adding request deadlines is as easy as calling a function to spawn a deadline timer pointed at the spawning process's own head.

Essentially, idiomatic Erlang/Elixir code gets this type of robustness for free, with zero additional lines of code. The moment you have an Erlang web server, you have a robust Erlang web server.

(Erlang/BEAM is also miles ahead of Golang in operationalizing concurrency, for separate reasons, mostly to do with goroutine resource leaks. Separate topic, but I just figured someone would ask.)

And then there's the fact that per-process heaps mean most short-running request-response type workloads can get away with never making a garbage-collection call, instead just allocating a pre-sized arena with each process, doing work, and then tossing away that arena when the process quits. So, as long as you've built your Erlang/Elixir app idiomatically, doing each task in a process that doesn't survive the lifetime of the task — then you'll never see the same sort of gradual "putting off GC for later" asymptotic allocated-memory climb that you see in a most GCed languages.

This comment has made me realise that one of the key ideas of the Erlang runtime (over other concurrency implementations) is the link. For those of you not familiar with it, rather than just starting a process independently, you can start a process linked to your current process. If either of you die, the VM kills the other one.

This sounds like a fairly simplistic primitive operation, but what it means is that you can start a whole set of processes that are logically linked together (e.g. because they handle one web request) and if anything happens they all get killed together. This means you get almost the simplicity of error handling of throwing an exception in a single thread of code, but with all the concurrency, parallelism and locality of reference that you get with multiple actors doing the work.

It does sort of seem like the majority of people who understand this end up at the sort of company that is willing to try out a new tool.

I think in a lot of ways StackOverflow has changed this dynamic. Previously I benefited from memorizing all of the API functions that return slightly different responses than their peers. This one returns null instead of an empty array. This one mutates the second argument. This one has n^2 complexity, and so on.

Now, after doing Java, Javascript, Ruby, lots of bash, a bit of Python, and a few sundry other things, I don't trust my memory. I go check every time, because it's cheap enough to find the comments that say "be careful".

What I need most is a toolbox full of Important Questions. Important Questions scale logarithmically across languages and many problem domains. Important Answers do not.

elixir vs clojure? f#?

Clojure if you need JVM integration. F# if you need to run on Windows (the BEAM runs under Windows, but it's to the best of my recollection not a point of emphasis). Elixir if you want a robust concurrency story and a rock-solid way of handling errors without lots of code cluttering up your business logic.

I'd pick Erlang myself, but Elixir is compelling.

Just to clear up a common misconception, you can run F# on Windows, Linux, or macOS. I worked at a company for two years writing F# and never touched a Windows machine.

I’ve had no issues running the BEAM on Windows.

I see it’s running now using WSL; I don’t believe that was the case when I worked with Erlang years ago, but it’s pretty fuzzy at this point.

Not sure about that particular point, but I will agree from experience that working with Erlang on Windows years back was painful, and that it is relatively pain free nowadays.

Many of these things are not exclusive to elixir though. Scala and Akka come to mind

I find it strange how always the "mainstream" and "not popular enough " arguments comes up in any Elixir related post.

Does tech have to be "mainstream" to use it? I would say it doesn't.

Take Erlang, or OCaml, or F#, I wouldn't call those mainstream but they are great languages that solve real problems, and people using them seems generally very positive about them.

All you need is enough popularity and enough companies using and backing it for it to be maintained. I think Elixir has this.

Can't speak to the hiring situation but I wonder how problematic it can be given all the stories of how easy it is to onboard people and plenty of posts and comments from devs wanting to do Elixir. Even then, it depends on your situation: you don't always need a big workforce to do big things, see WhatsApp for example.

FWIW I think that this is because it was the main selling point of adopting Go instead for many companies. I was at a company 6 years ago that was looking at introducing a language with a better concurrency profile. It was between Elixir and Go.

Go ended up winning out because “Google makes it more mainstream”. That was the crux of the decision.

Don’t get me wrong, Go has plenty of selling points but there were a lot of supporting articles making that claim.

Denouncing Elixir as “not mainstream” is probably just a result of the built in defensiveness from people who argued against it for a while.

Elixir’s fantastic. It has made me a much better programmer.

I mean, being mainstream helps. More blogs, more Stack Overflow answers, more battle-tested libraries. A Java or Python business is 100% positive it will find devs 20 years from now and that the ecosystem won't die. PHP, Go, Ruby? Let's say 80% sure. With Elixir it's very hard to say what's going to happen. It's quite possible it will go the way of Perl eventually. I don't know what F# or OCaml are good for, but if you are starting a business you need a pretty compelling reason to use those instead of something mainstream. Being mainstream helps a lot; I wouldn't underestimate it.

My point is that there is a place for technology that is not mainstream, and that there are plenty of (successful) projects built on things that are not mainstream.

In my opinion, "mainstream" is not a good factor to select technology on. The factors you mention can be important, but then select based on those, not on "mainstream" or not. If you need longevity, pick something based on that; if you need tons of resources, select on that. But not a lot of projects need to live 20 years, and plenty of people prefer well-written docs over an abundance of Stack Overflow answers.

Also mainstream does not mean longevity. Take for example Python 2->3, that was quite stressful to say the least. Plenty of once mainstream languages and technologies are dead or slowly dying, it's very hard to predict.

Continuing on the longevity point, it might be good to note that Erlang/BEAM, which Elixir runs on, is older than most languages you mention, and is still actively used, maintained, and developed. So I could very well argue Elixir/Erlang is as safe a bet as any of them.

Lastly, I don't think Elixir is so niche that this should be a discussion point at all. If you have giant companies running and backing Elixir/Erlang, it's pretty safe to say it's battle tested enough and will be around long enough for a lot, if not most, projects.

Mainstream isn't everything and I wouldn't overestimate it; there are better ways to choose technologies.

Elixir is gaining momentum, though.

It just made it into the top 50 of the TIOBE index. Projects like Nerves for IoT, Nx, etc. are giving it life in new applications and industries.

It's never going to be a top 5 language, but it's on its way to becoming a well-known language with a strong community.

> but it's on its way to become a well-known language with a strong community

I agree it has a strong community for its size. It remains to be seen what happens 5 years from now when the hype settles (It arguably already is happening). Some languages reach sizes where it's impossible to die out even though they are declining (PHP for example. Probably Ruby as well). Perl seems to have gone extinct though, probably due to competition in the web sphere and also poor decision making (Perl 6 etc). It's impossible to know what's in store for Elixir; I do see a lot of love and excitement by Elixir devs so that's a reason to be optimistic I guess.

I'd honestly argue the hype already settled. Elixir peaked in hype and mind share near 2016, where everyone was calling it the Rails killer. There were articles about it every day and the language burst into the scene with incredible force.

Since then, the community has matured; many have either moved on or settled in with it. It's gaining momentum, but the strong and steady kind, not the overhyped kind. There isn't this infatuation over it anymore, as we used to see in those days.

Elixir is now in the strong, mature community stage. It's not hype anymore, it's the period of subdued stability and steady growth.

It definitely wasn't the Rails killer people thought it would be though. It remains to be seen if devs want to be a part of a small community (albeit strong and passionate) because now it's clear that it is small.

I run a growing agency and we run primarily on React + Django, but I really want to give Elixir a shot at some point.

I find its concurrency features, purported developer productivity, and its positioning as a "niche but popular" tech (this can be nice for acquiring technical clients) very appealing.

Does anyone have any experience running an Elixir consultancy / agency / software house? Or freelancing? How is the market and your general experience?

I'm running an Elixir startup.


- web-socket story is the goat (and I used to write nodejs)

- ecto database library is amazing

- productivity is on par with django/rails

- performance is comparable to go. sub 100ms response times are normal


- not many drop-in libraries.

- hiring engineers is a pain but plenty are interested in learning

- libraries are less used, so you're more likely to run into a bug in edge-case libraries.

- deployment is more complex if you want to use the mesh features

So far, I'm happy with it. There are certain features specific to my startup that Elixir/OTP made easy to create. We're getting traction, and Elixir's strengths are a huge part of that.

> - performance is comparable to go. sub 100ms response times are normal

What does this even mean? What are your dependencies? What does your service do? Sub 100ms response times are the norm for many languages.

It's replies like this that make me think I work in a different world.

Response times are subjective. Sao Paulo to Singapore has 350ms median latency so give me some <1ms service and it'd still take 350ms. If you just mean localhost response times then gin-gonic offers <50ns responses.

In this context: it's processing a GraphQL query, performing a batched set of DB calls to PostgreSQL, and responding back from the server. This doesn't include the time from the server to the user, which varies from 200-500ms; sadly I can't do much about that part.

> performance is comparable to go. sub 100ms response times are normal

Go is between 2x and 20x faster than Erlang. Erlang is a pretty slow language, it's in the same performance range as Python. Some reasons:

- it's using a VM

- the VM is not very well optimized (compared to Java or C#)

- immutability has a big cost in terms of performance

How hard is it for you to hire someone who knows how to program a web server, be it with Rails, Java, C#, or whatever, and have them be net productive with Elixir in a few weeks?

Not hard. I learned enough to be productive after a few weeks of self-study four years ago, when there were way fewer resources for learning.

We had a subcontractor helping me who didn't know Elixir. I was able to get him running within a few days; his lack of SQL knowledge was more of a bottleneck than his Elixir proficiency.

It's easy if you have one senior in Elixir to bootstrap off of. The right senior could probably bootstrap a junior fairly quickly too. If you don't have a senior, there's a good chance you'll miss your target if you "try to do it in a few weeks", since there's a lot of deprogramming your brain off of imperative languages that you have to do.

+1 Ecto is killer, not just the library itself but also how it provides a slick example of a DSL using Elixir macros. I used to love Elixir for the syntax and actor model, but macros have gotten me all hot and bothered.

Having worked in the trenches in Elixir, I dislike people trying to build too many macros. You get to a place where someone has decided to be clever in the codebase and dropped in a bunch of macros that rewrite function calls and make it impossible to chase your function with the search command, and you start wishing for a time machine to go back and prevent that code from being shipped.
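For readers who haven't seen the mechanism both of these comments are describing, here is a toy macro. It is purely hypothetical (nothing to do with Ecto's real implementation), but it shows both the appeal and the greppability problem: after expansion, the macro name never exists as a function you can chase.

```elixir
defmodule MiniDSL do
  # A toy macro: `quote`/`unquote` rewrite the call site at compile
  # time. This is the mechanism DSLs like Ecto's query syntax build on.
  defmacro check_positive(expr) do
    quote do
      value = unquote(expr)
      if value > 0, do: {:ok, value}, else: {:error, :not_positive}
    end
  end
end

defmodule Demo do
  import MiniDSL

  # The macro expands here during compilation; at runtime this is
  # ordinary code, and `check_positive` is not a callable function.
  def check(x), do: check_positive(x)
end

Demo.check(5)   # => {:ok, 5}
Demo.check(-1)  # => {:error, :not_positive}
```

Used sparingly this gives you slick DSLs; sprinkled everywhere it gives you the search-resistant codebase the comment above complains about.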

> sub 100ms response times are normal

That sounds incredibly slow for something that responds on a websocket, but you never told us what you're doing besides db transactions (which should be 1ms unless you're on spinning rust)

That's the full end-to-end: request comes in, parses input, performs DB calls, processes a response packet, and sends data out.

Thanks for clarifying. I still think 100ms is excessively slow unless you're doing something like realtime video encoding. A dozen postgres writes only take a couple of ms and that should be the bottleneck, not stuff like parsing. If you have 2 data centers in the US, network latency to the user only adds 10 to 50 ms

Anyone who knows Rails or Django can pick up Phoenix and anyone who likes Elixir is going to be the kind of person you want to hire.

Not so fast. Rubyists vary in the extent to which they fetishise OOP. Those that do are going to run a mile from Elixir once they peer beneath the surface/syntactic similarities. The Rubyists who revel in Ruby's procs, blocks and lambdas will find Elixir tantalising.

Yeah, freelancing mostly. Lots of developers wanting to work with it, so finding engaged devs has been easy for me when I've wanted reinforcement.

Haven't had a gap in clients for a year and a half of straight Elixir work. Have more opportunities than I can feasibly execute on.

Finding the right companies and connections is a bit of luck and a bit of networking of course. But most of the companies I've worked with are not well known in the community. There's plenty of usage in the wild.

I've been working on Elixir projects freelance for over a year now, and it's become my preference for new projects. Elixir is one of my favorite languages because I feel like it really increases the leverage each developer has in a project.

I would probably recommend against this for most things if you're running an agency. Hiring experienced Elixir developers is neither easy nor cheap; and your clients, if they want to hire developers at some point, are going to have a really difficult time.

Solution is to relax hiring requirements. The strength of the languages is what they do inherent to themselves due to their VM and standard library. You need people who know how to use them directing the effort, but once the design is there most people can pick it up.

> Solution is to relax hiring requirements. The strength of the languages is what they do inherent to themselves due to their VM and standard library. You need people who know how to use them directing the effort, but once the design is there most people can pick it up.

Yeah, I gotta disagree... a lot of folks grossly underestimate the challenge of learning an _entirely_ new VM and language at the same time. For example, I can't count the number of silly things I've seen because someone obviously hasn't really grokked that the BEAM will preempt processes. Or the amount of superfluous testing I see around gen servers when folks could directly call the function they intend to test... That is to say, I've watched a lot of folks try to write things coming from other VMs and languages and constantly miss the mark. What works for language or runtime X likely doesn't work that way on the BEAM, and may not even translate.

By the same token, I've seen programmers with 10+ years of experience still royally mess up inheritance, brutally misuse design patterns, test private methods, shove view logic into every place they can think of that isn't an actual view, and so on and so forth. In my limited experience with the BEAM, sure you can write shit code, but it's not like it's creating any bigger of a mess than anywhere else. The only advantage to not using the BEAM is that you're creating a somewhat familiar mess.

So when you hire a dev unfamiliar with your stack you risk having all that (bad engineering in general) plus all the possible mistakes they can make with the specific stack. Having good seniors on the team can mitigate this, but honestly, having people familiar with your stack is a huge pro.

What's wrong with testing private methods?

This has not been my experience. I typically receive five to ten resumes from experienced Elixir programmers whenever we are hiring. There are a lot of Elixir shops in my area though.

I wouldn’t ditch Django/Python and React - I still believe in them. This would be more of a tactical thing for projects where reqs are a good fit, and to slowly position ourselves as Elixir-competent, which may attract technical clients. I don’t think I’d have such a hard time hiring (we’re 100% remote) or training people gradually.

Erlang/Elixir were designed for telecommunication programs. If what you are doing is even vaguely like this, Erlang/Elixir is going to be perhaps the best language you could pick. If you need to make a GUI or handle a CRUD app, it may well be the worst language you could pick.

Elixir is a lot more forgiving, but with Erlang you can tell it was made for telecom switches.

> If you need to make a GUI or handle a CRUD app it may well be the worst language you could pick.

I don't know how else to say this except that it's complete nonsense. I work for a place that deploys an elixir crud app and it's very nice despite tons of domain mistakes, annoying 3rd party APIs we connect to and bad architectural patterns that have accumulated over the course of several years of a brownfield project (I am a relatively recent hire). I can't imagine dealing with all that pain in another language

Usually when I see a story like that, it goes the other way: the friction of integrating ${nice_thing} with ${everything_else}. I'll take your word for it for now, but it seems like you're in your honeymoon period with FP-heavy programming, having just come into industry.

Well, I have been "programming FP-heavy" for about ten years, and before that, I used ruby/python, (though admittedly not professionally). I did help maintain an absolutely godawful yet bog-standard django site for a short spell as a professional.

The reason why elixir is fantastic is in such a ball of mud codebase, you can truly work in one corner of the codebase and not worry about anything else being disrupted. Testing story is also great. You can very simply do things you can't in other languages so easily, like concurrent tests that hit the database.

I've had over a decade of experience in this industry, ~2-3 with Erlang, ~3 with node, ~3 with Java, ~1 with Go, and a smattering of others (C#, Ruby), plus plenty of others outside of production deployments.

Erlang was as easy as any of them for CRUD apps. I used Cowboy for it, which is comparable to Express (JS) or Gin (Go) in how it approaches things (i.e., simple request/response paradigm, with various helper functions to pull stuff out of the request data, or write into the response data, and easy to insert middleware as, again, just functions). Integrations outside of the app weren't especially painful; admittedly, we didn't have to integrate with an undocumented API whose only integration was via supplied library that was implemented only in Java or similar.

Is your objection just "CRUD apps have to integrate with lots of downstreams, and so for that you want a language that has a huge plethora of libraries to make those integrations easy"? Because in that case "maybe". But that's orthogonal to being a CRUD app; that's simply because you have a bajillion dependencies that have existing libraries.

> Is your objection just "CRUD apps have to integrate with lots of downstreams, and so for that you want a language that has a huge plethora of libraries to make those integrations easy"?

There are real projects where you have to bind together 10 different things, and using C++ is simply the most expedient option because getting FFI bindings made for everything is intractable. I have a feeling most of the people here (especially the ones downvoting me) only have web experience.

I didn't say there weren't such projects. Just, the defining part of those projects is "the bulk of the work is the dependency integration" (and implicitly, those dependencies are sufficiently unique and uncommon as to be implemented only in a limited subset of languages), and that is not the same thing as "CRUD app". You are using the phrases interchangeably, and using that as grounds to reject Erlang.

It doesn't matter what you or I think, it matters what other people think, and yes, compatibility with their existing infrastructure is something that is highly desirable.

From my other comment:

The people here are not really painting a good picture of BEAM vm languages. The typical complaint is some extremely energetic new hire wants to use Erlang for something, does, and then it turns into a hard to maintain mess. You can equivocate about the specific reasons but if you are really going to say ignoring legacy compatibility is stupid then I'm not sure what to tell you.

You are grossly underestimating fault tolerance, looks like.

Being a fan of C++ is fine but you come across as biased.

I'm not a fan of C++. I'm a fan of Erlang and the OTP. The entire reason I became interested in Erlang was fault tolerance. I have no idea why I'm getting downvoted, exactly what I have said was said later, above, and is more highly upvoted.

The people here are not really painting a good picture of BEAM vm languages. The typical complaint is some extremely energetic new hire wants to use Erlang for something, does, and then it turns into a hard to maintain mess. You can equivocate about the specific reasons but if you are really going to say ignoring legacy compatibility is stupid then I'm not sure what to tell you.

I think the disconnect (and downvotes, although I didn't downvote you) comes from mismatched expectations.

When you say this:

> There's real projects where you have to bind together 10 different things and using C++ is simply the most expedient option because getting FFI bindings made for everything is intractable.

My immediate reaction was: "But dude, that doesn't apply to web development, and Elixir is most of the time used only for that". So your aside seemed inapplicable and out of place for this thread. Elixir isn't used for such projects most of the time (with some notable exceptions like the Nerves framework that allows you to burn a full bootable image of an Erlang/Elixir app to an SD card and boot off of it to a supported set of ARM and x64 SBCs).

And then you say this:

> I have a feeling most of the people here (especially the ones downvoting me) only have web experience.

Which is, again, a bit out of place as a comment here, because again, Elixir is mostly used for web apps (REST, normal Web, GraphQL, WebSockets magic like LiveView etc).

So to me it looks like your comment was addressing a wider issue that is mostly applicable to projects that actually care a lot about backwards compatibility -- and most web projects don't.

> The people here are not really painting a good picture of BEAM vm languages. The typical complaint is some extremely energetic new hire wants to use Erlang for something, does, and then it turns into a hard to maintain mess.

This happens in every ecosystem I've participated in for my 19.5 years of career (including when I was doing C++) so I urge you not to end up being negatively biased against the BEAM ecosystem folk in particular.

People do get hyped and invest a lot of energy trying to fit square pegs into round holes -- and several months/years later people like myself are being called to beat the project back into shape. I have given up hope that the programmers at large will ever learn not to get hyped and judge tech based on its merits... :|

There's not a huge difference between the domain of telecom switches and modern web applications. You've got multiple clients connecting, with the understanding that one client shouldn't have undue impact on the responsiveness seen by other clients. This is even more true once you get into using websockets, which are directly analogous to active calls in a telecom switch: one websocket channel dying shouldn't cause other users' channels to be killed as well.

Well yes, but Erlang was actually solving the far more generalized problem of concurrent client/server applications. I would almost go as far as to say it's a DSL for writing such applications. So like, why isn't this stuff more popular? Because it's dynamic? Because Node? I dunno. Our industry is weird, or perhaps very normal in the age-old mantra "Why do it the easy way when we could just do it the hard way?"

I think Node would indeed be seen as the “easier” choice for projects with concurrency needs, thanks to its async model.

As much as I like JS I’m not a big fan of it on the server however - web frameworks are lacking and tooling and libraries feel like the wild west.

In Phoenix I’d see a web framework for projects with concurrency reqs, with great ergonomics and DX, nothing else. Maybe I’d miss types? But we work with Python just fine.

I keep hearing Phoenix is an outstanding web framework for CRUD apps. I wouldn’t be trying this without building a significant web app and seeing for myself first anyway.

And yes, high concurrency can be a fairly common req in web projects; it's not only seen in telecoms engineering.

Have you used Phoenix? With it, Elixir is easily the best language to build CRUD app in. No other web framework even comes close.

All these stories can be distilled into - "I picked something I liked and we made it work. Yay team"

Lots of posts here are talking about Elixir's concurrency and performance, but in benchmarks like techempower, Elixir comes fairly low.

Can someone help me rationalize this discrepancy?

Also, in general, for a Digital Ocean droplet, how many requests per second (db query based) can Elixir handle while maintaining a sub 200ms latency?

Almost no business application is written for performance at raw request speed. What Elixir brings is far more than that. Watch this if you want to understand why so many people fall in love with it: https://youtu.be/JvBT4XBdoUE

Well it’s pretty competitive if you limit it to dynamic languages + eliminate some outliers (weird non-Node JS runtimes, web frameworks mostly written in C that happen to be called from Python/PHP, etc)

I really wish there were more remote Elixir job offers. Seems like such a great productive language to use, and Phoenix with LiveView is just great.

You might wanna check https://elixir-radar.com/

Wow, this is the exact opposite of my experience. I can't imagine a company trying to hire Elixir developers and having any success unless they're willing to hire remote developers. I know a lot of folks in the community and literally everyone is remote...

They get posted all the time in Elixir Forum and I also see them advertised in the various email newsletters.

I feel that there are more remote Elixir jobs than ever. I get a lot of offers without actively searching, most often it's sole Elixir or Elixir + Ruby (unsurprising) combo.

It seems like most Elixir jobs are remote these days.

You could say they are asynchronous!

Most jobs are remote these days.

If you're in the US(I know, I know), PepsiCo Ecomm is hiring.

They’re building all the eCommerce capabilities from scratch in Elixir? Or is the latter just part of the stack?

It's a mix of things. A lot of the b2c stuff uses big partners that are already dominant in eGrocery. However the tooling around the ad placement, fulfillment, product cataloging, b2b sales, sales intelligence and consumer profiling all make use of Elixir.

Basically we use React for most front ends, Python for data engineering/data science, and Elixir for everything else.

I’m interested as well. But having built out a complete e-commerce stack in elixir, I can say I found it particularly well suited to this use case.

> The need for concurrency and fault-tolerance made Elixir an obvious choice for this application, but that alone wasn't enough to seal the deal.

What? Why does this make Elixir an obvious choice? Most (if not all) major languages offer concurrency primitives (e.g. goroutines), and fault-tolerance is included as a requirement by default for any sufficiently complex distributed system, but has little to do with the languages/frameworks used to build that system. Not seeing why these requirements make Elixir a better choice than any other language

That's not really true.

Or rather, the need for the distributed system to be fault tolerant is, but downplaying the language/framework isn't entirely.

Kubernetes has definitely made it so the distributed tooling of Erlang is less valuable; many of the things it provides you can also get with Kubernetes. The tradeoff being that Kubernetes is far more complex (but, also allows for things Erlang doesn't have).

But what Kubernetes can't do is help you write correct programs. That is, if an error happens, what do you do? In most languages, you catch the ones you can predict and handle them. For the others, you let them trickle up until someone catches them, or the app crashes. The problem is figuring out where to handle them, and to make sure you really -are- handling the things you know how to handle.

Erlang flips this. Handling things becomes far less important than isolating failure. If something goes wrong ~here~, where else is there state that is likely wrong, and how do we get it all back into a good state?

This is a really powerful approach, as it ends up being far more robust, for far less work. It's harder to get 'incorrect' behavior this way; rather than have to enumerate the unhappy paths, and how to address each, you just have to declare dependencies between processes and how, should they find themselves on an unhappy path, to drop their state and get a known good one.
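The "declare dependencies and recover to a known-good state" approach described above is concretely what OTP supervision trees do. Here is a minimal sketch; the `Counter` module is made up for illustration, but the `Supervisor` and `Agent` APIs are real OTP building blocks:

```elixir
defmodule Counter do
  use Agent

  # The known-good starting state is 0.
  def start_link(_opts) do
    Agent.start_link(fn -> 0 end, name: __MODULE__)
  end

  def increment, do: Agent.update(__MODULE__, &(&1 + 1))
  def value, do: Agent.get(__MODULE__, & &1)
end

# :one_for_one - if a child crashes, restart only that child,
# dropping its (possibly corrupted) state for a fresh one.
{:ok, _sup} = Supervisor.start_link([Counter], strategy: :one_for_one)

Counter.increment()
Counter.value()                                # => 1
Process.exit(Process.whereis(Counter), :kill)  # simulate a crash
Process.sleep(100)                             # let the supervisor restart it
Counter.value()                                # => 0, back to known-good state
```

No error enumeration anywhere: the child declares how it starts from a good state, and the supervisor declares the restart policy. Everything else is "let it crash".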

> fault-tolerance is included as a requirement by default for any sufficiently complex distributed system, but has little to do with the languages/frameworks used to build that system

Erlang/Elixir have an extremely compelling story when it comes to fault-tolerance. You can only argue that all languages are created equal in this regard if you fall back to the "everything is Turing-complete so they're equivalent" argument.

Erlang and its VM were built from the ground up to work closely together with fault tolerance in mind, in a context where a few minutes of downtime meant severe financial penalties.

I think it's because Elixir runs in Erlang VM which was built from the ground-up to satisfy these requirements... And it's been around for a very long time with very good track record.

That really doesn’t answer the question. What does the BEAM VM provide that cannot be implemented in Rust or Go or even Python.

You can implement those things, but then you'd be implementing them. Suppose you open a file (or a tcp socket, or a database transaction) while being connected over an http connection. How many LOC does it take to make sure that file is closed at the end of the http connection, through all possible error conditions? How confident are you that said code is correct? In elixir, it's 0 LOC.
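A self-contained sketch of that claim, using a file handle as the resource: on the BEAM, a file device is linked to the process that opened it, so it is closed automatically when that process dies, however it dies. (The file path here is just a temp-file for the demo.)

```elixir
path = Path.join(System.tmp_dir!(), "beam_cleanup_demo.txt")

pid =
  spawn(fn ->
    {:ok, device} = File.open(path, [:write])
    IO.write(device, "partial work")
    raise "boom"  # simulate a crash mid-request; note: no try/after
  end)

# Wait for the crash, then observe that nothing leaked.
ref = Process.monitor(pid)

receive do
  {:DOWN, ^ref, :process, ^pid, _reason} -> :ok
end

# The file device exited with its owner: the handle is closed, and the
# bytes written before the crash are on disk -- zero cleanup code.
```

The same ownership rule applies to TCP sockets (`:gen_tcp`) and, via libraries like Ecto, to checked-out database connections: the resource's lifetime is tied to the process, not to manually written cleanup paths.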

Nice, finally a straight answer. To be clear, this is because Erlang forces you to think about such failures and also because the entire stack being designed for such fault tolerance, correct?

Immutable memory, preemptive scheduling, per process garbage collection with isolated heap and a 0.5kb memory footprint per process for starters.

Immutable memory: writing functional Python or Rust can come close. Pre emp scheduling: I assume this is to prevent deadlocks? This and everything else is very nice, but do we need a full VM for that?

I’m not trying to be a jerk but I want to understand why this isn’t a feature implementable as a runtime.

> Pre emp scheduling

Userspace pre-emptive schedulers do not incur the same level of overhead that kernel threads normally have. To address another comment you made in another chain: Erlang accomplishes pre-emption through reductions, which are roughly equivalent to ~4,000 function calls (if I remember correctly and have read the correct documentation) before the VM yields the Erlang process.

Other forms of async, to my knowledge, essentially queue a task while the main thread of execution continues before awaiting; the queued task is run in some form of thread pool (or other form of executor). Without going too deep into this subject (and assuming I have made no logical errors), it is possible for a misbehaving task in async to tank performance and latency, while the same misbehaving process in Erlang wouldn't. See Saša Jurić's demonstration [1] for an example. Do keep in mind that async is not my forte; what I have stated is merely my observation. I write primarily with kernel threads.

This is why it is possible to spin up millions of, in Erlang terms, processes with minimal overhead. See this article by the Phoenix Framework team regarding an experiment on having two million active websocket connections on a single (albeit beefy) host [2].

> This and everything else is very nice but do we need a full VM for that?

See [3] regarding an effort to port the Erlang process and supervision strategy to Rust. In my naive view, it is possible to write something similar to BEAM in another language, but the more troubling portion of the work is to jury-rig the original language into a concurrency model it isn't designed for whereas Erlang by contrast was created purely for this task.

[1] https://www.youtube.com/watch?v=JvBT4XBdoUE

[2] https://www.phoenixframework.org/blog/the-road-to-2-million-...

[3] https://github.com/bastion-rs/bastion
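The pre-emption behavior demonstrated at length in [1] can be condensed into a short script. This is only a sketch; for a strict test, pin the VM to a single scheduler (`elixir --erl "+S 1" preempt.exs`) so that spare CPU cores cannot mask the effect:

```elixir
# A pathological process: an infinite busy loop with no receive, no
# sleep, no I/O. On a cooperative runtime this would starve everything
# sharing its thread; on the BEAM it is pre-empted once its reduction
# budget runs out.
spawn(fn ->
  loop = fn loop -> loop.(loop) end
  loop.(loop)
end)

# A sibling process still gets scheduled promptly.
parent = self()
spawn(fn -> send(parent, :still_alive) end)

receive do
  :still_alive -> IO.puts("sibling ran despite the busy loop")
after
  1_000 -> IO.puts("starved: this should not happen on the BEAM")
end
```

The first branch fires: the busy loop is descheduled after its reduction budget, so the sibling runs on time, which is exactly the latency guarantee a misbehaving async task cannot give you.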

Appreciate the answer. I just saw the Sasa talk the other day incidentally and pre emption value is pretty clear now.

It’s so clever and well thought out; I think I’m starting to see the value of Erlang now.

Private memory for all processes (Go uses shared memory).

The actor model.

Superior memory usage and CPU processing.

Supervision trees.

It’s pretty nifty. But I’m dumbfounded these features haven’t been copied over to other async runtimes. This is why I wonder if there’s a hidden catch that Erlang supporters don’t talk much about

Well, you can downvote me all you want, fuckers, but what I said is accurate.

I can understand deciding Elixir is a good solution for concurrency. What I couldn’t understand from the story was why it was needed. Unless the IDE had some sort of real-time shared coding?

There’s nothing in the list of requirements that couldn’t be done over HTTP; it all seems asynchronous. Scale here seems like it would be small.

Three months is almost zero time for a team of 4. I’m glad it all worked out, but I have a harder time understanding from the story how using Elixir was the right choice vs. a stack they were more familiar and confident with, considering the constraints.

Immutability is its own reward :) Yes yes, available in other langs, but it kinda takes YAGNI in terms of scale off the table since it's a perfectly good fit for small applications as well. It just happens to scale (in terms of concurrency) really well. But if you never need to use those features that's ok—you'll still have an application written in a really nice, expressively terse language.
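Since immutability keeps coming up as the main FP hurdle, here is a two-line illustration of what it means in practice:

```elixir
list = [1, 2, 3]
new_list = [0 | list]  # builds a new list that shares the old tail

list      # => [1, 2, 3]  (never mutated)
new_list  # => [0, 1, 2, 3]
```

Rebinding `list` to a new value is allowed in Elixir, but the underlying data is never modified in place, which is what makes sharing data across thousands of processes safe.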

Looking for some advice.

We have written several microservices, primarily for websockets, in Elixir. They are great, with literally zero maintenance cost... but how do Elixir developers handle the following when going all in:

1. Long-running workflows - there do not seem to be popular frameworks like Camunda, jBPM, Temporal or Cadence for Elixir

2. Integration libraries - similar to Apache Camel

3. Inbuilt scripting engines to run user scripts, like Nashorn, GraalJS or Groovy

We really enjoy working with Rails and would like to go all in on Elixir. But the ecosystem of available frameworks always seems to come in the way and makes us choose Spring Boot or Rails.

Pre-emptive caveat: I do not know much about the three things you mention. THAT SAID, I'm familiar with Erlang, and moderately familiar with Elixir, so wanted to still give you some areas to explore.

For #3, you probably would need to NIF out to something. You can also execute an uncompiled script of Erlang using escript, but that, obviously, is not something you'd expect most users to learn (and not sure you can execute it in a running Erlang context, rather than from an external shell). You can also evaluate a string, and/or compile and load new Erlang code from a running Erlang program, but these are suuuuuper dangerous if it's user supplied.

For #1, I've never seen anything, but if you just need something that allows you to change out or customize logic on the fly you can do so in a pretty straightforward manner. Because it's a functional language, you can do stuff like have a stateful process template that you can swap out on the fly, i.e., submit a list of function identifiers a la [read_data/1, transform/1, write_data/1], and now whenever you call run(data), it spins up a new actor that effectively calls write_data(transform(read_data(data))). Technically you can even provide new code snippets (using one of the mechanisms from #3, or, if opening up an Erlang shell to the running instance, supplying it directly as a lambda), but you'll need to be mindful about persistence. I am not that well versed with workflow engines, but that partly comes from the fact that I haven't really seen the point given how easy many languages make it to create and customize workflows without having to learn special semantics.
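A rough Elixir sketch of the idea above. The module and step names are made up for illustration; a real version would wrap the step list in a GenServer so it could be swapped out at runtime.

```elixir
defmodule Workflow do
  # Run `data` through each one-arity step function in order:
  # effectively write_data(transform(read_data(data))).
  def run(steps, data) do
    Enum.reduce(steps, data, fn step, acc -> step.(acc) end)
  end
end

# Steps are plain anonymous functions, so they can be replaced on the fly.
read_data = fn x -> x end
transform = fn x -> String.upcase(x) end
write_data = fn x -> {:ok, x} end

Workflow.run([read_data, transform, write_data], "hello")
# {:ok, "HELLO"}
```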

I am not at all familiar with Apache Camel, but I think your concerns here aren't easily addressed; certainly, the library support for Erlang/Elixir isn't anywhere close to the JVM's. But I -will- mention that sometimes writing the integration(s) you need to enable using a different technology is worth it.

For #3 in general the advice is a NIF or a Port to an engine (polar is a good example) or to use Luerl

For #2 i know nothing about this kind of stuff

For #1 it depends a lot. Something like Oban can be enough, or Ulf Wiger's Jobs or Broadway. These tend to be more ad hoc and extracted into independent services. It depends a lot on what kind of constraints you have for these workflows. There are a lot of options, but they are all more specific to use case and constraints than these broad projects that try to do it all.

The other comments make good points, but here's a few more:

> 1. Long running workflows - there do not seem to be popular frameworks like camunda, jbpm, temporal or cadence for elixir

Oban seems good for job processing. Otherwise Elixir builds on Erlang's OTP. That means you can also use ets, mnesia, or even Riak's distributed Dynamo-style application runners.

> 2. Integration libraries - similar to apache camel

Nothing quite like this, but you can make use of Flow or GenStage for data pipelines.

> 3. Inbuilt scripting engines to run user scripts like nashorn, graaljs or groovy

There's a nice Lua implementation, 'Luerl' IIRC. Also, as others mentioned, you can do a NIF. In particular, Rust and Rustler would give you access to lots of scripting language runtimes.

Off topic: Am I the only one who finds the stock photos in the article off-putting? They have nothing to do with the subject matter, except in a very indirect, sort-of-metaphorical way. I would take the article much more seriously if they weren't there.

You’re not alone. Those gratuitous, unimaginative photographs (one mirror-flipped, suggesting obliviousness somewhere in the assembly line process), combined with the prolix, buzzword laden, soggy prose, emit a strong corporate odor that made me want to turn away. Pity, because the core of the content is somewhat interesting.

One of them is mirrored and the way Canon is unreadable on the camera is vaguely annoying me.

The best things about Elixir are its pattern matching and its inbuilt documentation tool. The documentation for every package is consistent, which the IDE tools exploit for stellar in-editor help.
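The inbuilt documentation support mentioned here is the `@moduledoc` and `@doc` module attributes, which ExDoc renders to HTML and which IEx's `h` helper shows in-shell. A minimal sketch with a made-up module:

```elixir
defmodule Greeter do
  @moduledoc "A tiny example module showing Elixir's inline docs."

  @doc """
  Returns a greeting for `name`.

      iex> Greeter.hello("world")
      "Hello, world!"
  """
  def hello(name), do: "Hello, " <> name <> "!"
end
```

The `iex>` snippet in the docstring doubles as a doctest that ExUnit can run, which is part of why docs across packages stay consistent and accurate.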

> We had two viable choices for the new application that needed to be built---Rails and Phoenix. The team to which we were delivering the project had no Elixir experience at all, so a choice to build a Phoenix app represented a choice to adopt Elixir.

*Laravel and all the other PHP frameworks waving their hands frantically...

Hello, Hello!

I wish there were more roles for Elixir newbies/general newbies out there. I've been learning some Elixir basics and it seems like it'd be a tonne of fun to work with.

Folks who have used Elixir in a team environment (as opposed to for personal use), is the lack of types an issue in navigating big/unfamiliar codebases?

I came to Elixir from another untyped language (Python) but I've never had any issues.

Elixir code often relies heavily on pattern matching return values (ok/error tuples or Structs) so that helps along with typespecs.

The fact that the language is compiled is also a big help when it comes to avoiding run-time errors while developing locally.

The other thing I appreciate about elixir coming from python (more specifically Django) is that most application-specific code is just modules and functions. There is none of the indirection and abstractions-upon-abstraction I used to have to grok in big python codebases.
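To illustrate the ok/error-tuple and typespec conventions mentioned above (the module and function here are hypothetical):

```elixir
defmodule Users do
  # The @spec documents both success and failure shapes, and tools
  # like Dialyzer can check callers against it.
  @spec fetch_user(integer()) :: {:ok, map()} | {:error, :not_found}
  def fetch_user(1), do: {:ok, %{id: 1, name: "Ada"}}
  def fetch_user(_), do: {:error, :not_found}
end

# Callers pattern match on the tagged tuple, so both outcomes
# are handled explicitly at the call site.
case Users.fetch_user(1) do
  {:ok, user} -> user.name
  {:error, :not_found} -> nil
end
# "Ada"
```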

I just switched from a Python (with mypy) codebase to a full-time Elixir position a few weeks ago.

I thought I'd miss typing but due to the functional nature of Elixir and its pattern-matching, I found out that the code is much simpler to grok.

In this case I think that having readable idiomatic code trumps the need for typing.

If needed there is type hinting with @spec but it's not used very often (or mostly in libraries?).

I can definitely say that it's very fun and productive to work in a Phoenix mono-repo. The development experience is great and I'm really having fun programming again.

Not for me. To be fair I didn't have types in PHP, Python or JS either. I wrote about my onboarding thoughts on Elixir here: https://underjord.io/onboarding-to-elixir.html

I recently started a new job working on a Elixir codebase. While it took me a while to understand the architecture and conventions (just like with any language, I guess), dealing with types was not a concern when it came to navigating the codebase.

Granted, the codebase uses pattern matching, type specs and Dialyzer extensively, which all definitely contribute towards making the navigation easier. Without those tools in use, I can definitely imagine navigation being more difficult.

No, but I don't come from a typed language.

For me, it was an upgrade, because the beautiful docs have type information for most main modules you'll use.

Wow that is insane to do that much in 3 months.

I'm not sure how big the team was but I'm impressed.

7 engineers

More impressive, then ;)

Just curious; I seem to see a lot of Elixir posts crop up on HN lately.

I know a startup that did £75K+ in sales in the first week just using Node + Heroku.

Could the same be applied to Elixir if it is really that good?

Are there any pitfalls that one should know about?

Your tech stack is going to have 99.99% less to do with your first week's success than your business plan and execution.

This seems slightly bizarre -- sorry for /s, but if you have engineers who are familiar with {language} rather than Node, then they'll probably generate more sales for your company building an application in {language} than they would building it in Node. The pitfall would be that it'll be slower to get an application in {language} to production if the engineers know how to program Node but not {language}.

Elixir on Heroku is just as viable (we're still running Elixir on a single Heroku dyno and Postgres instance).

Not having much info, my guess is many languages would have worked out for that startup.


  title:elixir / UK = 11 jobs
The trouble with Elixir is that it's never going to be mainstream.
