And all of them picked it up in a couple of weeks to a level where they could start making changes to code.
I think we are overestimating the amount of time it takes to learn a new language.
The hardest thing to grok with FP is immutable data.
Once you get past that, you're rolling.
But the speed and concurrency are no laughing matter. Miles and miles and miles ahead of Ruby, Python, etc. in that regard.
Spin off a background process from a web request where you don't care to get back something.
Basically eliminate Redis or caching.
Just need Postgresql/Mysql.
If you're wild-eyed, you can use Mnesia without the databases.
Run jobs across a cluster sanely with a good job library that only needs Postgresql.
The story goes on and on. Unless you have tons and tons invested into what you're doing right now, it makes a lot of sense to start to spin up things on the edge of your monolith or SOA with Elixir.
New projects should be started with Elixir.
The idea that it's "hard to find programmers" does not really stand up, because anyone who can't grok a new programming language in a short time is not really a good programmer.
This this THIS! I have a profound philosophical disagreement with CS professors (usually professors who retired from industry into academia) who think that part of the job of the CS curriculum is to equip students with a language and think that learning multiple languages will dilute their focus or some bull crap like that. Learning new languages is not hard! It is much better to learn how to learn new languages, than it is to learn one language really really well. Learning multiple languages will give you the mental tools to use other languages to their fullest extent.
Someone raised their hand and said we've never had a class about Python, how are we expected to do the assignment? The professor said "If you can't figure this out, you really shouldn't be able to graduate". The dude groaned, but I really agreed.
Now the same thing happens in job postings all the time. Companies looking for specific things rather than smart people. It happens on the job seeker side too. I understand not being interested in something, but say a job was in Elixir. I wouldn't hesitate to apply, but you'll see tons of responses on twitter that are like "oh I'm a JS guy" or something. Who cares! You'll figure it out.
If I know how closures work, then I can pick them up in Lua or Go or JS very easily; all I need to do is figure out the syntax, since I already know the computational pattern. On the other hand, to get to know how closures work, I need to write a lot of them and use them in many scenarios.
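As an illustration of that point, here is a minimal sketch in Go; once you know that a closure is just a function carrying captured state, writing the equivalent in Lua or JS is only a syntax change:

```go
package main

import "fmt"

// counter returns a closure that captures the local variable n.
// The computational pattern (a function carrying captured state)
// is identical across Lua, JS, and Go; only the syntax differs.
func counter() func() int {
	n := 0
	return func() int {
		n++
		return n
	}
}

func main() {
	next := counter()
	fmt.Println(next()) // 1
	fmt.Println(next()) // 2

	// Each call to counter gets its own independent captured state.
	other := counter()
	fmt.Println(other()) // 1
}
```

The interesting part is not the syntax but the fact that `other` does not share `next`'s state, which is exactly the property you'd rely on in any of those languages.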
You kinda need that 'one' language to get to the point where you can learn multiple languages well. My CS courses kinda sucked, but one good thing they did was teach me C, C++ and Java pretty extensively.
And what made me a better Typescript programmer was to hack some F#, C#, Elixir, Rust, Python… Not even real projects but just little katas and a lot of documentation readings.
In my experience, becoming a specialist in a language can even be dangerous if your work doesn't consist of "framework code", because the more you know about the machinery, the more you are tempted to write code that nobody else will understand. Of course, it's a trend I saw, not a general rule.
That's unfortunately not taught at university. People who are already quite bright will pick this skill up by osmosis or naturally; people who don't know how to learn often struggle.
At no stage in the K-12 education curriculum does learning how to learn get taught systematically.
I have had Java professors who never explained subtype relationships or interfaces properly.
Without understanding the basic concepts that underlie programming languages, mechanically teaching syntax (with language-specific OOP buzzwords) is sure to only confuse students when they learn new languages.
Rather, teach them how polymorphism works, and what a virtual function and a function pointer are; any student worth their salt will understand why things are the way they are.
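To make that suggestion concrete, here is a small sketch in Go (chosen purely as an illustration language); the interface value plays the role of the virtual-function table / function pointer mentioned above, and the same mechanism underlies polymorphism in Java or C++:

```go
package main

import "fmt"

// Shape is what Java calls an interface and C++ models with virtual
// functions: dispatch goes through a table of function pointers
// consulted at runtime, not through the concrete type.
type Shape interface {
	Area() float64
}

type Rect struct{ W, H float64 }
type Circle struct{ R float64 }

func (r Rect) Area() float64   { return r.W * r.H }
func (c Circle) Area() float64 { return 3.14159 * c.R * c.R }

// totalArea never learns the concrete types it is handed; each call
// to Area is dynamically dispatched. That is polymorphism, stripped
// of any one language's buzzwords.
func totalArea(shapes []Shape) float64 {
	sum := 0.0
	for _, s := range shapes {
		sum += s.Area()
	}
	return sum
}

func main() {
	fmt.Println(totalArea([]Shape{Rect{W: 2, H: 3}, Circle{R: 1}})) // 6 + pi*1^2
}
```

A student who understands this picture can map it onto Java interfaces, C++ virtual methods, or C function-pointer tables without relearning the concept.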
it's even more meta than that.
Learning to learn is not easy, but can be taught. It requires some introspection - what it is that you don't understand about a subject, as well as _why_ you'd be learning it.
In university classes, the professor's aim is to teach the material (despite most of them not wanting to teach; it's merely a requirement of existing as a professor), so a lot of them just teach the material directly (i.e., present facts).
This sort of teaching style is only good if the student is already a sponge and can remember the material. For students who aren't interested but whose course requires it, learning is stunted because there's no context the student can grab hold of to learn the material.
And then, the order in which the material is presented makes a lot of difference. Teaching OOP, in my experience, tends to start with inheritance, how the language (like Java) treats inheritance, and so on, with the course introducing more of the Java language as it goes along.
But they don't teach the surrounding context, like how it compares to C++, or haskell, or LISP. Learning like this is like learning how to walk on stilts.
Really, after you know one language well, you know them all instinctively. Picking up Elixir coming from Python was very smooth for me.
This isn't strictly true. Elixir, while functional, isn't completely pure and has a dynamic type system which definitely lowers the learning curve.
Developers who know one language well are likely equipped to pick up a language like Haskell faster than even a mathematician, but I can't imagine that a Java one trick would pick up Haskell (or APL, or Scheme, etc.) "instinctively"
Honestly I can't think of an example of anything which is harder to model in Haskell's type system, but I think the Java and C examples demonstrate that you do have to learn different type systems to model more complex examples.
I totally agree that type systems end up reducing cognitive overhead in the long run though.
By definition, this can only happen if you use the wrong type in the wrong place: it can only be a huge bug, or there is some dark magic I don't understand.
I know a lot of languages manage to let you use strings as numbers and vice versa, but I don't see another case where you would voluntarily pass a wrong type. And this is not incompatible with strong typing, as long as the language provides auto-casting.
This can lead to a runtime error, or it can make no difference whatsoever. Enforcing that consistency in the compiler is powerful but is also another obstacle to someone who's just trying to get a simple example working.
Contrast this with Scala, where a subtle type error refuses to build at all. If you're green, hunting down the source of that error can be very arduous.
Failing at compile time can be very powerful but it can also add another obstacle to overcome.
It never happened since I switched to strictly typed languages, and tbh, you rarely even compile type errors if you have an IDE that shows in bright red that you are wrong.
The few times in recent years I had to write some Python, I really felt uncomfortable; it was really hard not to make mistakes and hard to trust my IDE's autocompletion.
My brain is now wired for "if I write something that is not proposed, it must be red".
But again, I understand your point and maybe it’s just me.
Btw, there are other tools than compilers that can analyze your code for errors. Try PyCharm.
But once you start adding lots of details, such as trying to specify the exact type of a stable sort operation, the overhead starts increasing dramatically. Another example I really love in this area is trying to code a type-safe MxN matrix multiplication that keeps track and validates measurement units for each element in each matrix (that is, it allows a cell in the result matrix to be 0.3m^2*s, but not 0.3m^2 + 0.1m). You'll find that few libraries, even in Haskell or F#, attempt such a thing, just because the types become infuriating and obscure the linear algebra.
Excel is easy, but if you were restricted to only using the keyboard to use excel, it wouldn't be so easy anymore.
I didn't have difficulty learning kdb+/q, but that's maybe because I had early exposure to APL (I read a chapter on it in a book as a kid).
Also, I had previous work experience in HPC, data-parallel computation, and vectorization/SIMD using array-programming languages such as Matlab, Octave, R, Julia, etc., plus GPU compute using CUDA/OpenCL.
The really hard-to-learn part of kdb is the proprietary frameworks around it. You need them in order to develop production-grade systems.
Most of the ones I would consider worthwhile attending, do have a polyglot curriculum.
Except C++, fuck C++. My favorite line from a professor was "Any course taught using C++ always turns into a course ON C++."
Can you comment on why so many benchmarks show Erlang/Elixir performing significantly worse than other languages like Go?
EDIT: why the downvotes? I'd rather you post a comment to me so we can have a healthy dialogue.
First of all, some context. The BEAM community does not engage heavily with these benchmarks compared to a lot of other communities, which means this code is definitely not optimised. On top of that, benchmarking makes it hard to do what the BEAM community loves above all else: put the thing in prod and then tweak performance based on what you discover there. The reason they do it so much is that the BEAM comes equipped for exactly that.
Second, as pointed out elsewhere in the thread, the BEAM optimises heavily for latency and graceful degradation. One important point: look at the latency and its standard deviation here. They are super low. What this means is that a BEAM system will still be highly reactive under load. And interactive. But also that we are nowhere near the max load it can handle in these benchmarks!
If you wanted to compare, you would need to overload the machine so that the latency climbs. Latency this low means the machine is not overloaded. It may be at 100% CPU, but it is coping fine; the BEAM is simply better at exploiting the machine to its limits.
The reasons are multiple, but basically you should consider that these benchmarks are not making the BEAM sweat. This load is just a normal day for a BEAM setup, not one that would make you auto-scale.
In that light, it is a far different picture, no? Happy to discuss.
From those techempower benchmarks, look at elixir-plug-ecto. 2.8 ms max latency, with an average of 1.0ms latency. That average latency is very respectable (but not the lowest), but that max? That max IS the lowest.
What do you want from a web server? Something super fast for 99% of requests that then takes orders of magnitude longer in the worst case, or something somewhat slower on average that stays predictable?
And as someone else alluded to, those have to do with the runtime characteristics. How easy is it to write reasonably performant, correct, concurrent code? Erlang (and Elixir) makes it very easy; I would argue easier than any other language.
However, Erlang, as you have correctly pointed out, is not very good if you only care about parallelism. Parallelism is not concerned with semantics, but performance; it concerns itself with trying to make a given computation faster by using multiple physical hardware resources but maintaining the same semantics as a hypothetical non-parallel version.
Your argument is a cogent one for why Erlang is not great if you only care about parallelism. Being able to scale a program to use 20 cores is no good if the same program written in another language could beat the pants off that program using just one core. However, that is an independent concern from concurrency.
It might not have the highest transactions per second, but for those transactions it completes - it’ll do it fast and with the same low latency.
There are other languages that can perform more transactions per second, but as load increases, their latency and standard deviation grow exponentially.
(This is seen in the stressgrid benchmarks).
That being said, my point was that concurrency is an independent topic from runtime performance (including latency), which is the domain of parallelism. Erlang's big selling point is it makes it possible to write concurrent programs whose functionality (not performance) would require oodles and oodles of code and discipline in some other languages.
EDIT: On reflection, I do think that if what you're talking about is catastrophic latency overruns caused by cascading queuing failures, then Erlang can indeed help make it easier to avoid those pitfalls.
Yes, that's exactly correct. Erlang/Elixir server apps aren't the fastest around, but their latency is very predictable and they remain responsive under load, unlike programs written in most other languages.
That's the main selling point of the BEAM VM.
Good to know!
I think this sells it short. Serious software has to cross machine boundaries at some point. There are pieces of code and styles of writing them that work really well on one core. They may also do okay on one machine. But once you have to go to two machines they become a huge burden. A liability.
Part of the way people like IBM used to make tons of money was off of people who had software that could not cross the one machine threshold. They sold really, really big machines. They sold unobtanium that let you continue to scale your one machine vertically. Like $20,000 hard drives (1990's dollars) that were essentially battery backed RAM, meant for things like putting your WAL on to speed up transaction commits per second.
Speed doesn't mean much if it's taking you in the wrong direction.
The stressgrid folks did an RCA and posted the explanation in a subsequent post, but most people rushing to shoot down Erlang/Elixir don't get around to reading that article.
Extracted comments from the article:
> we discovered that Elixir had much higher CPU usage than Go, and yet its responsiveness remained excellent. Some of our readers suggested that busy waiting may be responsible for this behavior.
> c5.9xlarge and c5.4xlarge instances show similar results in responsiveness, and no meaningful difference with respect to busy wait settings.
Also, note that your link shared from stressgrid was published BEFORE the benchmarks links I shared from stressgrid. As such, wouldn't those benchmarks take into account your explanation?
I don't know for sure, but the benchmarks you linked do some more RCA, which involves looking into why cowboy2 is not great (tl;dr HTTP2 support is hard)
also, here is how stressgrid got a POC (possibly too dangerous for prod?) to 100k cps (which is on par with what Go can do in the article you linked): you could recompile the kernel and configure ranch differently; it would probably need to be proven out more:
> "the stressgrid benchmarks look bad because they forgot to turn off the "spin your cpus" setting"
stressgrid blog post:
> "When running HTTP workloads with Cowboy on dedicated hardware, it would make sense to leave the default—busy waiting enabled—in place. When running BEAM on an OS kernel shared with other software, it makes sense to turn off busy waiting, to avoid stealing time from non-BEAM processes."
stressgrid seems to recommend the opposite of what you mention, taken from the blog post you shared.
Am I misunderstanding?
The perf looks bad because they are also looking at CPU usage.
Getting to 100k rps is unrelated to busy wait.
CPU usage has no bearing on transactions per second, so why are you bringing it up?
Spin/wait settings: how is "cloud hardware" different from "legacy hardware"? Both are servers located in a data center. Furthermore, the blog post goes on to say that if you aren't running any other non-Erlang services (your box is dedicated to Erlang), you should KEEP the default spin/wait setting, since it won't cannibalize other services running on that box ... which is exactly how these benchmarks were tested.
The fact of the matter is, Erlang had lower transactions per second than most of the languages benchmarked. My insight from this HN discussion is that where Erlang shines is having consistently low latency/response time. It's not the fastest, but it's the most consistent even under load.
In general, I think blog posts like this do a pretty bad job of explaining it. At some point I'll publish my take and hopefully it'll reach the front page here.
Languages like Go, C, Rust will always beat Elixir/Erlang in the computationally intensive benchmarks.
You would chose Elixir/Phoenix/Erlang for the concurrency and networking story.
I think you need to define worse here...unpredictable spikes in latency will give you plenty of headaches when trying to guess how much hardware you should throw at a service. Erlang's consistent latency here is what I would choose above everything that benchmark shows for almost every problem I've ever solved.
Going fast at all costs is not a desirable trait for my software and I suspect it isn't for most peoples software. I want predictable behavior that operates gracefully under extreme circumstances.
However, since BEAM doesn't have access to unique CPU instructions that nobody else has or anything else, and since a lot of focus across a lot of languages has been put on that problem, that particular advantage has waned, and Erlang to my eyes has indeed been outright passed on this front by multiple languages. In the 200Xs, I did not see people talking much about NIFs as a solution for performance; that talk has started as an effort to keep up with things like Go and other languages that have taken advantage of BEAM's lessons and explorations of the space.
Personally, while I think a lot of the hoopla surrounding Erlang/Elixir isn't wrong per se, I do think a lot of it is outdated. They'll say "We do X and nobody else does!" but while that may have been true 10-15 years ago, it isn't anymore. There's no performance reason to pick Erlang/Elixir over Go, for instance, and if you take the models of memory access back to Go there isn't a huge organizational reason either. What Erlang/Elixir force you to do, you can voluntarily do in other languages too. And I think that's become true across a lot of the other putative "advantages"; it isn't that Erlang/Elixir aren't nice in some ways, but I do wonder how much of the recent push is stemming from people who are experiencing some of these capabilities for the first time and trusting the Erlang/Elixir storyline that they're unique, when in fact they are increasingly just table stakes for a new language nowadays rather than special characteristics unique to the BEAM family of languages.
This argument is older than dirt. I don't need Java, I can do all of this stuff in C (voluntarily). I don't need Y, I can do it in C++ voluntarily.
You don't control your team. It doesn't matter what you are willing to volunteer if it doesn't work unless the whole team does it. If it only works if everyone has to do it, that's not voluntary now, is it?
Every boundary that is by agreement only will constantly be pushed and pushed. Every time someone wants to leave early, or there's a production issue or a customer deadline, or they just don't want to. It's why size and speed are a constant fight at some places. Everything you fix is counteracted by ten other people who just don't care.
Either the system has to enforce the rule or your coworkers become enforcers. That's a shit job to begin with, and doubly so for introverts, who will either do too little or too much in the face of boundary testers.
Like immutability! :P
Jokes aside, I agree for basic concurrency you can get pretty far with other modern languages. I think it's what brings people's attention to Erlang/Elixir, but I don't think it's the most important differentiator. It also isn't the one that Erlang's community (I can't speak to Elixir) really touts, except as one that is easily understood by those outside of it.
The real benefit is fault tolerance. Everything about Erlang, the concurrency and distribution stories included, is built around fault tolerance. You need concurrency and distribution to be fault tolerant (can't have one bad process choking out others; can't have one bad machine taking the service down, etc). The immutability, the supervision tree, those also are about fault tolerance.
I've written production systems in Go. It scaled better with way less tweaking than the JVM based stuff we'd written previously required. But it wasn't nearly as resilient to failure, or as predictable, as the Erlang stuff I've run in prod was.
This is part of what I mean; Erlang doesn't have access to special CPU instructions that make supervision trees possible. They're just code. There's no reason you can't write them in other languages. It's not even particularly hard, unless you insist on exactly matching all the accidental details of the way Erlang implements them instead of implementing their essential details in a manner idiomatic to the base language.
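As a sketch of the "just code" claim, here is a deliberately minimal restart loop in Go; it is only an illustration of the restart-on-failure part, not a claim of parity with OTP supervisors:

```go
package main

import "fmt"

// supervise runs worker and restarts it whenever it panics, up to
// maxRestarts times: a bare-bones one-for-one restart policy in
// plain Go. It deliberately skips exit signals, linked kills, and
// everything else OTP supervisors guarantee.
func supervise(worker func(), maxRestarts int) {
	for i := 0; i <= maxRestarts; i++ {
		crashed := false
		func() {
			defer func() {
				if r := recover(); r != nil {
					crashed = true
					fmt.Println("worker crashed:", r)
				}
			}()
			worker()
		}()
		if !crashed {
			return // worker completed normally
		}
	}
}

func main() {
	attempts := 0
	supervise(func() {
		attempts++
		if attempts < 3 {
			panic("transient failure")
		}
		fmt.Println("worker finished on attempt", attempts)
	}, 5)
}
```

Whether this covers the "essential details" or misses them is exactly what the replies below this comment argue about.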
Define hard here? Because there is a lot of bookkeeping involved, and, yes, to get some of the effects you have in Erlang that are necessary for reliability, you'd basically have to create your own runtime atop Go. I.e., yes, if you just want to "if this process fails, restart it", you can do that trivially in another language, but "if this process fails -kill these others and restart them from a known good state-" is devilishly hard, given that Go has no way to kill a running goroutine. You can send a message along a channel of "hey, you should stop", but if code execution in that goroutine never gets there, you have no guarantees.
And while the CPU doesn't have special instructions, the VM -does-. Exit signals are guaranteed in the language spec; a colocated supervisor is guaranteed to be able to both detect a failed process, and to be able to kill others. Go offers no such tooling, let alone such guarantees. I'd be quite interested if you say you did that; I suspect it was, as mentioned, just "hey, if an error comes out of this response channel, restart the goroutine". Possibly also a "here's a channel we can send 'kill' commands on, downstream goroutines should check it occasionally to see if they should terminate". A lot of bookkeeping, no guarantees.
Is it a problem? Yes, absolutely. Is it a problem worth spending extremely valuable language design budget on? Heck no, and the fact that Erlang does is a negative to me, because what they gave up to get that capability is way more important to me.
Does my solution exactly match Erlang? No. Of course not. But it gets me 90% of what I care about for 10% of the effort, and in the meantime I get the other things that directly impact my job on a minute-by-minute basis, like an even halfway decent type system (it's not like Go's is some sort of masterpiece here, but it's much better than BEAM's), which Erlang sacrificed as part of its original plan. I understand why they have the type system they have, and what they got out of it, and I'd rather have a decent static type system and solve those problems another way, which happens to be the conclusion pretty much everyone else has come to as well. Again, genius thinking in the 1990s, way ahead of everyone else, don't let my current assessment of Erlang diminish the fact I deeply respect what it did in its time... but not a solution I have much interest in in 2021.
As the meme goes, Will Ferrell takes a deep puff of his cigarette and says "I don't believe you".
I agree that Go has a better type system than Erlang/Elixir/BEAM languages in general. Absolutely. But I think you're showing bias and were already looking for an excuse not to use Erlang. That's completely fair, but I think you are unfairly misrepresenting the exact merits of the decision.
> This is part of what I mean; Erlang doesn't have access to special CPU instructions that make supervision trees possible. They're just code.
Everything is "just code", dude. Yours is no more an argument than technologically advanced aliens visiting us and exclaiming "How come you don't have super-alloys that can withstand atmosphere entry without losing atoms? It's just chemistry!"
I am getting the vibe that you're one of those super-programmers that can tinker with everything and the computers have no secrets from them. That, or you are bragging too much.
And you should know something else -- I am not a huge fan of Elixir these days. I've worked with it for 5 years, but it's showing some cracks and a lack of community (and core team) attention in critically important areas for the ecosystem's advancement -- like compiler instrumentation, or tooling to modify code in an automated way -- not to mention the dynamic typing. I get it; it's far from the magic many fans make it out to be.
But you are discounting very real advantages in a very dismissive manner.
For the record, if one Rust runtime gains most of Erlang's OTP capabilities tomorrow then I'll switch to Rust for 100% of my work next week. But nobody has surpassed OTP's capabilities.
Finally, I'll also agree that we don't need 100% of OTP -- on that much we are in complete agreement. But as you yourself pointed out, cancelling running background threads is still mostly an unsolved problem, so scoffing at a technology that has mostly solved it is a very uncharitable take that makes me question your other arguments and wonder if they are not emotional ones.
I am by no means a super-programmer. On my non-humble bragging days I'd say that I'm only very slightly above average. But I've tried to duplicate OTP in at least 3 other languages and I failed miserably every time (Java was one of them). So yeah, don't just say "meh, I can invent OTP everywhere else". No. You really can't. If you can, open-source this effort and I'll donate, I promise.
the real key for erlang and later elixir's success was pervasive async io. this was a genuine advantage during erlang's peak but languages, runtimes and libraries like go, node and nio have caught up and surpassed the erlang vm
the truth is without that advantage almost everything in the erlang/elixir world is worse than more mainstream alternatives. there's some exceptions -- ecto is pretty good because it has one of the best written connection pooling implementations i've ever seen. i think both languages are pretty good and i still write the occasional thing in erlang for my own satisfaction but the world has moved on and elixir and especially erlang haven't innovated enough to have kept up
Seriously? You say this without providing examples?
I'm not aware of any other programming environment that has BEAM-like processes.
Not Go, not Java, not any other system I know.
these all have slightly different semantics and characteristics, but it's simply not true that beam is doing anything unique here. pervasive links between processes are a somewhat interesting wrinkle of the beam implementation but even that is achievable in other languages with little work
what made green threads work in erlang (and later elixir) was async i/o. in other languages green threads would block on all i/o calls and had no opportunity to yield whereas on the erlang vm all i/o calls would effectively yield while waiting. today nearly every language has async i/o (in libraries if not pervasively) so green threads are much more accessible
i don't say this as an elixir hater or whatever. i genuinely respect erlang's place in history as a popularizer of some of these concepts. when i say the elixir/erlang community is credulous and prone to exaggerating the differences between those languages and other more modern languages (particularly when it comes to implementation) i don't say it dismissively as a reason to abandon those languages. i say it because without an impetus to keep improving, elixir and erlang are going to become increasingly irrelevant. it would be a shame if the 'elixir is different and unique' attitude led to complacency and stagnation
Of all the 7-8 programming languages I've worked actively with in my almost 20 years of career, only Elixir apps had predictable and stable latency even under load. So you know, at one point I stopped caring how the BEAM does it or why the other languages/runtimes aren't doing it. I just started going to the technology that gives me this.
Is Go faster? Feck, absolutely yes! But its 95th percentile latency spikes through the roof under load, while a Phoenix/Ecto app raises its median latency by no more than 20-30% (the worst I've seen is +300%, when from a median latency of 15ms the app went to 60ms, and that only in the 99th percentile of requests) even when the hardware is close to toppling over.
I feel that the raw muscle power of languages is vastly over-represented. I'd love to have a Rust with OTP's guarantees because at some places it's literally 1000x faster than Elixir, absolutely. But in a world where we have to choose raw power versus predictable performance (even if that performance is lower than what we can get in other languages) then I'll choose the latter any day.
And I am not alone in this. Many teams are choosing Elixir for exactly this reason.
One thing I'll agree with is that other languages have taken notice and are working hard to catch up with the OTP. I'd welcome them in the club once they are there because I hate language wars and I gauge technologies based on their merit. But they are still not there, sadly.
Ruby did not remove Fibers (in fact, they've recently been enhanced, in 3.0, to optionally be nonblocking—that is, automatically yielding on any operation that would block).
Ruby removed continuations from the core (moved them to stdlib) after adopting independent Fibers way back with 1.9; continuations were the previous mechanism for similar lightweight concurrency.
If you use Node as an example, your code is JIT compiled to machine code, any single request can fail, and you can scale to any number of requests without thinking about the underlying OS or VM.
Async/await will allow you to do a "blocking receive" like Erlang's processes.
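For a concrete picture of that "blocking receive" shape, here is a minimal sketch in Go (used here purely as an illustration language); a channel read blocks the calling code the same way an Erlang `receive` or an awaited promise does, so the logic reads as straight-line code:

```go
package main

import "fmt"

// worker blocks on its mailbox until a message arrives, then replies.
// This is the same straight-line "blocking receive" shape as an
// Erlang process's receive, or an awaited promise in Node.
func worker(mailbox <-chan string, replies chan<- string) {
	msg := <-mailbox // blocks here until something is sent
	replies <- "handled: " + msg
}

func main() {
	mailbox := make(chan string)
	replies := make(chan string)
	go worker(mailbox, replies)

	mailbox <- "hello"
	fmt.Println(<-replies) // handled: hello
}
```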
For something like this I'd choose Rust or OCaml due to their insanely fast cold startup time (if the program is a CLI tool).
Erlang/BEAM is not there yet and it might not soon be.
OP said he did not know any other "BEAM-like environments", but AWS and GCP are "BEAM-like" systems in that they allow you to use distributed hardware to achieve scale and fault tolerance.
Admittedly Google's Cloud Run is very easy and nice to use though. And fairly cheap.
I'd also cite the real technology I'd use today, which is a heterogeneous set of services in whatever languages I'd like, hooked up by a high-quality message bus. This is the real technology that drives the at-scale Internet. You basically get Erlang's reliability out of that setup when used properly, and you don't need Erlang to do it. In fact you can get a touch more than Erlang's reliability because I find in practice 1-or-n delivery to be much more practical than Erlang's 0-or-1 delivery. It's basically the same environment Erlang gives you integrated, except decoupled, and since all the pieces are decoupled, while Erlang has sat on the same effective point in this space that it picked out 25 years ago, all the decoupled components have been iterating and evolving over that time frame and are now better than what Erlang offers in its integrated package.
Second of all, if in 2021, around 25 years after Erlang and easily 15 years after Erlang has been generally known as a B-list language among language designers, almost nobody else has seen fit to copy it... maybe it isn't that great of an idea. Rust does something completely different, and in my opinion, strictly more useful, albeit at the cost to programmer complexity. I moved to Go from Erlang roughly 8 years ago, and I'm happier, because it turns out "general community good practices + channels" is fine, and also means I can go faster, and get a nicer language in the meantime. All the other modern languages are coming with some sort of concurrency story; it's table stakes for any language born in the last 10 years, if not the last 15.
For the mid 1990s, it was sheer genius. For 2021, it's a very brute-force, inelegant solution to the problem that nobody's very interested in copying. While in the 1990s concurrency was a nightmare and Erlang legitimately had a claim to a better solution, in 2021 there's a good 3 or 4 things I'd use before dropping back to Erlang as a solution. Concurrency is much less of a problem than it used to be, through a combination of various things, and the proposition of burning so much of a language's design budget on that problem is a lot less appealing than it was 30 years ago. Erlang really needs to adopt Go-like channels for some of what it's doing (not in replacement for the processes, but for some of the things they're not very good at), the ~10x slowdown for general logic is a real kick in the teeth in 2021, the lack of backpressure in the Erlang message model becomes a big problem at scale, and lots of other little problems I'd have if I had to go back to it. (Yes, I've been reading the release notes. If I weren't I'd have a couple more things to add.)
Erlang/Elixir/BEAM isn't leading the pack anymore. They're a cut behind in most ways now, but the community still thinks they are leading, ensuring that none of the lessons learned by other communities can filter back into the Erlang/Elixir/BEAM community.
Erlang / the BEAM did indeed make a lot of good innovations and I can only be angry at myself for being an idiot pressured by employers and never looking beyond it all for something better (until 5+ years ago anyway). But I agree that some of it is starting to show cracks.
In terms of language design, Erlang (and Elixir) aren't anything special. I can't fall in love with syntax anymore because I've literally never seen a language I completely like (LISP included, although it + OCaml are fairly close to ideal languages if you don't stray too much off of the beaten path and venture into their more arcane constructs, of which OCaml sadly has plenty).
To clarify, I believe Elixir is one of the most solid contenders for writing highly available and reasonably performant Web / GraphQL server apps, but the lack of compiler tooling (tooling to modify the AST, among other things) is definitely starting to hurt it. Having standardized introspection in the language helps it reach higher levels, e.g. tools that can manipulate an existing project the way TreeSitter and/or SemGrep can modify/query language-specific constructs. Elixir doesn't have that and I am starting to get annoyed with it because of that.
RE: Using an external messaging bus makes sense, but let me point out something important that often seems to go unsaid in discussions about Erlang / Elixir:
The BEAM gives you a lot of good training wheels and the truth is that at least 90% (if not 98%) of the commercial projects out there don't require much more than that. As shared in the other comment, I was able to get away with not using Redis for a long time and had zero trouble. I only yielded after we needed to share various message queues and events/streams with other apps (not written in a BEAM language).
So I'd say the BEAM ecosystem gives you a lot out of the box, plus the Elixir community is small but fairly dedicated and they have libraries of excellent quality. But, as you alluded to, when you need to throw those training wheels off, other much more dedicated and focused technologies like Redis do exist and we should reach for them after the circumstances change enough.
Would you agree with those assessments?
OTP 25 will also include JIT support for ARM64
A lack of understanding how concurrency works can really screw you up in Elixir.
I almost never use Task, because I like certainty. I've been bitten too badly by people using it who didn't understand it. I don't like having to explain that `Enum.each(tasks, &Task.start/1)` is going to screw up your order of operations to someone who doesn't get it.
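A minimal sketch of the ordering pitfall described above (the printed messages are purely illustrative):

```elixir
# Each Task.start/1 spawns an unlinked, unsupervised process and returns
# immediately, so the tasks race: their side effects can interleave in
# any order, and Enum.each returns before any of them may have run.
tasks =
  Enum.map(1..5, fn n ->
    fn -> IO.puts("task #{n}") end
  end)

Enum.each(tasks, &Task.start/1)
IO.puts("dispatched")
# "dispatched" may print before, between, or after the task lines.
```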
> Basically eliminate Redis or caching.
It's easy to start off thinking that, but then you can end up spending a lot of time maintaining subpar libraries you've written in-house that don't have the nicer features of the thing you replaced. Also, caching can turn into quite the memory hog.
> Run jobs across a cluster sanely with a good job library that only needs Postgresql.
It's all fun & games until you have to undo everything so you can containerize the apps.
> If you're wild eyed you can use Mnesia without the databases.
I've used DETS & hate it quite a bit, I really don't want to be married to Mnesia.
> New projects should be started with Elixir.
I love Elixir, but I wouldn't use it for everything, and it's got a lot of sharp edges that can get you into a lot of trouble.
> I almost never use Task, because I like certainty. I've been bitten too badly by people using it who didn't understand it. I don't like having to explain that `Enum.each(tasks, &Task.start/1)` is going to screw up your order of operations to someone who doesn't get it.
I don't understand this. In any language with any concurrency support there's a point where you spin up a bunch of threads to do something. That's a useful capability and developers should understand it. If you don't know Elixir then you need to learn what Task does, but that's true in any language.
Uhhh, `Task.async_stream` is much superior in almost every way -- you can specify maximum concurrency, timeout and whether you want the results in the original order. You only have to pipe into `Enum.to_list/1` at the end (or `Stream.run/1` if you only care about side effects) and boom, you get the parallel work's results handed to you in a variable assignment.
Using naked `Task.start` is only marginally better than spawning raw OS threads and I avoid it like the plague. Thought this was common wisdom but apparently it isn't.
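A sketch of the `Task.async_stream` pattern described above; the squaring workload and the option values are made up for illustration:

```elixir
# Square numbers with at most 4 concurrent workers; with ordered: true
# the results come back in input order regardless of which task
# finishes first.
results =
  1..10
  |> Task.async_stream(fn n -> n * n end,
       max_concurrency: 4,
       timeout: 5_000,
       ordered: true)
  |> Enum.map(fn {:ok, squared} -> squared end)

# results == [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]
```

Collecting with `Enum.map/2` forces the stream and keeps the results; `Stream.run/1` would run it for side effects only and discard them.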
> It's easy to start off thinking that, but then you can end up spending a lot of time maintaining subpar libraries you've written in-house that don't have the nicer features of the thing you replaced. Also, caching can turn into quite the memory hog.
All true, although `cachex` and `ane` are extremely well-done libraries that have carried me a long way. I only gave up on them when we had to integrate the Elixir server app with apps written in other languages, at which point we started (ab)using Redis' streams and normal caching abilities.
> It's all fun & games until you have to undo everything so you can containerize the apps.
Also true but that depends on your app and DevOps requirements. A lot of businesses don't need auto-scaling of their Elixir apps. They just conservatively buy good enough static hosting and it serves them perfectly well. In five years of working with Elixir I have never seen hosting that was bogged down by Elixir CPU constraints. 98% of the time it waits on I/O, be it disk or network.
> I've used DETS & hate it quite a bit, I really don't want to be married to Mnesia.
Agreed. This was a good idea 20 years ago, nowadays I'll reach for PostgreSQL or sqlite3 without a second thought. No need to reach for a homegrown half-database where scaling and actual querying become a problem the moment you reach out of hobbyist app territory.
> I love Elixir, but I wouldn't use it for everything, and it's got a lot of sharp edges that can get you into a lot of trouble.
Sadly I'll agree with this as well. I love Elixir and it will have a special place in my heart all the way to retirement, but I am seeing its downsides and I am not religiously advocating for it. Nowadays it's much more likely for me to reach for Rust for new projects, especially if the project isn't a web app (and even there `actix_web` 3.X and `rocket` 0.5 are super solid and fast as well).
People always make this comparison, which I feel is a bit weak/unconvincing. It's pretty easy to beat Ruby or Python in concurrency; most language runtimes do it.
Something most people might not realize is that Erlang/BEAM is miles ahead of even Java/the JVM, when it comes to operationalizing its concurrency — that is, building services that are robust in the face of heterogeneous workloads and misbehaving clients.
Ever tried to build a RESTful web service, that performs unpredictable-runtime tasks in response to user requests, but which budgets that runtime such that requests are hard-killed (and their resources freed) after a deadline, and/or if the client closes their TCP socket — even if they're in the middle of some CPU-intensive hot loop — and which makes sure to "push down" that failure into resource-handles like DB sockets, such that the DB "sees" the failure and gives up on its related long-running CPU-intensive task as well?
This is a Hard Problem in Java, with a huge number of little considerations involved: lots of calls to Thread.interrupted; async requests; CompletableFutures; moving any parallel streams to their own explicit ForkJoinPools, etc. You can see the Project Loom team slowly giving the Java stdlib a working-over in this style, but even after their work becoming Generally Available, library authors will still need to do all this stuff as well.
In Erlang/Elixir, meanwhile: 1. the accepted TCP connection is a process; 2. the web request runs either inside it or in another process linked to it, such that either one dying kills the other; 3. any concurrent Tasks get linked to the TCP connection process that spawned them as well; 4. any DB calls temporarily link the checked-out DB connection to the request process as well; and 5. adding request deadlines is as easy as calling a function to spawn a deadline timer pointed at the spawning process's own head.
Essentially, idiomatic Erlang/Elixir code gets this type of robustness for free, with zero additional lines of code. The moment you have an Erlang web server, you have a robust Erlang web server.
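Point 5 can be sketched with `Task.yield/2` plus `Task.shutdown/2`; the 200 ms job and 50 ms budget here are arbitrary stand-ins:

```elixir
# Give a linked task a 50 ms budget; if it blows the deadline, hard-kill
# it and reclaim its resources. The 200 ms "work" is a stand-in for an
# unpredictable-runtime request.
task = Task.async(fn -> Process.sleep(200); :done end)

result =
  case Task.yield(task, 50) || Task.shutdown(task, :brutal_kill) do
    {:ok, value} -> {:ok, value}
    nil -> {:error, :deadline_exceeded}
  end

# result == {:error, :deadline_exceeded}, and the task process is gone.
```

Because `Task.async/1` links the task to the caller, the task also dies automatically if the request process itself is killed, which is points 2 and 3 above.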
(Erlang/BEAM is also miles ahead of Golang in operationalizing concurrency, for separate reasons, mostly to do with goroutine resource leaks. Separate topic, but I just figured someone would ask.)
And then there's the fact that per-process heaps mean most short-running request-response type workloads can get away with never making a garbage-collection call, instead just allocating a pre-sized arena with each process, doing work, and then tossing away that arena when the process quits. So, as long as you've built your Erlang/Elixir app idiomatically, doing each task in a process that doesn't survive the lifetime of the task — then you'll never see the same sort of gradual "putting off GC for later" asymptotic allocated-memory climb that you see in most GCed languages.
This sounds like a fairly simplistic primitive operation, but what it means is that you can start a whole set of processes that are logically linked together (e.g. because they handle one web request) and if anything happens they all get killed together. This means you get almost the simplicity of error handling of throwing an exception in a single thread of code, but with all the concurrency, parallelism and locality of reference that you get with multiple actors doing the work.
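A minimal sketch of that group-kill behavior, using a dummy "request" process and a linked worker:

```elixir
parent = self()

# A dummy "request" process spawns a linked worker. Killing the request
# takes the worker down with it: no cleanup code, no leaked process.
request =
  spawn(fn ->
    worker = spawn_link(fn -> Process.sleep(:infinity) end)
    send(parent, {:worker, worker})
    Process.sleep(:infinity)
  end)

worker =
  receive do
    {:worker, w} -> w
  end

Process.exit(request, :kill)
Process.sleep(50)
Process.alive?(worker)   # => false: the linked worker died too
```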
I think in a lot of ways StackOverflow has changed this dynamic. Previously I benefited from memorizing all of the API functions that return slightly different responses than their peers. This one returns null instead of an empty array. This one mutates the second argument. This one has n^2 complexity, and so on.
What I need most is a toolbox full of Important Questions. Important Questions scale logarithmically across languages and many problem domains. Important Answers do not.
I'd pick Erlang myself, but Elixir is compelling.
Does tech have to be "mainstream" to use it? I would say it doesn't.
Take Erlang, or OCaml, or F#, I wouldn't call those mainstream but they are great languages that solve real problems, and people using them seems generally very positive about them.
All you need is enough popularity and enough companies using and backing it for it to be maintained. I think Elixir has this.
Can't speak to the hiring situation but I wonder how problematic it can be, given all the stories of how easy it is to onboard people and plenty of posts and comments from devs wanting to do Elixir. Even if it were a problem, it depends on your situation: you don't always need a big workforce to do big things; see WhatsApp for example.
Go ended up winning out because “Google makes it more mainstream”. That was the crux of the decision.
Don’t get me wrong, Go has plenty of selling points but there were a lot of supporting articles making that claim.
Denouncing Elixir as “not mainstream” is probably just a result of the built in defensiveness from people who argued against it for a while.
Elixir’s fantastic. It has made me a much better programmer.
In my opinion "mainstream" is not a good factor to select technology on. The factors you mention can be important, but then select based on those not on "mainstream" or not. If you need longevity, pick something based on that, if you need tons of resources, select on that, but not a lot of projects need to live 20 years, and plenty of people prefer well written docs over an abundance in stack overflow answers.
Also mainstream does not mean longevity. Take for example Python 2->3, that was quite stressful to say the least. Plenty of once mainstream languages and technologies are dead or slowly dying, it's very hard to predict.
Continuing on the longevity point, it might be good to note Erlang/BEAM — that Elixir runs on — is older than most languages you mention, still actively used, maintained and developed. So I could very well argue Elixir/Erlang is as safe a bet as any of them.
Lastly I don't think it's true that Elixir is niche enough that this should be a discussion point at all. If you have giant companies running and backing Elixir/Erlang, it's pretty safe to say its battle tested enough and will be around long enough for a lot if not most projects.
Mainstream isn't everything, I wouldn't overestimate it, there are better ways to choose technologies.
It just made it into top 50 TIOBE index. Projects like Nerves for IoT, Nx, etc are giving it life in new applications and industries.
It's never going to be a top 5 language, but it's on its way to become a well-known language with a strong community.
I agree it has a strong community for its size. It remains to be seen what happens 5 years from now when the hype settles (It arguably already is happening). Some languages reach sizes where it's impossible to die out even though they are declining (PHP for example. Probably Ruby as well). Perl seems to have gone extinct though, probably due to competition in the web sphere and also poor decision making (Perl 6 etc).
It's impossible to know what's in store for Elixir; I do see a lot of love and excitement by Elixir devs so that's a reason to be optimistic I guess.
Since then, the community has matured, many have either moved out or settled with it. It's gaining momentum but the strong and steady kind, not the overhype kind. There isn't this infatuation over it anymore as we used to see those days.
Elixir is now in the strong, mature community stage. It's not hype anymore, it's the period of subdued stability and steady growth.
I find its concurrency features, purported developer productivity, and its positioning as a “niche but popular” tech (these can be nice to acquire technical clients) very appealing.
Does anyone have any experience running an Elixir consultancy / agency / software house? Or freelancing? How is the market and your general experience?
- web-socket story is the goat (and I used to write nodejs)
- ecto database library is amazing
- productivity is on par with django/rails
- performance is comparable to go. sub 100ms response times are normal
- not many drop-in libraries.
- hiring engineers is a pain but plenty are interested in learning
- libraries are not as widely used, so you're more likely to run into a bug in edge-case libraries.
- deployment is more complex if you want to use the mesh features
So far, I'm happy with it. There are certain features specific to my startup that Elixir/OTP made easy to create. We're getting traction and Elixir's strengths are a huge part of that.
What does this even mean? What are your dependencies? What does your service do? Sub 100ms response times are the norm for many languages.
It's replies like this that make me think I work in a different world.
Go is between 2x and 20x faster than Erlang. Erlang is a pretty slow language, it's in the same performance range as Python. Some reasons:
- it's using a VM
- the VM is not very well optimized ( compared to Java or C# )
- immutability has a big cost in term of performance
We had a subcontractor helping me who didn't know Elixir. I was able to get him running within a few days. His lack of SQL knowledge was more of a bottleneck than his Elixir proficiency.
That sounds incredibly slow for something that responds on a websocket, but you never told us what you're doing besides db transactions (which should be 1ms unless you're on spinning rust)
Haven't had a gap in clients for a year and a half of straight Elixir work. Have more opportunities than I can feasibly execute on.
Finding the right companies and connections is a bit of luck and a bit of networking of course. But most of the companies I've worked with are not well known in the community. There's plenty of usage in the wild.
Yeah, I gotta disagree... a lot of folks grossly underestimate the challenge of learning an _entirely_ new VM and language at the same time. Example, I can't count the number of silly things I've seen because someone obviously hasn't really grokked that BEAM will preempt processes. Or the amount of superfluous testing I see around gen servers when folks could directly call the function they intend to test...That is to say, I've watched a lot of folks try to write things coming from other VMs and languages and constantly miss the mark. What works for language or runtime X likely doesn't work that way on the BEAM and may not even translate.
Elixir is a lot more forgiving, but with Erlang you can tell it was made for telecom switches.
I don't know how else to say this except that it's complete nonsense. I work for a place that deploys an elixir crud app and it's very nice despite tons of domain mistakes, annoying 3rd party APIs we connect to and bad architectural patterns that have accumulated over the course of several years of a brownfield project (I am a relatively recent hire). I can't imagine dealing with all that pain in another language
The reason why elixir is fantastic is in such a ball of mud codebase, you can truly work in one corner of the codebase and not worry about anything else being disrupted. Testing story is also great. You can very simply do things you can't in other languages so easily, like concurrent tests that hit the database.
Erlang was as easy as any of them for CRUD apps. I used Cowboy for it, which is comparable to Express (JS) or Gin (Go) in how it approaches things (i.e., simple request/response paradigm, with various helper functions to pull stuff out of the request data, or write into the response data, and easy to insert middleware as, again, just functions). Integrations outside of the app weren't especially painful; admittedly, we didn't have to integrate with an undocumented API whose only integration was via supplied library that was implemented only in Java or similar.
Is your objection just "CRUD apps have to integrate with lots of downstreams, and so for that you want a language that has a huge plethora of libraries to make those integrations easy"? Because in that case "maybe". But that's orthogonal to being a CRUD app; that's simply because you have a bajillion dependencies that have existing libraries.
There's real projects where you have to bind together 10 different things and using C++ is simply the most expedient option because getting FFI bindings made for everything is intractable. I have a feeling most of the people here (especially the ones downvoting me) only have web experience.
From my other comment:
The people here are not really painting a good picture of BEAM vm languages. The typical complaint is some extremely energetic new hire wants to use Erlang for something, does, and then it turns into a hard to maintain mess. You can equivocate about the specific reasons but if you are really going to say ignoring legacy compatibility is stupid then I'm not sure what to tell you.
Being a fan of C++ is fine but you come across as biased.
When you say this:
> There's real projects where you have to bind together 10 different things and using C++ is simply the most expedient option because getting FFI bindings made for everything is intractable.
My immediate reaction was: "But dude, that doesn't apply to web development, and Elixir is most of the time used only for that". So your aside seemed inapplicable and out of place for this thread. Elixir isn't used for such projects most of the time (with some notable exceptions like the Nerves framework that allows you to burn a full bootable image of an Erlang/Elixir app to an SD card and boot off of it to a supported set of ARM and x64 SBCs).
And then you say this:
> I have a feeling most of the people here (especially the ones downvoting me) only have web experience.
Which is, again, a bit out of place as a comment here, because again, Elixir is mostly used for web apps (REST, normal Web, GraphQL, WebSockets magic like LiveView etc).
So to me it looks like your comment was addressing a wider issue that is mostly applicable to projects that actually care a lot about backwards compatibility -- and most web projects don't.
> The people here are not really painting a good picture of BEAM vm languages. The typical complaint is some extremely energetic new hire wants to use Erlang for something, does, and then it turns into a hard to maintain mess.
This happens in every ecosystem I've participated in for my 19.5 years of career (including when I was doing C++) so I urge you not to end up being negatively biased against the BEAM ecosystem folk in particular.
People do get hyped and invest a lot of energy trying to fit square pegs into round holes -- and several months/years later people like myself are being called to beat the project back into shape. I have given up hope that the programmers at large will ever learn not to get hyped and judge tech based on its merits... :|
As much as I like JS I’m not a big fan of it on the server however - web frameworks are lacking and tooling and libraries feel like the wild west.
In Phoenix I’d see a web framework for projects with concurrency reqs, with great ergonomics and DX, nothing else. Maybe I’d miss types? But we work with Python just fine.
And yes, high concurrency can be a fairly common req in web projects, it’s not only seen in telecomms engineering.
Can someone help me rationalize this discrepancy?
Also, in general, for a Digital Ocean droplet, how many requests per second (db query based) can Elixir handle while maintaining a sub 200ms latency?
Basically we use React for most front ends, Python for data engineering/data science, and Elixir for everything else.
What? Why does this make Elixir an obvious choice? Most (if not all) major languages offer concurrency primitives (e.g. goroutines), and fault-tolerance is included as a requirement by default for any sufficiently complex distributed system, but has little to do with the languages/frameworks used to build that system. Not seeing why these requirements make Elixir a better choice than any other language
Or rather, the need for the distributed system to be fault-tolerant is included by default, but downplaying the language/framework isn't entirely justified.
Kubernetes has definitely made it so the distributed tooling of Erlang is less valuable; many of the things it provides you can also get with Kubernetes. The tradeoff being that Kubernetes is far more complex (but, also allows for things Erlang doesn't have).
But what Kubernetes can't do is help you write correct programs. That is, if an error happens, what do you do? In most languages, you catch the ones you can predict and handle them. For the others, you let them trickle up until someone catches them, or the app crashes. The problem is figuring out where to handle them, and to make sure you really -are- handling the things you know how to handle.
Erlang flips this. Handling things becomes far less important than isolating failure. If something goes wrong ~here~, where else is there state that is likely wrong, and how do we get it all back into a good state?
This is a really powerful approach, as it ends up being far more robust, for far less work. It's harder to get 'incorrect' behavior this way; rather than have to enumerate the unhappy paths, and how to address each, you just have to declare dependencies between processes and how, should they find themselves on an unhappy path, to drop their state and get a known good one.
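A small sketch of that "drop the state and get a known good one" idea, using an `Agent` counter under a `one_for_one` supervisor (the `:counter` name and the workload are made up):

```elixir
# An Agent-based counter under a one_for_one supervisor. Kill it
# mid-flight and the supervisor restarts it from its init fun, i.e.
# from known-good state, with no error-handling code on our side.
children = [
  %{id: :counter,
    start: {Agent, :start_link, [fn -> 0 end, [name: :counter]]}}
]

{:ok, _sup} = Supervisor.start_link(children, strategy: :one_for_one)

Agent.update(:counter, &(&1 + 10))              # state is now 10
Process.exit(Process.whereis(:counter), :kill)  # simulate a crash
Process.sleep(50)
Agent.get(:counter, & &1)                       # => 0, fresh state
```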
Erlang/Elixir have an extremely compelling story when it comes to fault-tolerance. You can only argue that all languages are created equal in this regard if you fall back to the "everything is Turing-complete so they're equivalent" argument.
Erlang and its VM were built from the ground up to work closely together with fault tolerance in mind, in a context where a few minutes of downtime meant severe financial penalties.
I’m not trying to be a jerk but I want to understand why this isn’t a feature implementable as a runtime.
Userspace pre-emptive schedulers do not incur the same level of overhead that kernel threads normally have. To address another comment you made in another chain: Erlang accomplishes pre-emption through reductions; if I remember correctly (and have read the correct documentation), a process yields after a budget of roughly 4000 function calls' worth of reductions.
Other forms of async, to my knowledge, essentially queue a task while the main thread of execution continues before awaiting; the queued task is run in some form of thread pool (or other executor). Without going too deep into this subject (and assuming I have made no logical errors), it is possible for misbehaving tasks in async to tank performance and latency, while the same misbehaving process in Erlang wouldn't. See Saša Jurić's demonstration for an example. Do keep in mind that async is not my forte; what I have stated is merely my observations. I write primarily with kernel threads.
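The preemption claim can be sketched directly: a process stuck in a pure busy loop cannot starve the scheduler, so an ordinary message round-trip still completes promptly:

```elixir
# A process spinning in a pure busy loop cannot monopolize the BEAM:
# the scheduler preempts it once its reduction budget is spent, so the
# message round-trip below still completes well within the timeout.
loop = fn loop -> loop.(loop) end
busy = spawn(fn -> loop.(loop) end)

parent = self()
spawn(fn -> send(parent, :pong) end)

reply =
  receive do
    :pong -> :pong
  after
    1_000 -> :timeout
  end

Process.exit(busy, :kill)
reply   # => :pong
```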
This is why it is possible to spin up millions of, in Erlang terms, processes with minimal overhead. See this article by the Phoenix Framework team regarding an experiment on having two million active websocket connections on a single (albeit beefy) host.
> This and everything else is very nice but do we need a full VM for that?
See  regarding an effort to port the Erlang process and supervision strategy to Rust. In my naive view, it is possible to write something similar to BEAM in another language, but the more troubling portion of the work is to jury-rig the original language into a concurrency model it isn't designed for whereas Erlang by contrast was created purely for this task.
It’s so clever and well thought out; I think I'm starting to see the value of Erlang now.
The actor model.
Superior memory usage and CPU processing.
There’s nothing in the list of requirements that couldn’t be done over HTTP; it all seems asynchronous? Scale here seems like it would be small.
Three months is almost zero time for a team of 4. I’m glad it all worked out but I have a harder time, from the story, understanding how using Elixir was the right choice vs a stack they were more familiar and confident with, considering the constraints.
We have written several microservices, primarily for websockets, in Elixir. They are great, with literally zero maintenance cost. But how do Elixir developers handle the following when going all in:
1. Long running workflows - there do not seem to be popular frameworks like camunda, jbpm, temporal or cadence for elixir
2. Integration libraries - similar to apache camel
3. Inbuilt scripting engines to run user scripts like nashorn, graaljs or groovy
We really enjoy working with rails and would like to go all in into elixir. But the ecosystem of available frameworks seems to always come in the way and makes us choose spring boot or rails.
For #3, you probably would need to NIF out to something. You can also execute an uncompiled script of Erlang using escript, but that, obviously, is not something you'd expect most users to learn (and not sure you can execute it in a running Erlang context, rather than from an external shell). You can also evaluate a string, and/or compile and load new Erlang code from a running Erlang program, but these are suuuuuper dangerous if it's user supplied.
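For reference, the string-evaluation route looks like this on the Elixir side; the same caveat about user-supplied input applies:

```elixir
# Evaluating a code string with bindings. Convenient for trusted input;
# a user-supplied string gets full access to the VM, hence the danger.
{result, _bindings} = Code.eval_string("a + b", a: 1, b: 2)
result   # => 3
```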
For #1, I've never seen anything, but if you just need something that allows you to change out or customize logic on the fly you can do so in a pretty straightforward manner. Because it's a functional language, you can do stuff like have a stateful process template that you can swap out on the fly, i.e., submit a list of function identifiers a la [read_data/1, transform/1, write_data/1], and now whenever you call run(data), it spins up a new actor that effectively calls write(transform(read_data(data))). Technically you can even provide new code snippets (using one of the mechanisms from #3, or, if opening up an Erlang shell to the running instance, supplying it directly as a lambda), but you'll need to be mindful about persistence. I am not that well versed with workflow engines, but that partly comes from the fact that I haven't really seen the point given how easy many languages make it to create and customize workflows without having to learn special semantics.
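A minimal sketch of that "workflow as a list of functions" idea; the string-munging steps stand in for the hypothetical read/transform/write stages:

```elixir
# A workflow as plain data: an ordered list of step functions folded
# over the input. Swapping the workflow at runtime is just handing the
# process a new list.
steps = [
  fn s -> String.trim(s) end,    # stand-in for "read_data/1"
  fn s -> String.upcase(s) end,  # stand-in for "transform/1"
  fn s -> s <> "!" end           # stand-in for "write_data/1"
]

run = fn data ->
  Enum.reduce(steps, data, fn step, acc -> step.(acc) end)
end

run.("  hello  ")   # => "HELLO!"
```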
I am not at all familiar with Apache Camel, but I think your concerns here aren't easily addressed; certainly, the library support for Erlang/Elixir isn't anywhere close to the JVM's. But I -will- mention that sometimes writing the integration(s) you need, to enable using a different technology, is worth it.
For #2, I know nothing about this kind of stuff.
For #1 it depends a lot. Something like Oban can be enough, or Ulf Wiger's Job or Broadway. These tend to be more ad hoc and extracted to independent services. It depends a lot on what kind of constraints you have for these workflows. There are lots of options but they are all more specific to use case and constraints than these broad projects that try to do it all.
> 1. Long running workflows - there do not seem to be popular frameworks like camunda, jbpm, temporal or cadence for elixir
Oban seems good for job processing. Otherwise Elixir builds on Erlang's OTP. That means you can also use ets, mnesia, or even Riak's distributed Dynamo application runners.
> 2. Integration libraries - similar to apache camel
Nothing quite like this, but you can make use of Flow or GenStage for data pipelines.
> 3. Inbuilt scripting engines to run user scripts like nashorn, graaljs or groovy
There's a nice Lua implementation, 'Luerl' IIRC. Also, as others mentioned, you can do a NIF. In particular Rust & Rustler would give you access to lots of scripting-language runtimes.
*Laravel and all the other PHP frameworks waving their hands frantically...
Elixir code often relies heavily on pattern matching return values (ok/error tuples or Structs) so that helps along with typespecs.
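A small sketch of that convention; the `Users` module and its data are hypothetical:

```elixir
defmodule Users do
  # Hypothetical lookup. The @spec documents the ok/error-tuple contract
  # that callers pattern-match on; Dialyzer can check both sides of it.
  @spec fetch(integer()) :: {:ok, map()} | {:error, :not_found}
  def fetch(1), do: {:ok, %{id: 1, name: "Ada"}}
  def fetch(_), do: {:error, :not_found}
end

name =
  case Users.fetch(42) do
    {:ok, user} -> user.name
    {:error, :not_found} -> "anonymous"
  end

name   # => "anonymous"
```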
The fact that the language is compiled is also a big help when it comes to avoiding run-time errors while developing locally.
The other thing I appreciate about elixir coming from python (more specifically Django) is that most application-specific code is just modules and functions. There is none of the indirection and abstractions-upon-abstraction I used to have to grok in big python codebases.
I thought I'd miss typing but due to the functional nature of Elixir and its pattern-matching, I found out that the code is much simpler to grok.
In this case I think that having readable idiomatic code trumps the need for typing.
If needed there is type hinting with @spec but it's not used very often (or mostly in libraries?).
I can definitely say that it's very fun and productive to work in a Phoenix mono-repo. The development experience is great and I'm really having fun programming again.
Granted, the codebase uses pattern matching, type specs and Dialyzer extensively, which all definitely contribute towards making the navigation easier. Without those tools in use, I can definitely imagine navigation being more difficult.
For me, it was an upgrade, because the beautiful docs have type information for most main modules you'll use.
I'm not sure how big the team was but I'm impressed.
I know a startup that did £75K+ in sales in the first week with only just using node + heroku.
Could the same be applied to Elixir if it is really that good?
Are there any pitfalls that one should know about?
Not having much info, my guess is many languages would have worked out for that startup.
title:elixir / UK = 11 jobs