Hacker News
Would you still pick Elixir in 2019? (github.com)
317 points by kristerv 71 days ago | 240 comments



I’ve been programming in Elixir for about 2 years now. I have to say it’s hard to go back to something like Ruby or JavaScript.

In Elixir you really get the full power of multi-core and support for distributed computing out of the box.

Code that would have been beyond my pay grade, or that I wouldn’t even have imagined writing in Ruby or JavaScript, is now easily reasoned about and maintained in my projects. I can write succinct code that is easy to read, fast, able to take advantage of multiple cores, less error prone, and easy to scale to multiple machines.

The Erlang scheduler is so damn powerful, and it feels amazing to be able to execute your code on multiple machines with a simple distributed task, which is built in as standard functionality of the language.
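As a rough sketch of what that looks like (the node and supervisor names here are hypothetical; this assumes a connected node running the same code, with a `Task.Supervisor` registered as `MyApp.TaskSupervisor` on it):

```elixir
defmodule Distributed do
  # Run a function on another node using only the standard library.
  # Task.Supervisor.async/2 accepts a {supervisor, node} tuple, which
  # starts the task under that supervisor on the remote node.
  def remote_square(node, n) do
    {MyApp.TaskSupervisor, node}
    |> Task.Supervisor.async(fn -> n * n end)
    |> Task.await()
  end
end
```

Calling `Distributed.remote_square(:"worker@example", 3)` would then execute the closure on the remote node and await the result locally.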

I’ll end this note by saying: look at the problem you are trying to solve. If you need multi-core and distributed features (which is generally more common than you think), Elixir is truly your friend.

I can say without a shadow of a doubt that the project I’m building right now would not be progressing as fast as it is if I had picked anything other than Elixir. You get a lot of bang for your buck when it comes to productivity in the domain that Elixir solves for.


> If you need multi-core and distributed features (which is generally more common than you think), Elixir is truly your friend.

Is it though? At least in my line of work I don't think I've ever run into this. I feel like I've always been able to distribute just fine with workers/queues. If I ever suspected I'd need more, I'd look into it, but generally I find distributing across systems to be software-architecture-level and not language-level work; perhaps I'm missing something, however.


Because those features are accessible, you end up using them a lot more frequently and finding new and exciting ways to use them. For example, I am really happy that Elixir and its tooling do pretty much everything using all cores: compiling code, running tests, generating documentation, etc., and all of this has a direct impact on the developer experience.

The other part is that you can build more efficient systems by relying on this. If you have a machine with 8 cores, it is more efficient to start a single OS process that can leverage all 8 cores and multiplex over both IO and CPU accordingly. This impacts everything from database utilization, to metrics, to third-party APIs, and so on.
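For example, here is a minimal sketch of fanning CPU-bound work out across all cores with nothing but the standard library (the function and numbers are just for illustration):

```elixir
defmodule Cores do
  # Task.async_stream/2 runs the function in parallel; by default
  # max_concurrency equals the number of scheduler threads (one per core).
  def parallel_sum_of_squares(numbers) do
    numbers
    |> Task.async_stream(fn n -> n * n end)
    |> Enum.reduce(0, fn {:ok, sq}, acc -> acc + sq end)
  end
end
```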

The Phoenix web framework also has great examples of using distribution to provide features like distributed pubsub for messaging and presence without external dependencies.

However, when it comes to building systems, I agree with you: I would probably use a queue, because you get other properties from queues, such as persistence and making the systems language agnostic.

I hope this clarifies it a bit!


Exactly right. Just the pure fact that it's at your disposal makes you think about problems in a whole new way. For example, I just recently abstracted my solutions away from Redis. I have nothing against Redis, but removing a dependency makes things simpler, which is important for my setup.

For caching you have SO many options which are already built in: ETS, Agent, ETS + GenServer, or even reaching for a library like Nebulex.
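A minimal read-through cache on top of ETS might look like this (table and module names are just for illustration):

```elixir
defmodule Cache do
  # Create a named, public ETS table to hold cached values.
  def start do
    :ets.new(:my_cache, [:set, :public, :named_table])
  end

  # Return the cached value for `key`, computing and storing it on a miss.
  def fetch(key, compute) do
    case :ets.lookup(:my_cache, key) do
      [{^key, value}] ->
        value

      [] ->
        value = compute.()
        :ets.insert(:my_cache, {key, value})
        value
    end
  end
end
```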

Another example is recurring jobs: you can create a GenServer that will run a job every x hours in roughly 50-100 lines of code, depending on how complex your problem is.
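A bare-bones sketch of such a recurring job, with the interval and the work itself left as placeholders:

```elixir
defmodule Recurring do
  use GenServer

  def start_link(opts) do
    GenServer.start_link(__MODULE__, opts)
  end

  @impl true
  def init(opts) do
    interval = Keyword.get(opts, :interval, :timer.hours(1))
    schedule(interval)
    {:ok, %{interval: interval, job: Keyword.fetch!(opts, :job)}}
  end

  @impl true
  def handle_info(:run, %{interval: interval, job: job} = state) do
    job.()             # do the work
    schedule(interval) # and queue up the next run
    {:noreply, state}
  end

  # Process.send_after/3 delivers :run to this process after `interval` ms.
  defp schedule(interval), do: Process.send_after(self(), :run, interval)
end
```

Starting it is just `Recurring.start_link(interval: :timer.hours(2), job: fn -> do_work() end)`, ideally under a supervisor.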

I rarely feel the need to reach for external dependencies. Not that there is anything wrong with them; it's just that you now have a wider array of tools to work with, and some problems are solvable with a couple of lines of code instead of having to reach for external tooling.

I recently built a distributed work queue. In Ruby I would generally use something like Sidekiq, but Elixir made me feel like "hey, you can write your own". That's generally not recommended, since you don't want to reinvent the wheel, but if you are doing something that breaks away from the existing solutions, having the ability to craft a custom solution that works for your specific set of problems is extremely powerful. You can get much more creative, and the important thing is that it got done fast: I wrote a distributed job scheduler in 2 weeks, plus 1 week to clean it up and work out the kinks, and it's already running stably in production.


Cool, thanks José, that helps indeed. The multi-core DX stuff sounds interesting even on its own, production code aside. I hope to check it out soon!


Until I used Elixir, I thought workers/queues were enough. But after the last nearly three years, I've come to a place where workers/queues are almost always strictly inferior.

Workers/queues in languages like Ruby have problems like:

* They require very specific ergonomics (for example, don't hand the model over; hand over the ID so you can pull the freshest version and not overwrite).

* They require a separate storage system, like your DB, Redis, etc. This doesn't sound big, but when doing complex things it can turn into hell.

* They have to be run in a separate process, which makes deployment more difficult.

* They're slow. Almost all of them work by polling the receiving tables for work, which means you've got a lag time of 1-5 seconds per job. Furthermore, the worse your system load, the slower they go.

* You can't reliably "resume" from going multi-process. Let's say you're fine with the user waiting 2-3 seconds for a request to finish. With workers/queues, you either have to poll to figure out when something finished (which is not only very slow, but error prone), or you have to just go slow and not multi-process, making it an 8-10 second request even though you've got the processing power to go faster.

So, you've got all that. Or in Elixir, for a simple case, you replace `Enum` (your generic collection functions) with `Flow` and suddenly the whole thing is parallel. I mean that pretty literally too: when I need free performance on collections, that's usually what I do. It works 95% of the time, and the other 5% is where you need really specific functionality anyway, and for those, Elixir still has the best solution I've ever seen.
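Roughly, the swap looks like this. Note that Flow is a separate hex package, so this sketch assumes `{:flow, "~> 1.0"}` in your mix.exs deps:

```elixir
# Sequential version, standard library only:
sequential = 1..1000 |> Enum.map(fn n -> n * 2 end) |> Enum.sum()

# Parallel version with the same shape: Flow partitions the input
# across stages, one per core by default.
parallel =
  1..1000
  |> Flow.from_enumerable()
  |> Flow.map(fn n -> n * 2 end)
  |> Enum.sum()

# Both compute the same result; only the execution strategy changes.
```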


Erlang/Elixir has some really great advantages in concurrency and parallelism, but what you're describing are just badly designed systems.

Shopify, for example, uses Resque (Ruby + Redis) to process thousands of background jobs per second.

> * They require very specific ergonomics (for example, don't hand the model over; hand over the ID so you can pull the freshest version and not overwrite).

This is good practice but certainly not a requirement. You can pass objects in a serialized format like JSON or use Protobuf etc.

> * They require a separate storage system, like your DB, Redis, etc. This doesn't sound big, but when doing complex things it can turn into hell.

ETS and Mnesia aren't production ready job queues, unfortunately: https://news.ycombinator.com/item?id=9828608

> * They have to be run in a separate process, which makes deployment more difficult.

Background tasks have different requirements so this is a good idea regardless.

> * They're slow. Almost all of them work on polling the receiving tables for work, which means you've got a lag time of 1-5 seconds per job. Furthermore, the worse your system load, the slower they go.

Redis queues have millisecond latency, and there's no polling. Resque and Sidekiq use BRPOP to wait for jobs. BRPOP is O(1), so it doesn't slow down as the queue backs up.

Postgres has LISTEN/NOTIFY to announce new jobs or the state change of an existing job, so there's no need to poll. SKIP LOCKED also prevents performance from degrading under load.

> * You can't reliably "resume" from going multi-process. Let's say you're fine with the user waiting 2-3 seconds for a request to finish. With workers/queues, you either have to poll to figure out when something finished (which is not only very slow, but error prone), or you have to just go slow and not multi-process, making it an 8-10 second request even though you've got the processing power to go faster.

There are multiple other options here which are better:

Threads - GIL allows parallel IO anyway and JRuby has no GIL

Pub/Sub - Both Redis and PG have a great basic implementation usable from the Ruby clients

Websockets - Respond early and notify directly from the background jobs


> This is good practice but certainly not a requirement. You can pass objects in a serialized format like JSON or use Protobuf etc.

ie, requiring very specific ergonomics. If you have to change what you're doing, it's a new domain to learn.

> ETS and Mnesia aren't production ready job queues, unfortunately: https://news.ycombinator.com/item?id=9828608

I didn't mention ETS or Mnesia? The OP was talking specifically about using job queues to get concurrency/parallelism, in which case you absolutely don't need job queues. If you need a job queue, you need a job queue.

> Background tasks have different requirements so this is a good idea regardless.

Why? You're just stating this like it's obviously true, and honestly I can't think of a time I significantly wanted a different system doing my jobs than the one handling requests.

> Redis queues have millisecond latency, and there's no polling. Resque and Sidekiq use BRPOP to wait for jobs. BRPOP is O(1), so it doesn't slow down as the queue backs up.

Redis queues have millisecond latency; Ruby using Redis queues does not. That's the part that polls when there's nothing else going on. If you're never running out of jobs to do, then your latency is fast, but you're also not accomplishing things as fast as possible (since each job waits on whatever is in front of it).

If this isn't true anymore, then alright, but last I used Sidekiq (early 2018), the latency to start processing a job was often greater than a second.

> Threads - GIL allows parallel IO anyway and JRuby has no GIL

And threads are incredibly difficult to use and pass information back and forth with (hence why Elixir exists at all: José Valim was the person implementing this on the Rails core team).

> Pub/Sub - Both Redis and PG have a great basic implementation usable from the Ruby clients

Can certainly work; to be honest, I never tried this because of the complexity of the initial setup and how green I was when I needed it.

> Websockets - Respond early and notify directly from the background jobs

Which Ruby has a lot of trouble maintaining performantly. When my original team went to use Rails5 sockets, we found we could barely support 50 sockets per machine.

---

It's worth saying, I'm not arguing one shouldn't use Ruby: the place I work right now is primarily a Ruby shop, and my Elixir work is for event processing and systems needing microsecond response times. But we've also built things in Elixir that I would normally use Ruby or JS for, and not only does it do well, but often it's write-it-and-forget-it, with deployment being literally "run a container and set the address in connected apps".


> ie, requiring very specific ergonomics. If you have to change what you're doing, it's a new domain to learn.

Resque & Sidekiq build this in by converting job arguments to JSON. There's nothing extra to learn.

> I didn't mention ETS or Mnesia? The OP was talking specifically about using job queues to get concurrency/parallelism, in which case you absolutely don't need job queues. If you need a job queue, you need a job queue.

Sorry, I thought you were talking about building a background job system in Erlang using out of the box OTP but it sounds like you're actually talking about trying to get parallelism in Ruby by doing RPC over Sidekiq? That's always a bad idea!

> Redis queues have millisecond latency; Ruby using Redis queues does not. That's the part that polls when there's nothing else going on. If you're never running out of jobs to do, then your latency is fast, but you're also not accomplishing things as fast as possible (since each job waits on whatever is in front of it).

Ahhh! When Mike Perham says "Sidekiq Pro cannot reliably handle multiple queues without polling", what this really means is that a Redis client can only block on and immediately process from the highest-priority queue. The lower-priority queues are only checked when the blocking timeout expires. There's no "check all queues and sleep" polling loop adding artificial latency.

> And threads are incredibly difficult to use and pass information back and forth with (hence why Elixir exists at all: José Valim was the person implementing this on the Rails core team).

José Valim didn't join Rails core until a couple of years after Josh Peek (now working for GitHub) made Rails thread-safe.

> And threads are incredibly difficult to use and pass information back and forth with (hence why Elixir exists at all: José Valim was the person implementing this on the Rails core team).

It's really not that hard anymore!

    require 'parallel'
    require 'httparty'

    results = Parallel.map(['url1', 'url2'], in_threads: 2) { |url| HTTParty.get(url) }

From 2012-2013 onwards, Ruby got great libraries like concurrent-ruby and parallel that make things a lot easier.

> Which Ruby has a lot of trouble maintaining performantly. When my original team went to use Rails5 sockets, we found we could barely support 50 sockets per machine.

ActionCable is designed for convenience, not performance. https://github.com/websocket-rails/websocket-rails will handle thousands of connections per process.


If you need a request/response model (e.g. querying some data) rather than simply queueing an operation to execute later without waiting for it, I agree that workers/queues are the wrong solution. But you should use a multi-language RPC framework like gRPC instead of building a distributed monolith.

With Kubernetes you have an endpoint per service that routes and load balances to the correct machine.

It seems to me you already get all the benefits of using actors, but with less lock-in to a language.


That's even more work? Most developers already have redis/Postgres running; the issue there is the added complexity in complex operations.

Not to mention, microservices are not always the answer. And even if they were, they're still an insane amount of more work than literally changing what functions you call.

I'm not saying you should never use an RPC, but I've significantly reduced the times I'd want to use one. The only reason I even advocate for one now, is because I prefer to empower developers to use whatever language makes them happy, even if I personally greatly prefer Elixir.


Also, building Elixir apps isn’t really the same as building traditional monoliths.

It’s really not a distributed monolith, as each node can deploy with different code. More importantly, actors and supervision trees really help keep projects organized. It’s easy to use PubSub mechanisms or just named actors to communicate between services.

As an example, I recently took a project that ran on a single IoT device and moved the chunk of it that managed a piece of hardware to another IoT device connected by Ethernet. It only took moving a handful of files and renaming a few modules. It took longer to figure out why multicast wasn’t working than to refactor the app. There are some limitations, with type specs not working as well as I’d like with PubSub-style messages (most type checking is done via API functions, not on individual messages).


Calling a remote function is exactly the same as calling a service: in both cases you are doing an RPC. While building a REST service is more work, using something like gRPC or Java RMI is not; they also support pub-sub mechanisms and give you a clear interface defining which functions can be called remotely, which makes understanding the cost of the call and the security implications a lot easier.


> Most developers already have redis/Postgres running

You are joking right? You mean web developers?


Yes, web developers. Elixir is primarily a network systems language, and generally, the Ruby, JavaScript, Python, etc. communities use Postgres/Redis. Obviously it isn't universal, but there are obvious analogues (MySQL, etc.).


I'm a big fan of Erlang, but I think you can achieve similar things in other languages with queues and workers. Erlang's advantage here is that you can easily do a worker per client connection, for almost any number of client connections; for data-processing queues, the lack of data sharing between processes (threads) strongly pushes you towards writing things in a way that easily scales to multiple queue workers. You can of course write scalable code in other languages, but it's easier to write code with locking on shared state when shared state is easier to get at.

As for distribution, again, this isn't necessarily exclusive, but the right primitives are there and work well to start with. You could have good reasons for a bigger separation between nodes as well.

Erlang has some warts too, of course. For me, the warts are usually about scale, oddly enough. BEAM itself scales very well, but some parts of OTP don't, often because of the difference in expectations between a telecom environment and a large-scale internet service. Two examples:

A) The (OTP) TLS session cache is serviced by a single process, and in earlier versions the schema and queries were poorly designed, so you could store multiple entries for a single destination, and a query would retrieve all of them and then discard all but the first. When you were making many connections to a host that issued sessions but didn't resume them, all of the extra data could overwhelm that one process, resulting in timeouts when attempting to connect to any TLS host. This was fixed in a release after R18, I believe, to store only one session per cache key, and the cache was pluggable before then, but it wasn't fun to find this out in production.

B) Reloading /etc/hosts and querying the table it loads into weren't done in an atomic way. I believe this is fixed upstream as well, but queries satisfied by /etc/hosts were actually two queries on the same table, and reloading the table was done by clearing and then loading, so the second query could fail unexpectedly. This led to the bundled HTTP client getting stuck, despite timeouts.


Workers and queues fail. SQS was down for us for almost two weeks while AWS fixed a bug. We had no choice but to wait or rewrite our implementation... again! We had already had to rewrite once due to poor visibility and rare occasional problems processing data. Debugging such distributed systems is legendary hell. And that's just for simple async processing so that we can return a response quickly to the user and finish the task in a few seconds. There is simply no comparison between such a complex, failure-prone distributed system and the simplicity, reliability, and ease of use of having support for this built into the language, IMO.


I am sorry, but I disagree. You are trying to make it sound like your cloud provider's downtime has something to do with how you manage your workload in your code.

Debugging __any__ distributed system is difficult; this is why monitoring and tracing should be first-class citizens in your deployments. It seems they are not for you.


Yeah, monitoring told us it was down, and eventually we figured out it was an AWS issue we could do nothing about until they patched it. My main point is actually that for many use cases this doesn't have to be a distributed-computing problem, and thus the non-distributed version is superior to the distributed one.


> In Elixir you really get the full power of multi-core and support for distributed computing out of the box. Code that would have been beyond my pay grade, or that I wouldn’t even have imagined writing in Ruby or JavaScript, is now easily reasoned about and maintained in my projects.

One of the alternative languages you mention is single-threaded, and the other has a global interpreter lock (in its most common implementation). That Elixir is superior to them for parallel programming doesn't really say much.


How does Elixir play in the serverless/FaaS world?


There are plenty of great, battle-tested frameworks and libraries in Node.js which help to leverage multiple cores but they just don't get as much hype.


Are you mainly talking about clustering?


This went past me, as the post is filled with a lot of claims with no reasoning to back them up. It is not a critical evaluation of the language; rather, it sounds like a "fanboy" piece, for lack of a better term.

> Memory efficiency is much better than most other languages (with the exception of Rust, but Elixir is miles better at error handling than Rust, which is a more practical feature IMO)

How exactly are arbitrary runtime exceptions better? Any Elixir function you call has the potential to crash. Meanwhile, with Rust, your function returns a `Result` if it can error, and callers are then forced by the compiler to handle it, either via pattern matching or ergonomic error propagation.

Rust has runtime panics, but those are for rare unrecoverable errors and are not at all used for conventional error handling, reserved usually for C FFI, graphics code, etc.


I am not sure I would say one is better than the other, but they are very different.

As you said, in Rust you are forced to handle errors by the compiler. In Elixir, you actually aren't. In fact, we even encourage you to [write assertive code](http://blog.plataformatec.com.br/2014/09/writing-assertive-c...). This is also commonly referred to as "let it crash". In a nutshell, if there is an unexpected scenario in your code, you let it crash and let that part of the system restart itself.

This works because we write code in tiny isolated processes, in such a way that, if one of those processes crashes, it won't affect other parts of the system. This means you are encouraged to crash and let supervisors restart the failed processes. I have written more about this in another comment: https://news.ycombinator.com/item?id=18840401
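A tiny illustration of that idea (the module name is made up): a supervised worker that crashes is simply restarted with fresh state, instead of taking the application down:

```elixir
defmodule Counter do
  use Agent

  # An Agent is a minimal stateful process; here it just holds an integer.
  def start_link(_opts), do: Agent.start_link(fn -> 0 end, name: __MODULE__)
  def value, do: Agent.get(__MODULE__, & &1)
end

# Run it under a supervisor: if Counter crashes for any reason, the
# supervisor restarts it automatically (:one_for_one restarts only
# the crashed child).
{:ok, _sup} = Supervisor.start_link([Counter], strategy: :one_for_one)
```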

I also think looking at Erlang's history can be really interesting and educational. The Erlang VM was designed to build concurrent, distributed, fault-tolerant systems. When designing the system, the only certainty was that there would be failures (hello, network!), so instead of trying to catch all failures upfront, they decided to focus on a system that can self-heal.

I personally think that Erlang and Elixir could benefit from static types. However, this is much easier said than done. The systems built with those languages tend to be very dynamic, even providing things such as hot code swapping, and only somewhat recently have we started to really explore the concepts required to type processes. A more humble type system could start with the functional parts of the language, especially because I think other techniques, such as model checking, can be more interesting than type systems for the process part.


@chmin thanks for the great feedback! ;-)

I did not write the post for general consumption, more as a reply to the question from the person as indicated in the first paragraph of the thread ... I really did not expect it to end up on HN. ¯\_(ツ)_/¯

100% agree that there is a lack of "critical evaluation" and it borders on "fanboy"... It's not a scientific or statistical analysis, because I did not find any data I could use to make an argument either way.

My experience with Elixir, JavaScript, Ruby, Java, PHP, etc. is based on doing the work in several companies, big and small, and I don't consider myself an "expert" in any of these languages. I have felt the pain of having to maintain/debug several large codebases with incomprehensible/impenetrable and untested code over the years, and I find Elixir to be the most approachable of the languages I am fluent in.

I wish there was an objective way of assessing the day-to-day experience of living with a language ... have you come across such a measure that isn't based on the opinions of, as you say, "fanboy" users?

You appear to have superior knowledge/experience of Rust. Have you written any tutorials or blog posts sharing that knowledge? I would love to read your work. Is this you: https://github.com/chmln ? If it is, https://github.com/chmln/asciimath-rs looks cool! (nice work! :-)


Hey, thanks for chiming in!

I didn't mean the "fanboy" remark to be personal on any level. I just thought that some particular comparisons were unfair.

There are numerous valid points in the piece and I don't see much wrong in sharing the joy of working with a language, even if it's a little biased.

> I wish there was an objective way of assessing the day-to-day experience of living with a language ... have you come across such a measure that isn't based on the opinions of, as you say, "fanboy" users?

At least the "scientific" comparisons of programming languages I've come across have been questionable at best. Each language has its strengths and weaknesses, big or small, so wholesale comparisons are complicated further. Thus people have to rely a lot on opinions and real-world experiences of themselves and others.

> You appear to have superior knowledge/experience of Rust. Have you written any tutorials or blog posts sharing that knowledge?

Thanks for the compliments, and that's indeed my profile. Unfortunately I haven't had the time to blog at all, but perhaps I will someday get around to it.


I really think that you don't need to utilize italics to make yourself appear like you care.


I found the emphasis helpful


Erlang's error handling is... kind of different, and takes a while to really grok and work well with. It's not so much about individual functions as about how the system as a whole recovers from failure.


Most people (especially those coming from type-unsafe languages) haven't figured out that the Option type with pattern matching actually eliminates a whole class of runtime errors. They just see the match operator and types in general as a syntactic nuisance.

    fileString = checkFile("sample.txt")
    if (fileString == null) {
        // handle error
    }

If I showed the above pattern to typical JavaScript, Python, Ruby, or Elixir programmers at any company, 99% of them wouldn't be able to identify why this pattern is bad; they see it as a necessity and rely on the programmer's skill to catch that potential null (or exception, depending on the implementation).

In fact, you, dear reader, might be one of those programmers. You might be reading this post and not understand why the above code is unsafe and bad style. To you, I say that there are compilers that automatically prove and force you to handle that potential null, not as a logic error but as if it were a syntax error.

That guy who advocates unit tests at your company doesn't understand that unit tests only verify your program is correct for a given test case. These compilers provide PROOF that your program is free of that class of errors and can eliminate the majority of the tests you typically write.

The code above is unsafe not because of the developer; it is unsafe because of the nature of the (made-up) programming language.

In Elixir, Python, and JavaScript you will inevitably have to follow this unsafe pattern.
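To be fair to Elixir, its convention of tagged tuples at least makes the error case explicit at the call site, even though, unlike a typed language, nothing forces the caller to handle it:

```elixir
# File.read/1 returns {:ok, contents} or {:error, reason}, and the caller
# pattern matches on the result. This is a convention, not a compile-time
# proof: the compiler won't complain if you skip the :error clause.
case File.read("sample.txt") do
  {:ok, contents} -> IO.puts(contents)
  {:error, reason} -> IO.puts("could not read file: #{inspect(reason)}")
end
```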


Well, yes, but that guy who advocates unit tests might understand all of this and the shortcomings of the chosen programming language, and that is why he advocates unit testing. Careful who you are criticizing here :)


I never discounted unit tests; I advocate them as an important part of a project. I'm saying you need significantly fewer when you have the types of checks I talk about in place. :)

The guy I'm criticizing here is a certain breed of person who advocates unit tests but suffers from a logical flaw in his reasoning. He advocates unit tests like a madman, but he uses an unsafe, untyped language for the entire project. JavaScript is the worst offender here.

Unit tests are a safety-first philosophy, employed at the cost of writing the tests. However, it makes ZERO sense to employ unit tests as a safety net without type checking, a feature that proves your program is free of type errors. Remember, unit tests only verify that a test case works; the safety features I talk about in my post actually prove parts of your program correct.

This is a huge flaw in the engineering paradigms I see permeating the industry today. People are literally ignoring a feature that proves a major part of your program correct, while advocating that the entire program be safeguarded with a weaker check (unit testing). I advocate we use both.

We see thousands of Python, Ruby, and Node.js projects with massive unit-testing overhead on top of the project itself. What these advocates don't realize is that you can get rid of 80% of these tests with a type-safe language. I urge you to compare the unit testing involved in a Go project vs. a JavaScript one. There's usually a significant difference in size; for robust applications, the unit-testing suite of a JavaScript app is much, much bigger.

I'll end with a quote from a man who used logical methods to prove the absence of bugs in his programs rather than rely on hundreds of unit tests.

"Testing shows the presence, not the absence of bugs." - Dijkstra (1969).


I understand your viewpoint, but aren't you assuming that the "guy advocating unit tests" has the option of choosing a type-safe language? In my experience, programmers rarely have the power to choose the programming language. In fact, they were probably hired because of their expertise in the language already being used.

Just saying, if I'm writing Python and advocating unit tests (even for TypeError exceptions and similar problems that would be avoided by using Go), it's probably because I know these are common problems in Python/JS/non-type-safe languages, and I probably don't have the power to choose a different language than the one already being used by my company/development organization.


For Python you can use type annotations with an external type checker such as mypy, or Flow for JS.

Implementing type checking in JS or Python would add a layer of unparalleled safety for a fraction of the development time involved in unit tests.

Yet I would say 90% of developers are unaware of this contradiction and go on harping about the extreme importance of unit tests while completely ignoring type checking. They value safety but are too naive to know what safety means.


I've been working with Elixir in a single-developer production system for over a year now. I'm running it in Docker containers on Kubernetes, in the cloud.

It has been extremely stable, scaling has been a non-issue. Error reporting has become easier and easier, now that companies like Sentry and AppSignal have integrations for Elixir.

Elixir is VERY fault-tolerant. DB connection crashing? Ah well, reconnects immediately, while still serving the static parts of the application. PDF generation wonky? Same thing. Incredibly fast on static assets, still very fast for anything else.

I've had nothing but fun with the language and the platform. And the Phoenix Framework is just icing on the cake. I've been fortunate to have been to many community events, and meeting (among so many others) José and Chris at conferences has made me very confident that this piece of software has a bright future. The Elixir slack is also VERY helpful, with maintainers of most important libraries being super responsive.

I would not start another (side or production) project with anything else than Elixir.


> Elixir is VERY fault-tolerant. DB connection crashing? Ah well, reconnects immediately, while still serving the static parts of the application. PDF generation wonky? Same thing.

I still don't understand this.

I don't think I've ever built a web server in any language where this wasn't true unless I specifically wanted hard failure.

The amount of fault tolerance tends to be a per-app design goal rather than a language feature. I've worked on apps, in many languages, that range from any failure being a hard failure to being impossible to crash, and this was driven by business requirements.

For example, regarding your examples, just about every web server I can think of will automatically turn uncaught exceptions into 500 responses unless you opt otherwise.


You are correct. The difference is in term of idioms and how you design your software.

In most languages, you achieve this behaviour by rescuing/catching exceptions. In Erlang/Elixir, we don't like to do that, because exceptions are a mechanism to signal that something went wrong, and telling the system to continue despite failures is not good practice.

Instead, in Erlang/Elixir, you organize your software using separate entities (called processes), which are completely isolated. Therefore, by definition, if something fails, it won't affect other parts of your system. This also leads to other features like supervision trees, which allows you to restart part of your application, exactly because you know all of those entities are isolated.

When you have shared mutable state, it is much harder to have something like built-in supervisors, because you have no guarantee that a crashed entity did not also corrupt the shared state.

In a nutshell, I would say Erlang/Elixir makes you think more about failures and how things go wrong.

I know this sounds a bit handwavy but it is not that trivial to explain those details on text. I have also given talks on this called Idioms for Building Fault-Tolerant and Distributed Applications in case you are interested: https://www.youtube.com/watch?v=B4rOG9Bc65Q
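A tiny sketch of that isolation (a hedged example, not taken from the talk): a process started with `spawn/1` crashes on its own without taking the caller down.

```elixir
parent = self()

# This process does useful work and reports back.
spawn(fn -> send(parent, {:ok, 1 + 1}) end)

# This one crashes, but because it is isolated (spawn, not spawn_link),
# the crash does not propagate to the parent process.
spawn(fn -> raise "boom" end)

receive do
  {:ok, result} -> IO.puts("parent still alive, got #{result}")
end
```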


It's less about how it handles exceptions but rather how the BEAM makes sure that things that break don't crash the whole system.

The magic is in the supervisor pattern, explained here for erlang: http://erlang.org/documentation/doc-4.9.1/doc/design_princip...

It is hard to describe why this "feels different" in Elixir than it does in Express.js or a Tomcat running a Java application. It's all experiential for me, but maybe I can put the sentiment in words: I always KNOW that whatever part of my application may break, however much and for whatever duration, the scheduler and the supervisors will make sure that the rest of the system runs exactly as intended, and the broken part of the system will be back up eventually. I did not have this feeling (as strongly) prior to working with Elixir.

But I will admit this is a very subjective position. And I am not sure you'd experience it the same way were you in a similar situation.


But with Kubernetes what's the point of BEAM?


BEAM is a battle-tested, decades-old technology that most likely runs a critical part of your telephone network. You're pretending that K8s, the newcomer, is already the incumbent. Are you sure you've done a proper assessment?


The critical parts are run in C, not Erlang. Do you think Erlang is fast enough to route packets? Besides, routing packets and running a backend are two different things.


When you said: 'With Kubernetes what's the point of BEAM?', what exactly were you talking about? How does Kubernetes make BEAM pointless?


It doesn't make BEAM pointless, but fault-tolerance and scalability (two of the core features of Elixir/BEAM) are also handled by k8s. If you use Elixir/BEAM for these features and deploy on k8s, it may seem redundant to use them both together. Maybe that's what parent is referring to.


K8s is great for service orchestration and horizontal scaling, and lots of people in the Elixir community use it to deploy, while using Elixir itself to implement fine-grained fault-tolerance logic and vertical scaling.

A dead BEAM process can be restarted in a few microseconds, load up some complex state, and keep going. I don't believe the same can be said of a dead K8s service.


Not sure what you mean. Kubernetes is a containerisation and orchestration platform; BEAM is a VM for Erlang. How would they be comparable? Or is this a different BEAM?


Mostly retaining internal state. ets is very high performance, so no Redis or memcached needed.


I think in this case they weren’t talking about the kind of shared-nothing per-request DB connection failing, but a persistent DB connection pool failing to connect during the initialization phase of a persistent app server.

In most runtimes, initialization like that is linear (think bash’s execfail switch); if something fails to initialize, the whole HTTP app daemon will crash out, get restarted by its init(8) process, and then try again.

In Erlang, you’ve got something more like “services” in the OS sense: components of the program that each try to initialize on their own, independently, in parallel, with client interfaces that can return “sorry, not up yet” kinds of errors as well as the regular kind—or can just block their clients’ requests until they do come up (which is fine, because the clients are per-request green threads anyway.) In Erlang, the convention is that these services will just keep retrying their init steps when they hit transient internal errors, with the clients of the component being completely unaware that anything is failing, merely thinking it isn’t available yet.

Certainly, Erlang still has a linear+synchronous init phase for its services—just like OSes have a linear+synchronous early-init phase at boot. But the only things that should be trying to happen in that phase involve acquiring local resources like memory or file handles which, if unavailable, reflect a persistent runtime configuration error (i.e. the dev or the ops person screwed up), rather than a transient resource error.

Indeed, any language runtime could adopt a component initialization framework like this; but no language other than Erlang, AFAIK, has this as its universal “all ecosystem libraries are built this way” standard. If you want this kind of fault-tolerance from random libraries in other languages, you tend to have to wrap them yourself to achieve it.

(You could say that things like independent COM apartments or CLR application domains which load into a single process are similar to this, but those approaches bring with them the overhead of serialization, cross-domain IPC security policy enforcement, etc., making them closer to the approach of just building your program as a network of small daemon processes with numerous OS IPC connections. Erlang is the “in the small, for cheap” equivalent to these, for when everything is part of the same application and nothing needs to be security-sandboxed from anything else, merely fault-isolated.)
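The retry-on-init convention described above might be sketched with a GenServer like this (module and helper names are hypothetical; `handle_continue` requires OTP 21+):

```elixir
defmodule App.DBConn do
  use GenServer

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)

  # Return from init/1 immediately so the rest of the supervision tree
  # keeps booting; do the fallible connection work asynchronously.
  def init(opts), do: {:ok, %{conn: nil, opts: opts}, {:continue, :connect}}

  def handle_continue(:connect, state) do
    case connect(state.opts) do
      {:ok, conn} ->
        {:noreply, %{state | conn: conn}}

      {:error, _reason} ->
        # Transient failure: retry later; in the meantime, clients can
        # be answered with a "not up yet" style reply.
        Process.send_after(self(), :retry, 1_000)
        {:noreply, state}
    end
  end

  def handle_info(:retry, state), do: {:noreply, state, {:continue, :connect}}

  # Stand-in for a real connection attempt.
  defp connect(_opts), do: {:error, :not_reachable}
end
```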


You are not wrong. The OP is also not wrong. I think that the examples might not have been ideal.

The fault tolerance that I love about Erlang/Elixir is the actor model. Everything is (or can be) an actor, which is like a living and breathing instance of a class. So they can live and do their own stuff, and then if they fail at that and need to be recreated, they get recreated by something that supervises them.

Contrast this to for instance a Django or Rails app... if a vital service in the system dies the entire Ruby or Python runtime will (potentially) die and then respawn. It's cheap and we don't care, right? It will get restarted. The net result is similar, you don't get woken up in the middle of the night and customers are happy. But in systems where you want or NEED an entire system to remain on 24x7x365 it changes the game.


That's not necessarily the case, especially with asynchronous actions (speaking for Python, at least). One execution can crash and burn unexpectedly, but the entire runtime doesn't go down.


You are probably in the top 10% of software engineers. With the rest of them it is definitely the case: they fail to handle database connections, use single threads to talk to the network, and then blame the performance on network engineers, and I could go on and on. These are work experiences, btw; I did not make them up. I would be super happy if all the software engineers out there did not have such silly assumptions about how computers and networks operate.


Elixir is a great tool for some applications, but one should always think carefully about what is the right tool for the job at hand. Elixir is not the only language that helps enormously with concurrent programming. Specifically, Clojure is very comparable in this regard, although it gives the programmer much more freedom. And then it has Clojurescript, which makes writing hybrid server/client apps much easier.

I've written large applications in Clojure/Clojurescript and I've seen/reviewed reasonably large code bases in Elixir, and while I would agree that Elixir is a very good solution for many problems, it is not a tool for everything.


The JavaScript Fatigue argument is not good. There's simply no data that backs it and nobody is forced to use new libraries only because they use JavaScript.

I've seen third party dependencies churn on Elixir as well (packages that are no longer maintained or alternatives that are better) - I think it's an inherent problem with using dependencies and has nothing to do with the programming language in which those dependencies are written.

> As a developer I just want to get on with my work, not have to read another Hackernoon post on how everything from last week is obsolete because XYZ framework

My recommendation is that you don't read Hackernoon. This seems like a very ineffective way to level up your developer skills.

Edit: I agree that Elixir is very nice and would pick it over JavaScript for backend heavy applications without thinking. I just don't think this argument makes any sense in that context.


> nobody is forced to use new libraries only because they use JavaScript.

It's not completely true IMO, for two reasons: 1) the Node.js standard lib is quite poor compared to, say, Java's, Scala's or Python's, so you generally need quite a lot of modules to do anything; 2) the npm ecosystem is much more amateur: to do anything you have a ton of modules that are poorly supported by hobbyists, or not supported at all. This can force you to change modules/libs regularly. Compare this to the Java ecosystem, where more people work together to build well-supported, high-quality libs (the Apache libraries, for example).


Exactly. It is impossible to use a stable long-term Linux distribution with Node; most packages force you to get the latest or next-to-latest version of Node.

Another issue is that things move fast and break; you cannot be sure that a 3-month-old tutorial will still work today.

Edit: I know I can, and I did grab Node and npm outside the repositories, but you do not see this issue with other languages, where I'm not forced to install the latest stuff to get most libraries working.


Also, your average npm module packages incredibly small amounts of functionality, making dependency trees huge and hence also very brittle.


This is why many of us do the sane thing and leave JavaScript to the browser, while enjoying server side rendering with a bit of dynamism.


Yeah but on Node.js, Express has been the de facto framework of choice for building REST APIs for over 6 years. The JS fatigue phenomenon was mostly on the front end, and even that has basically settled down as people have rallied around React, Angular and Vue.


Not for our customers that keep happily using solutions based on Java and .NET platforms.


You must be joking. I had never heard of Express, and we have put several REST APIs into production at several companies.


Uh, what?

It might've become the de facto standard for consuming them, but definitely not for creating them.

Almost no service I've ever administered used a nodejs backend.


He never said anything about node being prevalent, only that Express was within node.


I’m not calling out the JS or Elixir communities by any means here. I just want to mention that a big part of the “JavaScript Fatigue” is the amount of overlapping libraries that all do the same thing, and it isn’t always straightforward to figure out which one is better; this happens with so many packages that it would be impossible to vet all of them. Just look up “isArray” (which I wish were just not showing up in libs at this point; it’s a built-in in both Node and the browser now) and you get 102 packages that are very similar but clearly vary in implementation and quality.

Whereas in something like the Python or Rust communities (where I have had more experience), I have always found that even if there are packages that do the same thing, there usually aren’t nearly as many duplicates, and often the community has done a better job communicating the value of the packages. There is just less confusion around the whole thing.

I have also found there to be relatively little overlap between the big packages, in my experience


It's no more difficult than choosing an app in the app store. You go on NPM, you look at the download count, you look at the feature set, you install and then move on with your day. JS is a vast and very open ecosystem so duplication is inevitable.


Rust has also embraced the very-small-packages mentality more and more, and it's already showing: things turn brittle far more quickly.


"backend" and "JavaScript" quite don't fit together in the same phrase.

I would never work on a backend in JavaScript or any other interpreted language, due to error-proneness.


You're free not to work on backend javascript (or other interpreted languages) but many people would (and do) disagree with you.


> interpreted language, due to error-proneness.

There is no connection, at all, between a language being interpreted and it being error-prone to write or run. You either mean something else or are mistaken.


I've been fortunate to work with a CTO that sees the value in Elixir and also letting us push forward with it. It has been excellent. At this point we have about 20 engineers who have chosen to work in it close to full time for their services.

It's hard to pick one big draw, but I'd say the biggest for me is that everything I wanted to do in rails has been possible in Elixir and then additional functionality not easily possible in rails is trivial in Elixir. I often consider the distribution techniques as "enhancers" as you could work around them with global locks and data stores, but you don't need to.

I'm very bullish on Elixir and I'm curious to see where it will go. Looking forward to giving my talk about bringing Elixir into production (from a human and technical standpoint) at Lonestar Elixir conference.


> everything I wanted to do in rails has been possible in Elixir and then additional functionality not easily possible in rails is trivial in Elixir

I also noticed that every functionality I write in both Ruby and Elixir is both more concise (less code) in Elixir as well as 5-10x faster :)


Elixir has great things of its own:

* the syntax is well thought out (`with`, destructuring, and `|>` are powerful)

* message passing has great use-cases

And then it has problems that are not necessarily "elixir-y", but are there nonetheless:

* it's hard to model an application around the Actor model. It's very easy to abuse it.

* it's hard to maintain / refactor a large application without help from the compiler before run-time

* it's hard to maintain an application in a language with a young ecosystem and no "seamless" integration with a better established one (ports are not seamless.)

Quite frankly, I'm looking forward to writing a backend in Rust, to have a point of comparison.


Thanks for these notes. I’m suspicious of unconditional praise for any technology, and a lot of the goodwill for Elixir seems like it’s biased by the intention to promote adoption of the language, or like it’s from people who haven’t encountered or paid attention to its problematic aspects. Seeing fundamental problems listed like this does more to convince me that it is a real and serious technology (one with a compelling set of strengths, no less).

As for Rust, do try it out. Haskell-esque type checking, the “anti-OO” interpretation of C-style conventions, and memory safety without garbage collection are a seriously potent set of features, but it can be frustrating when you find out yet again that your whole day of R&D leads somewhere incompatible with its philosophy, and is therefore a dead end. I’m building a Rust webservice framework as a hobby/learning project, but it wouldn’t be my first choice for a production API under active development. On the other hand I’m not aware of a better choice for an embedded daemon process or a stable microservice.


> I’m suspicious of unconditional praise for any technology, and a lot of the goodwill for Elixir seems like it’s biased by the intention to promote adoption of the language, or like it’s from people who haven’t encountered or paid attention to its problematic aspects.

I realize that the tone of this GitHub post has been a bit fanboy-ish and biased but you have to understand that your comment here is biased as well. It's non-objective to dismiss a technology because somebody couldn't articulate it as well as Mark Twain would. Most people simply aren't that good at articulation -- me included. Doesn't mean that what they are trying to articulate is invalid, wouldn't you say?

As for "fundamental problems" -- it's a case of "pick your poison" as usual. There is no universally good language. If you frequented the official Elixir Forum you would know that most of us use other technologies every day. Many people in the forum have 10+ years of experience and are well-aware of the big picture. We are very realistic about when Elixir is a good fit and when it isn't. There's a plethora of posts where we straight up advise somebody not to use Elixir.

IMO practice critical thinking and don't judge by the tone of isolated articles.

As a final point, you should also consider why the language has so many fanboy-like articles. Maybe it is doing something good for real? Objective thinking demands consideration of all major possibilities.


I've found CQRS and DDD designs fit well with OTP and elixir. The actor model with pattern matching ends up cutting away a lot of the classical OOP DDD details.

The issue I see is carryover from other ecosystems taking paradigms that aren't necessary and don't fit into libraries and patterns. It feels like there are still conventions to settle on.


The fact that it's dynamically typed is also often overlooked, while it's at the top of my deal breaker list.

The programming world is strongly moving toward statically typed languages, because today there are pretty much zero reasons to use a dynamically typed language.


> The programming world is strongly moving toward statically typed languages

How? Python and JavaScript (not TypeScript) are two of the most popular languages in the world and still growing very fast by many accounts. Which statically typed languages are taking over?


I guess you're not a regular HN reader? :) Many HN commenters routinely think that they are "the programming world", which is rather far from the truth.

For years, dynamic typing has been hyped on HN, while statically typed languages such as C# were routinely scoffed at.

Something in the air has definitely changed recently, with many discovering the very trait they eschewed is actually a major boon - very likely because they had only ever dealt with JavaScript. My guess is this is primarily driven by the growing popularity of TypeScript, Rust and Go.


> Something in the air has definitely changed recently

- Type inference becoming commonplace is an important factor. Previously it felt kind of silly in statically typed langs to spell out things the compiler could figure out.

- Strict null checking becoming commonplace increased confidence. Previously, even if your code compiled, you would still get null pointer exceptions.

- Type checking became kind of opt-in with langs like TypeScript.

- Statically typed langs used to be associated with heavy IDEs. Now you can use most editors and get the benefit of the compiler within the editor through language servers.


These are some great points I hadn't considered, thanks


Pattern matching is strong typing. It just doesn't assert on a type alone -- it also asserts on the shape of the data itself. Example:

    def handle_data(%{
      customer: %{
        date_of_birth: %NaiveDateTime{} = dob,
        account_balance: %Decimal{} = balance,
        name: name,
        count_purchases: purchases
      }
    }) when is_binary(name) and is_integer(purchases) do
    # work with the data here
    end
^ This both asserts on a particular data structure (a map with a "customer" key containing at least those four attributes) and asserts on the types of some of the attributes. I find it pretty handy and practical.

---

But I concede that strong+static typing eliminates a class of bugs preliminarily. That is unequivocally true.


I agree and believe that it's harder to maintain a dynamically-typed codebase, but Elixir has a well-thought-out gradual typing solution: typespecs ( https://hexdocs.pm/elixir/typespecs.html#basic-types ). This builds on Erlang's Dialyzer tool and is supported by editor plugins like VSCode's ElixirLS extension. In practice, you do get instant typechecking while you code, if you write down the typespecs properly.
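For readers who haven't seen them, a typespec is a small annotation that Dialyzer can check (a minimal sketch; the module is illustrative):

```elixir
defmodule Billing do
  @typedoc "An amount in cents."
  @type cents :: non_neg_integer()

  # Dialyzer (and editor plugins built on it) will flag callers that
  # pass, say, a float or a string where a list of cents is expected.
  @spec total([cents()]) :: cents()
  def total(amounts), do: Enum.sum(amounts)
end
```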


Debatable in practice, given the lack of a type system that actually enforces checks -- Dialyzer's even less complete, not to mention its poor error messages.

Elchemy, Alpaca, and Gleam try bringing static typing to the BEAM, but they're still too immature, unfortunately.


Agreed, but even with the above issues, Dialyzer still gives you, let's say, more than 50% of what you'd be getting with an ML-like type system.


Purely anecdotal: our experience with Dialyxir was rather disappointing (slow, lots of false positives, hard to express types coming from external libraries and generated code.)

Unfortunately, the tradeoff at the time made us stop using specs altogether.


Surprised to hear about false positives, that's supposed to be impossible by the design of Dialyzer (it only reports provable type errors).


Static typing incurs restrictive boilerplate while only eliminating a small subset of all possible bugs. I will admit the guarantees it gives you are nice but I’d argue that the guarantees immutability-all-the-way-down gets you in a language like Elixir are stronger. You don’t get those guarantees on ANY of the statically-typed languages that run on the JVM because as soon as you interop with anything Java, bye-bye guarantees.

And this is anecdotal but with good pattern-matching and guards in Elixir, I can’t remember the last time I created a bug that would have been made impossible by static typing.


> Static typing incurs restrictive boilerplate

Wrong; type inference has been a thing for decades.


Rust is much much less mature for writing backends.


`|>` comes from F#.


Could someone give a more concise reason for using Elixir? I have a "rule of 3" for checking out things - the third time I hear it mentioned and it seems interesting, then I'll go check it out.

Elixir is past 3 times - so I will check it out for sure! - but this article didn't seem to actually say anything (seemed more like a PR piece that was trying not to be technical, and the main argument appeared to be "well, it's not javascript!").

The part that actually talked about Elixir listed some Pros that didn't seem that unique. What's the "killer feature" of Elixir - or is it just a combination of "good features"?


It's the synergy between the language, the runtime and the standard library to deliver the actor model.

Elixir "threads" are called "processes". It's a bit confusing at first, but there's a good reason for it. So from hereon in, when I say "process" think "thread".

Elixir processes, like OS processes, are fully isolated from each other. If Process A has data that Process B wants, Process B has to ask for it (send Process A a message) and can only get a copy of the data (like real OS processes, hence why the name makes sense). The advantage to this is that data ownership is explicit, race conditions are eliminated, and code is less coupled (A can't have a reference to an object that points to B that points to C..., which anyone anywhere in the code could mutate).

At a high level, this allows you to get many of the benefits of microservices, without the high cost (deployment complexity, service discovery, lower performance).

We run an analytics platform for live sports which does various transformations on a stream of play-by-play data. There are very distinct features: boxscores, lineups, scores, time alignment, broadcast information... For each live game we start a process for each of these features. When a play-by-play event comes in, we pass it to each worker of the appropriate game, and each worker does its own thing.

The workers are isolated. Not just the physical code, but at runtime. This makes them easier to test, refactor and grasp.

There's some interaction between the workers. For example our boxscore worker needs to know the score at a given time. So it sends a message to the score worker for that game: {:get, time}. The score worker replies with the score. There's no need for explicit locking. A process handles a message at a time. There's no chance that the boxscore worker changes the score worker's data (it can change the copy of the data that it got, but that change is only on the copy).

Really, it's most of the benefits of microservices (and I mean, being able to have true MICROservices, not just rebranded SOA) with few of the downsides.
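The score worker described above could look roughly like this (a sketch with illustrative names, not the actual production code):

```elixir
defmodule ScoreWorker do
  use GenServer

  def start_link(game_id), do: GenServer.start_link(__MODULE__, game_id)

  def init(game_id), do: {:ok, %{game_id: game_id, scores: []}}

  # Messages are processed one at a time, so no explicit locking is
  # needed: nothing else can touch this state while we build the reply.
  def handle_call({:get, time}, _from, state) do
    score = Enum.find(state.scores, fn {t, _s} -> t <= time end)
    {:reply, score, state}
  end

  # Play-by-play events arrive as casts and update the private state.
  def handle_cast({:event, time, score}, state) do
    {:noreply, %{state | scores: [{time, score} | state.scores]}}
  end
end
```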


Spotted the BleacherReport guy (...I think!) ;)



I think the real killer feature of Elixir is that it is a friendly, Ruby-like language that gives you access to the underlying power of Erlang/OTP on the Beam VM.

OTP (Open Telecom Platform) is a set of tools, libraries, and middleware that was built around Erlang. This has been in development since the 90s, and was originally developed by Ericsson for the telecom industry to handle massive numbers of concurrent connections without a single point of failure (I may be butchering this story, but this was the end result). OTP includes things like extremely lightweight processes, process supervision to quickly restart based on your desired behavior, multiple native real-time datastore options (both in-memory and/or persistent), hot deployments with no downtime, an extensive standard library, and other cool things that have stood the test of time. All of this just comes with the base Erlang/OTP installation.

Elixir essentially introduces a modern ecosystem around the Erlang/OTP system.


>> I think the real killer feature of Elixir is that it is a friendly, Ruby-like language

Elixir looks a little like Ruby, but that's where I'd end the comparison. There's almost no similarity in how each can be used well.


True in a lot of cases. However, Phoenix has major similarities to Ruby on Rails, and Elixir is often the functional language of choice for those moving away from RoR.


I would say it's about developer happiness while coding and that all the decisions made by Jose and the core team tend to have been the correct decision. So taking Phoenix as an example you look at the framework and every time they find a problem i.e. Presence instead of saying that's a difficult problem and moving on they fix said problem in a really scalable way [1].

The same could be said for things like data processing with Flow [2] or even things like Ecto (semi official database wrapper) or even third party libraries like say ex_money [3].

Then you start looking at the packages and the language and see that there are rarely thousands of bugs, that the infrastructure (mix, hex, docs etc.) is really nice to use, and that the language is really stable yet still provides you with useful but clear abstractions. Or that you can spin off processes and tasks inline without too much worry, or that you can use 20+ years of Erlang libs transparently, or that it's immutable and has the best concurrency primitives of any system available, or that it allows you to supervise processes and let them crash if needed without bringing down your app, or that you can transparently go multi-machine out of the box, or that message passing is built in as the default way to scale the system. Or pattern matching, or |>, or the amazing community.

[1] https://phoenixframework.org/blog/the-road-to-2-million-webs... and https://dockyard.com/blog/2016/03/25/what-makes-phoenix-pres...

[2] https://www.youtube.com/watch?v=XPlXNUXmcgE

[3] https://github.com/kipcole9/money


Actually Ecto is the most unhappy part of the Elixir ecosystem to me. It's unnecessarily complicated for almost any software project. I'd even take Django's ORM instead of it, but what I want is something close to ActiveRecord. There are some Elixir modules similar to AR, none popular.


It is tragically funny how many people like and how many people dislike Ecto. :) I guess that's "ORM"s in a nutshell? Which is totally fine, of course!


Ecto is not an ORM, and an ORM like ActiveRecord would never fit a functional, immutable language anyway. Ecto is a query builder and DSL.

But I may be biased, I _love_ Ecto and have been using it every day for the past year, and have never come across something as powerful yet lightweight (perhaps SQLAlchemy, ignoring its ORM features).


The real "killer feature" is the Erlang VM (the BEAM). Thanks to a combination of exceptionally lightweight "processes" (not true OS processes, a construct of the VM), which share nothing, can only interact through message passing, and which can be supervised by other processes, it's possible to build incredibly robust systems.

Elixir makes the Erlang VM much more pleasant to use in my opinion (plenty of Erlangers will disagree with this, ymmv). It provides a developer-friendly, modern ecosystem, with doc support, testing support, a great macro language for building DSLs, package management, etc, but underlying it all is a battle-tested VM, that has been under active development for 30 years.


It is an ergonomic language built on top of a stable, performant, proven runtime with a lot of features built-in.

It's a nice ecosystem with a good culture in a language that promotes pretty good programming practices.

The only major downsides (if they even are downsides for you) are that it's dynamic--not good for number crunching, though you can connect to compiled binaries--and not statically typed--which can lead to runtime bugs.

That said, part of the philosophy is to enable fast failure without taking down the whole application. In communications, it's not considered the end of the world to drop or fail on one connection, so it works well for web services, chat, etc.


Killer feature: pattern matching mixed with destructuring.

Bonus: parallelism almost for free.

If only GenServers had a sensible interface instead of semi-random handle_* functions that obfuscate what a given GenServer is implementing.
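The common mitigation for that complaint is to hide the handle_* callbacks behind a small public API, so call sites never see them (a sketch):

```elixir
defmodule Counter do
  use GenServer

  # The public functions carry the intent...
  def start_link(initial), do: GenServer.start_link(__MODULE__, initial)
  def increment(pid), do: GenServer.cast(pid, :increment)
  def value(pid), do: GenServer.call(pid, :value)

  # ...while the generic handle_* callbacks stay an implementation detail.
  def init(initial), do: {:ok, initial}
  def handle_cast(:increment, n), do: {:noreply, n + 1}
  def handle_call(:value, _from, n), do: {:reply, n, n}
end
```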


I am not into Python so I assumed it was the python db thingie.


If the erratically emphasized text bothers anyone else, you can get rid of it by running the following JS in console:

    document.querySelectorAll('em').forEach(el => el.replaceWith(new Text(el.innerText)))


Great, thank you. I have no idea what the author was trying to achieve there, but it made the article really difficult to read


Sure.

I am building a quite involved video learning platform as we speak with Elixir and Phoenix. No regrets so far, and if anything as time goes on, I'm becoming more and more happy with the decision.

The community is really great and there's a lot of quality libraries available. Not just libraries, but entire production systems too. For example https://changelog.com/ is written with Elixir / Phoenix and their platform is open-sourced at https://github.com/thechangelog/changelog.com. There's so much good stuff in that repo to learn from.

Also the Elixir Slack channel has 20,000+ people in it and the official forums at https://elixirforum.com/ are very active.


I've been using Elixir to build applications for production use for 3 years now. My summary is that I and my whole team can (in contrast to languages I've used in the past):

- Ship faster

- Write simple, readable, reliable and fast code

- Scale easier and with less resources

- Onboard and train new hires into the code base quicker

I know I'm making it out to be a panacea, which, to be clear, it isn't (the deployment story still has some final pieces for the core team to work through), but I will say I'll continue to use it to build with in the future.


I use Elixir daily for the past 8 months, and I love it. For my personal projects, I use a great Heroku-like service called Gigalixir (https://gigalixir.com/). No restarts or connection limits for the free tier, and it runs on your choice of Google Cloud or AWS behind the scenes. Elixir doesn't have as many easy cloud deployment options as, say, JS, so this service is really helpful.


Yep. Would bet on it again. It's proven itself to me and my employer, and has increased developer productivity and joy dramatically. Bad code looks bad, good design emerges organically, and macros let you hide the plumbing where needed, and optimize things away at compile time. Not even mentioning the wonderful world of OTP.


Yes! And most likely in 2020 as well.

I've been programming intensively in Elixir for the past two years and it's a wonderfully productive language, which allows one to write elegant systems that leverage the multi-core architecture of today's machines effectively.

In addition, the networking capabilities and fault tolerance of the VM make writing systems which span multiple machines and services a breeze; not to mention the ecosystem only gets better by the day.

So yeah, Elixir is one of my main tools when I want to get things done elegantly and productively. And if for some reason I need to speed things up a bit here and there, I just add a little rust into the mix. [1]

[1] https://github.com/hansihe/rustler


What sort of work have you been doing with it?


I've been writing a social collider for real life social interactions on demand. The backend is written in Elixir (which is a collection of services e.g chat subsystem, telemetry, authentication, rate limiting, etc) and the client is an iOS app, so Swift, which is also a nice language btw.


Tangent: It's proposed that Elixir has better error handling than Rust... but this doesn't sit well with me, and I know the Rust community is in flux here as well. I personally like not having exceptions. It's very easy to trace where an error is coming from when it's a value like anything else. Yea it might be a bit more typing... this is the conflict. Rust does have an issue with the boilerplate involved with writing error types, but there are already attempts at fixing this as a crate: https://github.com/rust-lang-nursery/failure


Different tools for different jobs. Rust is more of a systems language for close to metal performance and memory safety. Elixir is memory safe from a functional perspective utilizing the actor model. Elixir is good for real time concurrency and higher level systems.


OTP and Supervisors give you powerful tools for building fault-tolerant apps, but at the language level it would be hard to argue that Elixir has better error handling.
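At the language level, the convention is just tagged tuples, often chained with `with`; a rough sketch (the `Parser` module is invented for illustration):

```elixir
# Functions return {:ok, value} or {:error, reason}; `with` stops at the
# first clause that doesn't match, so errors fall through as plain values.
defmodule Parser do
  def parse_age(str) do
    case Integer.parse(str) do
      {age, ""} when age >= 0 -> {:ok, age}
      _ -> {:error, :invalid_age}
    end
  end
end

result =
  with {:ok, age} <- Parser.parse_age("42") do
    {:ok, age + 1}
  end
# result == {:ok, 43}; for "nope" it would be {:error, :invalid_age}
```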


Question: How does OTP work together with things like Kubernetes in the real world? Designing around actors, with OTP spawning and respawning parts of the actor tree, while at the same time provisioning/deprovisioning VMs as load goes up or down, sounds like either a dream if it works well, or a nightmare if there's just a single glitch somewhere.


Best question in this thread and I am looking forward to an answer. I guess that very few have mastered both Elixir and k8s.


This post is killing me. I’ve been really really loving Elixir and for a while was fighting the “everything looks like a nail” syndrome once I learned it.

But now I have a contract that would really benefit from the runtime. That being said the existing environment has a lot of python expertise and I don’t have enough production Elixir experience to have confidence in myself to deliver something of the right caliber.

It’s a damn shame. This system has to process hundreds of thousands of API calls for workloads against a half dozen third parties that all have different rate limits and failure modes. It’s the perfect job for Elixir. It needs to be as fast as possible while isolating failures to the smallest unit possible.
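For what it's worth, the skeleton of that job in Elixir could be as small as this hypothetical sketch, using `Task.async_stream`'s `max_concurrency` as a crude per-provider limit (the `ApiFanout` module and option values are assumptions, not any real system):

```elixir
# Each request runs in its own process; max_concurrency caps in-flight
# calls per provider, and a timed-out task is killed without touching
# the others ({:exit, :timeout} comes back as a value instead).
defmodule ApiFanout do
  def call_all(requests, fun, max_concurrency) do
    requests
    |> Task.async_stream(fun,
         max_concurrency: max_concurrency,
         timeout: 5_000,
         on_timeout: :kill_task)
    |> Enum.map(fn
      {:ok, result} -> {:ok, result}
      {:exit, reason} -> {:error, reason}
    end)
  end
end
```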


I'd try building a small service or set of subscriber/workers to start. Messaging infrastructure and API clients are a good place to get a feel for the language. Ideally something that can benefit from the concurrency and stability. It really depends on the team though - it's a harder sell if there's high change potential on the code and maintainers would have to pick up a new language.


Honestly? I'd still encourage you to do it. The thing that's nice about Elixir, is that it gives you the tools to screw up and make good on it.

This isn't to say you'll write great Elixir from the beginning. I'm on a codebase now that dates from back before the semantics of good Elixir were really well known (2016). It's not uncommon for me, every week or two, to rewrite a portion of it to look cleaner and be more performant.

The crazy thing though? Holy shit did it scale. We're doing event processing for an application that is processing nearly 100m events per week. At times, it needs to process 1500 per second. These events need to check the DB multiple times, fan out to multiple services, and make discrete HTTP calls of their own to external servers.

We're still on one box. We still have plenty of the old, harder-to-read, less-performant code. And it still takes under 10 minutes to understand the deepest inner workings of any one feature in the system.

I think you'd be pleasantly surprised.


> The crazy thing though? Holy shit did it scale. We're doing event processing for an application that is processing nearly 100m events per week. At times, it needs to process 1500 per second. These events need to check the DB multiple times, fan out to multiple services, and make discrete HTTP calls of their own to external servers.

Frameworks make a huge difference here rather than language. Phoenix and Ecto have done a really great job with performance.

Ruby will deliver the same performance on a similarly light framework like Sinatra/Roda + Sequel but definitely not Rails.

Once you get a high performance service running on Phoenix + Ecto or Sinatra + Sequel, the gains from moving to compiled languages are a lot smaller unless you invest a huge amount of time in optimisation.


I love Elixir.

It's the first language I genuinely enjoy reading and writing even in my private life.

If I wonder about the internals of a library I use, I can just look into the code and kind of understand what's happening. Never had that with JS or anything.

I'm just a genuine fanboy.

Only drawback I feel is: Some libraries that would have been quite developed in JS are not that well developed in Elixir. Some libraries are quite dead and it's hard to find alternatives (mostly obscure stuff)

But on the other hand, it often seems manageable to just write it yourself, or fork it and move on.


I have never used Elixir, so maybe it's a great language, maybe not. But I have to question the reasoning of the post simply going by the comments about other languages in the "Conclusions" sections — some of which I did use extensively.

Really, Go "is the choice if you need to 'sell it' to a 'Boss'" and the imperative programming style leads to more complexity? And Python/Django can only be used if you "don't need anything 'real time' and just want RESTful 'CRUD'".

I get it, you guys like Elixir, but painting the world using such broad strokes doesn't really sound like "kaizen learning culture" to me, but more like "Negative Nancy".


I really like Elixir. There are a lot of practical realities that can make Elixir not the best language to use in many situations, and the same is true for any language. Just ignore the hype train, because you'll find one for every language.

I'd say Elixir's killer feature in today's day & age is concurrency. I'd argue that using concurrency is appropriate in most programming situations IF your language's concurrency model isn't a pain in the ass to use. You can write completely non-blocking, async code in Elixir (and Erlang) without losing your mind. The preemptive scheduling is nice, too.

I love a lot of other stuff about Elixir, too. Pattern matching, process supervision, tooling, documentation, etc.
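A taste of how cheap that concurrency is in practice; nothing framework-specific here, just the standard library:

```elixir
# Task.async spawns a lightweight BEAM process per job; Task.await
# blocks only the caller until each result message arrives.
tasks = for n <- 1..4, do: Task.async(fn -> n * n end)
results = Enum.map(tasks, &Task.await/1)
# results == [1, 4, 9, 16], computed concurrently
```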


Phoenix speed/scalability is quite on par with Go frameworks like Gin. I wouldn't use Django in 2019 for a new project. It's not even async/non-blocking.


Elixir is just really nice. I gave it a shot again and it is a super smooth experience nowadays. Distillery, mix, iex <3 Also, most of the libraries I care about are 2.0+, and now there's this:

https://github.com/aws-samples/aws-lambda-elixir-runtime

The only downside is that the out-of-the-box performance is subpar for HTTP services, but it is still acceptable.


Cowboy measures its latency in microseconds, what bottlenecks are you running into specifically?


Not really, you can't have microsecond latency talking over a network. I understand localhost microbenchmarks are nice, but real-life scenarios are much more informative.

The setup for the test:

- provision node A for being the server

- provision node B for being the client

- open X (16..16000) connections from node B and use HTTP pipelining to send requests to node A

I use wrk2 as the test client; it is pretty amazing, and I look at the latency distribution graphs.

Tools that are clear winners on performance:

- https://github.com/valyala/fasthttp

- https://www.rapidoid.org

Elixir/Cowboy/Plug is in the middle range, kind of like what Techempower[1] guys saw during their tests.

[1] https://www.techempower.com/blog/2018/10/30/framework-benchm...


Wrk2 is not fast enough and will thus give spurious results since the bottleneck is the client.

The only measuring tool that is fast enough to accurately measure Phoenix performance is something like Tsung (which is also an Erlang app...)


I am not sure what you are talking about. wrk2 can put out 7M req/s on a 16-core box. That is way beyond Phoenix's performance on the same hardware. wrk2 is a widely used and accepted performance-measurement tool. Again, you mentioned microsecond latency, which means you are talking about localhost microbenchmarking. That is irrelevant from the production-workload point of view. I have successfully saturated network links with wrk2, which is the definition of fast enough.

Interestingly there was a thread on HN previously which tools are used for HTTP perf testing:

>>> - Wrk: https://github.com/wg/wrk - Fastest tool in the universe. About 25x faster than Locust. 3x faster than Jmeter. Scriptable in Lua. Drawbacks are limited output options/reporting and a scripting API that is callback-based, so painful to use for scripting user scenario flows.

https://news.ycombinator.com/item?id=15738967

https://news.ycombinator.com/item?id=15733910


alright. this looks interesting. I'll have to dive back into this space and see what's changed, I'm probably out of the loop. What's the diff between "wrk" and "wrk2"?


I assume perf comment is in relation to AWS Lambda, for which the Elixir example they've put up is severely below par perf wise


I am talking about raw HTTP request handling performance. Elixir/Cowboy/Plug is in the middle range of web servers. Again, it is good enough for most use cases.


Ah, fair enough, apologies for misconstruing. Definitely good enough for a very wide range of use cases; it's not a language built for speed and power anyway.


Our team has had great success with Elixir over the last year and ported core node services to it over the last few months. We are very happy with the results. There are some things we haven't been able to do with it, like intensive data processing (for which Python is still used), but if those libraries existed we would switch our Python services ASAP and be an entirely elixir backend.


Why does the "author" love "quotes" so much? Makes me "discount" the "article" when every other phrase is "quoted".


Considering progress in statically typed languages with regard to programmer ergonomics, does it still make sense to go with dynamic languages?


Between pattern matching and typespecs you get a lot of the "hey you're doing something wrong" checks at compile time to avoid errors. Definitely not a complete solution; a language like OCaml or F# will be better if you're concerned about type safety.

The dynamic typing in Elixir/Erlang is a trade-off for Actor model message passing. You get a state-of-the-art run-time for fault tolerance and concurrency, but the messaging aspect makes typing problem-prone. A co-dependency on a custom type is coupling you want to avoid when sending messages around. You don't want a long-running process that knows about Type_v1 being sent a message from a newer process that uses Type_v2.

The Aeternity team is building blockchain systems with Erlang for nodes and infrastructure. However, since smart contracts necessitate so much type safety and formal verification, they're designing an ML-flavored functional language just for that.
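For reference, the pattern-matching + typespec combination mentioned above looks roughly like this (the module is made up; `@spec` is checked by Dialyzer as a static analysis, not enforced at runtime):

```elixir
defmodule Geometry do
  @type shape :: {:rect, number(), number()} | {:circle, number()}

  # Dialyzer can flag calls that violate this contract; the pattern in
  # each head rejects malformed shapes at runtime with a FunctionClauseError.
  @spec area(shape()) :: number()
  def area({:rect, w, h}), do: w * h
  def area({:circle, r}), do: :math.pi() * r * r
end
```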


I have mixed feelings about using Elixir (or Erlang); as far as I understand the platform, it is about building fault-tolerant, high-availability systems, especially in the presence of hardware failure. I think those concerns are handled well by cloud service providers, which didn't exist during the 80s.

I think performance is better compared to Ruby and Python, but then again my experience with web applications is that the domains are best modeled using classes.

For writing networking code and protocols, the binary pattern matching is amazing, though. The Plug libraries are a pleasure to use also.
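As an illustration of that binary pattern matching (the framing format here is invented, just to show the mechanism):

```elixir
# A hypothetical frame: 4-byte ASCII tag, 16-bit big-endian length,
# then the payload. One function head parses it; field sizes are explicit.
defmodule Frame do
  def parse(<<tag::binary-size(4), len::16, rest::binary>>) do
    {:ok, tag, len, rest}
  end

  def parse(_), do: {:error, :malformed}
end
```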


I keep wanting to be hyped about Elixir but the performance benchmarks confuse me. If you look at the TechEmpower benchmarks[1], Phoenix makes the list at #46, registering 16% of the performance of the top framework. I'm willing to sacrifice performance for readable, maintainable code, but it just surprises me that it's that slow.

Anyone know what's going on there?

1. https://www.techempower.com/benchmarks/#section=data-r17&hw=...


Simply looking at its position in that list will tell you nothing without looking at what is above and below. I don't use Phoenix or Elixir but from what I understand Phoenix tries to be for Elixir what Rails is for Ruby. That is, it is a full web framework. The number one position on that list is a "Asynchronous PostgreSQL driver".

This would be a more apt comparison:

https://www.techempower.com/benchmarks/#section=data-r17&hw=...


There are more reasons, but

- better configuration (tweaked to this benchmark / hardware). Bigger communities will try harder to tweak their benchmark config. The Phoenix one is probably just the default one or slightly tweaked. Is the SQL connection pool size ideal? Are all default Phoenix-added Plugs ("middlewares") needed here? Is the BEAM virtual machine config / flags ideal? Would using OTP releases be beneficial?

- Elixir ecosystem is constantly improving the performance. The VM, language and packages versions should be updated. Elixir 1.6.5, Phoenix 1.3.2, and Ecto 2.1 are not recent.

- Abstractions used (and amount of them, so amount of work done at run-time). Compare the Fortunes "handler" implementation in vertx-postgres [1] to the Phoenix one [2]. Raw SQL vs generated query, almost bare DB connection pool vs Repository abstraction, and it's just the "handler". But you wouldn't want to maintain a lot of low level code.

- Typical functional language overhead. Copying (copying the conn when modifying it, Enum.sort on a list etc) is more expensive than modifying in-place. Again, functional code will be easier to maintain.

But overall I think in the particular benchmark you linked, Phoenix isn't that bad. Latency is pretty low and consistent (0.5 ms min and 9.9 ms max (!)) thanks to the BEAM pre-emptive scheduler. And BEAM will scale well vertically.

1. https://github.com/TechEmpower/FrameworkBenchmarks/blob/mast... 2. https://github.com/TechEmpower/FrameworkBenchmarks/blob/mast...


I don't know the exact specifics here but I know a lot of communities that rank high invest time contributing into the open source Tech Empower tests for their framework. This way they can tweak & make sure their stack runs at optimal speeds.

At the same time, Elixir (and Erlang) are not meant for raw speed. It is best used for real time communication, lots of users & handling errors. At least that is what I have read.


>Anyone know what's going on there?

Yup, many things.

It's worth reading this very long thread about exactly this, from 2016. Look for Sasa Juric and Chris McCord's comments in particular. [1]

tl;dr benchmarks are hard, not all benchmarks are implemented well, Elixir folks haven't heard back from Tech Empower re: details of errors rates etc. It's an unfair analysis, and not just for Elixir.

[1] https://elixirforum.com/t/techempower-benchmarks/171/44


Does anyone have experience with Elixir as well as Scala/Akka? Afaik, these are the two largest Erlang-inspired systems out there. I only have experience with Akka, and I'd love to hear a comparison.


I have experience with both. I have recently been working with Elixir. It's okay. I find the lack of static typing to be something I both celebrate and curse. Elixir is VERY simple and BEAM is VERY slow at computation. I wouldn't recommend anyone working with Scala/Akka look at Elixir unless you want to understand how BEAM works, but I would recommend _everyone_ working with Elixir learn different functional programming languages. Ultimately I'm much happier writing Scala with Akka. I get more done faster (except for things like handling JSON, which is sometimes hard with types), can refactor the code more freely, and release less broken code to production.


Also, I have to say that Elixir developers are generally not very experienced with it and seem to be generally less desirable than the people who are working with Scala. You get Rubyists who have some interest in working with Elixir because there were some blog posts saying it's the new hotness. Scala seems to attract more comp-sci-savvy people, and generally those people will refuse to work with Elixir when there are Scala jobs out there that will pay. Startups should use Scala for these reasons. Elixir may be a mistake for the resource pool alone.

One thing that's treacherous is that Rubyists can bring along whatever they believe to be the right way to do things and assume everything should be exactly the same, especially with regard to Ecto vs. ActiveRecord. Elixir isn't Ruby. Ecto isn't Rails' ActiveRecord. Not anywhere close. It just happens to look like Ruby, and there are some influences in the design, but Ecto tells you not to implement STI the way Rails does, for example, so don't assume you're going to do it like you would in Rails. I'd argue Ruby is more like Scala than it is like Elixir, as it has multiple paradigms. Elixir is squarely functional, just a very, very simple functional language. The skill ceiling is pretty low and it should take very little time for someone to get up to speed, which is important because you won't find a big pool of rockstars using it in the job market, so you'll have to hire good people without experience and hope they'll be okay using Elixir and not jump ship to go work with strongly typed languages.


> Also, I have to say that elixir developers are generally not very experienced with it and seem to be generally less desirable than the people who are working with scala.

Scala has been around for 14 years and is built on top of a much more used VM compared to Elixir. Given Scala's growth, it is expected that Scala developers are more experienced as they have been around for longer. Scala also had more time to spread to comp-sci fields, especially as it is taught by many universities. All thanks to Scala's merits, of course!

So while I agree that experience is a factor, I wouldn't draw conclusions that those are intrinsic to Elixir or to its users. I also have heard of companies that had no trouble to hire Elixir developers (some have 80 Elixir devs and growing) and some that had many difficulties. As with any other technology, YMMV.


Scala/Akka has a much worse developer experience than Elixir/Phoenix. BEAM is fairly slow but you can make FFI calls into much faster implementations if your workload is CPU bound (most of the workloads I am familiar with in the web world are not).

Release less broken code to production? I am not sure about it either.


I've used both in production and I must say I'm a very big proponent of both. The Actor model of concurrent computation maps very well to, at least my, way of thinking, and I think it is the most productive way of writing distributed systems.

Elixir/Erlang/OTP:

+ Very mature, very well thought out. While the newer stuff may still feel a bit under construction (string handling, date handling), all the concurrency primitives are rock solid, and by rock I mean diamond.

+ Elixir is simply a great language and you can get very productive in it quickly once you grasp the actor/process model of BEAM.

+ Has one very big advantage over Akka: actors can receive messages in a different order than they were sent (selective receive). That can of course cause some headaches if not handled carefully, but 99% of the time it leads straight to nicer and simpler programs. A lot of Akka code is really just written to deal with the order in which messages may arrive.

+ Truly resilient, with a very good error-recovery design once you know how to work with it. I still don't know of a more graceful and productive way of recovering from failures in a running system.

- For doing any expensive computation it's slow, and that's a fact. Not much can be done about it.

- Library coverage is 7/10 or 8/10, but those few missing points can sometimes make a big difference.

Scala/Akka:

+/- I love the Scala language and static typing, but one must be honest that it's much more complex. You can learn Elixir (without macros) in an afternoon. One really needs to take some time and think it through to utilize Scala properly, and true mastery lies even further out. To be fair, proficiency with Elixir macros also requires considerable effort, but one can go very far with Elixir without writing macros, while with Scala the upfront cost is already pretty high.

+ To the best of my knowledge, Akka Streams is a completely unique and completely amazing library that is gaining support throughout the library ecosystem. This is one point where Scala/Akka completely outshines everything else. Streams are such a great abstraction and gave a huge productivity boost to most of the projects I was working on. Compared to that, Elixir's GenStage feels much less robust and polished.

+ The speed of the JVM should be enough for 99% of applications, and in that respect it beats BEAM.

+ The Java/Scala library ecosystem is very deep and simply much more comprehensive than the Elixir/Erlang one.

- There are places around Akka that still feel a bit immature/ad hoc, but the library is steadily improving. It just is not as mature as BEAM/OTP.

- Over small and mid-size projects I think I was more productive with Elixir. Meaning, given the same amount of time, I could implement more features using Elixir. But that could just be a personal thing.

Overall I think both are fantastic platforms and I'm happy to have both of them to choose for each project. If I were to chose what to select today:

- Choose Elixir/OTP for a system where we need to do a lot of IO but not much computation, and we're sure existing libraries cover our needs. Very big plus if we need it to be resilient.

- Choose Scala/Akka if we need speed or need to call existing JVM libraries. Very big plus if your project could use Akka Streams.


Agreed. The BEAM run-time and the preemptive scheduler implementation are designed to never block and keep chugging along; highly desirable for the Actor model. The trade-off is you're further from the metal and raw compute isn't as powerful as in a JVM language like Scala. That said, an Actor model on the JVM (Akka) can't make the same run-time guarantees using a cooperative scheduler; instead you get all the enterprise libraries and developer hours to tap into. So the trade-off of more complexity in Scala/Akka is probably worth it for big enterprises that are probably already in bed with the JVM.

I think we'll see more Akka features built in Elixir through things like GenStage and Flow, but it's hard to argue with the mountain of existing developer man-hours in the JVM.


Very interesting language (and OTP platform), I wish I learned it earlier.


Thanks for your work on Milestone/Droid ;)


Haha, thank you!


From a technology perspective, a thousand times yes. And the same for Ember JS (my other go-to).

But from a talent and recruiting perspective, I'm less enthusiastic. Elixir, yes, there's growing talent. But Ember, boy it seems like nobody is doing it, and I've had to convince potential candidates that it'll be worth their time for future employability to learn Ember.


I started reading Programming Elixir and Programming Phoenix. Elixir is amazing.


> No "native" type for JSON data. You always have to parse JSON into a Map and there are excellent libraries for doing this.

I guess the better question would be why is there not an easy, standard lib for doing this in any language in 2019?


I didn't read the article, but intuitively, from the quote you posted, I'd say it's about having JSON (or close-to-JSON) literals in the language and/or having Map/List types with semantics close to those of JS. For example, in Python, dict and list literals are perfectly valid JSON if you remember not to use single quotes (' vs. "), and the semantics are also pretty close to JS. In Elixir this is not the case: the Map syntax could pass for JSON if you squint hard enough:

    %{key: "val", key2: [1, 2, 3]}
but the semantics here are actually something like this in JS:

    {Symbol("key"): new Int8Array(/*utf-8 encoded*/ "val"), Symbol("key2"): new LinkedList([1, 2, 3])}
you can get rid of the `Symbol()` part in the translation, but then the literal becomes:

    %{"key" => "val", ...}
so, basically, the gap between JSON and Elixir is wider, both syntactically and semantically, than it is in some other popular languages.
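A quick way to see the semantic gap in practice (plain Elixir, no JSON library involved):

```elixir
# A decoded JSON object comes back with string keys, which is a
# different value from the atom-keyed literal you'd naturally write.
atom_map = %{key: "val"}
string_map = %{"key" => "val"}

false = atom_map == string_map   # different key types, different maps
"val" = string_map["key"]        # Access with the string key works
nil = string_map[:key]           # the atom key simply isn't there
```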


I would like it a lot more if it wasn't forced on me by my coworkers.


I love Erlang's OTP, but I would never go back to a dynamically typed language.

Most of my current work is in Go, which is a fairly strict language, and I value perhaps more than anything the ability to verify my program at compile time -- for one, I can do large-scale refactorings, safe in the knowledge that my program won't build until everything is sound again.

Go still leaves a lot to be desired, so I've been exploring options. I've started picking up Rust. I love the idea of zero-cost abstractions, though at the moment I find the mental overhead of a lot of the constructs (lifetime annotation, implicit operations that happen due to what traits you implement, the many baroque syntax choices, etc.) a little annoying. It brings to mind modern C++, which also has a lot of rules that you have to remember, from copy constructors to what the order of "const" in var/arg decls mean, to the awkward split between functional and imperative styles.

Modern C++ looks interesting, and I've used it for a few projects. What bugs me the most is the warts still not fixed by the "modern" iterations: Include files (leading to long compilation times), lack of modules, unsafe pointers, etc. While I appreciate and understand template mechanics, I'm not overly impressed with some developments -- Rust traits and Haskell typeclasses just seem so much less messy than the current situation with type traits and concepts. There's a tendency in "modern" C++ to offer multiple syntaxes for the same thing, none of which are very intuitive.

I've occasionally written small things in Haskell and OCaml, and I've considered doing a future project in OCaml now that multicore support is getting close. I looked at F# for a bit, too, but it comes across as having too much .NET/Microsoft flavour for me. Same with C#. I've looked at Nim, but it's too niche -- for the projects I'm going to work on, I'd have to write libraries for functionality that just isn't there yet (e.g. gRPC).

Back to Elixir, though; the problem is of course that none of these other languages offer anything like OTP. The closest may be Haskell, with its Cloud Haskell (distributed Haskell) project. But I'm not sure it's anywhere close to being as mature. Maybe Pony is comparable, but that also seems quite niche at this point.


> I looked at F# for a bit, too, but it comes across as having too much .NET/Microsoft flavour for me.

Mind elaborating? F# seems like a decent fit from what you've said. I'm hoping the language will grow less stagnant as .NET Core matures.


I'm a big fan of F# the language. It comes across very much as a modern, cleaned-up version of OCaml — F# started out as an implementation of OCaml, after all — with some very interesting innovations.

However, it comes with the baggage of .NET Core, which is a rather big thing. And it's growing, as Microsoft is apparently porting over everything from the older, non-cross-platform .NET stuff. For one, .NET Core includes the CLR/CIL, i.e. the JIT VM and cross-language integration, which I'm not interested in at all; I just want an AOT compiler. The AOT support seems like a fairly recent addition, and it's unclear to me how optimized it is or how well-supported it is compared to the older CLR-based toolchain. As a standard library, CoreFX seems rather large, and contains things like GUI and SQL Server support, for some reason.

In short, .NET Core seems like something you'd love only if you were already heavily invested in Microsoft's tech stack. I'm not interested in it myself.


I see -- most complaints I've heard about .NET Core relate to F#'s status as a second-class citizen vis-à-vis compatibility problems, which have recently been resolved.

I'm with you on AOT, but I think the language makes more sense if you understand it as a .NET port of OCaml. If you isolated F# from CLR, you'd lose libraries and tooling, arguably F#'s raison d'etre, pleasant design choices aside.


Depends on the project. I have a project that heavily relies on headless Chrome for scraping dynamic pages; I'd rather stick to Node and Puppeteer for that particular project. In general, Elixir is a joy to use, but some of the trade-offs BEAM (the Erlang VM) makes might not match your requirements; e.g. if you don't need live code upgrades but your project could benefit from static typing, you might want to consider something else.


Elixir is a niche language (that's the truth). The article itself also says it is 'Relatively difficult to "recruit" developers with existing experience in Elixir'.

Why would a software company invest in niche languages where the resources (software developers) are really expensive and really hard to get? Technologically it's all great, but economically it's a nightmare.


I have been involved with bringing it to my company as the primary proponent. The truth is it hasn't been hard to teach people, and they can produce decent code pretty quickly; good code takes a bit longer than that, but still an acceptable amount of time.

We have found a few people who knew it already and were looking for a job, but that is fairly rare. Instead, we know we can bring people up to speed on it quickly and also it signals to people that we're willing to give them some language options (Ruby or Elixir) within some boundaries. Having these options is good for ownership of an area.


I don't think it is that black and white. I have heard from some companies adopting Elixir that hiring became easier because the demand for their previous technology was really high and they could differentiate themselves with Elixir.

There are also companies that were really successful in hiring by reaching out to functional programming communities in general. And of course, there are also companies struggling to hire Elixir developers compared to other techs. YMMV.


The only times I see this argued is from suits looking to treat programmers as a fungible resource. This is never a problem for companies looking to retain people.

If your status quo is working programmers as long as you can without a raise until they switch jobs, a niche language is a threat. If you re-evaluate your staff based on their experience gained, you can head off the churn.


I think this is too harsh. A company can establish an enjoyable and empowering culture that attracts and supports talented developers, and can still try to avoid hiring obstacles at the same time. It might not have been what therealmarv meant in his post, but your comment makes it sound mutually exclusive.

As a counter-argument: I've experienced several times that "niche tech" companies offered non-competitive salary packages and perks, because they offered the cool tech instead; "sure, we can't match that other offer, but we built our stack on that language/tech that is so hot right now".

Not arguing about the quality of Elixir, just about the gatekeeping that happens in this thread.


If you have a solid background in software development, it's not too hard to become proficient in Elixir in a few weeks. If you recruit good talent with a demonstrated proficiency and willingness to learn, you will be fine. If you need to switch languages or systems in the future, these devs will still be of great use to you.


Ok, that sounds good. My overall experience with most companies (I've been working as a contractor for many years) is that they don't really want to invest in you. Either you fit the job or you don't (and the competition is not easy sometimes).


I can’t argue with this point in general, but your first question was from the perspective of a company. There certainly are companies, that are willing to use things like Elixir (like my company for example) because we believe that an experienced developer should be able to come up to speed quickly with Elixir. What really takes the time is learning our business domain and our existing codebase. Or looking at it another way, I wouldn’t want to hire someone that was a Rails dev, I would want to hire someone that was a strong dev in general and may have happened to be doing Rails most recently.


Something I'd also mention is, with Elixir, you'll get your MVP done maybe 10% slower than you would in Ruby.

The difference is, in Ruby, after 2-3 months, you'll be rewriting large portions of what you did to get there. And in another year or two, you'll find yourself being blocked by earlier designs not for a day or two, but for a month of work or more.

In Elixir, you simply will not experience that. Even if written poorly, it's easy to rework/change, and you probably won't need to anyway because it's just easier to get things done without painting yourself into a corner.


Doesn't that depend a lot on the project? Only one data point, of course, but we have an Elixir project at work and it's in dire need of a refactor. The dynamic nature of the language doesn't help.


Thing is, it's not 100% dynamic. It's compiled, so most common refactoring errors will be revealed by compilation alone (no function with that name, wrong number of arguments, etc.). The harder-to-find things would be hard to find in most typed languages too, unless you had tests to point them out to you.
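
As a quick sketch of what the compiler catches (module and function names made up for illustration):

```elixir
defmodule Billing do
  def run(invoice), do: charge_card(invoice)
  # charge_card/1 was renamed to charge/1 during a refactor:
  defp charge(invoice), do: {:ok, invoice}
end

# Compilation fails immediately with something like:
# ** (CompileError) undefined function charge_card/1
```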


That's one of the things you lose when going from Erlang to Elixir. Sure the macros are useful and convenient, but they also make the code less explicit and harder to refactor. There's probably a reason why Erlang for the past 30 years had no macros, despite being a homoiconic language.

TLDR: try working with Erlang if you find your Elixir code-base in need of a refactor - chances are you'll learn conventions which will help you write more maintainable code in Elixir, too.


> Why would a SW company invest in niche languages

See http://www.paulgraham.com/avg.html for one answer.


Has anyone had experience of both Elixir and F#?

I've dabbled with both - but really fell in love with both of them! I come from a C# background, so the static typing of F# is a big pull. OTOH, the simplicity of Elixir was an absolute delight - after just an afternoon, I felt like I had a decent grasp of it.

I'm conflicted, and would value some other opinions?


Depends on what you want to do, I think. The pros of the BEAM likely outweigh the cons of Elixir's dynamic types for distributed messaging systems, e.g., Discord. Otherwise, F# and the SAFE stack probably can serve other backend needs more safely with better library support from .NET. Elixir does seem more trendy than F#, which has lagged somewhat due to second-class support from Microsoft and lingering dislike for .NET -- hopefully things will improve as .NET Core matures.


can people give real world business use cases for where they are using elixir? What industries are you working in? what actually gets done in the real world at the end of the day with the system you're working on? e.g. are more ads served to web users? are you monitoring methane on IOT things strapped to cows in farm fields?


PagerDuty has standardized on Elixir as the backend language of choice after a couple of years of incremental adoption with new services. Developer happiness definitely helped get the word spread. https://www.pagerduty.com/blog/elixir-at-pagerduty/


Ok so they use Elixir there to make dashboards. cool.


Highly available critical services comprising the PagerDuty infrastructure, actually. Cool dismissal though.


what are some concrete examples?


- chat / IMs backends

- multimedia streaming

- multiplayer game servers

Generally soft real-time systems


what multiplayer game servers use elixir? can you give specific examples?


These were just examples off the top of my head of where I'd see it being used, but concrete examples are here: https://elixir-companies.com/

You can browse by industry, gaming here: https://elixir-companies.com/industries/gaming

Additionally, I believe Riot Games uses Erlang for their in-game chat and maybe other services. Also DemonWare (part of Activision Blizzard) used Erlang for CoD Black Ops, presentation here: http://www.erlang-factory.com/upload/presentations/395/Erlan...

Anywhere where Erlang is used, Elixir can be used too :)


I've done several scrapers in Elixir and it has been many times easier and a better experience than with Go, JS or Ruby.


www.lovethework.com runs on Elixir: data transfer and transformation from an old system, the web frontend (we have been refactoring a lot of the current JavaScript stuff recently), ecommerce, and the admin stuff in the back office too.

Mostly good old web stuff.


Did this article claim Haskell was slower than Erlang/Elixir?! That’s never been my experience and I’ve shipped both!


What kind of problems are people using Elixir to solve in production? My impression is it’s mainly useful for highly networked applications with real-time features (i.e. chat), but it seems like for most applications you’d be better off picking rails or nodejs for the community/ecosystem.


I dislike coding in javascript for large project. The language was originally for small stuff.

Node.js brought it to the backend, and the language itself wasn't meant for it. Since then ES5 and later revisions have tried to fix these shortcomings. But you can't expect me to love JavaScript's weak typing versus Elixir's or Python's strong typing (strong, not static, as in it doesn't implicitly convert types like JavaScript does). It's a nightmare, and the concurrency model in Node.js is, in my opinion, subpar compared to Elixir's.
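
To illustrate the strong-vs-weak distinction, a quick iex sketch:

```elixir
# Elixir never coerces types implicitly:
1 + 2         # => 3
1 + "2"       # ** (ArithmeticError) bad argument in arithmetic expression
"1" <> "2"    # => "12" - concatenation is its own explicit operator

# JavaScript, by contrast, happily evaluates 1 + "2" to "12".
```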


> for most applications you’d be better off picking rails or nodejs for the community/ecosystem

Only to find out they do not scale very well after your application is mature and used by a growing number of clients.


From someone that comes from Java/.NET land, I hardly see a benefit, specially given the wealth of programming language options on those platforms.

Now for someone starting new, maybe the Erlang eco-system might be a good bet, and Elixir an entry point.

Still, not everyone has Ericsson-scale problems to solve.


Erlang (and, by extension, Elixir) is still one of the very few systems which offer this exact (or even just close enough) mix of features AFAIK. Whether this set fits your use-case or not, and whether it would give you an advantage over your chosen technology, are both very important points, but there's also something to be said about how good and well-implemented it is for some use-case(s). To be honest, I first learned Erlang along with Prolog, Forth, Lisp or J - out of curiosity about various paradigms and the most "pure" implementations of them. Erlang was at the time the oldest, actively developed, open-source system for concurrent and distributed programming. Today I think I'd go with Pony, which implements Actor-model on the language-level too, but also with support for it in the (static) type system.

Anyway, what I wanted to say is that Erlang is first and foremost a fault-tolerant language and system, of which both distribution and concurrency are by-products. As an example of a "fault" that the creators of Erlang had in mind, Joe Armstrong often cites "being hit by lightning": the only way to ensure the system will still function after that is to have its copy running somewhere else, hence distribution. Another type of fault I think explicitly mentioned in "Programming Erlang" is dealing with hardware failures, sensors and outputs getting disconnected and reconnected, etc. - hence concurrency and per-process error isolation. Finally, "programmer errors" are also a kind of a fault (as impossible to completely avoid as lightning or flood), hence immutability, versioned rolling upgrades and rollbacks and live introspection into any node from anywhere in the system (among other things).

That is not to say that the by-products aren't important or nice to have, just that many of the design decisions in Erlang start making a bit more sense if you look at them from this angle. It also helps to decide whether Erlang is the right tool for you: it's going to save you many, many years of effort if you need a nine-nines guarantee for a system you'd otherwise have to write a few million lines of C for; it can still give you a bit of an edge if you can make use of its unique features, like a built-in distributed data store, or if the Actor model with preemptive scheduling fits your app very well. Outside of these pretty specific use-cases (although, to be fair, I'm just giving examples - Erlang/OTP is a large (in terms of built-in functionality) system and Elixir adds even more stuff, so there are many more good use-cases for it) you may struggle to realize any positive outcome with Erlang: unfamiliar everything, no libraries, a runtime system always ready to connect to remote nodes even if you're writing a command-line script, immutability has a performance cost, overall performance is not impressive, and so on - each of these things could potentially bring down your project if not carefully considered.
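
The per-process error isolation mentioned above fits in a few lines (a minimal sketch; real code would use supervisors rather than raw spawn):

```elixir
parent = self()

# This process crashes on its own - the crash is logged but isolated:
spawn(fn -> raise "simulated hardware fault" end)

# Sibling processes and the parent keep running unaffected:
spawn(fn -> send(parent, {:ok, 1 + 1}) end)

receive do
  {:ok, result} -> IO.puts("still alive, got #{result}")
end
```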


Elixir is the best programming language I have ever worked with. I absolutely love it.

Elixir has totally spoiled me. The meta-programming ALONE is something I miss constantly when I have to use other languages.


Sees post recommends https://nerves-project.org for IoT.

Clicks through.

Sees that Nerves is an excellent platform for IoT and only requires a base of 12MB and Linux.

Quickly backs away.


Do you have an actual rational counterargument? It is a completely stripped-down Linux that boots directly into BEAM.


Yes. I prototype in Node, but then usually move all the important stuff to Elixir.


If you are coming to elixir from a rails background, how does it compare? I looked a year or two ago and the number of packages was far smaller with elixir, which turned me off of it.


I think Elixir is quick to learn, long to master. You start off building web apps in Phoenix, being relatively productive from the start, and thinking to yourself "well, there's definitely some magic I don't understand, but this feels a lot like Ruby." You'll immediately notice and learn the smaller differences (immutability and pattern matching).

Then one day, you'll need to store and mutate data in process. And then you'll learn about GenServers and Supervisors.
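A minimal sketch of that step (a hedged example; the module and message names are made up):

```elixir
defmodule Counter do
  use GenServer

  # Client API
  def start_link(initial \\ 0), do: GenServer.start_link(__MODULE__, initial)
  def increment(pid), do: GenServer.cast(pid, :increment)
  def value(pid), do: GenServer.call(pid, :value)

  # Server callbacks - the "mutable" state lives in the process loop
  def init(initial), do: {:ok, initial}
  def handle_cast(:increment, state), do: {:noreply, state + 1}
  def handle_call(:value, _from, state), do: {:reply, state, state}
end
```

Put it under a Supervisor and a crash simply restarts it with fresh state.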

Then one day, you'll want to have some base functionality but for whatever reason, composition isn't a good fit, so you'll start to dig into macros.

Fundamentally, Go is much more Ruby-like than Elixir (Ruby and Go have shared heap, global GC, array-based data structures, same evaluation strategy, mutability, ...). Elixir is very different. But it's discoverable.


There is more or less everything now. My complaint was the lack of an authentication framework, and Coherence filled that void. I have no trouble finding modules to solve problems without coding the solution from scratch.


I wouldn't pick Elixir because:

Rails revolutionized web application development on Ruby, with Sinatra as the minimalist version, and a lot of "me too" frameworks have been developed; somehow I like them all.

* On Python: Django and Flask
* On Elixir: Phoenix
* On Crystal: Amber
* On JavaScript: Express.js for Sinatra

But on JS we didn't get a successful Rails clone; instead we got a storm of front-end frameworks, finally Vue.js/React and endless others.

I wouldn't pick Elixir because the world is elsewhere: my choice is JavaScript ES6, Vue, and a simple Express.js, Sinatra or Flask for most projects.


> The world is elsewhere

The world is everywhere. Other people have pointed out Elixir is very good at taking advantage of multiple cores and at writing distributed applications which are easier to reason about, less error-prone and very efficient. I wouldn't say the same things about JavaScript.
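
For example, fanning work out across every core is one line of setup (a sketch; `expensive/1` stands in for real work):

```elixir
results =
  1..1_000
  |> Task.async_stream(&expensive/1, max_concurrency: System.schedulers_online())
  |> Enum.map(fn {:ok, result} -> result end)
```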


> I wouldn't pick Elixir because The world is elsewhere

People said the same thing about PHP when Rails was first on the scene and there are still way more PHP web apps out there. You could say the world still runs on PHP but that's not a good enough justification to choose it.


If all you're doing is CRUD apps, there's not much more to gain from Elixir. But if you're doing things like pulling in Redis or Sidekiq, or building around realtime use cases - Elixir has so much more to give you.


Interesting perspective on the JavaScript ecosystem.

I never had any debugging issues in particular, but the dependency hell drives me nuts, too.


Was very excited by Elixir coming from Ruby; however, I found 2 issues that made it hard to work with:

- Pipelines are hard to debug. You can’t just throw a debugger just before the line with the issue.

- Phoenix is very bad at serving static files. It was a nightmare to import a new CSS template, requiring either converting everything to work with bower first, or dumping the files in the /priv directory to make it work.


> Pipelines are hard to debug. You can’t just throw a debugger just before the line with the issue.

You absolutely can. Just change

```elixir
users
|> send_email_with_money()
|> do_complex_thing_that_crashes()
```

into

```elixir
users = users
  |> send_email_with_money()

require IEx; IEx.pry()

users
|> do_complex_thing_that_crashes()
```

I turned the `require IEx; IEx.pry()` into a snippet just to make life as easy as it is in Ruby land.

> - Phoenix is very bad at serving static files. It was a nightmare to import a new CSS template requiring to convert everything to work with bower first, or dump the files in the /priv directory to make it work.

Well, two sides to this. For one, Phoenix uses Webpack now (since the war over which app bundler would win is finally over).

But even when you did use bower - you should've been able to just delete `phoenix.css`, copy your template in, and in `app.css` put `@import "template_name_here";`.


Yeah, but you do need to edit your actual pipelined code to add a debugger. That's annoying.

> Well, two sides to this. For one, Phoenix uses Webpack now(since finally the war to see what app bundler would win is over).

We shouldn't force devs on Bower or Webpack. If I want to try out a new theme just bought on ThemeForest, it shouldn't take hours to make it compatible. Or to force to do the /priv/ hack that seems unelegant.


> We shouldn't force devs on Bower or Webpack. If I want to try out a new theme just bought on ThemeForest, it shouldn't take hours to make it compatible. Or to force to do the /priv/ hack that seems unelegant.

You could at any time have installed Phoenix without those. And what's so hackey about /priv/? That's where you can dump things that you know will be served. It's literally the intended purpose of the folder.


Both of these points are false. On point 2, I've been integrating a Bootstrap framework and Sass and it's super simple; I put the files in assets, run `npm install --save sass` and that's it!

Then debugging a pipeline is as simple as dropping an IO.inspect between stages, since it returns its argument unchanged as well as printing it.

    thing
    |> stage1
    |> IO.inspect
    |> stage2

Not that difficult!


> I put the files in assets, run `npm install --save sass` and that's it!

It takes forever if your assets are large. Just serving random static files shouldn't take long.

> IO.inspect

It's nothing like a real debugger.


I agree about the inspect / pry not being a real debugger but could you expand on your problems with the debugger and pipelines?

Are you using the Erlang :debugger module?

The only "problem" I see is that we can't set a breakpoint on the first line of a Elixir pipeline, but to see the value of variable from that line we can set a breakpoint on the last line of the pipeline. To see why that happens we can try stepping through a pipeline with the debugger: First "executed" is the last line of the pipeline, then the second line, then third etc and it looks like for the debugger the first line of the pipeline never happened. I don't think this is a big problem to be honest.


I’ve not found this, are you on windows?

Inspect is fine as in Elixir you don’t have any hidden state. Use the :debugger if you need more than this.


> I’ve been integrating a bootstrap framework

You do this for legacy systems. New systems should not use Bootstrap, there is much better out there.


Don't be so dogmatic - Bootstrap works just fine, and if it's what you know, you need some very good reasons to switch and learn a new framework.


Not about being dogmatic. Bootstrap is just wrong in 2019.


I agree but this type of comment also shows your lack of experience. Very rarely do you personally get to make a choice about the technology used. In this case I’ve come onto a project to fix a load of bugs and restyle something that already existed. I’m not going to come in and redo everything from scratch before delivering something...


OK, so how am I supposed to take a reply like this - it sounds... well, dogmatic!

Given you haven't even attempted to explain your reasoning, I can't possibly agree.


What should I use instead of Bootstrap in 2019 then?


What's the best in 2019?


This feels like an orchestrated upvoting and content marketing flash mob from the Elixir Slack channel. Google Trends shows that Elixir is declining. I love new languages but don't like to be fooled.


No, it's premature optimization: you won't find any devs, and 99% of Elixir can be done with Node + k8s.


Love Elixir.

But every other word being emphasised in this article was tiring to read.


I agree. If you emphasize too much, it feels like you're trying too hard to convince me, kind of like news headlines IN ALL CAPS.


[flagged]


You mean the use of singular they? Absolutely standard English, and nothing ideological about it.


Only with an indeterminate subject. Definitely not the case here.


Isn't Elixir the most efficient thing available?


Elixir is pretty fast, but far from the fastest or most efficient. IMO if all you need is a CRUD server that operates efficiently, then Elixir is probably the wrong choice unless you already know it. It does have some nice features for distribution and real-time stuff, but if you don't need those and don't know Erlang or Elixir, I wouldn't use it.


No, Elixir/BEAM is very slow for computation-heavy tasks.


But what about not-so-computation-heavy tasks like parsing HTTP requests, interacting with databases, and generating and serving responses based on the input, templates, reasonably simple logic and the data? And if it's not fast, then why even consider it when there are more well-established alternatives like Ruby for those who like the syntax, ASP.NET/Core, Python/Django, Node/Express, Scala/Play for the FP lovers, etc.? I've been previously told its key features are that it's super fast, functional and Ruby-like.


Super fast, no. At least not in the C++ way. Faster than Ruby or Python, yes.

Functional, yes.

Ruby-like, yes, in the way German is English-like (mostly guessable vocabulary). Then you discover that the two languages work in totally different ways and your Ruby skills don't really count for much when working in Elixir.
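
One of those differences in a nutshell (an iex sketch): data is immutable, so "modifying" something returns a new value.

```elixir
list = [1, 2, 3]
List.delete(list, 2)  # => [1, 3] - a new list
list                  # => [1, 2, 3] - the original is untouched
```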


>> Node is a single-threaded event loop, if the process crashes for one user, it crashes for all the requests being handled by that process. i.e. one user can crash the server for hundreds/thousands of people! This is a terrible design flaw

This is a design flaw on the part of the team using Node.js incorrectly, not a flaw of Node.js itself. There are many ways to implement error handling properly in Node.js so that a user cannot crash a whole server/process, and there are a lot of frameworks which implement this by default.

Elixir is over-marketed and over-hyped. It's obvious that there is a big money machine behind it. The entire community is obsessed with evangelizing; they're not getting organic growth; they have very aggressive marketing but it's mostly founded on exaggerations and flat out lies.

In addition to what I've pointed out above, to say that someone can learn Elixir in just 1 week is another example of a lie. It takes years to fully understand the nuances of a language to the point that you can be good at it; there are always a lot of patterns to learn; especially for functional programming languages.

The Elixir ecosystem will never be as significant as that of Node.js because Elixir's ecosystem is founded on hype. Part of the greatness of Node.js is that reality tends to exceed expectations; so-called 'thought leaders' and 'bloggers' have been working very hard to discredit Node.js from the beginning but they failed (see https://news.ycombinator.com/item?id=3062271).

I'm not going to consider using Elixir while it's so clearly over-marketed and over-hyped.


Couldn't agree more. Just look at Google Trends. Because Elixir is dying they do more and more content marketing: https://trends.google.com/trends/explore?geo=US&q=%2Fm%2F0pl...

In their Slack channel they orchestrate organized upvotes of posts like this one, collectively downvote people like the parent, and post fanboyism through several accounts.

Elixir is a solution without a problem.


I guess I feel like the annoying formatting is indicative of the community: immature.

Web developers seem to follow trends: Perl -> Django|RoR -> Node.js -> Scala -> Go -> Elixir -> something. Or something like that. To me, it's like buying a $500 pencil and expecting to be capable of writing a better book.

If you get in bed with that crowd, don't expect that your program and 3rd party dependencies are going to be stable in 2 years.


A lot of individuals I've seen in the community so far haven't been the type to quickly jump onto a trend. People have been thoughtful with their application design and have chosen Elixir over alternatives. Given how drastically different the BEAM is from these other languages, I have a hard time seeing some of the people I've met (and myself) jump to something else. My guess is that if it does happen to others, it is because they switched jobs and could not get buy-in.

Pretty unfounded comments regarding the long-term stability of packages. As with any community that makes it easy to publish packages, there will certainly be package churn over time. However, the core libraries show zero sign of this, and Phoenix in particular has taken a very mature stance on new features.


Yes. And in 3 years (or so), when the hype train moves on and all that is left is the core users, it will be a much more viable choice for development, in my opinion.

I don't think anything negative of the language or core libraries.


Let’s not generalize the community over one blog post. The Elixir community has been very mature in my experience.



