Elixir 1.3.1 released (github.com)
299 points by eddd on July 11, 2016 | 96 comments

So, I know that Elixir is powerful and good at functional programming. The one thing I can't quite understand is why we would want to program everything as a composition of individual programs and applications, with supervisors and application trees and message passing and the like. It seems like a lot of overhead to accomplish something. Making everything asynchronous and detached makes things more complicated, not less.

I hear a lot about OTP and the "let it crash" mantra, but I just don't quite understand what's so great about it. Maybe it's just due to my problem domain (web development), but it doesn't seem like as big a draw to Elixir and Erlang as pattern matching and FP are.

> is why we would want to program everything as a composition of individual programs and applications,

Because it matches reality better. Concurrent applications built out of individual isolated processes often map well to real-world problems. Not all problems, but quite a few of them. If you're just multiplying a matrix or calculating an average, you don't need concurrency and supervision. If you want to sort a list of files, you don't need them either. Not all systems have to be reliable and fault tolerant.

> Making everything asynchronous and detached makes things more complicated, not less.

The real world is asynchronous and detached. You as an agent are detached from your co-workers and friends. You don't have to wait for them to finish taking a breath before you take one. A car driving down the street is detached from the others. They come to an intersection and have to work together so they don't run into each other, but after that they keep going their own way. They are also fault tolerant -- just because your neighbor's car engine died doesn't mean yours should stop working. It seems like a silly example, but it maps fairly well to distributed computing -- requests, user connections, worker processes, items in a processing pipeline, database shards and so on.

So given that these types of problems exist, what framework or language should you pick to solve them? I would say, first of all, one which reduces the impedance mismatch and lets you write the clearest, simplest code. And also the one that's most fault tolerant.

You could spawn threads (or green threads) in some languages -- but you'd probably be sharing memory and have some bugs in there. You could spawn OS processes and/or run multiple containers/VMs in the cloud, but that gets expensive. You could use Rust, for example, to achieve some of those goals, because it will guarantee to a certain degree at compile time that your application has fewer memory and concurrency errors. Or you could use something based on the BEAM VM (Erlang or Elixir) and get a different approach to fault tolerance.

You're right -- there is a conceptual/complexity cost to BEAM applications, distribution, concurrency, supervision trees, etc.

I think the idea is that if you start with Phoenix you don't have to pay that cost unless you need it. If you end up needing to scale, the features are there. I think Uncle Bob once said that "architecture is the art of delaying decisions". Phoenix and Elixir let you do this.

Another part of the story is channels. If you're doing websocket work, you can't find a better platform than phoenix / elixir.

Once you start digging in, you realize phoenix / elixir are awesome even if you're not using all the concurrency / distribution stuff. The functional nature / immutability posture of elixir helps you with a large code base and/or lots of devs. There are other features -- pattern matching and macros come to mind -- that make Elixir a great place to code.

All that said, if you have a client, say, with modest needs that is never going to have much traffic, it might not be worth going with something you're not familiar with. Similarly, if you're talking about a 1-2 dev project, it might not be worth learning something new. In those cases, maybe it's Rails or Django or whatever ftw.

I am a huge fan of Elixir and Phoenix even without OTP, don't get me wrong. It's my go-to framework now, replacing Rails. I just ended up hitting the OTP chapters in Programming Elixir and Programming Phoenix, and just kind of wondered why I was doing what I was doing, and why anyone would ever have or want to do this.

That's been covered pretty well by other commenters, but it seems like it's something you have to experience yourself in order to really understand.

What's the advantage of Elixir/Phoenix over straight Erlang and something like Nitrogen or N2O?

I understand that some people (especially those with a Ruby/Rails background) find Elixir/Phoenix more familiar. But is there more going on here than syntactic sugar?

If it's ultimately a BEAM application, I'm guessing that Elixir can't do anything that you can't do in Erlang -- is that wrong?

Nitrogen and N2O really aren't that good as frameworks.

Phoenix builds and improves on years of web framework iterations from Rails. They did a lot of things right there, but they are hampered by the limitations of the Ruby language.

Elixir is Erlang but nicer. While Erlang is awesome (I write it every day), it's still a bit of a clunky language that could be much more elegant.

At some point, all programming languages are syntactic sugar over assembly. :)

But I hear what you're asking. I'm not really familiar enough to answer, but I've seen plenty of articles that say it's more than just a familiar syntax, e.g., http://devintorr.es/blog/2013/06/11/elixir-its-not-about-syn...

The key is the ease of writing the target program correctly.

You can write any algorithm using any Turing-complete machine, including the original tape-based Turing machine or a language like Malbolge. But it is going to take an inordinate amount of time to write and debug.

What FP and message passing buy you is ease of reasoning. No shared state -> no problem of concurrent updates. No state at all -> no questions like "but does this method update that instance member?". All your state lives in message queues and call stacks, and both provide very stringent and easy-to-understand disciplines.

This also gives you composability: you have an easy time putting things together and rearranging them as you see fit, without worrying about hidden data dependencies. Everything is explicit.
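A tiny Elixir illustration of the "no hidden updates" point (a sketch; the names are made up): values are never mutated, a "change" is simply a new value.

```elixir
# Rebinding creates a new value; the original is untouched, so no other
# code can have changed it behind our back.
cart = %{apples: 1}
updated = Map.put(cart, :apples, 2)

%{apples: 1} = cart      # the old value is still exactly what we built
%{apples: 2} = updated   # the "update" is just another value
```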

(Combined with a static type checker and the right type system, it gives you the effect of "if it compiles, it runs correctly 99% of the time", seen e.g. in Haskell or Rust. Elixir is not a statically-typed language, though.)

Yes, it does sometimes come at a cost. Not all immutable data structures can be made as efficient as mutable ones. Usually they are as fast as mutable ones, but pay some space cost; IIRC it's theoretically proven that the cost is no more than a logarithmic one.

With RAM being so much cheaper than developers' time, it's a rather good tradeoff. At least with a JIT in Java or JavaScript, or just using Python or Ruby, one usually pays a similar or higher RAM cost for a fraction of the benefits. JS is moving towards immutability and FP-ness, though, because the above applies to it too, and it's not so OOP-heavy as to make FP approaches largely impractical.

> Not all immutable data structures can be made as efficient as mutable ones. Usually they are as fast as mutable ones,

I don't think this is true. Clojure uses immutable persistent data structures under the hood, as most functional languages do, and devs often jump through hoops to circumvent them for performance reasons, opting instead for bit bashing like most other languages.

An immutable tree is often about as fast as a mutable tree, it's just pointer manipulation in either case.

An immutable data structure representing an array is definitely much slower than a mutable array (a contiguous piece of memory).

Usually != always, unfortunately :(

> An immutable tree is often about as fast as a mutable tree, it's just pointer manipulation in either case.

This might be true, depending on what type of processing you're doing, and if you have a good garbage collector.

Of course, if you're allowed to use mutable trees, then you should just use a hash table which will be faster both asymptotically and in practice.

I am currently in the process of learning Erlang, and here are my impressions.

In the paper A Note on Distributed Computing, Waldo et al argue that having a distributed object system is bound to fail no matter how hard one tries.

  Differences in latency, memory access, partial failure, and concurrency
  make merging of the computational models of local and distributed
  computing both unwise to attempt and unable to succeed.

  A better approach is to accept that there are irreconcilable differences
  between local and distributed computing, and to be conscious of those
  differences at all stages of the design and implementation of distributed
  applications. Rather than trying to merge local and remote objects,
  engineers need to be constantly reminded of the differences between the
  two, and know when it is appropriate to use each kind of object.
Erlang works around this problem by making the local object model identical to the remote one in all aspects except latency. The semantic model is copy-on-send asynchronous message passing. This is inconvenient when working with local components, as you rightly point out. But putting up with this inconvenience up front makes it much easier to refactor towards a distributed system later. The language attempts to work around the inconvenience of asynchronous message passing with functional abstractions -- which is largely what OTP seems to be.

BTW, the language can be used in a synchronous context. It works just like another functional language with all code executing from start-to-finish on a single thread of execution.

[1] http://citeseerx.ist.psu.edu/viewdoc/summary?doi=
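A minimal local sketch of that copy-on-send, asynchronous model in Elixir (the same code shape works across nodes; names are illustrative):

```elixir
# A process that doubles whatever it receives and replies to the sender.
# The payload is copied on send; the two processes share no memory.
pid =
  spawn(fn ->
    receive do
      {:double, n, from} -> send(from, {:result, n * 2})
    end
  end)

send(pid, {:double, 21, self()})

result =
  receive do
    {:result, n} -> n
  after
    1_000 -> :timeout
  end
# result is 42
```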

> I hear a lot about OTP and the "let it crash" mantra, but I just don't quite understand what's so great about it.

The top two for me are:

(1) You start to just program the sunny path. It's faster and more fun to code in this mode. And with pattern matching of function arguments and return values (as well as guards), you get pre- and post-condition asserts, all while programming the sunny path.

(2) What are you supposed to do when your system raises a run-time error anyway? I think in practice, in the majority of cases, simply restarting your process in some known state is the best you can do.

And that "start in some known state" is non-trivial for a system that handles concurrent inputs. OTP helps you do that correctly.

Edit: Formatting. Added note on pattern matching of function args and guards.
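A hedged sketch of point (1), with made-up names: the guard is a precondition, and matching on the return value is a postcondition assert -- any other input or result crashes the process, and the supervisor restarts it in a known state.

```elixir
defmodule Sunny do
  # The guard only admits the sunny path; anything else raises
  # FunctionClauseError and the process crashes.
  def withdraw(balance, amount) when amount > 0 and amount <= balance do
    balance - amount
  end
end

# Matching the expected shape of the result is a postcondition assert.
90 = Sunny.withdraw(100, 10)
```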

Elixir and Erlang actually map remarkably well to the _web development_ use case. When every request gets its own process and shared services are abstracted away, the mental model is quite simple. Use Elixir/Erlang and you also get to use server-push architectures with ease. Want server state to write your own device sync? Need to write webhooks? Want to count requests? Need to guarantee uniqueness of a combination of variables in a distributed environment? Want to serve some requests requiring heavy lifting while also serving short-lived requests with low latency? Choose Elixir/Erlang for web development, and as your needs evolve (and they will), you're likely to be surprised and delighted by how much you can achieve before you need to move to a polyglot or microservice-based architecture.

I suppose this means that I've been working on the uninteresting problems in web development (basic CRUD stuff for Rails), and none of the fun networking-related problems, or otherwise. That's pretty disappointing.

I find elixir incredibly good for CRUD-type applications, macros and pattern matching alone get rid of most of the duplication and hard to follow logic that plagues most web frameworks.

Phoenix is quite popular, but if you look at the source, it's a testament to Elixir's power as a language how light it actually is. The meatier bits are mostly around working with real-time connections; the cookie-cutter web stuff you can achieve with Plug with very little work.

Even in these cases Erlang/Elixir processes can be a useful way of designing your app. Off the top of my head, here are a couple obvious use cases I can think of:

Basic state storage in between requests. Maybe you're building a simple shopping cart application, or maybe something more complex like a SurveyMonkey-style form builder. Each time someone makes a web request that modifies this state, you're storing that state somewhere, but it's usually somewhere outside of Ruby -- maybe PostgreSQL, maybe Redis. If you were using Elixir or Erlang that would still be an option, but another option would be to use a process to store this temporary state. When the user adds things to the cart, you send a message to the right process and let it know. This keeps all your code in one system (BEAM), so there are fewer moving pieces, it's easy to unit-test the entire thing in ExUnit, and it's potentially more efficient since there's no I/O to some other database. You can then continue to save your truly long-term data -- user accounts and sales made, for example -- in PostgreSQL.
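A minimal sketch of such a cart process, using Agent (the simplest OTP state holder; a GenServer would look similar, and the names here are illustrative):

```elixir
# One process per cart; its state lives in the BEAM, not in Redis.
{:ok, cart} = Agent.start_link(fn -> [] end)

# "Add to cart" is just a message to the right process.
Agent.update(cart, fn items -> [:apple | items] end)
Agent.update(cart, fn items -> [:bread | items] end)

[:bread, :apple] = Agent.get(cart, fn items -> items end)
```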

Another obvious use case is not needing background workers like Resque or Sidekiq. When I was doing Rails work I found myself putting tons of stuff in Sidekiq workers whenever I wanted to get work off the request-response cycle. Sidekiq was great, and it improved things hugely, but in Elixir I get all that for free with processes. Want to send an email? Have an email-sender process running and just send it a message. Want to do some fancy image processing, or even just thumbnail generation? Just have another process do it asynchronously. Once again, it's all inside the same code base and it's all super easy to unit test. And it makes your system simpler because you don't have another system process to deploy and manage.
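A sketch of that fire-and-forget pattern (a message back to the calling process stands in for actually sending an email; in a real app the Task.Supervisor would live in your supervision tree):

```elixir
{:ok, sup} = Task.Supervisor.start_link()
parent = self()

# The "request" process doesn't wait; the work happens off to the side.
Task.Supervisor.start_child(sup, fn ->
  # imagine rendering and sending the email here
  send(parent, :email_sent)
end)

receive do
  :email_sent -> :ok
after
  1_000 -> :timeout
end
```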

What happens when the process containing the cart data needs to be restarted or the server shuts down?

One of the core ways to model this is to have one process running the cart itself and another holding the data. If you wish for that cart data to be persistent, use DETS (which writes to disk) just in case.

The data will have to be persisted eventually once the order is finished; keeping that data only in process memory would be incredibly irresponsible. And by all means, if you ever do an e-commerce app, please use an external database. You will want to do all kinds of things with the data outside of the application itself, like running reports, etc.

How do you clean up or time out old "cart" processes that are still in memory?
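One common answer (a sketch, not from the thread; names and the timeout value are illustrative): a GenServer can declare an idle timeout, and stop itself when no message arrives in time, at which point the VM reclaims everything it owned.

```elixir
defmodule Cart do
  use GenServer

  # 30 minutes of inactivity and the process stops itself.
  @idle_timeout :timer.minutes(30)

  def init(items), do: {:ok, items, @idle_timeout}

  # Every successful call re-arms the idle timeout.
  def handle_call({:add, item}, _from, items),
    do: {:reply, :ok, [item | items], @idle_timeout}

  # No message within @idle_timeout: shut down cleanly.
  def handle_info(:timeout, items), do: {:stop, :normal, items}
end
```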

I think if you are starting from nothing, message passing is actually a more intuitive way to think about and design large distributed programs, because you don't need to deal with a lot of the complexities of OO languages such as state and handling distributed communication / disaster recovery.

Running processes in parallel is cheap in Erlang, so you can run hundreds of thousands of them in parallel on a commodity server. Generally, if you are running any sort of high-traffic service with a lot of dynamic stuff happening, there are probably parts of it that Erlang/OTP can help make more efficient and easier to maintain.

I don't know if you've read this[0] yet, but I think it does a good job of explaining how the "let it crash" mantra is useful in effect.

[0] http://ferd.ca/the-zen-of-erlang.html

This exactly. I've read two books about erlang and otp, including Fred's book and it wasn't until reading this that I felt I really "got it". I think this should be the first thing people point to when explaining the let it crash philosophy.

I should add that while I've never written a line of Erlang in my life, this article has gotten me to structure my Go programs in a relatively Erlang-esque manner, and I've been quite pleased with the results so far.

There are a lot of very good, very simple ideas in the Erlang ecosystem, it seems.

That was wonderful. I'd never made the connection between restarting and fixing heisenbugs: bugs that are fixed by restarting are precisely the ones that are hardest to handle otherwise, and vice versa.

I think part of it is that it fits well into the microservices paradigm - where there is a separation of work even within the same program.

But that said, for where Erlang shines - backends and distributed services - first-class concurrency is damn near a must-have these days (see: Go, Erlang, etc. recent surge in use and popularity).

I ended up with Go, but had I more time to learn Erlang (or Elixir was 1.0 when I started looking) it would have been a solid contender.

Also, don't discount the incredibly solid and robust BEAM VM. If you are writing distributed services or servers (see: Riak, RabbitMQ) the overhead for spinning off "instances" or "processes" within the VM is virtually zero-cost, and in production systems has like sixty-four 9s of uptime. [0]

[0] ok, nine, but still: https://pragprog.com/articles/erlang

Microservices make sense, I suppose, but the cases in which distributed services and isolated processes are critical just aren't common enough in my experience to require sticking specifically to an SOA-like architecture for everything. Especially in a web app like something you'd make in Rails, which already handles exceptions pretty well and rarely requires concurrency.

The idea of composing individual applications and programs sounds very powerful, and I like the idea of using it as a way to compose programs written in different languages together (a Rails server talking to a Clojure ML service that uses Rust bindings to do hardcore number crunching, for example). That could be interesting. I don't know what I'd use it for, though.

> Especially in a web app like something you'd make in Rails, which [..] rarely requires concurrency.

What? If you don't need concurrency in a web app that means that your app can only serve one client at a time and other clients have to wait in line. That won't scale to more than a couple concurrent users.

You're right that microservices don't make sense for every web app. Especially when you're still figuring out product-market fit. But once you achieve a certain scale (both in number of users and number of developers) microservices/SOA starts to make a lot more sense.

And that's an area where the Erlang VM really shines. Erlang's concurrency model is awesome and fits really well with the hardware evolution to more cores: lightweight processes, immutability, message passing, location transparency, etc.

I wouldn't be surprised if I've just been insulated from thinking about concurrency with Rails/have never worked on a Rails project that had to think about concurrency in any significant way. Which is a little disappointing, I do want to work on interesting problems instead of being a CRUD monkey forever...

I do think Elixir and Phoenix have a lot of value because Phoenix has the "get shit done" factor of Rails and yet still allows for your application to scale once you've entered the market, and you don't have to drop everything and move to a more scalable language/architecture and slow down. Plus, Elixir's features and syntax make it a lot more accessible than Erlang.

> Especially in a web app like something you'd make in Rails, which already handles exceptions pretty well and rarely require concurrency.

That's interesting considering the reason Phoenix was made is because the author was having a lot of trouble doing concurrency in Ruby.

I think something that confuses the issue here is that while most applications are a great fit for monolithic Rails (or similar), the applications that many of us here seek out to work on are those that are larger (in scale or in functionality) and more complex. I'm not sure how you could check this, but I suspect that people working on bigger applications where SOA thrives are over-represented here. Most of the debate between "you're not gonna need that!" and "I have needed that!" just stems from people attempting to project their own experiences onto everyone.

Consider background jobs using Resque or Sidekiq. Those are a kind of concurrency. These kinds of libraries are unnecessary in Elixir, where you can spin up another process to handle this kind of "background" work outside of the normal request/response flow.

How do you process a million jobs in parallel across many machines? How do you schedule a job to run in a week?

I'm the author of Sidekiq and not trying to debate; I'm simply ignorant of how Erlang, BEAM and the ecosystem solve these common problems, which usually persist no matter which language you use.

(caveat: in practice you'd do this differently, making sure applications were prestarted/preregistered on remote nodes, ensuring your network was already connected and wrapping most of these calls in nicer function calls but for the sake of illustration i've described the naive way to do this)

    %% ping a remote node to connect to it
    pong = net_adm:ping('foo@'),
    %% start my application on the remote node
    _ = spawn('foo@', application, start, [my_app]),
    %% my_app is a gen_server, an otp process that abstracts
    %% async request/response and makes it look more like
    %% synchronous function calls
    %% this example assumes the gen_server accepts the JobArgs
    %% and immediately returns 'ok' as acknowledgement, but does
    %% the work asynchronously
    ok = gen_server:call({my_app, 'foo@'}, {do_some_work, JobArgs})
if you wanted to distribute a number of jobs over a number of nodes, you'd just repeat this process for each node and distribute your jobs however you choose, either round robin or something more complicated like rendezvous hashing

this example assumes you're uninterested in the responses, but if you wanted to receive the results you could either send a pid (process identifier) along with the request that would be setup to collect the results and do any further processing or you could spawn a local process that made the actual call and blocked until it received the results

as for scheduling a job a week in the future, the naive way would be to setup a timer that, when it expires, spawns a process that can execute an arbitrary function. no one would do this, normally, however. you'd probably want to use something like sidekiq that can store the job off node and poll regularly to see if there's any jobs you should run

For your first question, I use the toniq library: https://github.com/joakimk/toniq. As biokoda mentioned, there's nothing stopping you from doing this by literally just spawning a ton of processes, but you do get a lot of nice things out of the box with toniq or another task queue framework. For example, toniq gives you automatic task retrying with exponential backoff, a max-concurrency option, and task backup with redis.

It doesn't have a scheduled job API yet, but there is a note in the readme that explains how to do this with quantum-elixir: https://github.com/joakimk/toniq#how-do-i-run-scheduled-or-r....

> How do you process a million jobs in parallel across many machines?

Start the processes that execute the code, get notified when they are finished. Literally a couple of lines of Erlang/Elixir. Not some large framework that does all that for you, it's in the language. It is built to run a lot of tasks in parallel and across many machines.

> How do you schedule a job to run in a week?

Like anyone else, put it in some permanent storage like Redis. You can also keep it in memory, or use Mnesia, which is a distributed database that comes with Erlang.

Let's go for that one. Processing a million jobs in parallel is not hard:

  my_nodes                    # whatever holds your list of nodes
  |> distribute_tasks()
  |> Enum.map(fn x -> Task.Supervisor.async(x, my_list_of_job) end)
  |> Enum.map(fn x -> Task.await(x, your_timeout) end)
That will distribute your workload on all the nodes you have that implement a runner for it. Of course you may want to do things a bit differently, and I did not cover the load balancing that I hid in distribute_tasks, nor any error handling, but that would not be much harder IMHO. Until you get to the partitioning and idempotence part, but even that would not be that hard.

There is a long-running project on the Elixir Slack to produce a nice job queue lib, but no one in the interested group has time for now.

To schedule a job to run in a week you have a couple of ways. An easy first approach is to use Process.send_after/3 to send yourself a message after a week and handle that message when it arrives. Of course, if you want persistence, etc., you will have to do it a bit differently, but not by much.

Edit: now using the fn form instead of the shorthand & for ease of comprehension for non-seasoned Elixir devs.
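Process.send_after/3 in miniature (50 ms standing in for a week; in a GenServer the message would arrive in handle_info/2):

```elixir
# Schedule a message to ourselves for later delivery.
Process.send_after(self(), :run_weekly_job, 50)

result =
  receive do
    :run_weekly_job -> :job_ran
  after
    1_000 -> :timeout
  end
# result is :job_ran
```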

That's a really good question and I'd be delighted if somebody answers it adequately.

I heard that Erlang/OTP (and thus Elixir) can transparently distribute work no matter where it gets physically executed but I'd just love to see an example -- freely accessible code, demo site, everything.

I don't think those two exclude each other, I intend to upgrade my Go skills, but honestly Elixir is preferred.

Something that everybody seems to be missing is that a microservice can serve as an endpoint for all types of clients: your web app, a rich browser client, your iOS app, your Android app, your smart TV app, third parties, your reporting application, etc.

> Making everything asynchronous and detached makes things more complicated, not less

It does in languages like JS because they're not built for it. Elixir programs on average don't look very complicated.

I guess I'm just not used to the Erlang/BEAM message-passing/infinite-servers mental model. I'm used to single threaded programs and importing libraries and the like, so I've never had the need to pull off complicated networks.

"...so I've never had the need to pull off complicated networks."

In other words, you haven't run into one of the main problems Erlang and Elixir aim to solve. So it makes sense you don't see the appeal.

Just keep it in the back of your mind, when you someday do need to pull off a complicated network, Erlang and Elixir are excellent tools for solving that problem.

This is actually how I've explained it to a lot of people. Elixir is one of those languages that you won't appreciate until you've suffered the issues of others first.

This is how it makes fanboys of people in their 30s, 40s and 50s instead of their 20s.

Let's try a different tack.


Make more sense now? :) (in addition to the other responses)

tl;dr Evercam replaced Node.js, Sidekiq, Pusher and Upstart with Elixir, huge net gain in stack simplicity which is arguably a strong plus from a long-term maintenance perspective.

It'd be immensely more helpful if you give Elixir library/framework names as alternatives to the technology names you enumerated. In textual form.

In fact, Mike Perham himself (the creator of Sidekiq) is asking the question how do you do scheduled future background processing in Elixir, right in this thread: https://news.ycombinator.com/item?id=12072359

There was a post about their usage of Elixir on the ElixirForum recently: https://elixirforum.com/t/complex-open-source-phoenix-apps-t...

I am not the originator of the slides; perhaps someone from Evercam will respond. I will try to hail them on Twitter.

Immutability stops you from experiencing random state changes. The code you read takes in data structures and spits out data structures making it easy to test things in isolation.

OTP is actually a lot lot faster than you'd expect and allows you to use all CPUs very simply and also provides network transparency (on top of supervisors and "let it crash").

The main argument for it seems to be reliability, so it can run for months without going down.

I understand the need for reliability, i.e. "come back up if you get bad data so it doesn't just die on you", but this seems like something that dates back to when any kind of raised exception would bring the entire system down. Don't we have modern frameworks and systems now that can tolerate individual failures instead of completely dying just because someone tried to POST some bad data?

Plus, having reliability and coming back up if it goes down doesn't solve the root problem, which is that there's a bug in the code, or some sort of unhandled edge case. 99.9999999% uptime doesn't help if your code doesn't work right.

My personal anecdote to illustrate the difference: I've worked on a python app before which had a bug that sometimes made the websocket closed callback throw an exception. The exception was still being caught by the framework, so the system kept working fine.

...or did it? Eventually, we noticed instances of the app locking up, not responding to any request. There was no obvious error, it just kept running but did not log or do anything. It turned out to be due to the callback bug: the exception was being caught alright, but we were leaking file descriptors like a sieve and eventually running out. Due to this a minor bug affecting only some requests was able to take down the entire app.

That's the kind of problem Erlang solves well. Every request runs in its own lightweight process. The process can open (and thus own) files, sockets, OS processes, in-memory DB tables, etc. If the process crashes or dies for any reason, its memory and any other resource it owned gets cleaned up by the VM. If the process tries to hog the CPU or even enters an infinite loop, the preemptive scheduling means other requests will still get a chance to run.

In short: having strong isolation between processes means it's unlikely for a bug in a minor feature somewhere to affect the overall application health.
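That isolation is easy to see in a few lines (a sketch): a crash takes down only the faulty process and everything it owns, while a monitoring process just receives a message saying why.

```elixir
# spawn_monitor gives us a :DOWN notification instead of sharing the crash.
{pid, ref} = spawn_monitor(fn -> raise "leaky callback bug" end)

reason =
  receive do
    {:DOWN, ^ref, :process, ^pid, why} -> why
  after
    1_000 -> :timeout
  end
# `reason` carries the RuntimeError; this process kept running untouched.
```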

Imagine there is a bug in the code or an unhandled edge case, exhibited on only 0.01% of transactions. The Elixir/OTP handler dies on this edge case, the supervisor spawns another worker, and 99.99% of user transactions continue to be served.

Programmer wakes up next morning, comes to work, reads the exception log, fixes this edge case, optionally hot-upgrades the code in production if it's easier than doing a full restart, problem solved.

Other frameworks have to carefully implement the same semantics (that is, prevention of propagation of bad state/cascading failures). Elixir/OTP/Erlang come with it by default, and coding style encourages it.

> Plus, having reliability and coming back up if it goes down doesn't solve the root problem, which is that there's a bug in the code, or some sort of unhandled edge case. 99.9999999% uptime doesn't help if your code doesn't work right.

Yeah, but you don't have to wake up at 4 am to fix the problem if the system stays up and still works (even if a bit slower or with some feature degraded). I've seen systems that had parts crashing and restarting for weeks and months. Yeah, it's bad that nobody noticed, but it was a secondary service that didn't affect most users. Taking the whole backend down with an exception or segfault because of it would not make sense.

> Don't we have modern frameworks and systems now that can tolerate individual failures instead of completely dying just because someone tried to POST some bad data?

Yeah, we do. Nothing Elixir does is magic or absolutely impossible with other systems. You can spawn an OS process to handle a request. You can have a load balancer or something check the health of backend nodes and redirect. Or wrap everything in exception handlers and try to restart, and so on. It is kind of doable. But now there are multiple libraries, frameworks, and services to also maintain and look after, and maybe that thread that just crashed made a mess on the heap, so maybe restarting doesn't really fix the problem. So doable, but more awkward.

> Plus, having reliability and coming back up if it goes down doesn't solve the root problem,

Obviously, nothing will write the code to fix the problems except the programmer. But is it worth completely crashing the system because of a bug in one thread, or because some unimportant component failed? It's a bit like having a human drop dead the first time they scratch their finger. Sure, they can't write as fast for a while, but they can still largely function.

I mentioned Rust in another comment. Perhaps you'd prefer to have the compiler check correctness; that's very reasonable, and something like Rust or Haskell might be what you'd like to use to help ensure you have fewer bugs when the system runs. Higher levels of proof of correctness are possible; that is done for avionics software, crypto, and other life-critical systems. So there is a continuum. But there are usually dollar signs attached.

I think you are misunderstanding something. "Let it crash" doesn't mean crash on everything. It's about handling the _unexpected_.

Someone posting bad data may or may not be expected. The primitives are such that you can _choose_ how much or how little supervision you want for a given complexity or maturity of a project, or based on what you learn as users put the system through the wringer. This is a _choice_ you make as an engineer, and OTP makes it easier to make such a tradeoff. Other systems and frameworks do not necessarily let you make that tradeoff as gracefully.

This not only allows the software to scale gracefully with the number of users, it also allows the software to evolve gracefully in complexity and maturity.

The supervisor is what is responsible for restarting the worker or not. If you have a process that saves files to local disk and the disk is full, causing the worker to crash, the supervisor can recognize that reason and conclude that the worker process should not be restarted. Additionally, the supervisor can take other actions like informing the rest of the system and/or potentially even spawning a different worker process that stores files to a slower, more expensive cloud storage option.
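As a sketch of that kind of policy (names are invented, and the "disk full" failure is stubbed out): a child spec can declare the worker `:transient`, so the supervisor restarts it only after abnormal exits and leaves a deliberate `{:shutdown, reason}` stop alone.

```elixir
defmodule DiskWriter do
  use GenServer

  def start_link(_args) do
    GenServer.start_link(__MODULE__, :ok, name: __MODULE__)
  end

  def init(:ok), do: {:ok, %{}}

  # stub: pretend every write fails because the disk is full
  defp write_to_disk(_path), do: {:error, :disk_full}

  def handle_cast({:save, path}, state) do
    case write_to_disk(path) do
      :ok ->
        {:noreply, state}

      {:error, :disk_full} ->
        # a deliberate {:shutdown, reason} exit: a :transient child
        # stopping this way will NOT be restarted by its supervisor
        {:stop, {:shutdown, :disk_full}, state}
    end
  end
end

children = [
  %{
    id: DiskWriter,
    start: {DiskWriter, :start_link, [[]]},
    restart: :transient
  }
]

{:ok, _sup} = Supervisor.start_link(children, strategy: :one_for_one)
GenServer.cast(DiskWriter, {:save, "/tmp/report.pdf"})
Process.sleep(100)
# the worker is gone and was not restarted
nil = Process.whereis(DiskWriter)
```

Escalating to a different worker (e.g. one backed by cloud storage) could then be done by whoever receives the exit notification, such as a monitoring process or the supervisor's own parent.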

I can offer you my understanding. I think this way we allow multiple light processes to be created and this should work well with multicore processors.

The overhead you are talking about can only be mental, as processes in Elixir are very light. Compared to other similar languages, at least for me, Elixir doesn't require as much overthinking. Now, you could also say that I haven't worked on complex apps with it, so that might be why.

Anyhow, this is my view.

You don't need to start with supervisors, application trees and message passing for the initial MVP.

However, let's take Ruby on Rails as an example. I have gotten MVP-quality software out in production. We can quickly validate whether this is something customers want. However, when the app starts getting traction, I have found that I need to reach for things outside a RoR monolith. Usually, the first thing is getting some kind of background job processing going, with Sidekiq or Resque.

At this point, we have shifted from a synchronous, no-shared-state architecture to one that requires background processing. This is where we start thinking in terms of concurrency and how to deal with it.

Since Rails is built on top of Ruby, and Ruby does not have sufficiently robust concurrency primitives that work well at scale, using something like Sidekiq means relying on Redis. I have had projects where I needed to pass data through a series of background jobs, like a pipeline. Now I am implementing what are effectively mutexes as a database column. I'm processing data that is effectively being streamed from third parties, so each individual Sidekiq job has to have guards so that it can be idempotent. I have to be mindful of queues, analyzing where the bottlenecks are, how to structure the queues, and how to tune how many of what kind of jobs get resources. I have had to rely on outside monitoring solutions to make sure these processes stay up and running, and write things in such a way that they can still recover on restart. I had to fork Sidekiq and rewrite core parts of it to create a different set of guarantees for a use case -- a big deal to every Rubyist I talked to, and yet seemingly normal in the Erlang world.

In other words, in the Rails world, concurrency is handled in a very coarse-grained way, often relying on software outside of Ruby (such as Redis).

I found myself reinventing, in very crude ways and at a coarse-grained level, what OTP already offers. I adopted Subversion and later git because I was crudely reinventing version control (shell scripts and tar); I dropped PHP and Perl CGI scripts in favor of Rails back when Rails was version 1.1 because I knew what a project that doesn't use what Rails offers would look like (a messy puddle). And likewise, now that I've found myself badly reinventing ideas OTP already has, it's the tool I'll reach for.

With Erlang and Elixir, it is easier to reason with concurrency because the primitives are baked in deep, and it costs little to spin up something asynchronously. Reasoning with concurrency in the Rails world is treated more like black magic -- obscure, non-obvious, forbidden, potentially dangerous, and socially unacceptable.

This is exactly my thought process recently. I ran away to the Ruby/Rails land several years ago because monsters like J2EE, portlets, Tomcat/Spring/Hibernate XML configurations going amok, dependency injection frameworks making it absolutely impossible to find the origin of a complex error, etc. etc. were making me consider growing tomatoes and potatoes as a job for life.

Now however, I am regularly mad at Rails (and somewhat at Ruby; 10+ years should've been enough to adopt a proper concurrency model, or something close to OTP) and I am finding myself almost constantly working around imperfections. Sooner or later the question of "am I using the right language and/or framework?" was bound to become relevant.

This is an excellent explanation. Might reuse it if you don't mind.

Go for it.

It depends on your use case. The overhead is much smaller than you probably think and the Erlang / Elixir programming model works extremely well for some cases, networking for example. I've been writing pieces for a Bittorrent client in Elixir. Using a process for every connection to another client or for every torrent makes structuring your code really easy.
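The shape of that structure can be sketched like this (a toy example with invented names; a real client would hold a socket and protocol state rather than a byte counter): one lightweight process per peer, whose state lives in the arguments of its receive loop.

```elixir
defmodule Peer do
  # one lightweight process per remote peer; its state lives in the
  # arguments of the receive loop
  def start(peer_id), do: spawn(fn -> loop(peer_id, 0) end)

  defp loop(peer_id, bytes_received) do
    receive do
      {:chunk, data} ->
        loop(peer_id, bytes_received + byte_size(data))

      {:report, caller} ->
        send(caller, {:bytes, peer_id, bytes_received})
        loop(peer_id, bytes_received)
    end
  end
end

# the "swarm" is just a list of pids, one per peer connection
pids = for id <- 1..3, do: Peer.start(id)
Enum.each(pids, &send(&1, {:chunk, "hello"}))
```

Each connection's bookkeeping is fully isolated: a misbehaving peer crashes only its own process, which is much of why this model structures a networking client so cleanly.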

Is your project's code public? I'd gladly use it as an educational source since I am learning Elixir right now.

"Maybe it's just due to my problem domain (web development), but it doesn't seem like as big a draw to Elixir and Erlang as pattern matching and FP are."

That seems like the perfect domain for this, to me.

HTTP is (traditionally) stateless. So, in the past CGI apps were independent entities. Every new hit was a new world in which the app ran. And, every interaction between each CGI app happened via a standard communication medium (a database or writing to files rather than messages, but if you squint and don't look too closely, you might see the analogy there). There are negatives to that...but, there are real benefits, too. That provided a tremendous level of flexibility in how apps were built and in how they interacted with each other. I'd argue that in some regards (though certainly not all), going to an always-on appserver model was a step backward.

So, to me, in some ways Elixir looks like CGI, only without most of the negatives of CGI. A lot of the mental overhead, like parsing parameters and setting up database connections, can be abstracted away with a modern/fast language designed just for this kind of work. And, the end result is a robust system without a lot of mental overhead other than learning the abstractions and how they interact.

You call out supervisors and message passing and the like as being negatives; complexity you don't want to have to deal with. But, you aren't really comparing apples to apples. Node.js is also asynchronous, and in its bare state is a mess; callbacks are vastly more difficult to interact with and reason about than a system like Erlang/Elixir, and a lot of other languages use the callback model for asynchronous programming.

So, I guess what I'm trying to say is that if you're doing asynchronous programming, Elixir is going to be easier than most (though ES6 has concurrency primitives that are starting to make sense, and Python 3 and Ruby have started getting them, Perl 6 has good options, Perl 5 has somewhat clumsy modules for it, etc.).

All that said, I'm mostly spending my time learning JavaScript/ES6/Node lately; despite its warts, the ecosystem is huge, and there's so many smart people working on the language that by the time of ES7, there will be a very convincing concurrency story. Maybe still not as convincing as Erlang/Elixir and the OTP. But, probably good enough for web apps and services.

I looked into Elixir and Phoenix briefly; watched a few videos, read a few tutorials, never wrote any code. And, came away with a distinct impression of a tiny, tiny, ecosystem. I'm accustomed to going to CPAN or Ruby Gems or PyPI or npm, and finding not just one, but several options for whatever task I want to accomplish, even relatively obscure stuff. It'll be a while before Elixir comes close to even CPAN (which is smaller than all the others these days, though still manages to have modules for nearly every problem I tackle).

Seems like this is mostly a bug-fix release. Solid stuff nonetheless. I recently learned Elixir, and even though I came from an object-oriented and imperative language background, I've fallen in love with the language. Switching back to others like JavaScript really leaves me missing so many of the functional features, especially pattern matching. I think I've been spoiled.

What makes Elixir a sort of "perfect storm" is the combination of a battle-tested, corporate funded, philosophically correct language model and VM (Erlang/BEAM/OTP) combined with a syntax (Elixir) that's both beautiful and comprehensible to an average programmer.

Even if you loved Erlang, I'd argue the language is just too esoteric and jarring for most programmers to ever gain serious traction. I remember reading about Erlang's magic back in ~2007 (maybe [1]) and giving it a brief shot, but deciding there was no way I wanted to look at that kind of code 8 hours a day. But coming from writing fairly FP-style Ruby/CoffeeScript/ES6-7, Elixir feels only a step or two further down that path - in many ways actually conceptually simpler - and with enormous benefits.

[1] https://pragprog.com/articles/erlang

Replace "beautiful" with "familiar" please.

I find Elixir uncomfortable because I left behind that syntax and have no desire to return to it.

I think "beautiful" is subjective. I personally like Elixir's syntax a lot. It is concise, easy to read, and familiar. It also doesn't have all those unnecessary curly braces and parentheses, which is nice. That's what makes it beautiful to me.

I also like Elm's syntax, which I use for doing front end work on my Elixir projects. Both make me happy, and that's important when I have to stare at both of them all day, every day. If something else makes you happy, then that's what you should do all day, I suppose.

Can someone explain why I should use Elixir instead of Erlang? I haven't used either, but from a very high level it seems that Elixir has a "magic" syntax like Ruby, where as much as possible is hidden from the user, whereas Erlang has a much clearer and more concrete syntax. Even though Erlang's syntax is not standard or traditional, as someone who's used neither language, I find it much easier to read and understand exactly what's happening.

I don't think anything is hidden, it's just a different syntax. The only thing that's really changed is that in some of the Elixir modules they've changed the order of parameters to make them consistent.

I use Elixir and I like it, but I wouldn't tell you to use Elixir over Erlang. There's no competition, what's good for one language is good for the other. So use whatever makes more sense to you and hopefully all of us in this Erlang community win.

It's not about syntax, it's about

* Dependency management

* Rake-like cli tasks (mix)

* Compile-time metaprogramming

* Clean and consistent standard library

* Really good UTF8 support

* Well, syntax in a sense "easier to understand for switchers from ruby/python/etc"

I've only dabbled with both, but the reasons often given are that (apart from surface syntax) the advantages of Elixir over Erlang are a powerful macro system, a more cohesive standard library (with some nice things like the pipeline operator), and some very nice tooling (mix is great). With all that said, Erlang is a great language as well, so just pick whichever you like the most.

Here is a short article that outlines some advantages. http://theerlangelist.com/article/why_elixir

Ruby doesn't really have "magic" syntax any more than any other dynamic language does. Too much "magic" is a frequent criticism of the Rails framework, but I've never heard that directed at Ruby itself.

Phoenix is filled with macro magic, and, just like Rails is the overwhelming reason to use Ruby, it is pretty much the main reason people pick up Elixir.

Anecdotal evidence: metaprogramming and class monkey-patching in the gems I put in my Rails projects have been a steady source of WTFs for me over the years.

These are awful techniques IMO but I can't deny that they are life-savers in the Ruby land sometimes.

I use Erlang primarily and like it better -- as you said, there is a bit less magic, things are more explicit and it is the primary target language of the VM.

Some people like one syntax, some the other. Elixir has some constructs like the pipe operator, and in some cases macros can be nicer, but I don't feel like I need them and am quite happy using Erlang. I rather like that the syntax is not like Ruby or Python or another language I know; it helps my mind switch contexts. So that is a personal choice. Also, Elixir has a very welcoming community for beginners. That's a big bonus if you are starting out, but Erlang has more existing literature, books, and learning materials.

Take a look at both and see which one appeals more to you. At the end of the day, they both use the excellent BEAM VM and you'd end up learning largely similar concepts anyway.

Elixir has a cleaner syntax that I find a lot easier to read and work with (subjective), and it has powerful macro capabilities that make it possible to do metaprogramming (objective). There's nothing "magic" about Elixir syntax, and it is easy to interface with Erlang libraries.
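As a small illustration of that macro capability (the `unless_zero` macro is made up for this example): a macro receives the AST of its arguments at compile time and rewrites it into other code.

```elixir
defmodule MyMacros do
  # receives the quoted AST of its arguments at compile time and
  # rewrites the call site into a plain case expression
  defmacro unless_zero(value, do: block) do
    quote do
      case unquote(value) do
        0 -> :skipped
        _ -> unquote(block)
      end
    end
  end
end

defmodule Demo do
  require MyMacros

  def describe(n) do
    MyMacros.unless_zero(n, do: "n is #{n}")
  end
end
```

`Demo.describe/1` compiles down to the generated `case`; there is no runtime macro machinery left, which is what "compile-time metaprogramming" means here.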

Good stuff. I've started using Elixir and Phoenix in a new project (I wanted concurrency for making a lot of HTTP calls), and while functional programming does take getting used to (you cannot write something like a counter variable that increments itself in a loop), it's been relatively easy to pick up. Compared to learning Node.js, it's been a more relaxed and productive experience (although maybe I'm not making a fair comparison, because I've implemented way more things in Node than I've yet tried in Elixir).

Elixir counter in a loop:

    defmodule LoopCounter do
      def go, do: go(0)

      # loops forever, incrementing n on each recursive call
      defp go(n) do
        IO.puts("n is #{n}")
        go(n + 1)
      end
    end
Or you can use processes and message passing to maintain state. More examples at: http://dantswain.herokuapp.com/blog/2015/01/06/storing-state...
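A minimal sketch of that process-based approach (hand-rolled here; in practice Agent or GenServer wraps this pattern for you):

```elixir
defmodule CounterProcess do
  # state is just the argument threaded through the receive loop
  def start, do: spawn(fn -> loop(0) end)

  defp loop(n) do
    receive do
      :increment ->
        loop(n + 1)

      {:get, caller} ->
        send(caller, {:count, n})
        loop(n)
    end
  end
end

pid = CounterProcess.start()
send(pid, :increment)
send(pid, :increment)
send(pid, {:get, self()})

receive do
  {:count, n} -> IO.puts("count is #{n}")  # prints "count is 2"
end
```

The "mutable counter" is really an immutable value handed to the next iteration of the loop; only the process's mailbox lets the outside world observe or change it.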

Interesting. I was trying to do something like:

  tasks = for i <- urls do
    Task.start(fn ->
      :timer.sleep(5000 * the_counter)
      # more stuff
    end)
  end
So each task would run 5 seconds after the previous one, etc. I definitely need a deeper understanding of the whole environment and language though. Will check out the article.

Here are a couple of ways of doing that, packaged as a runnable .exs script that you can mess around with:


The key ingredient is Enum.with_index, which pairs your enumerable with the indices that you'd have in an iterative 'for' loop in another language. I've also changed from Task.start to Task.async so that I can collect the tasks and wait for them at the end of the script.
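The script itself isn't reproduced here, but a sketch of the Task.async + Enum.with_index approach it describes (with a made-up list of URLs and a shortened delay) might look like:

```elixir
# hypothetical stand-in for the parent comment's `urls`
urls = ["http://example.com/a", "http://example.com/b", "http://example.com/c"]

results =
  urls
  |> Enum.with_index()
  |> Enum.map(fn {url, index} ->
    Task.async(fn ->
      # stagger each task by its index (the original question used 5000 ms)
      :timer.sleep(50 * index)
      {url, index}
    end)
  end)
  |> Enum.map(&Task.await(&1, 10_000))
```

Because `Task.async` returns a task handle, the final `Task.await` pass collects every result in the original order, which `Task.start` alone would not let you do.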

Edit: The recursive approach jdimov10 explained is really useful too, you'll see that pattern of using a recursive function's arguments to hold state all over the functional programming world.

Thanks for the code sample! Will give it a try.

Not exactly what you want, but I've found this snippet useful from time to time:

    for {item, counter} <- Enum.with_index([:a, :b, :c]) do
      IO.puts("#{counter} -> #{item}")
    end

which prints:

    0 -> a
    1 -> b
    2 -> c

Or if you like pipelines as much as I do, you can write it as:

    [:a, :b, :c]
    |> Enum.with_index
    |> Enum.each(fn({item, counter}) -> IO.puts("#{counter} -> #{item}") end)

This is interesting. I'm a huge fan of pipelines like that and I think they're unmatched for being able to compose operations well, but I prefer the non-pipelines version of what is basically just an imperative loop.

That's actually a perfect example of looping over data structures. Thank you.

(Yes, not what the parent poster asked but I still found it useful.)

If all you want is a counter to loop through something a given number of times, you could do something like this:

  defmodule Counter do
    def loop(num) do
      Enum.each(0..num, fn (i) -> IO.puts(i) end)
    end
  end
If you really want to write your own recursive function (Enum already does that for you, but it's always good to try different ways, so you can learn the language better), you could do something like this:

  defmodule Counter do
    def loop(num), do: _loop(0, num)

    defp _loop(idx, num) when idx == num, do: IO.puts idx

    defp _loop(idx, num) do
      IO.puts idx
      _loop(idx + 1, num)
    end
  end
The nice thing about Elixir is that it does tail-call optimization on recursive functions, as long as the recursive call is the very last thing in the function. This means you won't have `num` stack frames of _loop sitting in memory waiting to unwind.

We have been looking for an Elixir code beautifier. If you are aware of one please let us know: https://github.com/Glavin001/atom-beautify/issues/545

Not sure if it's what you want but Credo (https://github.com/rrrene/credo/) exists

Do you mean something like Python's PEP8 plugin, or JSHint?

Why are redditors and ycombinators so annoying? A new version has been released, and instead of discussing what is important and notable about this new release, you all go off on a tangent discussing the pros and cons of Elixir.

Why don't you take the discussion elsewhere so that interested readers can focus on what is new and relevant about this release? The subject of the post is a new release, not people showing off their knowledge and opinions about programming languages.

It just makes these forums intolerable, especially the ones about new releases, new products, etc.

You have a very adequate and concise changelog linked for this purpose.

IMO people discussing here aren't "showing off their knowledge". Posting a version upgrade serves as a visibility reminder and people start inquiring about the features of the language/framework. Eventually a technology gains enough critical mass so that more people start using it.

Absolutely nothing wrong about that.

having to put 'end' leaves a bad taste, which is why i stopped learning ruby.

i learned lisp instead and now i'm learning lfe (lisp flavored erlang)

do i miss something by not learning elixir?

> do i miss something by not learning elixir?

Versus LFE? Not really. At the end of the day they're both ways of making programs for the Erlang VM that aim to improve on Erlang. You can use Erlang, Elixir, and LFE libraries from any of the three[0]. See Jose's comment at https://groups.google.com/d/msg/lisp-flavoured-erlang/ensAkz... .

[0] though beware string types -- LFE string functions generally take charlists, same as Erlang; Elixir ones take UTF-8 binaries. So you lose a bit of the Unicode-just-works-ness compared to Elixir, but it isn't really a problem; you just need to be aware of the mismatch and be prepared to handle it
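For example (standard library behavior; nothing invented here):

```elixir
# Erlang (and LFE) string functions generally take and return charlists
'HELLO' = :string.to_upper('hello')

# Elixir's String module works on UTF-8 encoded binaries
"HELLO" = String.upcase("hello")

# converting at the boundary when mixing libraries
"hello" = List.to_string('hello')
'hello' = String.to_charlist("hello")
```

`List.to_string/1` and `String.to_charlist/1` are the usual bridge when an Erlang or LFE library hands you a charlist and you want Elixir's Unicode-aware `String` functions, or vice versa.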

Elixir seems to have more in the way of books and videos if you like that.

