Comparing Elixir and Go (codeship.com)
489 points by iamd3vil on Jan 27, 2017 | 194 comments



If you want to really understand the philosophy that makes Erlang (and Elixir) beautiful (and why it made me a better programmer), this talk by Greg Young is a kind of eye-opener: https://vimeo.com/108441214

You realize then that clustering, hot reload, availability, etc. are not just features but the logical consequence of a beautifully crafted environment that aims at developer productivity.

I'm sometimes amazed at how easily I can achieve things on the Erlang VM that would take at least 10 times as long (if they're possible at all) in a more usual language (Ruby or PHP, when you work in the web industry as I do).

My last example was when I needed to batch SQL inserts into an events database. In a normal language I would have needed a queue, the libraries for it, workers, new deployments, infrastructure to monitor, monitoring, supervision, etc. In Elixir, it's done in 20 lines of code.

If you do not need complex calculations, the Erlang VM can basically become most of your architecture. It's already an SOA in itself.


The way I find it simplest to "enlighten" people about Erlang's peculiar philosophy is its approach to scheduling:

The Erlang VM is "reduction"-scheduled. This means that the given Erlang process currently running on a scheduler thread can get pre-empted, but only as a result of executing a call/return instruction. (Effectively, the pre-emption is a check inside the implementation of the call/return opcode.) As long as you don't execute any call/returns (don't call functions and don't return from your own function), your function body can run as long as it likes.

This is a design choice: because processes won't be pre-empted "in the middle" of a function, any Erlang process can feel safe executing an instruction that calls into native code, while not having to worry that that native code could itself be pre-empted and leave dirty state in the Erlang process's heap while some other process gets scheduled and tries to then message or introspect that process. It gives you a lot of leeway "for free."

So how does Erlang ensure that processes don't hog a core forever, given that you could theoretically just write a loop that spins forever? Well, in Erlang, you can't write a loop. Instead of loops, you have tail calls with explicit accumulators, à la Lisp. Not because they make Erlang a better language to write in. Not at all. Rather, because they allow for the operational/architectural decision of reduction-scheduling. Without loops in the language, every function body will execute for only a finite amount of time before hitting one of those call/return instructions, and thus activating the reduction-checker.
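To make that concrete, a minimal sketch (Elixir syntax, same VM semantics; `do_work/1` is a stand-in): what would be a while loop elsewhere is a tail call, and every iteration passes through a call instruction where the reduction check can fire.

    # An "infinite loop" as a tail call; each recursive call is a
    # reduction point where the scheduler may preempt this process.
    def count_forever(n) do
      do_work(n)             # hypothetical per-iteration work
      count_forever(n + 1)   # tail call: constant stack, preemption point
    end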

The Erlang "platform" has been shaped around the choices of how to best construct a production runtime that gives you "hard things" (like calling into native-code libraries while maintaining thread-safety) for free. Or rather, you could say that where everyone else pays these costs when they hit the particular problem, Erlang pays the cost up-front in the design of the language+platform and how you're forced to code at all times, in order to make these hard things easy.

The same is true of so many other Erlang things:

- how synchronous messaging has to be implemented on top of asynchronous messaging with expected reply-refs and timeouts, so as to make the sender process, rather than the receiver process, be the thing that defaults to crashing if the receiver doesn't recognize the message (a sketch follows this list);

- how OTP-framework code has to be structured as delegate functions that return to the framework, so that the framework can "be there" in each process to handle hot code upgrades and process hibernation;

- how sockets either block (when {active, once}), or will saturate a process with packet messages (if just active) until that process crashes on overload--because the network listener is a separate part of the runtime that lives in a hot loop and wants to just be given a place to stuff packets into, and isn't allowed to do anything that's not an O(1) operation, like expanding the size of a process's message inbox;

etc.
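For the first of those, here's a rough Elixir sketch of a synchronous call built out of asynchronous messages, a reply ref, and a timeout (approximately what GenServer.call does for you, simplified):

    # The *sender* monitors the receiver, tags the request with a unique
    # ref, and is the one that exits if no matching reply comes back.
    def call(pid, request, timeout \\ 5_000) do
      ref = Process.monitor(pid)
      send(pid, {:call, {self(), ref}, request})

      receive do
        {^ref, reply} ->
          Process.demonitor(ref, [:flush])
          reply

        {:DOWN, ^ref, :process, _pid, reason} ->
          exit(reason)
      after
        timeout -> exit(:timeout)
      end
    end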

Erlang is not a programming language in the sense that other languages are. Erlang was not designed from the language in. Erlang (ERTS) is a runtime, and was designed from the runtime out, with Erlang being effectively a pure side-effect: the language that ended up being required to interact with the features of the ERTS runtime.

Of course, you can also go back and apply some design sense to the language, and then you get something like Elixir. But, despite large visual differences "in the small", your large Elixir app will end up looking very much like a large Erlang app. And this is because a large part of what you're doing in an ERTS language is not programming using the language, but rather weaving together the features of the runtime. (Contrast: using DirectX vs. OpenGL to manipulate the GPU. Two very different APIs, but one "runtime" they're both speaking to, consisting of features like shaders et al.)


>My last example was when I needed to batch SQL inserts into an events database. In a normal language I would have needed a queue, the libraries for it, workers, new deployments, infrastructure to monitor, monitoring, supervision, etc. In Elixir, it's done in 20 lines of code.

Can you provide more details on this? AMQP is pretty recent, and people have been batching SQL inserts for much longer than it has been around. An external queue is not a requirement of non-Erlang languages, but I'm curious specifically about how the Erlang implementation would substantially differ from, or allow new approaches over, those in other languages.


A simple GenServer (an OTP behaviour) linked to an ETS (Erlang's in-memory data store) table would do the trick. Basically, it receives the inserts by message, and once the counter reaches x or the timer reaches y seconds, it inserts into the DB. Thinking about it, 20 lines of code is already a bit verbose for it :)
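Something along these lines (a minimal sketch in Elixir rather than Erlang, keeping the buffer in GenServer state instead of ETS for brevity, and assuming an Ecto-style `Repo.insert_all/2` for the actual write):

    defmodule EventBatcher do
      use GenServer

      @max_batch 100        # flush once this many rows are buffered
      @flush_after 5_000    # ...or after this many milliseconds

      def start_link(_opts), do: GenServer.start_link(__MODULE__, :ok, name: __MODULE__)

      def insert(row), do: GenServer.cast(__MODULE__, {:insert, row})

      def init(:ok) do
        Process.send_after(self(), :flush, @flush_after)
        {:ok, []}
      end

      def handle_cast({:insert, row}, buffer) when length(buffer) + 1 >= @max_batch do
        flush([row | buffer])
        {:noreply, []}
      end

      def handle_cast({:insert, row}, buffer), do: {:noreply, [row | buffer]}

      def handle_info(:flush, buffer) do
        flush(buffer)
        Process.send_after(self(), :flush, @flush_after)
        {:noreply, []}
      end

      defp flush([]), do: :ok
      defp flush(rows), do: Repo.insert_all("events", Enum.reverse(rows))  # hypothetical Ecto-style repo
    end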


In Elixir, you can use GenStage.

(In fact, I'm writing a GenStage consumer right now.)
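For anyone curious, a bare-bones consumer skeleton looks roughly like this (module and producer names are illustrative):

    defmodule EventConsumer do
      use GenStage

      def start_link(producer) do
        GenStage.start_link(__MODULE__, producer)
      end

      def init(producer) do
        # subscribe to the upstream stage; max_demand bounds the batch size
        {:consumer, :no_state, subscribe_to: [{producer, max_demand: 100}]}
      end

      def handle_events(events, _from, state) do
        # each invocation hands you a demand-driven batch of events
        Enum.each(events, &IO.inspect/1)
        {:noreply, [], state}
      end
    end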


True. But haven't played with it yet. It seems nice and straightforward also.


That was a great video, thanks! :)


How easy is it, even if you do need complex calculations, to get the best of the Erlang VM and call out to, say, Python/NumPy or C when necessary? Can these external processes still be supervised, for example? Are decent-sized matrices (for example 100x20000 floats, so an 8MB data structure) easily movable around the Erlang VM via message passing?

I.e., is it still viable, in your opinion, to use Erlang as a system for distributing and routing lots of heavy calculations to many users, if said calculations are performed outside BEAM? I am looking at building a multivariate financial calculation engine, which must be interactive for up to 1000 users, with a large firehose of real-time data coming in, being massaged, and then distributed, with the calculation graph being customizable for each user interactively.


It is possible to start external processes from BEAM and interact with them. I've blogged a bit about it at http://theerlangelist.com/article/outside_elixir

You can also write NIFs (native implemented functions), which run inside the BEAM process (see http://andrealeopardi.com/posts/using-c-from-elixir-with-nif...). The latter option should be a last resort though, because it can violate the safety guarantees of BEAM, in particular fault-tolerance and fair scheduling.

So using a BEAM-facing language as a "control plane" while resorting to other languages in special cases is definitely a viable option.


I spent 30 minutes looking at NIFs, but I was scared away. My understanding is that if a NIF crashes, then BEAM crashes. Which leads me to think that if you need a NIF, then you need safety guarantees on the native side that C can't provide.


Think of NIFs as Erlang's equivalent to Rust's unsafe{} blocks. It's where you write the implementations of library functions that make system calls, and the like. But, like unsafe{} blocks, you do as little as possible within them.

For example, if you want to call some C API from Erlang where the C API takes a struct and returns a struct, you'll want to actually populate the request struct--and parse the return struct--on the Erlang side, using binary pattern matching. The C code should just take the buffer from enif_get_binary, cast it into the req struct, make the call, cast the result back to a buffer and pass it to enif_make_binary(), and then return that binary. No C "logic" that could be potentially screwed up. Just glue to let Erlang talk to a function it couldn't otherwise talk to. Erlang is the one doing the talking.
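On the BEAM side, that glue might look like this (an illustrative Elixir sketch; `MyNif.call/1` stands in for a hypothetical thin wrapper around a C function taking a struct of an int32 and a double):

    # Pack the request "struct" with the binary syntax, let the NIF cast
    # the bytes, and pattern-match the reply struct back apart.
    req = <<42::little-signed-integer-size(32), 3.14::little-float-size(64)>>

    <<status::little-signed-integer-size(32), result::little-float-size(64)>> =
      MyNif.call(req)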

On the other hand, if you have a big, fat library of C code, and you want to expose it all to Erlang? Yeah, that's not what NIFs are for. (Port drivers can do that, but you're about the right amount of terrified of them here: they're for special occasions, like OpenSSL.)

The "right" approach with some random untrusted third-party lib, is to 1. write a small C driver program for that library, and then 2. use Erlang to talk to it over some IPC mechanism (most easily, its stdio, which Erlang supports a particular protocol for.)

If you need more speed, you can still keep the process external: in the C process, create a SHM handle, and pass it to Erlang over your IPC mechanism. Write a NIF whose job is just to read from/write to that handle. Now do your blits using that NIF API. If the lib crashes, the SHM handle goes away, so handle that in a check in the NIF. Other than that, you're "safe."


Precisely, which is why I always advise to consider ports first :-)

However, in some situations the overhead of communicating with a port might be too large, so then you have two options:

  1. Move more code to another language which you run as a port.
  2. Use a NIF.
It's hard to generalize, but I'd likely consider option 1 first.

If you go for a NIF, you can try to keep its code as simple as possible, which should reduce the chances of crashing. You can also consider extracting the minimal BEAM part which uses the NIF into a separate BEAM node running on the same machine. That will reduce the failure surface if the NIF crashes.

I've also seen people implementing NIFs in Rust for better safety, so that's another option to consider.

So there are a lot of options, but as I said, NIF would usually be my last choice precisely for the reason you mention :-)


Aren't dirty NIFs on the horizon as well, which would help with the scheduling issues currently associated with NIFs?


Dirty schedulers can help with long running NIFs, but they can't help with e.g. a segfault in a NIF taking down the entire system.


Apparently people are working on this, using Rust for writing NIFs: https://github.com/hansihe/rustler


Love your blog and book, Sasa. Could you elaborate on the fair-scheduling disruption by NIFs? I don't recall ever reading about that.


Thanks, nice to hear that!

Basically a NIF blocks the scheduler, so if you run a tight loop for a long time, there will be no preemption. Therefore, invoking foo(), where foo is a NIF which runs for say 10 seconds, means a single process will get 10 seconds of uninterrupted scheduler time, which is way more than other processes not calling that NIF get.

There are ways of addressing that (called dirty schedulers), but the thing is that you need to be aware of the issue in the first place.

If, due to some bug, a NIF implementation ends up in an infinite loop, then the scheduler will be blocked forever, and the only way to fix it is to restart the whole system. That is, btw, a property of all cooperative schedulers, so it can happen in Go as well.

In contrast, if you're not using NIFs, I can't think of any Erlang/Elixir program that will block the scheduler forever, and assuming I'm right, that problem is completely off the table.


As linked elsewhere here, tight loops that never preempt are being fixed in Go 1.8/1.9[0]. Looks like a flag may have been added to Go 1.8 called "GOEXPERIMENT=preemptibleloops" that adds a preemption point at the end of a loop. It's behind a flag for performance/testing reasons, but they are working on it.

[0] https://github.com/golang/go/issues/10958


Won't pre-emptible loops lead to more irreproducible race conditions as a negative consequence, unless the preemption is done deterministically?


Are you asking about BEAM or Go? Preemption already works in BEAM and doesn't lead to race conditions because of its shared-nothing concurrency.


I was asking about Go. I understand BEAM's advantages in that area


There are libraries that allow C programs and Java programs to interact with Erlang as though they were Erlang processes. From the Erlang Interoperability guide: http://erlang.org/doc/tutorial/overview.html#id61008

The team at GitHub took another approach, described here: https://github.com/blog/531-introducing-bert-and-bert-rpc It's an old article, so I have no idea if it's still in use. The GitHub sources haven't been updated since 2010.

I don't know how effective it is to chuck around 8MB data structures via message-passing. I have no experience of this myself.


> I don't know how effective it is to chuck around 8MB data structures via message-passing.

It's my understanding that above a certain size and within the same running Erlang VM (machine), a reference is passed instead of a full copy.


Only for binaries. But if your matrix is mainly modified outside of Erlang, it may make sense to only use it as an opaque binary inside.


Or compile Elixir to native code with HiPE by adding:

    @compile [:native, {:hipe, [:verbose, :o3]}]
to the top of the module with the arithmetic. It might not be as performant as C, but it can be about 10-15x more performant than pure Elixir (in my tests, anyway).

I've no idea of the relative performance against numpy...


You can write NIFs in Rust (which is made even simpler by supporting libraries like Rustler). scrogson, for instance, is using this to fiddle with lower-overhead JSON.

Re: supervising external processes, an easy hack if you're writing the processes is to add a deadman's switch to both sides, and then launch the processes from a port in BEAM-land.

This effectively makes them supervised; kill the beam process and the external process will die, and whether they both get relaunched depends on the restart strategy the supervisor launched the child with.


Another solution is to make your non-Erlang process run as a foreign Erlang node (an example of a Go library implementing this: https://github.com/goerlang/node ). There are also messaging libraries for Python and Ruby that do this.


It's doable. Erlang is used to talk to hardware and C code. You can build C functions (NIFs), a driver (for IO, for example), spawn a process, or implement the logic of the Erlang distribution protocol (what is used to talk between VMs) in C.

With 20.0 coming up it will be even easier. A nifty feature called "dirty schedulers" will become stable, so building long-running calculations in C will be much easier. Previously you had to take care not to block the running scheduler thread.

All-in-all Erlang is really good at connecting to and managing things, C and hardware being one of those things.


Very easy - there are many solutions for just this. Porcelain is one such library for Elixir that lets you call C executables and interact with CLIs. I have a lib called pricing on my GitHub that uses it to price options using a simple C executable.


You may want to have a look at talks like this one: https://www.youtube.com/watch?v=xj3smNjGLaE

It is a common pattern to use Erlang as a control plane.


This has been a very informative sub-thread. Many thanks to all contributors for your thoughts and experience. You have helped to move me forward in confidence on using Erlang/Elixir(basically BEAM) for the distribution and routing side of my enterprise-scale soft realtime data-interpretation project. A great testament to the quality of contributors on HN. I will post at a later stage on progress.


One thing that is completely missing from the Erlang side of the article is the out-of-the-box monitoring and operating capabilities.

An Erlang VM is a living system that has a shell which you can connect to, and control both the VM and the applications running in it. You can also remotely connect to another VM, execute arbitrary code, debug, stop processes, start processes etc. It really is an operating system in itself, that was _designed_ to be that way.

And the best part is that you get all this for free. Whether that is a good thing depends entirely on your needs. You probably wouldn't want to replace your bash scripts with Erlang programs :)

What Erlang is not really suited for is situations where you need multiple levels of abstraction, such as when implementing complex business logic. You would think that the functional nature of the language lends itself to that, but the primary concern of an Erlang engineer is to keep the system alive, which means you must be able to reason about and follow the code as it runs on the system. For that reason, all kinds of abstractions are very much discouraged and considered bad practice (look up "parameterized modules" for an example of a feature that was _almost_ added to the language but was discarded in the end).

I think that from this perspective Erlang and Go are actually very similar - both prefer simplicity over abstractions.


Totally agree. Erlang is quite against "magic", which greatly improves readability. Debugging is really straightforward 99.9% of the time.


I think the part about cooperative/preemptive multitasking isn't saying it all.

Go multitasking is based on the compiler inserting switch points on function calls and syscall boundaries. But this affects the scheduling of a single OS-level thread executing that specific goroutine. The number of OS-level threads that the Go scheduler uses can grow arbitrarily, and OS-level threads are preemptively multitasked.

So I think the description focuses on a narrow view of the problem. What is usually required of applications is low latency in reply to system events (e.g., data available on network sockets), and Go performs very well in this context. For instance, the fact that Go transparently uses an epoll/kqueue-based architecture under the hood probably affects latency much more than the whole "cooperative" issue as depicted.


> I think the part about cooperative/preemptive multitasking isn't saying it all.

That's still not the entire story:

GC & tight loops in Go: https://github.com/golang/go/issues/10958

Per process vs per runtime GC: https://news.ycombinator.com/item?id=12043088


Can't update parent.

Please /do not/ read my OP as a dig/boost at any level. Just pure geek interest in language architectures and sharing info.


When you say "the number of threads that the Go scheduler uses can arbitrarily grow..." is it not set by GONUMPROCS or something like that? Or is it a dynamic thing -- new threads appear as needed?

The cooperative issue is simply that until a thread does become free, the application can not respond to an event -- even given the epoll/kqueue architecture you describe.


No, those are the "procs". In the Go scheduler's lingo, there are three different concepts:

* "G": these are goroutines

* "M": these are OS-level threads (the ones I was mentioning). These are bounded by debug.SetMaxThreads (default: 10000).

* "P": these are basically locks that Ms must acquire to run Go code. These are bounded by GOMAXPROCS.

So when you say "GOMAXPROCS=2", you're saying "I want at most two Gs executing code simultaneously", but you can still have a very high number of OS-level threads that are I/O-blocked. Notice that when a G does blocking I/O there are two possibilities: if it's a pollable operation, the handle is passed to the epoll/kqueue thread, and the M releases its P and is recycled to do something else. If it's a non-pollable operation, the M releases the P but stays allocated to wait for the operation to finish (e.g., for the syscall to return). At that point, given that at least one P is free, any ready G can be scheduled, but a new M might be needed.


This is more for people looking at erlang/elixir than a critique of the blogpost or a suggestion for a change.

> Within Elixir, there is no operator overloading, which can seem confusing at first if you want to use a + to concatenate two strings. In Elixir you would use <> instead.

When this popped up, it reminded me of something people try to do often and then have issues with performance. You probably do not want to concatenate strings.

"Yes I do" you'll first think, but actually erlang has a neat commonly used thing to help here.

Let's say you're doing some templating on a web page. You want to return "Welcome back username!". First pass (it's been a while since I wrote Erlang, so forgive syntax errors):

    welcome(Username) ->
        "Welcome back " ++ Username ++ "!".
Now it's going to have to construct each string, then create a new string with all three. More creation & copying means things get slower.

Instead, many of the functions you'd use to write files or return things over a connection will let you pass in a list of strings instead.

    welcome(Username) ->
        ["Welcome back ", Username, "!"].
Now it's not copying things, which is good. But then we want to put the welcome message into another block with their unread messages.

    full_greeting(Username) ->
        welcome(Username) ++ unread_messages().
There's more appending here than is good; concatenating lists takes time. Of course, we could put it all in one function, but then we'd lose reusability in the templates and have horrible, massive functions. While this is a simple example, I hope you can picture a larger case where you'd want to split up the various sections.

Anyway, there's a better way of doing this. The functions that take lists of strings actually take lists of strings or other lists. So we can just do this:

    full_greeting(Username) ->
        [welcome(Username), unread_messages()].
You can keep going, nesting this as much as you want. This saves a lot of copying, allows you to split things up and avoids having to keep flattening a structure.

So, for people about to get started: try not to concatenate your strings; you can probably save yourself and your computer some time.

For more info on this, you want to search for "IO Lists" or "Deep IO Lists".
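The same idea in Elixir, for reference (`username` and `unread_messages/0` are stand-ins):

    # Nested iodata: no copying until (and unless) something actually
    # needs a flat binary; sockets and files accept it as-is.
    welcome = ["Welcome back ", username, "!"]
    page = [welcome, unread_messages()]
    IO.iodata_to_binary(page)   # only flatten at the very edge, if ever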


This is great advice.

Erlang was the first functional language I really learned, and I can definitely attest to the pains mainstream developers go through when switching paradigms. In fact, with Erlang you're jumping two paradigms at once: functional/immutable programming and the actor model of concurrency.

Eventually it clicked, and I think learning Erlang was the single most impactful thing on my way to becoming "a better developer".

Throw away your best practices from the procedural/OOP world and unlearn your optimization tricks when diving into functional/immutable programming. It's a totally different way of thinking about problems.


This is a similar idea to a rope: https://en.wikipedia.org/wiki/Rope_(data_structure)


That's pretty much what iolists are, except you don't usually read iolists (or modify them in ways other than concatenating), so you don't pay that complexity cost. iolists are also mixed-type, so you can have an iolist containing integers, strings, and binaries (at the same nesting level or different ones), which means less transcoding pressure.


>Now it's going to have to construct each string, then create a new string with all three. More creation & copying means things get slower.

There's nothing about the ++ syntax that makes it necessarily so. The compiler/interpreter could understand that we're producing a concatenated string and automatically create only one.

Which is the case in several languages.


What is different in Erlang is that strings are a bit messed up, in that they are linked lists of numbers. What you are suggesting (constructing the actual string in place) works fine in Erlang too, but has a bit of a funky syntax:

    welcome_user(Username) ->
        << <<"Welcome ">>/binary, Username/binary, <<"!">>/binary >>.
However, the main advantage of using io_lists instead is that `["Welcome", Username, "!"]` has constant complexity with regard to the length of `Username`.


Note to anyone not already familiar: in Elixir, character strings are all binaries from the start. Just to avoid the notion that Elixir follows the mistake of strings-are-linked-lists from Erlang.


This could be true in some restricted cases, but can't be in general.

If the username came from the database, it's going to be stored in memory before this function is called, and so concatenating must require building a new string.


It can require building a new string. But what is argued is that it won't require building several new strings incrementally for each concatenation.

E.g. no:

A + B + C + D -> AB + C + D -> ABC + D -> ABCD

And for immutable strings, it might not even require to build a new string at all, even if the string is in memory: it can just use a string data structure pointing to the various parts.



Just be aware that there are libraries that don't work with IO lists, like Erlang's HTTP client httpc; in that case, you need to flatten the IO list.


Go's philosophy around error handling (or lack thereof) is arguably atrocious compared to BEAM's "Let It Crash (And I'll Just Log It And Restart In 1 Millisecond With Exponential Backoff)" philosophy.

To review: https://gobyexample.com/errors

Manually checking every possible error (and then, only in the spots where you can imagine an error occurring) is a heck of a lot of extra work for the programmer (and code for the reader/reviewer), and it still won't catch all possible errors (both conceivable and inconceivable) properly. And arguably, the fact that an unchecked/undetected runtime bug in Go will basically send it into an "indeterminate state" which is impossible to reason about (much less debug) is an incredibly strong argument against this philosophy, IMHO. As far as I'm concerned, as soon as my code goes "off the beaten path" state-wise (read: "significantly differing from my mental model"), it should crash, ASAP. Isn't every bug literally a situation the programmer didn't account for? Aren't runtime errors by nature unexpected by the programmer? Why would you give bugs and errors even more room to corrupt the state of the world, then? ;)

We are all obsessed with computers and languages when the real limit is the programmer's mind and ability to reason about the code s/he's building and the states that code can get into. I think BEAM langs and purely functional langs more generally (along with functional/immutable data structures, etc.) do a much better job of addressing this root problem. I'm going to quote John Carmack from his great blog post about functional programming here (http://www.gamasutra.com/view/news/169296/Indepth_Functional...):

"My pragmatic summary: A large fraction of the flaws in software development are due to programmers not fully understanding all the possible states their code may execute in. In a multithreaded environment, the lack of understanding and the resulting problems are greatly amplified, almost to the point of panic if you are paying attention. Programming in a functional style makes the state presented to your code explicit, which makes it much easier to reason about, and, in a completely pure system, makes thread race conditions impossible."


While I agree wholeheartedly that Erlang's design is better:

To be fair, you can't get Erlang's semantics without also inventing processes and supervisor trees. And those things would be largely meaningless without immutability.

Erlang's magic comes from how all the puzzle pieces fit together into a whole: For example, with immutability comes the ability to always be able to restart from a known state, and probably greatly simplifies the internal implementation of the per-process heap.

Those things fundamentally change the language; Go couldn't adopt Erlang's semantics without eschewing almost everything that makes it "Go".

Error handling is really the least of it. Attempting to replicate some of Erlang's high-level design principles would quickly lead to roadblocks. For example, goroutines cannot be killed; this means it's impossible to fully implement supervisor trees in Go. It's possible that the runtime could support this at some point, but it's a big can of worms.


Immutability probably does simplify a lot of things; but it isn't actually required to get Erlang's model, since Erlang isolates the state of processes from each other. Each process could have its own isolated mutable state as opposed to an immutable one and it wouldn't change the "let it crash" philosophy.


>Each process could have its own isolated mutable state

Erlang processes do have their own isolated mutable state.

But it's walled off and accessed through calls rather than variable references. In the non-SMP implementation this enabled copy-free local (same-node) message sends (and it still does, but only for large binaries -- the more general use turned out to be less efficient than copying in SMP because of locking issues related to garbage collection and some other implementation details).


You can't guarantee that isolation without immutabilty, or at least forced copying.

In Go, it's extremely easy to accidentally send a pointer (or a data structure that contains a deeply nested pointer somewhere; this includes embedded maps, slices and channels, which are all reference types) on a channel, which is bound to break mutability guarantees at some point.


Mutable locals do not necessarily imply pointers...


No, but how do you pass "mutable locals" to other processes? That's the whole question.

If you pass a "local" on a channel to a different goroutine, Go can't guarantee to the receiver (which now has it as a "local" variable) that the sender (which still has it as a "local") won't change it:

    d := map[string]int{}
    ch <- d
    d["foo"] = 42  // Oops!
The only way to do this safely is to write a channel abstraction which uses gob or similar to marshal and unmarshal all data, thus copying it and preventing concurrent mutation, and then always use the channel abstraction. But always copying tends to be terrible for performance. Erlang is able to do zero-copy sends to process mailboxes because the data is guaranteed to be immutable.

Sure, you could instead mandate that senders never hold on to data sent on channels, but can you ensure this 100% across a codebase contributed to by an entire team?


> Sure, you could instead mandate that senders never hold on to data sent on channels, but can you ensure this 100% across a codebase contributed to by an entire team?

Isn't that precisely what Rust is all about?


Pretty much, yes.


Agree, and thank you for pointing those additional considerations out!


Author here. This was published a week earlier than expected so just a heads up that there are a couple of edits coming.


May I suggest some changes to the type parts:

* Elixir doesn't infer types. The language is dynamically typed, just as Erlang is. Using the word "inference" is somewhat dangerous since it means something very specific to a type theorist.

* Likewise, the compiler catches nothing, unless you are passing in a constant or literal. If you have `X <> Y` for arbitrary X and Y, the compiler can only catch it in statically typed languages (with or without type inference)

I think the main power of Elixir/Erlang compared to Go is the ability to handle arbitrary errors implicitly, without the programmer having to write a single line of code. In Go, since there is a large shared heap, a goroutine must clean up after itself. In Erlang, memory is truly isolated, so a failing process can be cleaned up much like a traditional UNIX process can be by the kernel. This makes for far more robust system software.


I was talking about Dialyzer when mentioning that, but I wrote it that way because I didn't want to derail into an explanation of Dialyzer and how it works. I did link to a talk on it, though.

It's because it's such an automatic part of the stack, with no drawback to using it, that I wrote it that way. Check out that talk though, it's really interesting.


I had a hunch you were talking about the dialyzer, but unless you happen to know about it a priori, this part reads like false information, and it almost had me tune out of the rest of the article.

I fully understand you intended to write something else, and I very much agree with the power of the dialyzer :) It was more a "I dualized this section by accident, perhaps other readers would do the same?"


I'll get it adjusted. Article release was bad timing for editing sake because I had to be in a meeting all day today. Should have it updated soon though.

One of the links around that section does include a link to a conference talk on Dialyzer, for what it's worth.


Perhaps some quick parentheses -- '(via Dialyzer)' -- would be enough. I'm familiar with Dialyzer but missed the reference in your post and thought, wow, I didn't realize Elixir/Erlang had all this type checking built in... so there's definitely some potential for misunderstanding there. Overall an excellent post though! Thanks for writing it.


I think this is an excellent suggestion. Another point could be to push the aside down into a footnote. Readers can then peruse that part on their own leisure, without having to stop your flow in the article.


Submitting that change.


This had the effect of making me want to spend time with both languages, so good work, I'd say.


That was definitely the idea. :)


Maybe already fixed, but here are a couple things I noticed:

Go methods have to be defined in the same package as their receiver's type. As a result, you can only add methods to your own types, not someone else's.

Also, the article claims that Go interfaces are similar to Elixir's pattern matching but I don't see a resemblance. Perhaps clarify that?


But you can define an interface that matches someone else's types and then attach the method to it.

The main similarity I was going for was the way that an interface fits anything that matches it, rather than being directly implemented by the type. A method could be attached to an interface that matches a struct with two specific variables while Elixir could define a pattern on a function that worked for any struct that contained those two variables.

They are both doing a similar thing a different way.
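On the Elixir side, that pattern looks roughly like this (an illustrative sketch):

    # Matches any map or struct that carries :width and :height keys,
    # regardless of what else it contains or which struct it is.
    def area(%{width: w, height: h}), do: w * h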


The confusion with GP might be that you can't, in fact, define an interface and attach methods to it. More info @ https://golang.org/ref/spec#Method_declarations


Great comparison. I'd love to see this expanded to more languages.

A quick question: you talk about Elixir yet mention Erlang a lot. Is the choice of Elixir over Erlang purely a syntactic preference? Or is Elixir just an all-around win over Erlang?


I'd say Elixir is the sharper tool. It cuts better - and I don't mean the syntax, but the macros - which can be a bad thing if you slice yourself by mistake.

Erlang, with its small and hard-to-extend syntax, its libraries, frameworks, and conventions, is geared towards stability. Erlang is meant for creating reliable systems which will be maintained for decades. Erlang is the most conservative (in the sense of Steve Yegge's post) of the dynamic languages out there: it's verbose, predictable, boring, consistent, reliable, and focused solely on fault-tolerance (i.e., long-term stability).

Elixir tries to be a less conservative Erlang, but also a good enough scripting and general-purpose programming language. It's not a drastic change in capabilities, but rather a slight re-focusing of the language to be more friendly to abstraction and extension. Elixir delivers on its promises magnificently: for my latest project, where I need the most expressive language I can get, I considered Elixir among such languages as Smalltalk, Io, Common Lisp, and Lua.

So it's more than just syntactic preferences, but the two languages remain very closely related. Elixir functions are Erlang functions, you can use Elixir modules from Erlang just fine and so on. Erlang is a simpler language, so my advice is to start learning Elixir and you'll learn Erlang when reading articles about its libraries that you want to use.


I am not by any means an expert. I started using Elixir several months ago then gradually made my way to Erlang's standard library.

In short: Elixir is more expressive and easier to write. There are a few syntax/stdlib peculiarities, but frankly, every language has them. I'd only classify Go and Java as languages with slightly fewer WTFs in their stdlib compared to Elixir. I'd give 2-3% of Elixir's stdlib functions a slightly bad color. Perfectly acceptable IMO.

Erlang is probably just fine for people who are used to it, but for me Elixir was much easier to pick up. It has also started gaining additional functionality built on OTP's primitives. Check out `GenStage` and `Flow` for higher-level, complex multi-process scenarios; they are really next-level stuff and they help a lot.


ZeroMQ is a library written in C++, not a server written in Erlang.


The presence of ZeroMQ in the list of projects built with Erlang made me frown as well. But in the author's defence, there is a pure Erlang implementation of ZeroMQ [0]

  [0] https://github.com/zeromq/ezmq


Perhaps the author was thinking of RabbitMQ


Working on getting it removed. That was my mistake.


Very interesting article, thanks. I'm somewhat familiar with Elixir (learning it/have written some small things in it) and only know roughly what Go is, why it was created and I had watched a talk on it a while back. This is a great starting point for actually jumping into some Go code (at least it made me want to try it).

Small typo I found: """Go channels implement buffers which can either receive a a certain number of messages"""

(2x a)


That comparison was way better than I was expecting it to be. Go is still on my to-do list, but the Elixir parts seem to be quite thoroughly described and without major mistakes. I also like the conclusion, and to be honest this is how I always felt about the two. Definitely recommend reading this if you're new to either of the two.


I think a good way to increase creativity and productivity is to use the right abstractions of thought and craft. Every good (non-leaky) abstraction expands the creative envelope further and lends itself to the creation of new higher-order abstractions for the next generation.

Having coded in imperative languages like Java, Python, and C++, I had been on the lookout for a practical general-purpose language which provides good abstractions/high expressiveness. Elixir appealed to me more than Go in that regard. It's been six months since I started writing Elixir, and it's been a pleasure.


I still don't understand what all the hype is about pure functional programming.

Sometimes mutations are useful. There are a lot of good programming design patterns which depend on mutations.

Also, always copying objects by value every time you call a function seems very expensive, especially if you're dealing with very large objects/structs/maps/strings which have to be processed by many functions.


> Sometimes mutations are useful

Especially in hello-world-sized projects. As you scale up, mutations become a constant source of bugs, bottlenecks, and complexity.

> always copying objects by value every time you call a function seems very expensive; especially if you're dealing with very large objects/structs/maps/strings which have to be processed by many functions

Structural sharing in functional persistent data structures makes 99% of this problem go away.

What you get in exchange for giving up mutability are programs that are easier to reason about, have fewer bugs, better modularity, and fewer man-hours wasted on maintenance. All the good stuff.


I think the person's point was that you sacrifice performance. Having the ability to mutate is only a good thing there, as immutability without being able to exit out of it is too inflexible and could cause other hacks to appear when trying to work around that problem.

FP with a focus on immutability is good for the default, but ignoring its limitations is bad.


Sure, but I don't know any pure languages that provide no facilities for mutation whatsoever, they just ask you to be explicit about it and add some syntactic weight (which is a good thing, you want mutation called out as a reader). Haskell provides several different mutation options from simple references to various kinds of thread-safe references depending on what semantics you need. There's even a "no-really-trust-me" option.

So where does the myth that you can't exit out of immutability come from?


Erlang is purely immutable and doesn't have any "facilities for mutation" that I know about, although it does have side effects (I/O, but no Haskell-style effect system).


That's not entirely true. I built a FlatBuffers implementation for Erlang as NIFs so that I could still take advantage of FlatBuffers' memory-efficiency features, which fundamentally required "unsafe" and mutable access to a byte buffer. I just had to be very aware of exactly how the runtime was going to interact with my implementation and account for it.

Sadly the place where I worked was not keen on open source posture, so it's locked away for all of time probably. However, the place I now work is a lot more open source friendly and I expect to take another whack at it, but using Rust on the NIF side this time around instead of C.


Well, with NIFs you can do anything — it's a parallel C universe. I was really referring to Erlang the language.


Maybe GP meant something like FFI? Which is a mechanism that allows for any mutation you'd like and is present in most FP languages.


> having the ability to mutate is only a good thing there

Hard to discuss without specific examples, but it is provably a bad thing whenever you share the value that you're mutating.

If your problem boils down to crunching numbers, maybe even in a couple of threads without needing to communicate and cooperate between them, then sure, mutability is hands down the best approach. But in the other 99.9999% of problems it is a hindrance.


> I still don't understand what all the hype is about pure functional programming.

> Sometimes mutations are useful. There are a lot of good programming design patterns which depend on mutations.

Study the Chomsky Hierarchy and the Theory of Automata. < https://en.wikipedia.org/wiki/Automata_theory > Here it is in a nutshell:

Languages are synonymous with computation. These can be divided into just a few types, based on how they can use state. One can observe that the complexity of an automaton and its corresponding language increases dramatically with its use of state. (Turing tarpit: https://en.wikipedia.org/wiki/Turing_tarpit) You can avoid complexity by avoiding complex interactions with state.

> Also, always copying objects by value every time you call a function seems very expensive

In the case of parallel/concurrent computation, it can be the less expensive option. < https://mechanical-sympathy.blogspot.com/2011/07/false-shari... > As always, the devil is in the details, and it's all about context. So you are correct. Sometimes mutations are useful. However, sometimes, you can get a benefit from avoiding them.


Erlang isn't purely functional... the time function is an example. To be pure, IIRC, a function has to return the same result every time: there's a one-to-one mapping where each set of parameters yields a unique value. time() has arity 0 (no parameters) and yet it returns a different value every time you call it, depending on the current time.

You can "mutate" by making a new copy: you modify the old one and save the result to a new variable. The old variable is unchanged.

If you set x <- 1, you can't change x anymore, but you can do y <- x + 1. Your modification is in a new variable. In Elixir you can rebind x to a new value, but that's just a new copy, unless you pin it.

You can also keep state by passing an accumulator via a function parameter; it's also how you get tail-call optimization. It's a weird pattern at first.
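A minimal sketch of the accumulator pattern (illustrative Elixir):

    # Sum a list by threading the running total through a parameter;
    # the recursive call is in tail position, so the stack doesn't grow.
    defmodule Acc do
      def sum(list), do: sum(list, 0)

      defp sum([], acc), do: acc
      defp sum([head | tail], acc), do: sum(tail, acc + head)
    end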


I don't know why people keep spreading the misconception that mutation isn't possible in pure languages.

    import Data.IORef

    main :: IO ()
    main = do
      ref <- newIORef (0 :: Int)  -- allocate a mutable reference
      writeIORef ref 1            -- overwrite it in place
      y <- readIORef ref
      print y
> 1

There you go, mutation in Haskell.


Pure functional languages often do not use the same data structures common in imperative languages. Instead there's an emphasis on persistent structures that copy on write while sharing as much data as possible.

For instance, prepending to a linked list results in a new head node, but nothing is actually copied. Appending requires a full copy. A developer has to be aware of where the limitations of these data structures are, but I think that's true of using any library in any language.

https://en.wikipedia.org/wiki/Persistent_data_structure


The performance issue still stands. Linked lists are often MUCH slower than array-backed lists on modern hardware. This is true even when doing lots of inserts. Having your whole list in the CPU cache is just too much of a performance gain over having it spread out in random locations.


Depending on your GC, linked lists won't be "spread out in random locations." The nodes will be allocated right next to each other in the case where you're building up a list iteratively.


How does the GC change allocations?

While I agree that this linked-list memory layout could be optimized, in real-world tests array-backed lists normally win by a large margin.


In practice, functional languages typically use persistent data structures, which represent collections such as dictionaries, vectors, and sets internally as trees. Adding, removing, and updating data requires changing a path in this tree while sharing the rest of the structure. It is not a deep copy, and a really large map won't be very expensive to modify. Rich Hickey has a good talk on the matter: https://www.youtube.com/watch?v=dzP05hEDNvs

There are also optimizations that are straight-forward to understand and implement once you assume immutability. For example, take the following function in Elixir:

    def some_list do
      [1, 2, 3]
    end
Because a list is immutable, when the code is compiled we put data structures that appear literally in the code into something called a literal pool. Anything in the literal pool is loaded when the code is loaded. Now every time you call that function, we return the same list and don't create new instances/copies on every call. That list may be embedded in a map, another list, whatever, and it will still point to the same memory representation, because nothing will ever change it.

Here is an example that leverages this in the context of a web framework for great rendering performance https://www.bignerdranch.com/blog/elixir-and-io-lists-part-2.... While this is definitely achievable in other languages, we get it pretty much for free with immutable data structures and it is a natural mechanism to reason about. IanCal talked about it a couple comments above: https://news.ycombinator.com/item?id=13498532

Regardless, you are still correct when you say that mutations are useful. There are many algorithms that will be more performant if implemented on top of mutations. Luckily, most functional programming languages, including the pure ones, provide mutable data structures or memory references for such cases. For example, when working on GenStage/Flow for Elixir, I optimized the hot paths by using a mutable dictionary and called it a day.

Many claim immutable data structures are a better "default" for writing software since it is conceptually simple to reason about code when data cannot change right under your feet. However, I won't try to argue if this is actually the case or not, as this reply is already quite long as is. The point is that many concerns regarding immutable data structures are solved and functional languages also provide alternate paths when you need mutability.


This is something I think Rust solves very nicely.

The ownership model gives the benefits of immutability (no shared mutable state and race conditions) without the need to copy most of the time.


    > Also, always copying objects by value every time
    > you call a function seems very expensive
This is a non-point. In Erlang/Elixir everything is immutable, and 95% of the time data is passed by an immutable reference, effectively eliminating data copying.

Also, if you use iolists for text processing (instead of appending strings), you absolutely will reuse all string chunks you use in the entire Erlang/Elixir OS process, even if you don't want to. It's mega-efficient.

EDIT: @josevalim (Elixir's creator) explains it much better than I can, right here in this thread: https://news.ycombinator.com/item?id=13499328.


I just had a thought: if arguments always get cloned by value every time they are passed to a function, doesn't that mean that the time complexity of an insertion operation in Elixir can never be better than O(n)?

According to this answer http://stackoverflow.com/questions/11055391/time-complexity-... it's O(log n) for dictionaries. How is that possible if mutations are not allowed? Wouldn't the function have to process at least n items? Or does it use parallel processing to get a speedup there?


If you are only dealing with immutable structures that no one can really change, there is no point in passing by value. You can always pass by reference; the language will not allow any code to modify the data. Instead, a new variable will have to be assigned if the function changes the values. So it can always be pass-by-reference until a change needs to be made, aka copy-on-write.


If I create an immutable string "hello world", why would it be passed by copy? It can never be changed, only garbage collected.

The BEAM seems to do magic here and do much less work than you might think when creating "new" things.


You only need to copy the data if the consumer of the data changes it, i.e. copy-on-write.


That sounds a bit better than I thought but it still doesn't change the worst-case complexity.


The canonical text discussing how this is achieved refers to data structures optimized for immutability as "purely functional data structures". [1] These type of data structures are the default for Clojure as well, and there are a number of blog posts and presentations discussing their implementation. [2] [3]

It's a reasonably complicated topic, but the basic idea is that since the data structures are immutable, much of their structure can be shared between versions with few changes. Most of these data structures end up using a Tree of some sort. Performance characteristics can be influenced to an extent by using bucketing to control the width/height of the tree.

[1] Purely Functional Data Structures (Amazon: https://www.amazon.com/Purely-Functional-Structures-Chris-Ok...)

[2] http://hypirion.com/musings/understanding-persistent-vector-...

[3] https://www.youtube.com/watch?v=7BFF50BHPPo


Consider a binary tree. You don't have to copy the whole tree, which is O(n), but rather only the "spine" path from the altered element to the top, which is O(lg n).

In practice, there are also many cases which end up on the stack and never reach the heap, so the memory-allocation needs are somewhat smaller than people think.

Another point is that short-lived garbage is collected in O(1) time in the GC. So you don't pay for its collection either. Things are slower than a mutation, but the overhead is lower than people tend to think.
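A sketch of that spine-copying insert in Elixir, using tuples as tree nodes (illustrative):

    # Only the nodes on the path from root to insertion point are rebuilt;
    # every untouched subtree is shared with the old version of the tree.
    defmodule BST do
      def insert(nil, key), do: {key, nil, nil}

      def insert({k, left, right}, key) when key < k,
        do: {k, insert(left, key), right}

      def insert({k, left, right}, key) when key > k,
        do: {k, left, insert(right, key)}

      def insert(node, _key), do: node
    end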


Both Erlang and Elixir have access to Erlang's ETS (an OS-process-local cache), and it is used sparingly in the cases where mutability is undoubtedly the more efficient approach.

The FP folks aren't close-minded about this. They simply use mutability when it's provably more performant than their built-in immutable data structures.
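For instance (a small illustrative snippet):

    # ETS is a mutable, in-memory table owned by a process; updates
    # happen in place instead of building new immutable structures.
    table = :ets.new(:counters, [:set, :public])
    :ets.insert(table, {:hits, 0})
    :ets.update_counter(table, :hits, 1)        # atomic in-place increment
    [{:hits, n}] = :ets.lookup(table, :hits)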


It depends on the scenario: if you know a value never changes, you can share it between variables under the hood, as the article explains with the example about sorting arrays.

Passing around complex mutable structures feels more like procedural programs.


In Erlang, the pureness allows a clean split between processes and enables that clustering and hot-loading magic. While it's less efficient for a single operation, it scales to handle many parallel operations far better. This is generally why pure functional programming is hyped.

It's just easier to think of a pure functional construct, there's less possibility for mistake.

No one claims it's efficient for single operations, though some people blame the hardware for that, as in theory there's nothing inherently slower about it.


> It has since expanded into numerous other areas, such as web servers, and has achieved nine 9s of availability (31 milliseconds/year of downtime).

I definitely want to read more about this.


This is how myths are created and perpetuated.

This claim comes from a study of a particular model of Ericsson's ATM switch whose software was written in Erlang.

I can tell you for sure that:

* Ericsson has other switches and other telecom equipment whose software is written in C++

* other companies (Nokia, Cisco, Alcatel, etc.) also built similarly complex telecom equipment whose software was in C++

I'm going to bet that at least one of those devices was more reliable than that particular switch, because it's not the language that creates reliability but people writing and testing the software.

Some languages make it easier than others, but attributing magic qualities by extrapolating from a single case study of one piece of telecom equipment is just silly.

Telecom equipment is reliable because it has to be. If you're incapable of writing reliable software for telecom equipment then your company will go bankrupt and be replaced by one that can, even if the language they use is C++.


But suitability is a huge factor here. The point isn't that "nine nines" reliability cannot be achieved in C++ or whatever, it's that it's significantly harder and costlier to achieve.

But also: of course a language creates reliability. Null pointers are impossible in Erlang, whereas they are provably impossible to avoid in C++.

So people bring this up because Erlang programs are pretty much reliable by default (assuming you stick to OTP and its design).

To help it reach insane uptimes, Erlang also has a few features that are hard, if not impossible, to replicate in C++. The main one is hot code replacement; being able to incrementally upgrade a system in a safe and controlled manner without restarting it is a huge benefit in such a system.

It's also trivial to attach to a running Erlang program and introspect it as if you're truly running inside it. Goes hand in hand with hot code replacement. With C++, you get fairly crummy, low-level things like gdb, without the ability to actually write C++ code while in the debugger.

Also, remember that crashes can occur in bug-free programs. Hardware failures can be hard to predict and handle.


> Null pointers are impossible in Erlang, whereas they are provably impossible to avoid in C++.

They are trivially possible to avoid in C++ as you do not have to use raw pointers. References, value types, custom non-null smart_ptrs, etc.


Don't underestimate the cost of that trivial maintenance. In any project constrained by time, you risk people skipping these things in parts of the system. Erlang/Elixir is pretty well built for robustness, so a single library is less likely to affect the system as a whole. The same cannot be said for C++, where every part has to be close to correct for operation to proceed as you expect.

The trade-off, however, is that the explicit control in C++ tends to produce systems which run faster under nominal operation.

In practice, many Erlang systems contain some C/C++ parts at their core for fast operation. Depending on their complexity, you try to isolate them from the rest of the system as much as you can. In short, you use Erlang as a top-level control backplane for your C/C++ code. The advantage is that you get the speed of C++ where needed, but the added productivity of Erlang everywhere else.

Erlang programmers are usually practically minded in this regard. "Functional purity" doesn't really matter that much if the problem isn't solved.


Not trivially, no. Even if you try to avoid "raw pointers", C++ is an unsafe language by design, and it's non-trivial to turn it into a verifiably safe one:

* Even smart pointers are still pointers, which are trivial to access after release by mistake; and references are only safe as long as the caller still exists. C++ does not have anything like Rust's borrow checker.

* C++ threads allow race conditions that can screw up even smart pointers.

* Plenty of libraries (C and C++), which are usually impossible to avoid and completely impractical to vet, will still use raw pointers.

Generally, the word "trivial" and C++ don't go well together.


Note I was explicitly objecting to the claim that null pointers are impossible to avoid. My claim is that the C++ type system allows avoiding them just fine.

I'm making no claim about the general memory safety of C++; its issues are well known.


I grabbed it from the Erlang Wikipedia article, for what it's worth. The language creates reliability by isolating running parts in millions of small heaps that can independently fail and immediately restart. It's not automatic, but the way the language is designed makes these kinds of numbers much more feasible.

https://en.wikipedia.org/wiki/Erlang_(programming_language)

It also had this along the same lines though:

> As Tim Bray, director of Web Technologies at Sun Microsystems, expressed in his keynote at OSCON in July 2008:

"If somebody came to me and wanted to pay me a lot of money to build a large scale message handling system that really had to be up all the time, could never afford to go down for years at a time, I would unhesitatingly choose Erlang to build it in."


Have you ever worked in the telecom industry?

I did, at one of the companies you list there.

We had outsourced and offshored code for modules on those switches you mentioned.

Have you ever had the pleasure of reviewing C++ code in such projects?

I did, and hope to never do it again.


Remember any specific examples/patterns? Curious to see how bad it could be...


Basically:

- Code was either "C code compiled with a C++ compiler" or C++ OOP spaghetti like in the CORBA/DCOM days before JEE was a thing.

- functions/methods that span several screens

- barely any kind of testing

- due to the emphasis on C-style programming, lots of fun tracking down pointer misuses

- re-implementation of code that is already part of the C++ standard library, even the parts inherited from C

- lots of copy-paste within the same file because devs understand neither source control nor modularity


Many of the issues you list can be overcome by good coding practice, although I understand that the frameworks/languages used allowed for these bad habits.

I wonder how Rust with its focus on memory safety could play out in such scenarios.

I need to have a look at Erlang/OTP. It is a bit of a hidden, alien gem.


> I need to have a look at Erlang/OTP. It is a bit of a hidden, alien gem.

I'd recommend it. I think even in the worst case, it's interesting and can change how you approach problems in other languages.

In the best case, it'll fit some problem you have suspiciously well and you'll be left wondering why it only took you a couple of days, a tutorial and 50 lines of easy to understand code to solve a really annoying problem you had.

If some of the syntax or tooling feels like it's getting in your way, Elixir looks like a nice setup on top. Personally I think Erlang is fine as it is, though I do want to experiment with Elixir.


> Many of these issues you list can be overcome by good coding practise.

The enterprise is not the space for it, especially among companies whose bread and butter is not selling software.

The only measures of quality management cares about are "does it work" and "does it deliver what customers pay for"; anything else is just a cost that needs cutting.


Ha, the borrow checker as a barrier to bad coders! I like it ;-)


It is.

Just like the type systems from the Algol family of languages.

Every time a language outsources safety to an external tool, the majority of developers and their employers don't care about it.

Had lint been part of the compiler, as clang later did, instead of an external tool, C code would have ended up much safer than it did.


There was a nice study by Motorola on exactly this topic, C++ vs. Erlang for implementing telecom software. The study slides are online at http://www.slideshare.net/Arbow/comparing-cpp-and-erlang-for...

An interesting result from this study: the Erlang implementation is not only more robust but also faster.


Thanks for the link. Jibes with my intuition overall, but the speed comparison is jaw-dropping.


If a rogue cosmic ray flips a bit in a running process handling phone calls (note: this exact thing apparently took down all of Amazon S3 once, http://status.aws.amazon.com/s3-20080720.html), in code that wasn't expected to error,

1) Go will crash or at minimum (arguably much worse) go into an unknown state.

2) C++ will crash, and ideally, restart, but lose all "live" state. (So calls get dropped, etc.)

3) An Erlang/Elixir process will crash and then get instantly restarted by the supervisor. If the bit flip hits the supervisor instead, then its supervisor will restart it (most larger BEAM apps have a supervision hierarchy). The only way BEAM goes down is if the cosmic ray hits BEAM itself.
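
The restart machinery itself is just ordinary code. A hand-rolled sketch of the idea (NaiveSupervisor is made up; real systems use OTP supervisors):

    defmodule NaiveSupervisor do
      def start(work_fun) do
        # Trap exits so a child's crash arrives as a message
        # instead of killing us too.
        Process.flag(:trap_exit, true)
        loop(work_fun)
      end

      defp loop(work_fun) do
        pid = spawn_link(work_fun)

        receive do
          # The child died: log the reason and restart it.
          {:EXIT, ^pid, reason} ->
            IO.puts("child died: #{inspect(reason)}, restarting")
            loop(work_fun)
        end
      end
    end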

Checkmate, non-BEAMs? ;)


I'm not sure if you're being serious, but a random bit flip can easily put a process in an incorrect state rather than make it crash (say charge a user $1000000 instead of $1), or affect a pointer (of which Erlang uses a ton internally - imagine for a second how lists, maps, and code loading work) and make the whole VM crash when it tries to access it. Or it could just kill the OS itself. You would have to be incredibly lucky to get a nice, clean process crash.


Ah, you're right. Most bit flips wouldn't be detected under ANY language. I guess machine-level redundancy isn't going away anytime soon, then


Yeah, exactly. To make something nine 9s reliable, there has to be lots of fault tolerance in the system and hardware architecture.


Not disagreeing with you, just adding a bit more info. I'm nearly sure that the fault-tolerance magic of Erlang happens at the library (OTP) level with supervisors.


That's certainly part of it, but really it's the overlap of multiple layers of the overall system.

Lightweight processes, messaging, OTP, supervision all play a role when it comes to runtime fault tolerance.

Erlang is a rare beast in that the language and its VM are both very opinionated and very focused on certain types of problems (such as telecom and fault tolerance). This is not Java and its VM trying to be all things to all people.


"nine 9s" comes from Erlang (or in this case BEAM): https://stackoverflow.com/questions/8426897/erlangs-99-99999...


I am very intrigued by both. I am a reasonably experienced software developer with about 15 hours a week of availability; anyone want to contract me? I'd gain experience with a modern language, and you'd get someone cheaper than their experience would suggest. I know software engineers can use any language, but in reality, if I were to get a full-time job at a company which uses either of these, I'd get a lot less money than I am getting now working with Drupal, with 10+ years of experience in it. So I would like to gain some real-world experience to make it easier to switch jobs later.


I feel this deserves its very own thread, because I too would like to do exactly the same as you.


> The biggest difference between the two languages is that compilation for the destination architecture has to be done on that same architecture. The documents include several workarounds for this scenario, but the simplest method is to build your release within a Docker container that has the destination architecture.

@brightball Go has had first-class support for cross compilation for a while now, no?


I need to clarify that I was talking about that being an Elixir limitation. The previous paragraph talks about Go being able to cross-compile no matter what system it's on, but the one you highlighted still doesn't read well.



I know people will say "apples to oranges" but this is still exactly what I wanted to read. Thanks.


Agreed. It is apples to oranges but the article does an excellent job showing when/why to use each one. I believe the conclusion was spot on and IMO combining the stability of Elixir/BEAM with the performance of Go is the best of both worlds.


There was a good talk at Strange Loop about userland thread scheduling in both the Go runtime and the Erlang VM.

https://www.youtube.com/watch?v=8g9fG7cApbc


The page (like many other pages nowadays) breaks Page Down: if I press the Page Down key, I then have to scroll up a few lines because the text is obscured by the always-visible "FREE SIGNUP" banner at the top.


I'm responsible for this blog. Thanks for the feedback, we'll work on getting this fixed!


Really well written comparison that made me, as someone who has used go for a year, understand a lot more about Elixir.


If I were putting together a new awesome Thing(TM), I think I would probably use Elixir on the front end and internal messaging to handle and distribute jobs, and let Go do the crunching and heavy lifting.


That's the ideal setup really. Golang's concurrency is pretty decent as well but IMO Erlang/Elixir are much easier, quicker to code in, and more reliable than anything else I've ever used.

So basically use Golang as a modern C for specialized tasks which require [almost] the best your hardware can do, and use Erlang/Elixir for everything else.


    The other trade-off that comes from mutable versus immutable data comes from clustering. With Go, you have the ability to make remote procedure calls very seamlessly if you want to implement them, but because of pointers and shared memory, if you call a method on another box with an argument that references to something on your machine, it can’t be expected to function the same way.
I would be happy to know how we can make RPC seamless in Go across multiple machines (clustering). Have I missed a nice lib?


The stdlib has an RPC package, but there's also great gRPC support: http://www.grpc.io/docs/quickstart/go.html


The std lib has RPC support https://golang.org/pkg/net/rpc/

Never used it, but lots of people at conferences are happy with it.


I might need to rework that sentence. I was trying to say that clustering can only be seamless with immutable data and that, while Go has great RPC support built in, it won't ever be able to cluster naturally.
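
For contrast, a sketch of what "clustering naturally" looks like on the BEAM; the node names are made up, and both nodes are assumed to share a cookie:

    # On a node started with: iex --sname a
    Node.connect(:"b@myhost")

    # Call a function on the other node as if it were local; arguments
    # are simply copied, which immutability makes safe.
    :rpc.call(:"b@myhost", String, :upcase, ["hello"])
    #=> "HELLO"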


Perhaps not exactly what you/they are referring to. But this is interesting:

https://github.com/docker/libchan


Nice writeup! ZeroMQ is not written in Erlang, though. You probably confused it with RabbitMQ, which is.


In Elixir, error handling is considered “code smell.” I’ll take a second to let you read that again.

I think that this makes a lot of sense. My experience in just about any language is that the official means of error handling already feels like a code smell, even before you start using it. And if that's not the case, it still manages to feel that way when used in a large project.
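
In Elixir this shows up as "assertive" code: you pattern-match the success case and let everything else crash the process, instead of threading error checks through every layer. A sketch, where Repo.fetch_user, do_something, and handle_error are all hypothetical:

    # Defensive style (the "smell"):
    case Repo.fetch_user(id) do
      {:ok, user} -> do_something(user)
      {:error, reason} -> handle_error(reason)
    end

    # Assertive "let it crash" style: match the success case; anything
    # else crashes the process, and a supervisor restarts it clean.
    {:ok, user} = Repo.fetch_user(id)
    do_something(user)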

Lots of Smalltalk projects would actually handle errors by saving the image on an unhandled exception. For server applications, this was like having a "live" core dump where the debugger could open up right on the errant stack frame, and you could perform experiments to aid in debugging.


Funny you mentioned Smalltalk. Alan Kay has talked about how Erlang is really more of an OOP language than other languages, in terms of message passing and each process being an object...

Erlang just lets it crash, and you can restart it with a supervisor process. I think most of the time this model is much better, since you won't be doing numerical stuff on BEAM anyway; it's not built for that and it's too slow.


How does Crystal lang compare to the two?

I know the syntax is more similar to Elixir, but the format seems closer to Go in the sense that it compiles to a binary.


Crystal is more comparable to Go or Swift. I've been working with Crystal a lot over the last few months and I'm really enjoying the emphasis on speed. The ecosystem is understandably still immature, but it's going to hit 1.0 this year, so I wouldn't build anything big quite yet.


Swift would be a better comparison - both Crystal and Swift are general purpose languages that compile via LLVM. Neither has any special story for concurrency (yet). Go and Elixir/Erlang on the other hand, change your approach to programming.


What are you talking about? Crystal has a concurrency solution in its Channel implementation. Fibers are not yet running in parallel, but it most definitely implements concurrency.

Or in other words, it is heavily inspired by Go's concurrency solution.


My understanding of Crystal is limited, but I think of the two, Go is the only one you would probably consider comparing it to, because of its native compilation. I don't think it has a very strong emphasis on concurrency, so comparing it to Elixir likely doesn't make much sense. Take that with a grain of salt though, as I'm not super familiar with the language.


Crystal has a similar concurrency model to Go, actually. It just doesn't have parallelism yet (coming soon).


As far as the current interface for concurrency goes, it has fibers, channels, and futures.


This article has some performance comparisons between Crystal and Elixir in the comments: https://blog.codeship.com/an-introduction-to-crystal-fast-as...


During its comparison between goroutines and spawn (one of several methods of using concurrency in Elixir; there are others), the article states that Elixir functions must be in a module. This description isn't entirely accurate. Named functions must be inside a module in Elixir, but anonymous functions don't have that requirement.


I realized I wasn't clear on that after the fact. I should have said defined functions. In my head I tend to think of defined functions and lambda functions as different.
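
Concretely, both forms work; only the defined function needs a module (Greeter is a made-up example):

    # Anonymous function: no module required.
    spawn(fn -> IO.puts("hello from a lambda") end)

    # Defined function: must live in a module.
    defmodule Greeter do
      def hello, do: IO.puts("hello from a named function")
    end

    spawn(Greeter, :hello, [])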


The BEAM VM and lightweight processes seem amazing - I just wish they could be tied to a language syntax that wasn't so radically different from C/Java/JavaScript and the like. Go code is almost immediately understandable because of this, whereas Erlang is perplexing.


Have you looked at Elixir's syntax? Elixir is functionally the exact same as Erlang, but with a more Ruby-like syntax.


Definitely check out Elixir. It's extremely easy to pick up. I also looked at Erlang but decided I didn't want to learn it. I'm only gradually learning its stdlib, because it has amazing things in there by default (like a state machine, a directed graph implementation, etc.). But you can use those freely from inside Elixir anytime.
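
For example, Erlang's digraph module works from Elixir with no wrapper at all:

    # Erlang's :digraph, called directly from Elixir.
    g = :digraph.new()
    :digraph.add_vertex(g, :a)
    :digraph.add_vertex(g, :b)
    :digraph.add_edge(g, :a, :b)
    :digraph.get_path(g, :a, :b)
    #=> [:a, :b]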


I think it's comparing apples with oranges. They are two different species: a compiled vs. a VM-based language, one functional-styled while the other has duck typing. The only things I can see in common are GC and somewhat similar, yet very different, concurrency paradigms built on message passing. Erlang and BEAM are an aged giant with battle-tested, proven reliability, while Golang is a young lad with the flash-like abilities everyone has been longing for. While Erlang gets you awesome stuff like hot reloading, Golang gives you compiled code that can run heavily loaded servers on my Raspberry Pi (I made one and tried Erlang, NodeJS, and Golang; believe me, Golang smokes them all on memory footprint: http://raspchat.com ). I can go on and on! But picking one is totally dependent on your scenario.


Both languages seem tailored to network services. It seems fair to compare them for that purpose.


I think it's a fair comparison, much more so than Rust vs. Go, which we see all the time. Technically, Rust and Go are more similar to each other than Go and Elixir are, but Rust and Go are intended for different areas. Go and Elixir, on the other hand, will compete directly, as both have been built to write exactly the same kind of software.


Hoare-style (Go) concurrency is for concurrency within one computer - actor-style (Elixir) concurrency scales over many computers. This is because actor concurrency is compatible with the realities of networking, and most actor implementations (Scala, Erlang/Elixir) provide network transparency.
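
Concretely, a send doesn't change shape when the receiver lives on another machine; the node name below is made up:

    # Local send:
    send(pid, {:work, self()})

    # Same primitive, but to a process registered as :worker on another node:
    send({:worker, :"app@otherhost"}, {:work, self()})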


Rust and Go have not been written for the same purpose, and they target different audiences. Rust targets safe systems programming, while Go is mostly being used for network and service stuff. Both can do more, obviously, but let's not confuse the core use case.


Yes, that's what I said. When you are starting a project where you need tight control and work very close to the metal, you'd choose between Rust, C++, C, D, etc. Go wouldn't even be a choice here.

Instead, if your project is a higher-level application server, you'd choose between Go, Elixir, Erlang, Java, Python, Node, etc. Go should be compared with these languages and ecosystems.


> They are two different species a compiled vs VM based language

This is an implementation detail.

Anyone can write a VM for Go, or an AOT compiler to native code for Elixir.


Pretty significant detail though, as no one has done either. Also, "anyone" is fairly strong; building decent compilers and interpreters is pretty difficult.


There is an AOT native compiler that ships as part of Erlang, called HiPE. The original project was incorporated into Erlang 15 years ago: https://www.it.uu.se/research/group/hipe/

The current setup is also mentioned on Wikipedia's page about AOT: https://en.wikipedia.org/wiki/Ahead-of-time_compilation


I was wondering if I should have mentioned HiPE in my comment, but I couldn't remember whether it was fully native. It was a few years ago that I was learning Erlang/Elixir.


Anyone with a decent CS degree should be able to produce working compilers and interpreters, otherwise it wasn't a decent degree.

Of course, writing very good ones is a different matter.


>Anyone with a decent CS degree should be able to produce working compilers and interpreters, otherwise it wasn't a decent degree.

Which would be useless for production without several years of maturity, a tools ecosystem, and a community adopting them -- and of course continued support.

So this is mostly theoretical.


So now compilers should only be written if v1.0 is production-ready?!

> So this is mostly theoretical.

No, it is setting the facts straight about the widespread habit of conflating languages with their implementations.


>So now compilers should only be written if v1.0 is production-ready?!

No, but they are only relevant for the purposes of this discussion (that is, when considering whether to adopt a language platform based on whether it's AOT or interpreted, etc.) once they are at v1.0.

That somebody can always make an interpreter for an AOT language, for example, is nothing people care about when checking whether to use a language for their projects. It's what's there that matters.


Turning on cynic mode, I would say the only thing that matters is which company backs a programming language and only languages offered on their OS SDKs are relevant.

Anything else will just add entropy and development costs to projects, due to second-class tooling and a lack of libraries on the target platform.

Then it doesn't matter how the code gets compiled at all.


>Turning on cynic mode, I would say the only thing that matters is which company backs a programming language and only languages offered on their OS SDKs are relevant.

That's true, but for application programming. For the server side there are other considerations. WhatsApp can get away with using Erlang, for example, and still get the sweet billions.


You are now committing the moving-the-goalposts fallacy: arguing first that something is irrelevant because it may be changed later, and then moving on to argue that any CS graduate could just write it.


I wrote my first compiler using Turbo Pascal for MS-DOS at the age of 17, before getting into university.

A CS degree just makes it easier.


I think every CS graduate does that as part of a Compiler Construction course, so rather than questioning people's degrees, I would say implementing these advanced features is not a trivial task.


I am highly skeptical that the messaging and GC and introspection and hot code loading features of the Erlang VM can be as trivially converted to a compilation model as you assert.


Prolog, Lisp and Dylan are three examples of languages with AOT compilers available, with semantics similar to Erlang.

Also even Erlang has HiPE.

http://erlang.org/workshop/2003/paper/p36-sagonas.pdf


Makes me wonder: if it's trivial, why didn't people just do it in the last few decades? Even with AOT, you will be shipping some kind of runtime if you want stuff like hot reload, etc.


I guess because most developers use languages as given to them, to create stuff they care about.

Not everyone cares about compiler design.

Every programming language has a runtime, even Assembly (microcode).


[Deleted] Originally I'd had an off-the-cuff remark here of "apples to oranges". I'd like to retreat and say that the article is actually a good overview and comparison.


Nothing wrong with comparing apples and oranges. How else would you decide which is best for any given use case?



