First, the speaker creates an application endpoint that runs into an infinite loop on invalid input. Then, he shows how that doesn't block the rest of the requests. Using native BEAM (EDIT: BEAM = EVM = Erlang Virtual Machine) tools, he looks for the misbehaving process (the one running the infinite loop), prints some debug information and kills it. It's pretty impressive.
Another great (and lighter) resource, is "Erlang the Movie" (https://www.youtube.com/watch?v=xrIjfIjssLE). It shows the power of concurrency through independent processes, the default debug tools, and the power of hot code reloading. Don't miss the twist at the end.
(not disagreeing, but having watched the past few years as "threads bad, don't know why though..." it is amusing to see thread-style concurrency rediscovered with a new name).
> [Erlang] is a concurrent language – by that I mean that threads are part of the programming language, they do not belong to the operating system. That's really what's wrong with programming languages like Java and C++: their threads aren't in the programming language, threads are something in the operating system – and they inherit all the problems that they have in the operating system. One of the problems is granularity of the memory management system. The memory management in the operating system protects whole pages of memory, so the smallest size that a thread can be is the smallest size of a page. That's actually too big.
> If you add more memory to your machine – you have the same number of bits that protects the memory so the granularity of the page tables goes up – you end up using, say, 64 kB for a process that's really only running in a few hundred bytes.
"Multiple threads can exist within one process, executing concurrently and sharing resources such as memory"
Whereas Erlang/Elixir processes don't share memory and have their own independent heaps and garbage collection.
Shared memory can lead to complicated effects if two threads try to change the same bit. You don't have that with Erlang processes.
Size, scheduling, and the lack of shared state add up to vast differences. Also, the message passing primitives are way more lightweight than the thread-based equivalents.
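In Elixir that isolation is directly observable: data sent between processes is copied, so one process can never mutate another's heap. A tiny sketch (the map contents are invented for illustration):

```elixir
parent = self()

# Each spawned process gets its own heap; `data` is copied into the
# child's heap, so nothing the child does can touch the parent's copy.
data = %{count: 1}
spawn(fn -> send(parent, {:reply, Map.put(data, :count, 2)}) end)

receive do
  {:reply, child_view} -> IO.inspect(child_view.count)  # the child's copy: 2
end

IO.inspect(data.count)  # the parent's copy is untouched: 1
```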
A Monad is any data structure with the following two methods:
(>>=) :: m a -> ( a -> m b) -> m b
pure :: a -> m a
(>>=) :: m a -> (a -> m b) -> m b
(>>=) :: Future a -> (a -> Future b) -> Future b
Future.andThen :: Future a -> (a -> Future b) -> Future b
pure :: a -> m a
pure :: a -> Future a
Future.always :: a -> Future a
map :: (a -> b) -> m a -> m b
-- if I know how to convert a's to b's I can convert a future a to a future b
Future.map :: (a -> b) -> Future a -> Future b
Like, given a list of elements and a way to create a future for each element, create a future that returns a list of elements:
forM :: (Monad m) => Array a -> (a -> m b) -> m (Array b)
Future.forEvery :: Array a -> (a -> Future b) -> Future (Array b)
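A rough Elixir analogue, with Task standing in for Future (a loose analogy, not a monad library): Task.async creates the "future" and Task.await_many collects a list of them, much like forM above.

```elixir
# For each element, start a task (a "future"), then collect the results
# back into a list -- roughly Array a -> (a -> Future b) -> Future (Array b).
results =
  [1, 2, 3]
  |> Enum.map(fn x -> Task.async(fn -> x * 10 end) end)
  |> Task.await_many()

IO.inspect(results)  # [10, 20, 30]
```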
:: Defines a new function signature. The left hand side is the function name, and the right hand side is the signature.
a -> b A function which converts something of type a to type b
m a A generic m of type a. In Java/C# this might be written as m<a>. eg, `Array String` means an array of string elements, and might be written in C# as Array<String>.
It might be helpful to define a function you already know:
string_to_int :: String -> Int
map :: (a -> b) -> Array a -> Array b
1. Takes a function that converts an 'a' to a 'b' (a -> b)
2. Takes an array of 'a'
3. Returns an array of 'b'
Let's say we pass in `string_to_int` as the first parameter; the type checker will then infer the following function type:
map :: (String -> Int) -> Array String -> Array Int
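For comparison, the same specialization in Elixir, using Enum.map and String.to_integer in place of the hypothetical string_to_int:

```elixir
# (String -> Int) -> Array String -> Array Int, specialized by the
# function we pass in as the first argument:
ints = Enum.map(["1", "2", "3"], &String.to_integer/1)
IO.inspect(ints)  # [1, 2, 3]
```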
In terms of concurrency, though, I've found the Elixir/Erlang actor-esque, process-based system with messaging much more intuitive when building complex programs, and much simpler to reason about, than the various solutions Haskell offers. But I'm also not an expert at Haskell so take that comparison with a grain of salt.
 - https://www.manning.com/books/elixir-in-action
I broke the deserialization up into 3 separate processes (1 for handling the I/O, 1 for packet deserialization, 1 for message deserialization). While this fixed all the issues (presumably because the scheduler can balance the processes more effectively, since no single process went over the reduction limit), it turned the effectively synchronous parsing process into a split async process and caused a lot of extra complexity around that.
I'm sure there's more to what caused the issue than just reduction limits but it also made me wary about the magic happening under the hood.
It's a key area where I think Go went wrong. Without the ability to kill a goroutine, you can't implement caller-enforced timeouts and canceling — anything that should be cancelable has to explicitly check for some signal sent by the controlling party. This is why Go ended up with Context, which infects everything it touches.
And you also can't create Erlang-style supervisor trees; you can't create self-repairing hierarchies of control. And you can't do things like use a debugger to attach to a process and terminate runaway goroutines.
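By contrast, here's a sketch of a caller-enforced timeout in Elixir: the caller kills the worker outright, and the worker's code never has to check a cancellation signal (the infinite sleep stands in for a runaway computation):

```elixir
# Start a "runaway" task that never yields control voluntarily.
task = Task.async(fn -> Process.sleep(:infinity) end)

# The caller enforces the timeout: wait 100 ms for a reply, then kill
# the task unconditionally. No cooperation from the task is needed.
result = Task.yield(task, 100) || Task.shutdown(task, :brutal_kill)
IO.inspect(result)  # nil -- the task was killed before producing a value
```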
At least for my operation it looked very much like I would really need to become a BEAM vm expert to make sure things ran smoothly (not just with that but other potential issues I had in the back of my mind as well).
From someone with zero experience in building concurrent systems: isn't this true for every sufficiently complex system? If your system is complex enough, then no general framework will help much, right?
Is it just that context switches are a lot cheaper with green threads than with OS threads?
The genius of Erlang is that instead of a per-machine design, it was built to be distributed from the start, so you have a lot fewer interdependencies when you want to scale across machines (or cores, which is where Erlang/Elixir shines at the moment). That was one of the design goals, since it was designed for the kind of software where that was a desirable quality (back when hardware was way slower than now): dropping a single call/circuit wasn't the end of the world if the rest stayed alive.
Lifted from the Wikipedia article:
Everything is a process.
Processes are strongly isolated.
Process creation and destruction is a lightweight operation.
Message passing is the only way for processes to interact.
Processes have unique names.
If you know the name of a process you can send it a message.
Processes share no resources.
Error handling is non-local.
Processes do what they are supposed to do or fail.
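A few of those points in runnable miniature (the registered name :ponger is invented for this sketch):

```elixir
# "Processes have unique names" / "If you know the name of a process
# you can send it a message":
pid =
  spawn(fn ->
    receive do
      {:ping, from} -> send(from, :pong)
    end
  end)

Process.register(pid, :ponger)

# "Message passing is the only way for processes to interact":
send(:ponger, {:ping, self()})

receive do
  :pong -> IO.puts("got pong")
end
```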
I'm not an Erlang/Elixir programmer (so I've no dog in the fight) but I've been fascinated by the language since I read about it in the early 90's in a programming journal, it seemed alien then and it seems alien now (much like APL).
Amongst my programmer friends Elixir seems to be really popular with Ruby programmers (I'm sure there is a reason but I'm not a Ruby programmer either so I couldn't tell you).
I just wanted to mention https://gigalixir.com which is built to solve exactly this problem.
Disclaimer: I'm the founder.
For other apps, anything running in GCP us-central1 will be on the same low-latency network, but some customers interop with AWS or Heroku just fine, albeit with slightly higher latencies.
Happy to help anyone out if they're interested in learning!
Note: I like Clojure, F#, and other functional languages, so that's not a hurdle.
The only similarity that Elixir honestly has with Ruby is clean syntax and a general emphasis on productivity. Everything else is different. Same with Phoenix. On the web side of things it has routes, controllers and views...but there's no magic involved. It's all very clear, explicit and flexible in how it does things. The view layer is honestly one of its biggest strong points, because the way it's handled is the core reason for all of those "My site was faster without caching" articles...but you don't have to work differently to enjoy those gains.
Big Nerd Ranch did a solid write up on the ins and outs of why.
This Choosing Elixir for the Code post that came out last month does a really good job of emphasizing a lot of it.
The Phoenix Channels implementation for websockets and the view layer for web requests are pretty phenomenal IMHO.
Both are useful libraries, but the fact that they work in server clusters as well as single-node deployments is the real kicker. That's just how Erlang rolls.
We're experimenting with a truly OTP-driven web application right now (no API) where we connect the React app directly to the Elixir backend and only persist to a database on log out. Everything else is persisted in memory and run through GenServers which are self-healing and self-hydrating.
The language and Erlang's virtual machine are generally a joy to work with, and I have no regrets pushing past the syntax.
Can I ask what you use on the front-end (if anything)?
There's not really an Elixir equivalent of what ClojureScript is to Clojure (there is an ElixirScript, but it's a work in progress and I'm curious whether it can ever capture the nice experience of using the Erlang VM). I've seen interest in the Elixir community in Elm and BuckleScript (OCaml) and other compile-to-JS languages, but I think a lot of people just use JS.
* Most of the Elixir community comes from the Rails world, so most of the libraries focus on the web.
* Coming from the Ruby world, a lot of these new developers tend to be ignorant of the underlying Erlang machinery, believing that Elixir is the magic that makes Erlang a "modern" and usable language, which is simply not true.
* The tendency to favor frameworks over libraries.
* I'm probably alone in this one, but I find the syntax off-putting. It adds lots of sigils and complexity to Erlang's very succinct syntax.
* In the end, I think Elixir adds very little besides the excellent tooling to the Erlang ecosystem.
This sounds very subjective, and doesn't agree with my experience. I don't know a single Elixir dabbler that doesn't give Erlang credit where credit is due. It's just that the syntax looks like Prolog and ML got into a nasty car wreck sometime in the 1980's, and the tooling is more like a box of parts that can be assembled into various tools. That's why I use Elixir instead, anyway.
> The tendency to favor frameworks over libraries.
I know of exactly one framework in Elixir; I assume you mean Phoenix. Phoenix is little more than a few blobs of glue around other, mostly core Elixir, libraries. The Elixir community at large is quite wary of language and framework magic. The fact that lots of us did time with Rails means we've been hurt by the downsides of that approach like few others. :)
> In the end, I think Elixir adds very little besides the excellent tooling to the Erlang ecosystem.
That's good enough for me!
But mostly you shouldn't expect people to grasp functional programming, concurrency and OTP when they are just getting started.
I agree with most of your points, from what I've seen, except the last one.
One thing that Elixir seems to get right is a sane string implementation out of the box.
Not perfect, but it does allow you to leverage the advantages of Elixir (e.g. Phoenix and the related web-dev community) while still letting you get most of your work done in Erlang.
Even if you don't like Elixir, it brings more to the table than just syntax and macros, such as tooling, Unicode support, protocols, better error messages, better debugging, tasks, etc.
If Elixir is not your cup of tea, definitely Erlang all the way.
No, you're not wrong. Elixir also supports meta-programming, whereas Erlang does not. I like both languages though, primarily due to the BEAM VM. You can't go wrong with either one.
Dynamically typed, functional language with immutable data structures and a strong philosophy about how to best handle concurrency and scaling.
The key difference is JVM vs. Erlang VM as the base and all that implies.
Personally, I like Elixir better than Clojure because memory in the BEAM VM is isolated. The JVM uses shared memory, which has caused me issues before. I also like the tooling and build tools for Elixir a lot better. Clojure is weak in this area, I think. I like Elixir's syntax much better too. For me, it feels a lot cleaner and easier to type, but that is subjective.
Regarding Rails vs. Phoenix, I cannot really speak to that. I don't do monolithic apps anymore, so I don't use Rails or Phoenix. Elixir has a library called Plug built into it that lets me easily make REST APIs for micro-services, so that is what I typically use. For me, this eliminates a lot of the issues I had with Rails.
I came to Elixir from Scala but also having a Rails background. I was quite sick of Railsy magic, because I'd gone deep with the Ruby metacrap and I got to taste the other edge of the sword. Elixir was a breath of fresh air.
If you enjoy concurrent programming with nice syntax and few surprises, you might end up liking it. Elixir is basically Erlang with modern syntax and vastly improved tooling.
Any interest in qualifying that opinion?
Superficially, `do ... end` blocks look like English, because they are English words. On the other hand, they don't really correspond to any real grammar construct of the English language. They live in the uncanny valley where you try to read it like English, but you can't actually read it as such. For some reason, the use of keywords like `def`, `defmodule` or `defmacro` doesn't bother me as much. Maybe because they are not block delimiters?
In my opinion, braces, delimiters or parentheses create less visual noise than endless cascades of `end` keywords:
def f(x) do
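A contrived sketch of the kind of cascade being described (hypothetical module):

```elixir
defmodule Cascade do
  def f(x) do
    case x do
      :ok ->
        Enum.map([1, 2], fn y ->
          y + 1
        end)  # end of fn
    end       # end of case
  end         # end of def
end           # end of defmodule
```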
In general there is too much syntax sugar for things that don't really benefit from it.
Parentheses in function calls are optional in some places but not others, which creates some unnecessary confusion.
The syntax for anonymous functions is
fn arg -> ... end
The language is subtly whitespace sensitive in some places, while being mostly whitespace insensitive.
Macros can't take a variable number of arguments, which leads to the proliferation of special forms (a kind of macro that is treated as special by the compiler) which could perfectly well be macros if it weren't for this limitation.
For a situation where this causes problems and requires some extra weirdness in what should be a simple DSL, look at this Github issue in the new testing library scheduled for inclusion in the standard library: https://github.com/whatyouhide/stream_data/issues/21
This is not true.
We have only two variadic special forms: "for" and "with".
"for" needs to be implemented as a special form as it emits some optimizations that are only available at the Core Erlang level.
"with" needs to be implemented as a special form as it has different lexical properties than a bunch of nested cases.
None of those could be implemented with macros.
Regarding whitespace sensitivity, most languages have subtle issues to varying degrees, especially if you have optional line terminators. Although it is true one or two extra cases appear in Elixir due to optional parentheses.
That's interesting. I wonder if a future Elixir version could have a way of defining macros which could emit these optimizations. Is this a likely direction for future developments?
> "with" needs to be implemented as a special form as it has different lexical properties than a bunch of nested cases.
Could you expand on this a little? What are those different lexical properties?
To do so, we would need to compile to Core Erlang, and that means tools like Cover and the Erlang debugger would no longer work with Elixir. It is unlikely we will go in this direction.
> Could you expand on this a little? What are those different lexical properties?
If you compile a "with" to a bunch of cases, a variable from the outer case would be available to both success and failure cases. Take this code:

    a = true
    with true <- (a = false) do
      a
    else
      _ -> a
    end

Here the else branch sees the outer `a` (true). But if "with" were compiled to a case:

    a = true
    case a = false do
      true -> a
      _ -> a
    end

both branches would see the rebound `a` (false).
There're other things I don't love, but that's a good chunk. Philosophically, I prefer explicit code to implicit (so, Rails is out). And I prefer functional programming to OOP in general, and find that Ruby greatly favors mutable OOP. I've had a tough time navigating around the Ruby parts of most code-bases I've found, largely due to the implicit nature, and annoying language features that allow clever generation of methods in a way that makes them impossible to search for (delegate ... prefix: true for example).
"Holy @#$% that's amazing." - notamy
There are ways to set up these systems in symbiotic configurations, like having an Elixir/Erlang/BEAM cluster that has auto-joining thanks to external service discovery. However, to use the hot-swap functionality, you are going to be working with the BEAM VM directly, which means going through the Docker container boundary.
Know that if you choose to go with BEAM and no Docker, you will still have access to amazing deployment features (like hot swapping, as you mentioned), but also clustering, service discovery, monitoring, etc. You're just going to be spending a lot more time at the command line without the pretty interfaces and the sleek modern tools.
To get a better understanding of running BEAM, I highly recommend "Designing for Scalability with Erlang/OTP". You'll have to learn Erlang to get through it, but IMO it's the best introduction.
Can you elaborate on this statement? My understanding is that you can have these things (minus hot swapping) cleanly on Docker.
Additionally, most of the cluster nodes run on Spot Instances in AWS so they are relatively inexpensive also. When a new instance comes up it connects to the cluster and starts serving requests. When an instance is killed, traffic is routed to the remaining nodes. Works great.
Debian packages are easy to work with and we keep them as our primary artifact for any given release.
The number of cluster nodes varies between 10 and 35 depending on the workload at the time.
Nodes join the cluster by contacting one of the three "static" nodes that are not spot instances. If they can talk to at least one of those they get knowledge of the entire cluster. When one of the spot instances is going to be killed there is a notice in the instance metadata. The nodes just watch for that and initiate a normal shutdown if they see it.
Yes we do use ELBs but just as TCP proxies and for TLS termination.
Code hot swapping is not really Docker friendly (obviously). I haven't done that, but work's general philosophy has been to have recoverable processes, such that starting up the app will bring up the correct processes loaded with the right data. It's definitely possible to set up hot swapping, but there is going to be a learning curve to it.
Sure, a telco switch that has to be up 24x7, hot swapping makes a lot of sense. Most applications can spare the brief downtime to reload the old-fashioned way (or have multiple servers to allow upgrades to be done without downtime).
Code hot swapping can often trivially be done (by simply compiling and copying the fixed module, then 'l(module)' in the shell) if the module does not change the state record or break function signatures. I have used this countless times to fix a module function with no downtime. You don't always need the full code upgrade procedure, which is indeed far more onerous.
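You can see the BEAM's runtime code replacement even in a plain script: redefining a module swaps in the new version for subsequent fully-qualified calls (a toy stand-in for the recompile-and-l(module) workflow described above; the module name is made up):

```elixir
defmodule Greeter do
  def version, do: 1
end

IO.inspect(Greeter.version())  # 1

# "Deploy" a fixed version of the module at runtime; the next
# fully-qualified call picks up the new code. (The compiler warns
# about the redefinition, which is expected here.)
defmodule Greeter do
  def version, do: 2
end

IO.inspect(Greeter.version())  # 2
```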
Getting hot code reload to work is harder, but for a "normal" basic deployment it's just this. App restarts take less than one second of downtime for my app, and that's a cost I'm willing to pay.
You shouldn't do this by hand, of course. Write an Ansible playbook, or something else that does this for you.
As a Ruby developer from Rails 2 through Rails 5: about a year ago I made the switch from monolithic applications to an Elixir API (no Phoenix) with ReactJS front ends. Match made in heaven.
When I was still using python, I spent days (weeks?) reading about the problems, looking for design patterns that might work for my use case, and finally quit in frustration.
You can deploy Elixir exactly the same as any other language. In some cases, it just means making a decision that you don't need some of the extra features that are available like hot reloading...which every project doesn't need.
You can still use immutable images and take advantage of the built in distributed databases by using mounted block storage if need be.
You can use everything out there. Early on figuring out some of the port mappings and how to handle things was more difficult but as far as I've seen, those problems have mature solutions all around now.
Just think of it in the same way that you don't need every available jar file for a Java project. The capabilities of the system are expansive, but not always called for.
Distillery is the main deployment-related library out there for Elixir and reduces hot reloads to an argument on the build.
I have not started looking into Phoenix as I'm still exploring Elixir, but I'm happy to have started learning Elixir with the koans along with the official Elixir guide.
You won't be disappointed and you'll be surprised how many times you'll want to reach out to auxiliary services and find out "Oh I can just use ETS/OTP/GenServer/spawn".
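For example, a tiny in-memory cache that might otherwise send you reaching for Redis is a few lines of ETS (the table name :cache and the stored value are made up):

```elixir
# Create a named, in-memory key-value table owned by this process.
:ets.new(:cache, [:set, :named_table])

:ets.insert(:cache, {"user:1", %{name: "Ada"}})

case :ets.lookup(:cache, "user:1") do
  [{_key, value}] -> IO.inspect(value.name)  # cache hit: "Ada"
  [] -> IO.puts("miss")
end
```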
some_function() |> IO.inspect |> some_other_function
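IO.inspect also accepts a :label option and returns its argument unchanged, which is what makes it pipeline-friendly:

```elixir
# Tap into a pipeline at any stage without changing its result.
result =
  [3, 1, 2]
  |> IO.inspect(label: "before sort")  # prints: before sort: [3, 1, 2]
  |> Enum.sort()
  |> IO.inspect(label: "after sort")   # prints: after sort: [1, 2, 3]

IO.inspect(result)  # [1, 2, 3]
```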
Also with Elixir 1.5 (OTP 20) you can set breakpoints easily from within IEx.
There is also IEx.pry where you can set the "breakpoint" in the code itself.
There is the observer tool, where you can inspect each process and see a lot of information that can be useful. When in IEx just type :observer.start and try it out yourself.
I suggest reading Designing for Scalability with Erlang/OTP, particularly the chapter about Erlang/Elixir tracing facilities that can be really useful when debugging live production systems.
Absolutely not true for Rails. In an Elixir app with Phoenix my response times are anywhere from 0.2 to 12 ms while the Rails app that uses the same database has response times anywhere from 300 to 2000 ms.
(Before you ask: I did go the extra mile and rewrote one of the small Ruby on Rails apps from my job in Elixir & Phoenix, and I maintain it as a feature-complete clone, just so I can have objective data.)
If we compare web app stacks, Elixir & Phoenix app is orders of magnitude faster than Rails in particular and this is not an exaggeration -- can't talk for vanilla Ruby though, I've had positive experiences with mid-sized Sinatra apps; but they still weren't capable of more than 20-30 requests/sec if you are looking for consistent throughput.
I truly like Go but to me it has always been a much better choice for highly performant microservices. I tried 3 separate web frameworks 12-16 months ago and I simply concluded Go isn't a good fit for a full web app -- at least compared to the conveniences and dozens of other advantages of an Elixir & Phoenix app. Go is definitely better in several other areas.
TL;DR -- Rails is very far from "fast enough". It's very okay for internal apps used only in business teams where the load won't ever be more than 20 requests a minute.
Also yeah, the sexiness; I get the sense that Phoenix may be a better platform straight up.
The basic unit of Elixir (and Erlang for that matter) deployments is the release. A release is just a tarball containing the bytecode of the application, configuration files, private data files, and some shell scripts for booting the application. Deployment is literally extracting the tarball to where you want the application deployed, and running `bin/myapp start` from the root of that folder which starts a daemon running the application. There is a `foreground` task as well which works well for running in containers.
My last Elixir gig prior to my current one used Docker + Kubernetes and almost all of our applications were Elixir, Erlang, or Go. It was extremely painless to use with releases, and our containers were tiny because the release package contained everything it needed to run, so the OS basically just needed a shell, and the shared libraries needed by the runtime (e.g. crypto).
My current job, we're deploying a release via RPM, and again, releases play really nicely with packaging in this way, particularly since the boot script which comes with the release takes care of the major tasks (start, stop, restart, upgrade/downgrade).
There are pain points with releases, but once you are aware of them (and they are pretty clearly documented), it's not really something which affects you. For example, if you bundle the Erlang runtime system (ERTS) in a release, you must deploy to the same OS/architecture as the machine you built the release on, and that machine needs to have all of the shared libraries installed which ERTS will need. If you don't bundle ERTS, but use one installed on the target machine, it must be the same version used to compile your application, because the compiled bytecode is shipped in the release. Those two issues can definitely catch you if you just wing a deployment, but they are documented clearly to help prevent that.
In short, if there was pain experienced, I think it may have been due to the particular tool they used - I don't think deployment in Elixir is difficult, outdated, or painful, but you do have to understand the tools you are using and how to take advantage of them, and I'm not sure that's different from any other language really.
Disclaimer: I'm the creator/maintainer of Distillery, the underlying release management tooling for Elixir, so I am obviously biased, but I also suspect I have more experience deploying Elixir applications than a lot of people, so hopefully it's a wash and I can be objective enough to chime in here.
"Go’s goroutines are neither memory-isolated nor are they guaranteed to yield after a certain amount of time. Certain types of library operations in Go (e.g. syscalls) will automatically yield the thread, but there are cases where a long-running computation could prevent yielding."
goroutines also take up about 10 times the memory.
I thought an erlang process takes up at least 309 words of memory? That would make it <4x on a 64 bit system?
I'm curious about the first table in the "Interop with other systems" part.
It seems to say that an Erlang deployment doesn't need Nginx or an HTTP server; does anybody know how that works?
EDIT: I read the cited source (https://rossta.net/blog/why-i-am-betting-on-elixir.html) and it seems that is the case.
It looks almost too good to be true. It would be nice if somebody with Erlang deployment experience could comment.
The main web framework for Elixir, Phoenix, is just something that bundles Cowboy (the web server) with some utilities on top of it. No need for Nginx.
I do use Nginx in my side projects in front of my Elixir web app, but that's because of some conveniences Nginx brings (having Nginx is probably making my requests a little slower).
EDIT: I could drop Nginx and use "raw" Cowboy (the web server behind Phoenix), but Nginx makes it easier to set up TLS certificates and I'm lazy enough to keep it around just because of that...
I'm not sure how these compare to Redis in terms of performance/latency but Mongo is more of a "proper" database than any of these.
The official Elixir tutorial has an OTP chapter in which you are encouraged to actively kill processes and observe them being restarted with good state in real time:
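A condensed sketch of that exercise (my own toy version, not the tutorial's code): kill a supervised process and watch the supervisor bring up a fresh one.

```elixir
# A worker whose only job is to hold some state.
defmodule Counter do
  use Agent
  def start_link(_), do: Agent.start_link(fn -> 0 end, name: __MODULE__)
end

{:ok, _sup} = Supervisor.start_link([Counter], strategy: :one_for_one)

old_pid = Process.whereis(Counter)
Process.exit(old_pid, :kill)  # actively kill the process...
Process.sleep(50)             # ...and give the supervisor a moment

new_pid = Process.whereis(Counter)
IO.inspect(Process.alive?(new_pid))  # true -- restarted with fresh state
IO.inspect(new_pid != old_pid)       # true -- it's a brand new process
```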