Managing two million web servers (joearms.github.io)
388 points by timf on Mar 14, 2016 | 119 comments



OMG, guys, this is getting really strange. Half of the commenters here read the word "process" and jumped to their own conclusions, possibly true in general, but obviously wrong in the case of Erlang.

It bears repeating: Erlang processes are not OS-level processes. The Erlang virtual machine, BEAM, runs in a single OS-level process. Erlang processes are closer to green threads, or tasklets as they are known in Stackless Python. They are extremely lightweight, implicitly scheduled, user-space tasks which share no memory. Erlang schedules its processes on a pool of OS-level threads for optimal utilization of CPU cores, but this is an implementation detail. What's important is that Erlang processes provide isolation in terms of memory and error handling, just like OS-level processes. Conceptually the two kinds of processes are very similar, but their implementations are nothing alike.
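To make "extremely lightweight" concrete, here is a minimal sketch (module name is mine) of spawning a large number of isolated processes and waiting for each to report back:

```erlang
%% A sketch, not production code: spawn N isolated processes and
%% wait for each one to report back. On BEAM a fresh process costs
%% only a few hundred words of memory, so N can be very large.
-module(many).
-export([run/1]).

run(N) ->
    Parent = self(),
    Pids = [spawn(fun() -> Parent ! {self(), done} end)
            || _ <- lists:seq(1, N)],
    %% Selective receive: wait for each pid's message in turn.
    [receive {Pid, done} -> ok end || Pid <- Pids],
    length(Pids).
```

Calling `many:run(100000).` in a shell just works; try doing that with OS processes or pthreads.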


Thanks. That wasn't really clear from the article.


By the same logic, you could say Facebook manages billions of PHP web servers, though no one speaks like that. (PHP has a shared-nothing architecture; HHVM works similarly to the Erlang VM, if one can say so.)


Exactly. I also do not understand the comparison with Apache. Apache can be configured to spawn one process per connection. You could also call a thread a "server" in its own right, so it's just a matter of playing with words.

Also, if an Apache process dies, it will not crash other requests.


Apache was a bad analogy. There is a kind of program that needs to be multi-process and fault-tolerant, where the default failure mode of badly-written code is to throw away more state than just your own session; but it's not a plain web server that works like that.

Instead, it's a "web application server", like Puma is for Rails apps: a server that runs your code in its own address space, where your code can crash the active worker process (which is usually handling multiple requests) for that server.

Puma ordinarily runs around eight worker processes, and a crash kills one of them, forcing it to take the time to re-fork and discarding any other sessions that were also scheduled on that worker. Erlang runs a million worker processes, and one of them crashing doesn't cause anything big to happen at all.


The web application server as such is a foreign concept to the PHP world. It's actually PHP-FPM that acts as the web application server in PHP's case. Incidentally, PHP only has the "shared-nothing" architecture partisans are so proud of talking up when you don't run it in PHP-FPM mode.


> Also if an apache process dies it will not crash other requests

Can you have 2M Apache processes on your machine?


I suppose this depends on your machine.


What if there was a model where it depended less heavily on your machine because of over 20 years of development with the goal of providing stable, robust systems aimed specifically at allowing you to run many things in parallel? What if the framework/API for it was developed for production systems from the beginning, meaning its API was made singularly for being able to easily spawn, manage and control these processes?

The whole point of this article is that there is an architecture behind this and that this architecture is believed to be generally useful.


Well, that would depend on how long you run it for. They don't have to be simultaneous. It wouldn't make much sense to recycle the process for each request, however.

MaxConnectionsPerChild defines how often the process is recycled


> They don't have to be simultaneous.

Now you're just moving the goalposts. We're talking about Erlang processes - you can have millions of them simultaneously.

> It wouldn't make much sense to recycle the process for each request however.

It does make a great deal of sense from the fault-tolerance perspective. It also helps security.

I have an unpleasant feeling that you don't know much about Erlang and its philosophy. The "one process per connection" architecture has many benefits; it's a canonical way of getting concurrency and parallelism on Unix systems, for example. The problem is the way OSes handle and schedule processes: it simply doesn't work with a very large number of them. Erlang implements its own kind of processes, which enforce the same constraints as OS-level processes without taking megabytes of RAM each and without buckling after you spawn a few thousand of them.
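For illustration, the "one process per connection" shape is only a few lines in Erlang. A minimal TCP echo server sketch (module name and port are mine, not from any real project):

```erlang
%% Sketch of the canonical "one process per connection" architecture.
-module(echo_srv).
-export([start/1]).

start(Port) ->
    {ok, Listen} = gen_tcp:listen(Port, [binary, {active, false},
                                         {reuseaddr, true}]),
    accept_loop(Listen).

accept_loop(Listen) ->
    {ok, Sock} = gen_tcp:accept(Listen),
    %% One Erlang process per connection; a crash in handle/1 takes
    %% down this one client, never its siblings.
    Pid = spawn(fun() -> receive go -> handle(Sock) end end),
    %% Hand socket ownership to the new process before it reads.
    ok = gen_tcp:controlling_process(Sock, Pid),
    Pid ! go,
    accept_loop(Listen).

handle(Sock) ->
    case gen_tcp:recv(Sock, 0) of
        {ok, Data}      -> ok = gen_tcp:send(Sock, Data), handle(Sock);
        {error, closed} -> ok
    end.
```

With OS processes this design stops scaling at a few thousand connections; on BEAM the same shape keeps working into the millions.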


What I'm saying is that it's misleading to call one process a web server when they really are saying "how we handle 2 million simultaneous web requests". Also, I would be very interested in reading such a writeup. There are many different approaches to dealing with it (e.g. one thread per connection, one process per connection, a poll-based approach). I certainly can see the benefits of having it in lightweight threads called processes in this context.


> What I'm saying is that it's misleading to call one process a web server when they really are saying [...]

No. It's simply the terminology used in Erlang; there's nothing misleading in it once you know the definition. I mean, there is a very specific thing called a "server" in Erlang: http://erlang.org/doc/man/gen_server.html and "processes" are described here: http://erlang.org/doc/reference_manual/processes.html

What's highly misleading is using more general definitions of such words instead of Erlang-specific meanings. This is a conversation about Erlang and it makes sense to use the terms accepted in Erlang; you can't blame anyone for the fact that some of these terms have meanings different than you'd expect.


It's misleading because there are now 2 different definitions of the exact same word at work. It's a computer running a process running an Erlang VM that is then running multiple Erlang processes. It is completely reasonable to be annoyed at that. Many languages have a concept of threading in userspace rather than kernel space, but they don't just call it threading; they give it another name so that it can be distinguished from the existing concept. For example, Stackless Python's tasklets, Go's goroutines, green threads.


Well, you can't really expect to hold a meaningful conversation about a language without knowing, at least, its basic concepts. With Erlang "processes" are front and center; you read about them in every tutorial and reference, in most blogposts, basically everywhere Erlang related, ad nauseam. So you're guaranteed to understand what kind of processes we're talking about if you have any familiarity with it at all.

And there is a good argument in favor of calling them "processes". Conceptually, both provide very strong separation/isolation and cleanup guarantees, which is not true for any of the other constructs you mentioned. The range of operations you can perform on a process - spawning, monitoring, sending signals - is the same for both OS and Erlang processes. You can very much think about and use both kinds in exactly the same way, other than the rather excessive memory usage of the OS-level kind.

I actually have a somewhat related talk tomorrow, about - in essence - replacing Celery in a Python project with Elixir. Integration via ErlPort makes it natural to think of Python worker processes as simply (Erlang) processes - you spawn and link and monitor them just like you would normal (Erlang) processes. It even works with Poolboy out of the box.


On the other hand, the exact same comparisons hold true for threads vs. green threads. They are extremely similar, they perform almost identically to threads in many languages, and yet green threads have their own name, as they are a different thing. I can understand that Erlang is built around calling them just "processes", but that's, in my opinion, a bad idea. To me, the fact that Erlang developers insist on co-opting the name "process" seems arrogant.


Time out. Let's go back to the late 80s, when Erlang was written. It ran on Ericsson's switches. The platform needed to describe a unit of concurrency that provided execution and memory isolation. Process fits that definition perfectly.

Fast forward to today. Look at projects like LING, where the Erlang VM runs directly on Xen, without a traditional operating system. Processes in Erlang are 100% analogous to OS processes.

Also look at something like Virtualbox or VMware. Is your process running in a VM less of a process than the OS process powering the VM itself?


The slides from the mentioned talk, in case someone is interested: https://klibert.pl/statics/python-and-elixir/


Jerfs comment here pretty much sums up my thoughts and what I was trying to argue (definitely in a much better way): https://news.ycombinator.com/item?id=11282501


>it's misleading to call one process a web server

How is that even slightly misleading? That's exactly what many web servers on erlang do.


Erlang processes aren't threads, and it's misleading (or at least misguided) to use that terminology for them. [1]

[1] http://stackoverflow.com/a/32296577


Calling them processes is even more misleading. A process has a specific meaning on most operating systems. Erlang "processes" should, if anything, be referred to as "green processes" or "lightweight processes", as to do otherwise leads to dangerous comparisons, such as implying that a server running an Erlang server has 2 million processes running, when in reality it has 1 (or several) OS processes, running a VM that is in turn running 2 million Erlang "green processes". And while "threads" is incorrect, Erlang processes are closer to green threads than to any other concept.

    The Erlang virtual machine has what might be called "green processes" – they are like operating system processes (they do not share state like threads do) but are implemented within the Erlang Run Time System (erts). These are sometimes referred to as "green threads", but have significant differences from standard green threads.


> Well that would depend how long you run it for. They don't have to be simultaneous.

The long way of saying "No, I can't, but I really don't want to accept that there might be a better model for this than my out of the box Apache setup."


Theoretically yes. Pragmatically no. There comes a point when it makes more sense to scale sideways (more OS instances) instead of up (more processes on beefier hardware).

It's no secret that Apache isn't the leanest of HTTP daemons, but that's really separate from the argument over whether you can reasonably call a process a web server.


That's very interesting. By "shared nothing" here, do you mean that HHVM has a pool of N threads, and each thread runs PHP code sharing nothing with the other threads, reducing thread locking to zero?


I really love the idea of explaining the actor model as tons of tiny little servers compared to a single monolithic server. I tried to make the same comparison recently when I talked about adding distributed transactions to CurioDB (Redis clone built with Scala/Akka): http://blog.jupo.org/2016/01/28/distributed-transactions-in-...


What was the motivation to rebuild it in Scala?


If each key/value in the DB is treated as distributed (as per the actor model), a lot of the limitations Redis faces in a distributed environment are solved. I wrote a lot more about it here: http://blog.jupo.org/2015/07/08/curiodb-a-distributed-persis...


Beautiful way of putting it. Also very close to Alan Kay's vision of "object oriented"

"In computer terms, Smalltalk is a recursion on the notion of computer itself. Instead of dividing “computer stuff” into things each less strong than the whole – like data structures, procedures, and functions which are the usual paraphernalia of programming languages – each Smalltalk object is a recursion on the entire possibilities of the computer. Thus its semantics are a bit like having thousands and thousands of computers all hooked together by a very fast network." -- The Early History of Smalltalk [1]

I also personally like the following: a web package tracker can be seen as a function that returns the status of a package when given the package id as argument. It can also be seen as follows: every package has its own website.

I think the latter is vastly simpler/more powerful/scalable.

What's interesting is that both of these views can exist simultaneously, both on the implementation and on the interface side.

[1] http://gagne.homedns.org/~tgagne/contrib/EarlyHistoryST.html


> Also very close to Alan Kay's vision of "object oriented"

Yes! I remember reading a quote by Alan Kay about how pretty much every modern OOP language gets it entirely wrong, because OOP is supposed to be - in Kay's view - not about classes or inheritance, but about objects sending messages to each other.

If one thinks of processes as objects and sending messages as "method calls", it is very object-oriented indeed.


Yep. Except: please don't think of them as "method calls" :-))

"Smalltalk is not only NOT its syntax or the class library, it is not even about classes. I'm sorry that I long ago coined the term "objects" for this topic because it gets many people to focus on the lesser idea.

The big idea is "messaging" -- that is what the kernal of Smalltalk/Squeak is all about (and it's something that was never quite completed in our Xerox PARC phase). The Japanese have a small word -- ma -- for "that which is in between" -- perhaps the nearest English equivalent is "interstitial". The key in making great and growable systems is much more to design how its modules communicate rather than what their internal properties and behaviors should be." [1]

My contribution towards this is Objective-Smalltalk[2], where I am working on making connectors user definable. So far, it seems to be working.

[1] http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-...

[2] http://objective.st/


Yes, that must have been the one I was thinking of! :)

I was very happy a couple of years back when I was building a toy project in C and Lua to come across a Lua library called concurrentlua[1] that implemented Erlang-style message passing (without pattern matching, though, IIRC). It even allowed you to send messages across machines.

[1] https://github.com/lefcha/concurrentlua


> The Japanese have a small word -- ma -- for "that which is in between" -- perhaps the nearest English equivalent is "interstitial".

間 means "a gap", but I don't see the relation to programming. Try 絆 and 繋がる ("bonds" and "connect") out if you'd like some words popular in cheesy pop songs!


I get the feeling that people reading this and saying "it's just kind of like a pool of PHP FastCGI instances or Apache worker pools" do not understand that Phoenix + Elixir can serve minimal dynamic requests only about 20% slower than nginx can serve static files. This is very, very fast.

It also leads to better code due to being functional, with lots of amazing syntactic sugar like the |> operator, and OTP easily lets you move processes (which have very little overhead) to different machines as you wish to scale. Pattern matching and guards are also incredible.

I really do not want to write anything else!


And I think Erlang programmers sometimes, if not frequently, forget that Erlang is not magic. It runs on the same CPUs as everybody else, with the same access to memory protection, the same assembler, etc., and consequently it can't actually do anything that many other languages can't do too. (And the languages that can't do it only can't because they've somehow locked themselves out of it.)

Yes, it's a great default to be shared-nothing, and it's great to have a VM that supports this; the differences in affordances are important, and I think Erlang was a milestone language. Very serious about that. But when it comes down to it, the practical difference between nginx (as in, a finished piece of software that exists right now, not the hypothetical space of future C programs) and an Erlang web server is not much.

So when the Erlang community tries to rewrite the definition of "server" to be "an Erlang process", it is not an unreasonable response to point out that there are plenty of other web servers that have similar levels of isolation that just happen to be written in other languages, and that we don't run around saying "Oh, my nginx has two hundred thousand web servers in it!"

This is bad advocacy, and I'd really suggest that Erlangers stop trying to defend this point. It's not a defensible position. There's no reason to try to redefine "server" to be "the number of isolated processes", because even if you do, Erlang will not have some sort of unique claim to being able to run lots of such processes. Any two "processes" that don't write into each other's space and can't crash each other are "isolated", even if some implementation work had to be done to get it that way. And of all things, web servers are the definitive programs that have had that isolation bashed on and banged on to the nth degree; I wouldn't be surprised if nginx's isolation is more battle-tested than Erlang's own.


> And I think Erlang programmers sometimes, if not frequently, forget the Erlang is not magic, it runs on the same CPUs as everybody else with the same access to memory protection, same assembler, etc., and consequently, it can't actually do anything that other many languages can't do too. (And the languages that can't do it only can't do it because they've somehow locked themselves out of it.)

Well, due to Turing-completeness your parenthetical comment is wrong (since any language can do anything any other language can, if only by embedding an interpreter for the other), and your first comment isn't really relevant: sure, it's possible to write code which implements Erlang's ideas in C, or in assembler, or in AppleScript for that matter — but is it probable? Is one fighting the language, syntax and semantics every step of the way, or are they helping you along? Is it even possible to see the forest for the trees when writing Erlang-style CSP code in C?

No, Erlang is not magic, and yes, it's possible to write an Erlang emulator in any language. But there are huge advantages to using a language meant to do something rather than a language which can be made to do something.


"it can't actually do anything that other many languages can't do too."

Yes, we are all familiar with Turing equivalence, but it's an exceedingly pedantic point when discussing programming languages.


"it runs on the same CPUs as everybody else with the same access to memory protection, same assembler"

I'm not referring to Turing equivalence at all. I'm 100% discussing practical considerations here.


Why are you focusing so much on process isolation? It's easy and natural for an Erlang process to remain alive indeterminately and serve arbitrarily many requests to arbitrarily many clients, potentially communicating with many processes in its lifetime, both on its own machine and on other machines. That's a lot more like a web server than Nginx processes are. That's what I use it for, and I wouldn't even know where to begin to get Nginx to do it. I hope you won't claim it's just as easy and natural.

Armstrong was making an analogy, and it's useful to be charitable with analogies to get the most out of them. All analogies, descriptions, models and metaphors break down somewhere. He's been doing pretty well as an Erlang advocate so far.


Let's try to sum up your points so they can be addressed 1 by 1. I haven't used Erlang but I have done a fair bit of Elixir now.

"it can't actually do anything that other many languages can't do too"

I personally think it's really difficult to scale systems without losing messages. Having OTP available through GenServer and GenEvent in Elixir is amazing and provides a nice way of creating your own servers. A good example of these primitives in use is the Redis driver; it uses two processes - one to send messages to Redis and one to receive them. This means that essentially all messages going to Redis are sent in a fire-and-forget way. Most other clients (even in Erlang) block and wait for a response from Redis.

Now I am not saying that it's impossible to write this stuff well in other languages/virtual machines but I am saying that Erlang/Elixir/the OTP makes it easier.

"Yes, it's a great default to be shared-nothing"

I'm not certain you could call Elixir shared-nothing. It's simply wrong; I'd look at the Elixir primitive/server called Agent, which is a specific construct for sharing state.

"it is not an unreasonable response to point out that there are plenty of other web servers that have similar levels of isolation"

I'm not certain you understand what OTP gives you over a web server or an operating-system process. It's much more about having a message-queue system baked in and easy to use, one that happens to be supervised and can be moved to different machines transparently.

"This is bad advocacy, and I'd really suggest that Erlangers stop trying to defend this point"

You are arguing against a straw man here. Maybe the original poster is a bit guilty of this, but I couldn't care less about the number of processes. I'm much more concerned that Elixir allows me to think about building better and more scalable software with a lot of edge cases solved for me. Once you get that every section of your program can be thought of as a server which you can send messages to, that this is super efficient, and that you don't need to wait for anything to block anymore, it really is a better model.

"Erlang will not have some sort of unique claim to being able to run lots of such processes"

It can definitely be done in other languages but it's much harder.

"I wouldn't be surprised that nginx's isolation is more tested than Erlang itself's."

It's never been about isolation; it's about not blocking and sending messages instead, and about getting supervision, robustness, fault tolerance, and ease of use. Yes, you could put 300 Rabbit queues in front of every part of your program, but I'd say it's easier and more scalable to just use Elixir/Erlang/OTP.
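The fire-and-forget vs. blocking distinction I mean can be sketched as a plain Erlang gen_server (module and function names are mine, a toy, not the actual Redis driver):

```erlang
%% cast/2 returns immediately (fire and forget); call/2 blocks until
%% the server replies. The Redis-driver trick above is essentially
%% "use cast on the sending side".
-module(fnf).
-behaviour(gen_server).
-export([start_link/0, send_async/1, queued/0]).
-export([init/1, handle_call/3, handle_cast/2]).

start_link() ->
    gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).

send_async(Msg) -> gen_server:cast(?MODULE, {send, Msg}). %% non-blocking
queued()        -> gen_server:call(?MODULE, queued).      %% blocking

init([]) -> {ok, []}.

%% Casts just accumulate; the caller never waited for this to run.
handle_cast({send, Msg}, State) -> {noreply, [Msg | State]}.

%% Calls get a synchronous reply.
handle_call(queued, _From, State) -> {reply, length(State), State}.
```

Because a gen_server processes its mailbox in order, casts from the same caller are guaranteed to be handled before a later call, so the pattern stays predictable despite being asynchronous.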


"I haven't used Erlang but I have done a fair bit of Elixir now."

I've programmed in Erlang for somewhere around 6 years. Which really makes things like "I'm not certain you understand what the OTP is giving you" rather silly. [1]

You're falling into the exact trap I was talking about, which is implicitly assuming there's some sort of magic sauce that only Erlang brings to you. You also are so busy disagreeing that you didn't notice that I already said pretty much everything you said. (What is it about Erlangers explaining back my own points to me, only angrily as if I didn't get the points I just made? I get this more from Erlang than any other community.) I'm not kidding about Erlang being a milestone language. But it's not magic.

"You are arguing against a straw man for me here."

What I'm arguing against is Joe Armstrong's attempt to redefine terms to suit his advocacy. If that's a "straw man", take it up with him, not me.

[1]: See also http://www.jerf.org/iri/post/2930 . And for goodness' sake, please do not reply with another laundry list of the points I already made in that post about how that isn't exactly what Erlang has back to me as if I didn't make them already. I've already been through that. (You know the differences because that post tells you exactly what they are.)


All I said was Elixir/Erlang/OTP makes it easier to write scalable, fault tolerant software with explanations as to why.

I'm glad you clearly get the OTP but that is definitely not clear to me from your previous response. Good luck.


This article got me to go figure out precisely what Erlang processes are. Tl;dr - they aren't OS processes. So it is still conceivable that an error in Erlang can bring down all your web servers.

http://stackoverflow.com/questions/2708033/technically-why-a...


An error in the BEAM VM (or a NIF) could lead to a crash of the OS process, and the related termination of all of the Erlang processes within. However, the BEAM VM is pretty robust; because of the design, it turns out to be quite small and relatively simple[1], so you don't tend to end up with things where weird edge cases cause terrible crashes that are hard to find. For NIFs you write yourself, it's up to you :)

Process isolation isn't complete, though: one Erlang process can exhaust any of several limited resources and end up with the whole VM shutting down. A single process can't use all the CPU, but it can use all the memory, all the atoms, all the file descriptors, all the ETS tables, etc.; basically everything in the system limits[2].

[1] Calling it simple is a compliment -- a normal person can look at the vm code and understand it, if they need to.

[2] http://erlang.org/doc/efficiency_guide/advanced.html#id70200


Thanks, yours was finally the comment I was hoping for.

Could you point me at the right place to read (or read about) the beam sources? I'm looking at https://github.com/erlang/otp/wiki/Routemap-source-tree -- is there something better?


You're in the right place; unfortunately there's not a lot of good documentation on the internals (although do check out erts/emulator/internal_doc). It's much easier to dive into the source if you have something specific on your mind (like debugging a performance problem, or if you find a beam crash...). erts/emulator/beam has the most basic parts of the virtual machine -- processes, memory allocation, process switching, message passing, interfaces to the operating system (although a lot of that is in erts/emulator/drivers), etc.


> So it is still conceivable that an error in Erlang can bring down all your web servers.

No, it won't - not if you mean an error in an Erlang process. It almost sounds impossible, but that is the beauty of the BEAM virtual machine. It is really a marvel of engineering.

Now if you mean an error in the Erlang VM itself, then yeah, if that crashes it will bring down all the connections. But even for that there is an answer -- distributed Erlang. An Erlang VM can talk to processes on other Erlang VMs, even if those are running on a server on the other side of the planet.


> ... So it is still conceivable that an error in Erlang...

Are you talking about the Erlang runtime here? It is not very clear what you are implying. Can you please elaborate?

If I take your statement to mean "Erlang runtime", then this is no different than saying that if there is a bug in glibc which gets tickled when I do something, then my process will crash; or if there is a bug in the JVM, then the application hosted on the JVM will crash, etc.


From OP: "In Erlang we create very lightweight processes, one per connection and within that process spin up a web server. So we might end up with a few million web-servers with one user each."

I don't see the difference between spawning a thread for each connection in other languages and spawning a process for each connection in Erlang, so I was curious if the word 'process' implied some extra level of insulation. Reading all the comments here it's still not clear to me what benefit is being obtained by having "2 million web servers" rather than 2 million connections.


Shared memory space is one thing. A process has isolated memory. An error in a spawned thread could mess up shared memory, crashing multiple threads. An error in a process can't mess with the memory space of other processes.

Since there's no shared memory, performance characteristics change. Garbage collection is done per process and so "stop the world" doesn't happen, meaning each individual "web server" isn't at the mercy of other requests for performance.

It's lots of little things like that.
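That containment is easy to demonstrate directly. A sketch (module name is mine): one process crashes, and a monitoring process observes the exit while an unrelated neighbour keeps running.

```erlang
-module(isolate).
-export([demo/0]).

demo() ->
    %% A well-behaved process, waiting quietly for a message.
    Ok = spawn(fun() -> receive stop -> ok end end),
    %% A sibling that crashes immediately. We monitor rather than
    %% link, so the crash is delivered to us as a plain message.
    {Bad, Ref} = spawn_monitor(fun() -> exit(boom) end),
    receive
        {'DOWN', Ref, process, Bad, boom} -> ok
    end,
    %% The crash was contained: the neighbour is untouched.
    true = is_process_alive(Ok),
    Ok ! stop,
    survived.
```

In a threaded runtime the equivalent of `exit(boom)` in one worker could corrupt shared state or take down the whole OS process; here it is just a message another process can react to (which is exactly what OTP supervisors do).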


> I don't see the difference between spawning a thread for each connection in other languages and spawning a process for each connection in Erlang,

That is a huge difference. Otherwise, why would Erlang not just make pthread calls and spawn a thread? Why bother doing all that work?

The reason is that Erlang processes do not share heap memory. That is one of the most important features of the Erlang runtime. A crashing process can just crash on its own, without leaving the other 1,999,999 processes in an undetermined state.


It also simplifies garbage collection. Rather than having to stop the world for garbage collection, it happens on a per-process basis.

Additionally, if a process terminates before a garbage collection cycle is run the VM just reclaims all of its memory without any additional computation needed.


I don't follow - how does the fact that Erlang processes aren't OS processes relate to what could happen if an Erlang error occurred?


Well, not if you distribute your erlang processes across several physical nodes.


You mean just like you can with Apache or Nginx?


Yes - it's exactly like that. The Erlang VM itself has to be rock solid just like you expect Apache and Nginx to be rock solid.

Fortunately this more or less seems to be the case.


I would say a better comparison is the Linux kernel. If the kernel crashes, everything on that box goes.


No. Not at all like those.


Can you please expand on why?


The processes can communicate with processes on other servers in exactly the same way that they can communicate with processes on the same server.

If you have one traditional application server and want to do some form of cross-user interaction - let's say chat - you can do that trivially: put a message in a queue for that user in a global map. Now, when you outgrow that server, you need to rewrite all your code to understand the concept of users being on other servers, or use an external message-queueing system.

In Erlang, all of this is built in by default - if you write code for the OTP framework (the standard library for dealing with messaging, process supervision, etc), all you need to do is connect the two servers together and point them at the same shared user->process mapping process (which you have to build whether you're dealing with one server or 20, as there's no global data otherwise).

Of course, if you have absolutely no direct interaction between users, it's trivial to scale anything - fire up a new server and direct some portion of your traffic at it. Erlang's trick is to make it that easy even when you do have direct interaction between users. And of course that works for even backend workloads - if you have a backend server that your frontend servers talk to and need to scale it, if you've coded in Erlang and put as much logic as possible in per-connection processes, you're probably a significant chunk of the way there.
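One hypothetical shape for that shared user -> process mapping, using Erlang's built-in `global` registry (module and names are mine, a sketch): the send looks identical whether the user's process lives on this node or on another connected node.

```erlang
-module(chat_route).
-export([send_hello/1]).

%% global:register_name/2 and global:whereis_name/1 maintain a
%% cluster-wide name -> pid mapping; the pid may live on any node
%% you've connected with net_adm:ping/1 or similar.
send_hello(User) ->
    case global:whereis_name({user, User}) of
        undefined -> {error, offline};
        Pid       -> Pid ! {chat, <<"hello">>}, ok
    end.
```

Nothing in the sending code knows or cares which machine `Pid` is on; that is the location transparency being described above.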


What do any of these have to do with the equivalence between Apache/Nginx and Erlang when running multiple instances of them to avoid crashing them all?


You can't use traditional servers to scale in lots of cases without a lot of additional development work. You can use Erlang servers to do so. Therefore, they're not the same - Erlang covers a broader range of use cases.


A constructive comment would be appreciated.


The Erlang error in code could be replicated to every single physical node and all of them would crash.


Presumably one would catch such a blatant error early in testing. The problem is the odd case, the code that only crashes in a rare state, never seen in test. When the odd state does occur the effect is mild, as only one actor dies as opposed to the entire OS process.


I was responding to the assertion that distributing code to multiple "processes" would avoid crashing all of them, which is simply not true. Even the rare state you mentioned would crash all the processes. The rare state would be an input that testing failed to catch. The code in any of the processes encountering the rare input could crash.


That is not what happens.


The title is somewhat misleading. I clicked expecting to read how someone is managing 2 million web server instances such as nginx or Apache. I was curious what kind of company would claim that.


This is Joe's blog, he co-created Erlang 30 years ago.

So he's not really affiliated with any company, but rather a voice for Erlang itself.


It's not clickbait, of course; it's just that the terms 'web server' and 'manage' made it easy to misinterpret the subject. Typically, by managing a web server we mean the administrative tasks involved in configuring and running a multi-threaded HTTP daemon. Perhaps "2 million web server processes" would have been better.


Even that would be confusing, as "Erlang process" != "process" for anyone who's not an Erlang programmer (or who doesn't parse "process" as the Erlang variant by default).


As has been said, Joe is known for creating Erlang which is known for running as many processes as the need arises, giving you access to all the cores in a machine as a result.


Utilizing multiple processor cores was something of an afterthought, IIRC. Initially, the Erlang VM only used a single CPU, and in order to utilize multiple CPUs, one had to start one Erlang VM per CPU/core.

One of the nice things about Erlang's general approach is that this was a lot less painful than in many other languages. (I have no idea, though, how much work it was to get the Erlang VM itself to use multiple threads.)


I always assumed it was one of the main benefits of Erlang since the beginning; back when I attempted to learn Erlang years ago it was one of the main things I learned about it. I've yet to really get down into Erlang since then, though.


FWIW, I think Erlang gained SMP-support at least ten years ago.

And running multiple Erlang VMs on one machine - from the Erlang application's perspective - is no different from running them on several machines. The really nice thing about Erlang is that it scales across multiple nodes very smoothly.


In the late 90s I implemented the same concept for a web application written in Perl. (It's still running today.) There were three tiers to it:

Tier 1: a very small master program which ran in one process. Its job was to open the listening socket and maintain a pool of connection handler processes.

Tier 2: connection handler processes, forked from the master program. When they started they would load up the web application code, then wait for connections on the listening socket or for messages from the master process. They also monitored their own health and would terminate if they thought something went wrong. (ex: this protected them from memory leaks in the socket handling code.) When an http connection came in on the socket, they would fork off a process to handle the request.

Tier 3: request handlers. These processes would handle one http request and then terminate. When they started, they had a pristine copy of the web application code (thanks to Copy-On-Write memory sharing of forked processes) so I knew that there was no old data leaked from previous requests. And since they were designed to terminate after a single request, error handling was no problem; those would terminate too. In cases where a process consumed a lot of memory it would get released to the OS when the process ended. We also had a separate watchdog process that would kill any request handler that consumed too much cpu, memory, or was running much longer than our typical response time.

This scaled up to handling hundreds of concurrent requests per (circa 2005 Solaris) server, and around six million requests per day across a web farm of 4 servers. That was back in 2010; I don't know how much the traffic has grown since then but I know the company is still running my web app. This was all very robust; before I left I had gotten the error rate down to a handful of crashed processes per year in code that was more than one release old.

BTW, while my custom http server code could handle the entire app on its own, and was used that way in development, for production we normally ran it behind an Apache server that handled static files and would reverse-proxy the page requests to the web app server. So those 6 million requests per day were for the dynamic pages, not all of the static files. That also meant that my web app didn't have to handle caching or keep-alive, which simplified the design and makes the one-request-then-die approach more viable.


I would like to see the code for the chat or presence server. I have a hunch it will look different depending on the experience of the programmer.

I'm especially interested in how they manage state, because when you do not have to manage state, everything becomes easy and scalable. By state I mean, for example, a status message for a particular user.


The Phoenix Framework (web framework for Elixir) already solves this for you.[1]

It's called Phoenix.Presence. They used a combination of CRDTs and heartbeats to implement it.

You are right, it is a hard problem because of the distributed nature of Erlang/Elixir. That's why Chris provided a framework level solution for it.

[1] https://youtu.be/XJ9ckqCMiKk?t=921 (Erlang Factory SF 2016 Keynote Phoenix and Elm – Making the Web Functional)


State is one of the big differences between CGI-esque scripting languages and how things are usually done in Erlang. As processes are cheap, one way is to fire up a session process for each user that connects to the system, and keep this around for the lifetime of their session. The process will hold within it any state that is needed between requests, without needing to refer back to the database every time.

Whenever a request comes in, you then route the request to that session process (even when it's on another physical machine). This not only provides a clearer mapping between your external and internal APIs, but also allows you to fairly easily ensure that a user can't perform concurrent actions.

Edit: Not sure if that's exactly what you mean, so let me know if you have more questions :)
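To make that concrete, here is a minimal sketch (module and message names are my own, not from any real codebase) of a per-user session process:

```erlang
%% Sketch of a per-user session process. The state lives in the loop
%% argument, so requests routed here need no database round-trip, and
%% messages are served one at a time, serializing actions per user.
-module(session).
-export([start/1, loop/1]).

start(UserId) ->
    spawn(?MODULE, loop, [#{user => UserId, status => <<"online">>}]).

loop(State) ->
    receive
        {set_status, S} ->
            loop(State#{status := S});
        {get_status, From} ->
            From ! {status, maps:get(status, State)},
            loop(State)
    end.
```

Because the mailbox is a queue, two concurrent requests for the same user are handled in order automatically -- no locks needed.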


> why does the Phoenix Framework outperform Ruby on Rails?

Ruby is known to be a slow language; most things will easily outperform it.


Language performance rarely impacts web applications performance.

Actual bottlenecks are the database, and your webserver/architectural choices.


Could someone explain how, in the context of the article, a "process" differs from a "thread" in, say, Java or Python? Or are they one and the same?


Process is the correct term. It denotes a unit of execution with an independent memory space, just like an OS process. However, instead of using OS processes and relying on the OS scheduler, the Erlang VM has its own scheduler and manages its own processes. Those processes are incredibly small. 100k erlang processes consume just a few hundred MB of memory.
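You can check the rough per-process overhead yourself in an Erlang shell; this is a sketch of my own, and exact numbers will vary with VM version and flags:

```erlang
%% Spawn 100k idle processes and compare the memory the VM attributes
%% to processes before and after.
Before = erlang:memory(processes),
Pids = [spawn(fun() -> receive stop -> ok end end)
        || _ <- lists:seq(1, 100000)],
After = erlang:memory(processes),
io:format("~p bytes per process~n", [(After - Before) div length(Pids)]),
[P ! stop || P <- Pids].
```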


So it is like a goroutine?


Somewhat, except goroutines have shared memory and the tradeoffs that come from that.

Also, the tooling around Erlang processes is much better than what exists for goroutines.


Yes. In fact go routines and channels are touted as an implementation of Tony Hoare's CSP -- Communicating Sequential Processes.


In Erlang, a "process" is analogous to a Java or Python thread in that it's not spinning up a separate OS-level process, but it really functions as its own self-contained process within the Erlang VM. Processes communicate with messages (like OS-level processes) but can't directly share state (like threads would). The result is that in a single Erlang VM, you can have a butt-ton of processes, each effectively runs a full copy of whatever it needs to run, and it can fail, crash, or otherwise explode without affecting the other processes in the VM. The model is what gives Erlang its remarkable resilience and scalability.
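A quick shell demonstration of that isolation (a snippet of my own, not from the comment): a deliberate crash in one process leaves the rest of the VM untouched.

```erlang
%% The spawned process dies immediately with reason boom; since the
%% shell did not link to it, nothing else is affected.
Pid = spawn(fun() -> exit(boom) end),
timer:sleep(50),
io:format("crashed pid alive? ~p~n", [is_process_alive(Pid)]).
%% prints: crashed pid alive? false
```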


An Erlang process isn't really equivalent to a Java thread. A Java thread maps to a native thread, whereas an Erlang process is entirely in userspace. Erlang processes are basically green threads without shared state. Java has no equivalent in the standard library.


It's not equivalent, but it's roughly analogous from the Java programmer's perspective. You would spawn a new process much like you would start a new thread, but the similarities end there.


"Whenever you would spawn a new thread in Java, you would spawn a process." While that is true, it represents a subset of cases where you would spawn a process.

Processes are meant to model the real world. I'd say the truth is closer to "Any time you would instantiate an object in Java, you spawn a process." That's not always going to be true, but more true than you might expect.


It is really something different; it depends on which axes you are comparing. It is analogous in one aspect -- a unit of execution. But it is very different in that it doesn't share memory. That second part is crucial, and it is what gives Erlang better reliability, better GC characteristics, and so on.


All interesting replies, but as I am not an erlang programmer could someone briefly explain the threading model? Whatever the unit of code packaging is ultimately it all has to get scheduled onto a core. On what platform is 2,000,000 threads a realistic scenario? If you don't have 2,000,000 threads you don't have 2,000,000 web servers. You might, for example, have 2000 web servers each using an event-driven framework to handle 1000 concurrent requests. Is that closer to the erlang processing model?


A single preemptive event driven framework is probably a closer match.

BEAM has a very efficient multi core fair scheduler. All 2M process share time on all available cores and are handled by a single scheduler. Because it's preemptive, starvation problems one might see on an event loop are avoided. A CPU intensive process can run alongside low latency IO processes and they are all fairly scheduled.


As others have said, Erlang processes are not OS processes, nor are they Java threads. They're closer to Go's goroutines or Python's eventlets; they are well-suited to run even a single function concurrently.

In the analogy, it's like having 2M web servers, but where only, say, 1000 of them are actually running. As a developer having independent web servers helps you to think about what should be done instead of how it's going to be done. After that, the fact that a server is actually running or it is stalled because it's reading from disk or it is waiting for available CPU is an implementation detail and something that monitoring will show you.


The Erlang process model, since SMP support was added, is more or less this: each CPU core runs one OS thread running a 'scheduler'. Each scheduler has a list of processes and determines the order they get to run in; the currently scheduled process runs in the scheduler's thread until it has used its quota of CPU (reductions) or it hits some other trigger that causes its execution to be suspended (waiting for a message, sending a message to a busy port, maybe some other cases?). Schedulers have a method to balance their work queues, etc. (Leaving out dirty schedulers, IO schedulers, configuration of schedulers, etc.)

Each erlang process has its own (erlang) heap and stack, and there is no ability within the Erlang language to access memory by address, a process will only access its own memory, or special types of shared memory (such as the shared/large binary heap).

From the OS perspective, it's one process, with one memory space, and several threads.

A realistic Erlang example where you would have 2 Million processes is something like a chat server (1 process per connected user), either via a traditional tcp protocol or a websocket protocol for a browser. In another language, you would likely need to multiplex multiple users onto each thread, in Erlang each user gets their own process and the code ends up being quite straight forward.
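As a hedged sketch of that shape -- echoing instead of broadcasting so it stays self-contained, with a module name of my own choosing:

```erlang
-module(chat_sketch).
-export([start/1]).

%% One Erlang process per connected user: the acceptor loop spawns a
%% handler for each socket, and each handler's receive loop serves only
%% that one user, so a crash there affects only that connection.
start(Port) ->
    {ok, Listen} = gen_tcp:listen(Port, [binary, {active, false},
                                         {reuseaddr, true}]),
    accept_loop(Listen).

accept_loop(Listen) ->
    {ok, Socket} = gen_tcp:accept(Listen),
    spawn(fun() -> handle(Socket) end),   %% per-user process
    accept_loop(Listen).

handle(Socket) ->
    case gen_tcp:recv(Socket, 0) of
        {ok, Data} ->
            gen_tcp:send(Socket, Data),   %% echo; a real chat would broadcast
            handle(Socket);
        {error, closed} ->
            ok
    end.
```

Each handler is straight-line code for one user; no multiplexing logic is needed.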


Generally, processes do not share memory, but threads may.


The main difference between a thread and an Erlang actor is that with threads you have to manage resource sharing with locks and with actors you don't share resources. Instead you pass the needed data to each actor through messages. The end result is that it's way easier to manage concurrency through actors rather than threads. It also buys you more features; imagine being able to access all of the actors of a cluster of machines or determining what to do when an actor fails (eg restart it or fail all other actors under the same actor supervisor). I feel that the closest thing to the actor model in the thread world is probably Java's Executor (minus the cool features)

You can do actor model concurrency in Java (and Scala) with the Akka framework (http://akka.io). It's a pretty mature one as well; I've been using it for a while now.
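For example, where threads would guard a shared counter with a lock, an actor simply owns the count and serializes access through its mailbox (a sketch of my own, in Erlang rather than Akka):

```erlang
-module(counter).
-export([start/0, loop/1]).

%% The counter process owns its state N; callers send messages instead
%% of sharing memory, so no lock is ever needed.
start() -> spawn(?MODULE, loop, [0]).

loop(N) ->
    receive
        incr -> loop(N + 1);
        {get, From} -> From ! {count, N}, loop(N)
    end.
```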


Threads share heap memory. Processes don't.

Erlang processes are very lightweight (only a few KB of memory) yet they have the property of OS processes of keeping their data isolated.

Think about it like an operating system. On most programming platforms today you are essentially programming in Windows 3.1 or DOS: when the calculator process crashes, it can take down your game or your word processor, because they all share their memory space. That was cool for 1994, not for 2016.

Programming in Elixir/Erlang/LFE is like moving up the ladder and using a real OS (preemptable execution, processes that don't mess with each other's memory, and so on).


A little while ago I wrote an extremely short introduction to distributed, highly scalable, fault-tolerant systems.

It is marketing material for my consulting activity, but some of you may find it interesting.

The PDF is here: https://github.com/siscia/intro-to-distributed-system/blob/m...

The source code is open, so if you find a better way to describe things feel free to open an issue or a pull request...


I can't help but think this is madly inefficient, with cache misses and the like.


Why?


I think what he's trying to say is that silver bullets don't exist. Only pros and cons.


I'm specifically asking about caching.


Presumably he's talking about processor caches. Shared-nothing means that every time you "context" switch to another "process" you're going to have to reload all your cache lines in the L1 D-cache.


In Erlang you have a "reduction count budget" of 2000 reductions. This is fairly low, less than 1ms of execution, but during that time you have exclusive use of a CPU. At the end of your budget, you might be preempted, or you might get another window. So you take a bit of a hit to cache, but it's not like you are infinitely context switching. In practice it works fairly well.


Erlang process != OS process.


Processes can communicate. I reckon you could have one process responsible for managing the cache, instead of each process managing a separate cache.

I'm no Erlang programmer, though.


Processes do not share memory, but threads may.


Does anybody know how to undo an upvote here on HN? This reading was a terrible waste of time :(


You cannot alter your vote.


You don't need to "crash the server" in response to an error from a single user - It is sufficient to just close the connection and destroy the session.

I doubt that Erlang spawns millions of OS processes, because that would be extremely inefficient due to CPU context switching. So in reality, all Erlang is doing behind the scenes is closing the connection and destroying the session... It's not actually crashing and restarting any processes... You can easily implement this behavior with pretty much any modern server engine such as Node.js or Tornado.


Yes, you can (mostly) implement this behaviour in pretty much any modern server framework or language. The point is, you get this behaviour for free as a core part of the language. I write a lot more javascript than erlang these days and I really miss being able to just say, "let it fail".


Consider uncaught exceptions. You basically have two options: catch and ignore, or restart the entire application, usually with something like monit or upstart. Neither is great. Erlang/OTP brings true fault tolerance in a number of ways [1]. Think of it like having monit baked into the language as a first-class primitive.

[1] restricting global shared memory, enforcing fault isolation through the process model, and supervisors
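A minimal sketch of that "monit baked in" idea using a real OTP supervisor; the worker here is a trivial placeholder of my own so the example is self-contained:

```erlang
-module(my_sup).
-behaviour(supervisor).
-export([start_link/0, init/1, worker_start/0]).

start_link() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

%% Placeholder worker: just sits waiting for a message.
worker_start() ->
    {ok, spawn_link(fun() -> receive stop -> ok end end)}.

init([]) ->
    SupFlags = #{strategy => one_for_one, intensity => 5, period => 10},
    Child = #{id => worker,
              start => {?MODULE, worker_start, []},
              restart => permanent},
    {ok, {SupFlags, [Child]}}.
```

If the worker dies for any reason, the supervisor restarts it from a clean state -- the "let it crash" loop, with no external monit needed.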


> I doubt that erlang spawns millions of OS processes because that would be extremely inefficient due to CPU context switching.

You are correct, Erlang uses something akin to "green processes" where it manages its own threads but they do not share anything in the way that normal threads would.


Joe Armstrong is one of the original authors of Erlang. He's using the erlang nomenclature for servers and processes.


> sufficient to just close the connection and destroy the session.

Unless that connection managed to write over some memory already. Then you can't just close the connection. You have to restart the whole server OS process.

> I doubt that erlang spawns millions of OS processes because that would be extremely inefficient due to CPU context switching

Erlang does spawn millions of processes. You can try it yourself in a few lines of code. They are not OS processes though. Although they are similar in that OS processes and Erlang ones don't share memory with their peers -- a very useful property indeed.

So Erlang, in a way, can have its cake and eat it too: it gets the benefit of OS processes but at a much lower cost.
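Trying it yourself really is only a few lines; a sketch of my own (note that the VM's default process limit, roughly 262k, must be raised for millions, e.g. by starting the VM with `erl +P 3000000`):

```erlang
-module(spawn_many).
-export([run/1]).

%% Spawn N idle processes, return the VM's live process count, then
%% stop them all. Each process is just a blocked receive.
run(N) ->
    Pids = [spawn(fun() -> receive stop -> ok end end)
            || _ <- lists:seq(1, N)],
    Count = erlang:system_info(process_count),
    [P ! stop || P <- Pids],
    Count.
```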


OS process !== BEAM process

He's referencing processes on the Erlang virtual machine, not OS processes. A BEAM process is much lighter weight.


How is that different than just throwing an uncaught exception? Most web toolkits will catch that, log it, and 500 the client for you.


Your web service is stateless - for those cases it's fine. However with this model, you are offloading all your state to another layer - likely the database layer.

With Erlang (or rather, the actor model), you can build stateful pieces of code that are resilient to crashes, without those crashes cascading across the entire system, by having a standard pattern of restarting individual actors from a known clean state through a clean supervisor hierarchy.



