

Explain “Event-Driven” Web Servers to Your Grandma - daverecycles
http://daverecycles.com/post/3104767110/explain-event-driven-web-servers-to-your-grandma

======
jasonkester
Terrible analogy, especially when there exists a better one within the food
service industry. How about rewriting that article from the perspective of a
coffee shop?

We've all been to the neighborhood coffee place where the girl will take your
order, turn around and make your entire drink, hand it to you and ask for
payment. We've all stood in that line.

We've also all been to Starbucks, where the girl takes your order, writes it
on a cup, takes your money, then moves on to the next customer. And by the
time you walk to the other end of the counter the guy in front of you already
has coffee in his hand.

It still doesn't fit web servers exactly, but at least it fits the real world.
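The Starbucks flow maps fairly directly onto an async producer/consumer setup. A minimal sketch in Python's asyncio (the `cashier`/`barista` names are just the analogy, not anything from a real server):

```python
import asyncio

async def barista(order_queue, served):
    # The person at the far end of the counter: makes drinks as orders arrive.
    while True:
        name, drink = await order_queue.get()
        await asyncio.sleep(0)  # stand-in for brewing time
        served.append((name, drink))
        order_queue.task_done()

async def cashier(customers, order_queue):
    # Takes each order, writes it on a cup (queues it), and moves straight on
    # to the next customer without waiting for the drink to be made.
    for name, drink in customers:
        await order_queue.put((name, drink))

async def main():
    customers = [("Ann", "latte"), ("Bob", "mocha"), ("Cal", "drip")]
    order_queue = asyncio.Queue()
    served = []
    worker = asyncio.create_task(barista(order_queue, served))
    await cashier(customers, order_queue)
    await order_queue.join()  # every cup handed over
    worker.cancel()
    return served

print(asyncio.run(main()))  # [('Ann', 'latte'), ('Bob', 'mocha'), ('Cal', 'drip')]
```

The key property is that the cashier never blocks on a drink; order-taking and drink-making are decoupled by the queue.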

------
giberson
Grandma, "Event-Driven" is just a fancy word for magic. Don't worry about it.

~~~
Jinyoung
Why are people upvoting this comment? Not fully implemented and readily
available != magic.

~~~
rbranson
Not fully implemented? I'm pretty sure all of the billions of requests that
hit nginx, varnish, haproxy, and Zeus would disagree with you. What about all
those massive sites using memcached on the backend? You know -- Facebook,
Twitter, Digg, Craigslist, YouTube?

------
amalcon
"Event-driven" architecture doesn't imply a fundamentally different way of
handling requests than thread-based. The only difference is that in an event-
driven architecture, the scheduling is handled in the userspace code. In a
thread-based architecture, it's done in the kernel. The advantage of doing it
in the userspace code is that it can be done in a simpler, more specialized
way.

The hardware is doing fundamentally the same thing either way, but in the
threaded model, it's also doing a lot of other stuff that you probably don't
care about.

So, to explain it to my grandma: It's just a simpler way to think about it.
There isn't really a big difference.
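One way to see the "scheduling moved into userspace" point concretely: with `select()`, a single thread asks the kernel which connections are ready and then services exactly those, rather than parking one kernel-scheduled thread per connection. A toy sketch using socketpairs so it needs no real network (the upper-casing "handler" is a made-up stand-in for real request handling):

```python
import select
import socket

def serve_ready(server_ends, n_messages):
    # One thread multiplexes all connections: select() reports which sockets
    # are readable right now, and we service exactly those. This is the
    # scheduling decision that a threaded server leaves to the kernel.
    handled = 0
    while handled < n_messages:
        readable, _, _ = select.select(server_ends, [], [])
        for conn in readable:
            data = conn.recv(1024)
            conn.sendall(data.upper())  # stand-in for request handling
            handled += 1
    return handled

# Three "clients", each a socketpair so no real network is involved.
pairs = [socket.socketpair() for _ in range(3)]
clients = [c for c, _ in pairs]
servers = [s for _, s in pairs]

for i, c in enumerate(clients):
    c.sendall(f"order {i}".encode())

serve_ready(servers, 3)
replies = [c.recv(1024).decode() for c in clients]
print(replies)  # ['ORDER 0', 'ORDER 1', 'ORDER 2']
```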

~~~
jerf
"The advantage of doing it in the userspace code is that it can be done in a
simpler, more specialized way."

No, the advantage of doing it in userspace code is that you can paper over the
deficiencies of the layers under the one you are working in with significant
manual effort. Node.js _introduced_ the concurrency problems when it selected
Javascript as one of the layers, it doesn't get much credit in my mind for
then solving them at great effort and with horrible damage done to the
resulting program structures. The problems Node.js solves are not fundamental
to programming, they are fundamental to _Javascript_.

Pick something like Erlang and the problem never exists in the first place.
You don't have to paper over the deficiencies in the lower levels, because the
levels below the code you're writing aren't deficient for concurrency in the
first place.

I should point out that in general this is not necessarily a bad thing; alas,
there's always some way your lower layers are deficient, and it's far worse
when they make it impossible to paper over the problem. Still, you will never
end up with simpler code. More specialized, oh my yes, but certainly not
simpler. And the wisdom of picking a layer that is fundamentally deficient for
your core target problem then papering over it seems pretty limited to me.

~~~
amalcon
_Pick something like Erlang and the problem never exists in the first place.
You don't have to paper over the deficiencies in the lower levels, because the
levels below the code you're writing aren't deficient for concurrency in the
first place._

Now, be fair: Erlang papers over the deficiencies of the lower layers (namely,
the kernel) with significant manual effort. It's just that someone else has
already gone to this effort. BEAM _is_ an event-based server behind the
scenes, with lots of syntactic sugar to make it look multithreaded. The PLT
Scheme webserver is another example of this.

I also didn't mean to suggest that the code itself is necessarily simpler. It
can be, but (as the node.js example proves) it is certainly not always. The
_scheduling algorithm_ , on the other hand, is typically much simpler. Namely,
it's typically "round-robin cooperative multitasking." No serious operating
system since Windows for Workgroups has actually tried to use round-robin
cooperative multitasking.

All the other "thread stuff" is typically simplified as well: smaller stacks
(if any at all), simpler context-switching code, that sort of thing. The
actual encoding of the business logic? That depends on the problem.
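The round-robin cooperative scheduler really can be that simple; here is one possible sketch with Python generators standing in for tasks:

```python
from collections import deque

def scheduler(tasks):
    # Round-robin cooperative multitasking in a few lines: each task runs
    # until it yields, then goes to the back of the queue. Nothing preempts.
    queue = deque(tasks)
    trace = []
    while queue:
        task = queue.popleft()
        try:
            trace.append(next(task))
            queue.append(task)  # back of the line
        except StopIteration:
            pass                # task finished, drop it
    return trace

def worker(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"     # yield = "I'm at a good stopping point"

print(scheduler([worker("a", 2), worker("b", 2)]))
# ['a:0', 'b:0', 'a:1', 'b:1']
```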

~~~
jerf
"Now, be fair: Erlang papers over the deficiencies of the lower layers
(namely, the kernel) with significant manual effort."

Since the various Xpolls were added to the kernel, I would disagree. Even
based on a select loop the kernel has still been happy to do many things
asynchronously for a long time now, it's just a bit of a klunky API. What
stopped it from being easy was that C had no good concurrency story except
"lots of manual effort", so nobody could really effectively use those things
in C. (The kernel has actually had the "lots of manual effort" applied to it.)
The vast majority of concurrency failure has been layered in at the high-level
language point. There is a large class of "better C than C" languages (or the
implementations thereof as the case may be) that simply shoot you before you
can even think about fine-grained concurrency: CPython, Perl, Ruby,
Javascript, etc. And another class of languages that permit it but don't
really help much, like C(++), Java, C#, and anything where you might seriously
use a semaphore directly.

This is the core error that Node.js hype and its partisans make, mistaking the
deficiencies of a set of languages for deficiencies in programming itself.
It's not even in the OS; the OS doesn't mind threadlets/green threads/OS
processes, it's all in the high-level languages.

The kernel may not have provided you a threadlet system out of the box, but
the Erlang VM isn't particularly _fighting_ the kernel either, it's _building_
on it. In this context, that's not particularly what I mean by deficiency.
Missing things can be added, I'm talking about things where the underlying
layers actively fight you.

Also, yes, in some respect I'm still really responding more to the hype than
directly to you. Saying that BEAM is an event-based server behind the scenes
is basically the point I think needs to be made more clearly. You can be
"asynchronous" and "event based" without having to embed the asynchrony and
eventedness visibly in every function and nearly every line in the code base.

~~~
wmf
_the OS doesn't mind threadlets/green threads/OS processes_

Except for the blocking system calls and blocking page faults that block all
your green threads.
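This is easy to demonstrate: one blocking call stalls every green thread that shares the OS thread. A sketch with asyncio, where `time.sleep` stands in for a blocking syscall or page fault:

```python
import asyncio
import time

async def blocking_task(log):
    time.sleep(0.2)  # blocking call: the single OS thread stalls right here
    log.append("blocking done")

async def cooperative_task(log):
    await asyncio.sleep(0.01)  # asks to be woken in 10ms...
    log.append("cooperative done")

async def main():
    log = []
    # cooperative_task is scheduled first, yet it cannot resume until the
    # blocking call releases the one thread everything shares.
    await asyncio.gather(cooperative_task(log), blocking_task(log))
    return log

print(asyncio.run(main()))  # ['blocking done', 'cooperative done']
```

The cooperative task asked to wake after 10ms but only runs after the full 200ms block, because the event loop itself was stuck.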

~~~
jerf
To the extent those can be papered over, they have been by most good runtimes;
the remainder is evenly applied to all languages and runtimes under discussion
because nobody can escape from them at all, no matter how awesome the top
layers are.

This is what I was referencing when I offhandedly mentioned the fact that some
layers can make papering over them actually impossible; if the kernel can not
be convinced to do it with any series of syscalls, or worse yet the hardware
itself can not be convinced, you lose, game over. Another example, Haskell's
very strong type system can do a fairly good job of making really damned sure
you don't escape from the system, which is useful for making an STM that is
actually usable precisely because the smallest escape hatch tends to bring it
down. But the downside is that a library or something can actually make it
_impossible_ to hack around a problem (with any reasonable degree of effort).

------
bartl
Explain "web server" to your grandma. Oopsie.

~~~
mishmash
> Explain "web server" to your grandma. Oopsie.

It's like a waiter at a restaurant: you tell it what you want, and a few
moments later the waiter (or server) returns with what you've ordered.

Good?

~~~
noblethrasher
The waiter is the user's agent, the kitchen is the server. Each food preparer
in the kitchen is a thread/process.

The waiter takes the customer's order and then waits for the food to be
prepared before returning with the dish.

Asking the waiter to ask the chef whether or not the fish is fresh is like
making a HEAD request.

~~~
mishmash
Your metaphor is clearly better; grandma performing HEAD requests on fish (a
URI) is priceless.

However, if granny were in a hurry, she might ask the waiter "Will my order
be up soon?" The waiter, after checking with the kitchen and receiving a 304
Not Modified on the still-raw fish resource, could then decide to 307 her to
some free breadsticks resource.

Of course, after lunch, dear granny will eventually receive a 402. Let's hope
she has money today, or she may end up 401'ed and have to start washing
dishes.
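For anyone following along at home, a HEAD request really is the ask-without-ordering case: the response carries headers only, no body. A toy sketch with a made-up `/fish` resource that always answers 304 (the endpoint and handler are entirely hypothetical):

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class FishHandler(BaseHTTPRequestHandler):
    # HEAD /fish: "is it fresh yet?" -- headers only, no body, just like
    # asking the chef without placing an order.
    def do_HEAD(self):
        self.send_response(304)  # still the same raw fish
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the demo quiet

server = HTTPServer(("127.0.0.1", 0), FishHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("HEAD", "/fish")
status = conn.getresponse().status
conn.close()
server.shutdown()
print(status)  # 304
```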

------
jlew
While a great attempt at an analogy, I don't think that it really helps
things. I've never misunderstood THAT part of event-based asynchronicity (is
that even a word?). The part that is confusing to me is HOW it works and
eventually to the point of WHY and/or HOW it is supposedly better than
traditional threading (other than cleaner-looking code). I've never seen a
good explanation in non-OS programmer terms.

To me, it seems that no matter how you take the "messages" to do work, that
work still has to be done. It surely doesn't magically use fewer resources
because you told the OS that it could just call you back when it's done, as
opposed to you having to hang around? Something has to be hanging around on
one side or the other, and the "call back" takes resources as well, surely?
It seems that you are just trading tit for tat. Maybe the reason is to avoid
tying up some specific resource in the meantime?
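The usual answer to the "surely something still hangs around" question is that what hangs around is much smaller: a waiting thread holds an entire kernel-managed stack, while a waiting event-loop task is roughly a small heap object plus a timer or fd entry. A sketch showing ten thousand concurrent "waits" sharing one thread (the counts are arbitrary):

```python
import asyncio
import threading

async def waiter(i, done):
    # While "hanging around", this task is a small heap object plus one timer
    # entry in the loop -- not a dedicated stack, not a kernel thread.
    await asyncio.sleep(0.05)
    done.append(i)

async def main():
    done = []
    await asyncio.gather(*(waiter(i, done) for i in range(10_000)))
    return len(done), threading.active_count()

completed, threads = asyncio.run(main())
print(completed, "concurrent waits on", threads, "OS thread(s)")
```

Ten thousand OS threads, by contrast, would each need their own stack before doing any work at all.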

------
sophacles
What a terrible analogy. I mean, there is a pizza-shop or other food-service
analogy in there; I've made it plenty of times. The problem is that when you
make the phone call = request in the analogy, you make the phone connection
analogous to the socket. At least have the operator put the caller on hold!

Of course then Grandma, not being an idiot, will say "why not just have driver
bring the pizza and not have the phone all tied up to begin with?" and she is
_absolutely correct_. It is better to just set up a scenario where you have
waiters, and customers show up at the shop, and in the blocking model your
waiter doubles as the cook, so you need one waiter per meal... and so on.
This analogy holds up to slightly closer examination.

------
Jinyoung
Hmm, perhaps a closer analogy:

Traditional Web Server:

The pizza shop receives a call for the initial order and starts the pie. Then
the customer calls back periodically to check if the pie is done because the
pizza shop cannot call back or deliver.

~~~
sausagefeet
No, I don't think so. He is talking about a single request here; you seem to
be talking about a session.

------
newhouseb
Oh hey, I came up with the exact same analogy about 6 months ago in explaining
Tornado and epoll on quora with a bit more technical detail, just replace
pizza with pies:

http://www.quora.com/Can-someone-explain-poll-epoll-in-Laymans-terms-How-is-Tornado-taking-advantage-of-this-technology/answer/Ben-Newhouse

------
sambeau
What is being described here is blocking and non-blocking IO. The analogy is
pretty good.

However, the pizza company can probably still only cook 256 pizzas at the same
time (due to running out of pan-handles).
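The "pan-handles" quip is closer to reality than it sounds: every open connection costs a file descriptor, and the kernel caps how many a process may hold at once (historically the default soft cap was often 256, and 1024 is still common today). On a Unix-like system you can read the cap directly; the numbers printed depend entirely on your machine:

```python
import resource  # Unix-only: per-process resource limits

# Each open connection consumes one file descriptor ("pan-handle"); this is
# the per-process ceiling on simultaneous pizzas.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"this process may hold {soft} descriptors at once (hard cap {hard})")
```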

------
kqueue
Event-driven development should be avoided whenever possible; coroutines
exist for a reason.

Writing event-driven applications is very prone to errors and invalid
(impossible) states.
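The coroutine point in concrete form: the same two-step flow written callback-style and coroutine-style. The callback version scatters "what happens next" across separate functions, which is exactly where invalid-state bugs creep in. (The doubling `fetch` is a made-up stand-in for any async operation.)

```python
import asyncio

# Callback style: control flow is spread across functions; each step must
# know which function runs next.
def fetch_cb(value, on_done):
    on_done(value * 2)  # pretend-async primitive that invokes a callback

def run_callback_style(result):
    def step1(v):
        fetch_cb(v, step2)
    def step2(v):
        result.append(v)
    fetch_cb(1, step1)

# Coroutine style: the same flow reads top to bottom, and accidental
# interleavings are much harder to express.
async def fetch(value):
    await asyncio.sleep(0)
    return value * 2

async def run_coroutine_style():
    v = await fetch(1)
    return await fetch(v)

result = []
run_callback_style(result)
print(result[0], asyncio.run(run_coroutine_style()))  # 4 4
```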

------
vyrotek
I feel that this analogy overlooks the work that goes into having a
'customer' available to listen for and receive the callback.

------
contextfree
No! I refuse to explain event-driven web servers to my grandma. You can't make
me! I am a free man!

