
JavaScript Concurrency Model and Event Loop - abhikandoi2000
https://developer.mozilla.org/en/docs/Web/JavaScript/EventLoop
======
twic
One thing this doesn't touch on is the different handling of microtasks and
macrotasks:

[https://jakearchibald.com/2015/tasks-microtasks-queues-and-schedules/](https://jakearchibald.com/2015/tasks-microtasks-queues-and-schedules/)

[http://www.c-sharpcorner.com/article/overview-of-micro-tasks-in-knockoutjs/](http://www.c-sharpcorner.com/article/overview-of-micro-tasks-in-knockoutjs/)
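A minimal sketch of the distinction (assuming a modern browser or Node): promise callbacks are microtasks, and the microtask queue drains before the next macrotask, such as a setTimeout callback, even at a 0ms delay.

```javascript
// Microtasks (promise callbacks) drain before the next macrotask
// (setTimeout), even when the timeout is 0ms.
const order = [];

setTimeout(() => order.push('macrotask'), 0);
Promise.resolve().then(() => order.push('microtask'));
order.push('sync');

// Once the current script finishes and the loop turns,
// order is ['sync', 'microtask', 'macrotask'].
```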

~~~
eriknstr
>In what order should the logs appear?

Firefox 53 initially produced a log that matched what I expected, but when I ran it a few more times the entries appeared in a different order, which I then read was the order they were actually supposed to appear in.

>Microsoft Edge, Firefox 40, iOS Safari and desktop Safari 8.0.8 log
setTimeout before promise1 and promise2 - although it appears to be a race
condition. This is really weird, as Firefox 39 and Safari 8.0.7 get it
consistently right.

So Firefox 39 may have gotten it consistently right on his machine, Firefox 40 did not on his machine, and Firefox 53 still does not on mine.

------
tejohnso
> A very interesting property of the event loop model is that JavaScript,
> unlike a lot of other languages, never blocks.

That seems like a dangerous, misleading statement.

~~~
baron816
One thing that really helped me understand the JS event loop was realizing that the
callback from an async call CAN block. Something like

    setTimeout(() => {
      var x = 0;
      while (x < 2000000) {
        x += 1;
      }
    });

will block. Event listeners, timeouts, and xhr requests just wait to receive a
response. They're not actually doing anything while that happens. And their
callbacks basically just become 'synchronous' when they get that response.

~~~
saurik
I am highly confused as to what you might have thought other than that the
callback itself could block... would you mind sharing? (I am an educator and
teach a class on programming languages, so my question here is one of some
serious interest: to understand the beginner's mindset better.)

~~~
baron816
I just had a hard time building a mental model of what the async function was
doing, and the analogy I had heard before made it sound more like a
multi-threaded process (something I also didn't fully grasp).

The analogy I had heard was that of a check-in window at a doctor's office: the
nurse would give you a form to fill out and when you were done, you could get
back in line and hand the form back to the receptionist. I think the analogy
was trying to show that there weren't multiple windows to check in at, nor did
you have to stand at the window while you're filling out the form, preventing
people behind you from getting the form. But the reason it's a bad analogy,
and why I was confused, was that it made it appear that you (the patient) are
the callback function, and you're doing some processor intensive work when you
sit down to fill out the form. It would have been clearer had the analogy
specified that you are actually a backend server/web-worker, but even that
doesn't fit right.

So, I think it would be best to avoid using analogies. The event loop isn't
that complicated: async functions just sit around waiting for a response, and
they stick their callback on the event queue when they get that response. When
the call stack is cleared (of its synchronous functions), the first
callback in the event queue moves to the call stack (i.e., it gets called), and
that process repeats until the queue is empty.
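A tiny sketch of that sequence:

```javascript
// Synchronous code runs to completion first; the queued callback
// only runs once the call stack is empty.
const log = [];

setTimeout(() => log.push('callback'), 0);

log.push('sync 1');
log.push('sync 2');

// After the stack clears: log is ['sync 1', 'sync 2', 'callback'].
```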

------
bsaul
Side topic :

The event loop architecture is also heavily used in iOS / Cocoa, although it
is often not well understood by developers. Each thread has an event loop,
including the main UI thread, and many weird behaviors can be understood
better once you know a bit about them.

Which made me wonder whether a simple implementation of agent-based concurrency
in server-side Swift couldn't simply be one agent per event loop, plus a way to
prevent direct calls across agent boundaries. Server is not iOS, but I suppose
some of the language facilities should already be there and would make it easier
to implement.

/side topic

~~~
pacaro
Also win32/16 and X both have event loops at the core

~~~
toast0
Xlib has an event loop at the core, but if you write your client directly to
the socket protocol, you can arrange it however you like.

~~~
pacaro
You could probably do the same with PeekMessage in win32, maybe call it
periodically in a green threading library, or similar. But the most common
mechanism is an event loop. I'm sure people have developed X applications that
talk the protocol directly, but it isn't the typical way to do it.

------
iso-8859-1
This doesn't mention server-side JavaScript at all. There are lots of blocking
routines in Node.js, the standard library is full of them.

But it is interesting that it doesn't mention the oft-repeated meme
"JavaScript is single-threaded". It would be nice to see an example showing
parallel number-crunching without WebWorkers. Is that possible?

~~~
girvo
I haven't read the article, but my simplistic understanding is that in Node.js
your _app_ code is single-threaded, while the async IO happens via an internal
thread pool. So, number crunching in your code will pretty
much always block the rest of your code (but your existing IO routines will
continue, until they complete, and their callbacks can't run until your number
crunching is done). I am very tired, and not thinking straight, so I've
probably messed up the explanation a bit.
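That blocking is easy to demonstrate in Node (a rough sketch; timings are approximate):

```javascript
// A timer due in 10ms cannot fire while the main thread is busy:
// CPU-bound work delays every queued callback.
const t0 = Date.now();
let firedAfter = -1;

setTimeout(() => {
  firedAfter = Date.now() - t0; // ends up ~100, not ~10
}, 10);

const start = Date.now();
while (Date.now() - start < 100) {
  // busy-wait: stands in for synchronous number crunching
}
// Only now can the event loop run the expired timer's callback.
```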

Lately, I've been playing with libev and thread-pools together in Nim; so you
write async/await stuff everywhere with futures, including number crunching,
which gets delegated out to the thread pool and yields back to your main
thread when it's done. It's quite nice, but I forgot how complex this stuff
gets!

~~~
elmigranto
> the async IO that happens is done via an internal thread-pool

Not quite, since using threads internally (a thread per IO operation) would
defeat the purpose of an async system, which is created specifically to avoid
paying the threading cost (context switches, etc.). In layman's terms, the
kernel will simply invoke your C callback function when "stuff happens", and
libuv propagates that to JS in a platform-agnostic manner.

There are still threads and thread pools in V8 and node for stuff like
calculating PI (or, more realistically, password hashing and other number
crunching), but that is mostly unrelated to async IO itself.

Some google queries: select, poll, epoll, kqueue…

~~~
zbjornson
Are you sure the thread pool isn't used for IO? I thought that was its
primary use. This explanation is a bit old but I think still correct:
[http://stackoverflow.com/a/20346545](http://stackoverflow.com/a/20346545) and
[http://dailyjs.com/post/libuv](http://dailyjs.com/post/libuv)

~~~
elmigranto
That is true only for file system calls. Everything else uses a non-blocking,
OS-provided mechanism.

[http://docs.libuv.org/en/v1.x/design.html](http://docs.libuv.org/en/v1.x/design.html)

------
Lerc
In my time I have encountered quite a few descriptions of how the event loop
works in JavaScript.

What I would like to find is _why_ they work like this. What is so
important that this is the way it must be done? We live with browsers that
lock up for a while until they figure out that some code has inadvertently
done a while(true). Promises and similar callback-amelioration techniques took
ages to turn up, when without the event model they might never have been
required at all. Is there a reason for this? Is it a good reason?

~~~
pavlov
The reason is that JavaScript runs within the same event loop as the browser
GUI itself.

The history of JavaScript is that it started as a sort of "quick and dirty"
scripting language at Netscape. Creating a completely separate event model
wasn't possible in those circumstances, so JavaScript ended up being shaped by
the single-threaded lowest common denominator of Win32 and other platforms
that Netscape ran on.

------
migstopheles
This reminded me of Phil Roberts' excellent talk a couple of years ago at
ScotlandJS: "Help, I'm stuck in an event loop". Well worth the 20 minute
watch. [https://vimeo.com/96425312](https://vimeo.com/96425312)

------
jordache
Practically speaking, I think for the most part when folks talk about
blocking execution, they are referring to the browser locking up while a
long-running process is running.

To gracefully establish expectations with the user, a processing indicator will
buy the application some time before the user gets impatient. The setTimeout 0
pattern is useful to delay the long-running process just enough for the browser
to redraw the UI and throw up the processing indicator.
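A sketch of that pattern (names like showSpinner and crunchNumbers are made up for illustration):

```javascript
const trace = [];

// Hypothetical helpers: in a real page, showSpinner would unhide a DOM
// element and crunchNumbers would be the long-running synchronous work.
function showSpinner() { trace.push('spinner shown'); }
function crunchNumbers() { trace.push('crunching'); }

showSpinner();
// setTimeout 0 yields back to the event loop, giving the browser a
// chance to repaint (and actually display the spinner) before the
// blocking work begins.
setTimeout(crunchNumbers, 0);

trace.push('script done');
// trace ends up: ['spinner shown', 'script done', 'crunching']
```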

One can further achieve another state in the processing indicator by
leveraging time delayed CSS animation which runs in a thread that's parallel
to JS. So you could use CSS to augment the content of the processing dialog
while the blocking JS is running.

------
jlward4th
Misleading title. There was actually nothing about concurrency in the article.

~~~
thereIsCon
+1

If the article had used I/O (http request) as an example of event handling
(message queuing), it would have added some concurrency context.

~~~
Pherdnut
setTimeout uses the same message queuing system as XHR, DOM events, or
communicating with stuff that has its own stack like iframes or workers.
That's why a function with a timeout of 0 will not fire until after its
enclosing function has popped. Functions block. Waiting for messages doesn't.
That's the model for JS concurrency.

