
It ain't about the callbacks, it's about the flow control (2013) - majke
https://idea.popcount.org/2013-09-05-it-aint-about-the-callbacks/
======
Shank
I think that, like programming in general, there are different styles that
click for different people. Different analogies, like the "push" vs "pull"
described in this article, come with different trade-offs, and some people
grasp one more easily than the other.

In reality, I just want more options. I know that synchronous IO on UI threads
is bad, but I also know that I can avoid bottlenecks by not pulling from slow
IO sources. Choice means that a problem can be expressed in the most natural
way to a programmer or a team, which ultimately helps convey the ideas better
than trying to work in an unnatural convention.

I love Ruby primarily because you can usually accomplish tasks multiple ways.
You can usually find the most natural way _to you_ and use that, over, say a
prescribed "best" way.

~~~
scj
If you are writing an API that others are incorporating, create equivalent
pull methods for every push where possible (or alternatives if necessary).

The reasoning is simple: sometimes programs get into bad states. When a bad
state is detected, a program _should_ be able to go into a recovery mode and
return to normal. But I've seen push-only libraries (that I could not modify)
where access to the data I'd need could not be guaranteed.

This particularly holds true in a client-server relationship. The client
should have a method of asking the server for help without needing to reset
the session.

Ignoring state effects, a pull model permits recovery by default, while in a
push model recovery has to be planned for.
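
A minimal sketch of what such a dual API could look like (the `Feed` class
and its method names are illustrative, not from any real library): a push
subscriber is invoked as items arrive, while a pull caller asks for the next
item when it is ready, which is what leaves room for recovery.

```javascript
// Sketch of an API offering both push and pull access to the same
// stream of items. Feed, on, read, emit are illustrative names.
class Feed {
  constructor() {
    this.buffer = [];
    this.listeners = [];
  }
  // push style: subscribers are invoked as items arrive
  on(listener) {
    this.listeners.push(listener);
  }
  // pull style: the caller asks for the next item when it is ready,
  // e.g. after recovering from a bad state
  read() {
    return this.buffer.shift();
  }
  emit(item) {
    this.buffer.push(item); // retained for pull consumers
    for (const l of this.listeners) l(item);
  }
}

const feed = new Feed();
feed.on(item => console.log('pushed:', item));
feed.emit(1);
feed.emit(2);
console.log('pulled:', feed.read()); // oldest buffered item: 1
```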

------
convolvatron
the important part of flow control is to accept enough work to keep yourself
busy, but not more than you can eventually retire at that rate, and not more
than would require holding more intermediate state than you have room for.
[edit: those are really two sides of the same coin... ideally you would also
come close to saturating some throughput limit (network/disk/memory
bandwidth), and even better come close to saturating all of them]

if the work includes tasks with variable delay (using the disk, or an external
network service), or uses a variable amount of compute, then you need to look
at creating appropriate adaptation and overall scheduling strategies (e.g.
retire older work first)

so while a queue depth of 1 with everything fully blocking, as in the naive
threading case, is maybe a better place to start than the infinite queue depth
of the naive callback implementation... neither one really gets you where you
need to go without further structure.
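
The bounded-acceptance idea above can be sketched as a queue with a fixed
depth that refuses new work instead of growing without bound (the class name
and the depth are illustrative):

```javascript
// A work queue with a fixed depth that pushes back on the producer
// instead of accumulating unbounded intermediate state.
class BoundedQueue {
  constructor(depth) {
    this.depth = depth;
    this.items = [];
  }
  // returns false when full -- the producer must slow down
  // (backpressure) rather than queue blindly
  offer(item) {
    if (this.items.length >= this.depth) return false;
    this.items.push(item);
    return true;
  }
  // retire older work first (FIFO)
  take() {
    return this.items.shift();
  }
}

const q = new BoundedQueue(2);
console.log(q.offer('a'), q.offer('b'), q.offer('c')); // true true false
console.log(q.take()); // 'a' -- oldest first
```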

the only real frustration I have with the threaded approach in trying to build
the 'right' system is that at some point you have to deal with the system
scheduler being largely workload-oblivious.

once you build your own scheduler, then whether you assume synchronous
returns really is, from a programming perspective, a matter of style, and
probably involves more language-provided safety and less boilerplate.

------
lxe
I don't quite get the mapping of "pull" and "push" and "synchronous" and
"callbacks".

    
    
        foo.asyncPush(callback);
        foo.asyncPull(callback);
    

and

    
    
        const result = await pull();
        await push(data);
    

Both feel similar in terms of idioms, capabilities, etc... I know that
async/await was not a thing in 2013 JavaScript, but as a pattern (promises,
futures, etc...) it's been around for some time.

~~~
ufo
I had the same confusion reading the article.

IMO, the real issue with callbacks is that they split your functions into two
incompatible sets that cannot call each other -- the "async" functions with
callbacks and the "sync" functions without them.

http://journal.stuffwithstuff.com/2015/02/01/what-color-is-your-function/

async/await mitigates this a bit: the two kinds of functions are still
incompatible, but at least the syntax is the same.
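
The incompatibility in miniature (hypothetical function names): a plain
function that calls an async function only ever sees a Promise, never the
value inside it.

```javascript
// Only async code can unwrap the result of an async function.
async function fetchAnswer() {
  return 42;
}

function syncCaller() {
  const v = fetchAnswer();      // v is a Promise, not 42
  return v instanceof Promise;  // the sync world cannot unwrap it
}

async function asyncCaller() {
  return await fetchAnswer();   // only async code can await
}

console.log(syncCaller());               // true
asyncCaller().then(v => console.log(v)); // 42
```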

~~~
maskedSlacker
The two kinds of functions are also more clearly declared.

------
mjpuser
I agree that not having a natural flow of control makes programs more
error-prone and harder to debug. However, he does say things that are wrong,
e.g.: "It's impossible to slow down the pace of accepting new incoming client
connections." You can slow down the pace by doing the following:

    
    
         const http = require('http');
         let count = 0;
         const THRESHOLD = 1;
         const server = http.createServer((req, res) => {
           if (count++ < THRESHOLD) {
             // under the limit: do work, free the slot after 10s
             res.write('success');
             res.end();
             setTimeout(() => count--, 10 * 1000);
           } else {
             // over the limit: refuse the request, undo the increment
             res.write('limit');
             res.end();
             count--;
           }
         });
         server.listen(10000);

~~~
jholman
First, this post is not about program flow control, it's about controlling
data flow. I made the same mistake at first, but the post literally makes zero
sense under the other reading.

Second, your solution does not limit the acceptance of new incoming client
connections, it only limits the frequency with which "work" is done. That
solves some possible problems, true, but it does not solve the problem that
the post is about.

On the other hand, I see no evidence in this blog post that callbacks are the
inherent cause of the flow-control problem; rather, the cause is high-level
languages that abstract away the connection event. Callbacks or no callbacks,
the problem is the level of abstraction.

------
niahmiah
I still like highland.js to solve these issues

