"I have just delegated the bookkeeping to the compiler."
That's not obviously a good thing. Debugging the compiler (or just figuring out why it did something, even if correct) is far more difficult than debugging application code. Given the choice between implementing behavior with application code (or a library function) and adding semantics to the language, I prefer the former because it's much easier to reason about code written in a simple language than to memorize the semantics of a complex language.
This is a nonsensical comment, and I voted it down. The same point could have been made every time languages moved a level higher. This kind of rejection of the powerful in favor of the complex-but-familiar is precisely what Bret Victor warns against in his Future of Programming talk[1].
If anything, `await` makes debugging easier because you don't have to untangle callbacks and jump back and forth. You're not supposed to “debug the compiler” because, well, you know, there are test suites and everything.
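To make the "untangling" concrete, here is a minimal JavaScript sketch of the same two-step sequence written both ways; `step1`/`step2` (and their callback counterparts) are hypothetical stand-ins, not from the blog post:

```javascript
// Hypothetical async steps, implemented as immediately-completing
// operations so the sketch is self-contained.
function step1cb(callback) { callback(null, 1); }
function step2cb(value, callback) { callback(null, value + 1); }

// Callback style: control flow nests one level per step, and each
// level needs its own error check.
let cbResult;
step1cb(function (err, v) {
  if (err) throw err;
  step2cb(v, function (err, w) {
    if (err) throw err;
    cbResult = w;
  });
});

// Await style: the same sequence reads top-to-bottom, and a single
// try/catch around the body would cover every step.
async function step1() { return 1; }
async function step2(v) { return v + 1; }

async function run() {
  const v = await step1();
  return step2(v);
}
```

With two steps the difference is cosmetic; with branching and loops between the steps, the flat version stays readable while the nested one does not.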
Yes, this is something that takes getting used to. Just like `for` loops, functions, classes, futures, first class functions, actors and many other useful concepts and their implementations.
As for your edit, I still can't agree with you.
You're saying:
>I prefer the former because it's much easier to reason about code written in a simple language than to memorize the semantics of a complex language.
The point of `async` is making the semantics more obvious. Is it much easier to reason about Assembler than C? I say it's not. Would it be for somebody with years of experience in ASM and none in C? Yes it would.
I think it just comes down to that. Callbacks seem simpler to you not because they are simpler (try explaining them to someone just learning the language, and you'll see what I mean), but because you got used to them. Even so, error handling and explicit thread synchronization make maintaining callback-ridden code painful. I think setting `Busy` to `false` in a `finally` block is a great example (in the blog post). You just can't do that with nested callbacks; they are not that expressive.
Async lets you apply structured control flow (`for`, `if`, `while`, etc.) to time; that's why it's powerful.
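The blog post's `finally` example is C#; the same pattern translated to JavaScript (with a hypothetical `loadData` step standing in for the real work) looks like:

```javascript
// Sketch of the "reset Busy in finally" pattern from the blog post,
// translated from C# to JavaScript for illustration.
let busy = false;

async function loadData() {
  // stand-in for a real asynchronous operation
  return 'data';
}

async function refresh() {
  busy = true;
  try {
    await loadData();   // may throw; the code still reads top-to-bottom
  } finally {
    busy = false;       // runs whether loadData resolved or threw
  }
}
```

With raw nested callbacks, every exit path (success and each error branch) has to reset `busy` by hand; the `finally` block expresses that invariant once.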
"Callbacks seem simpler to you not because they are simpler (try explaining them to someone just learning the language, and you'll see what I mean), but because you got used to them."
No, they're simpler in the literal sense: they introduce no new concepts into the language or runtime semantics. (The dynamic behavior is still complex, of course.)
"Even so, error handling and explicit thread synchronization make maintaining callback-ridden code painful. I think setting `Busy` to `false` in `finally` block is a great example (in the blog post). You just can't do that with nested callbacks—they are not that expressive."
Right -- nested callbacks aren't the answer, either. In JavaScript (where most of my non-C experience comes from), a good solution is a control flow function:
```javascript
busy = true;
series([
  function (callback) {
    // step 1, invoke callback();
  },
  function (callback) {
    // step 2, invoke callback();
  },
  function (callback) {
    // step 3, invoke callback();
  }
],
function (err) {
  // finally goes here
  busy = false;
  if (err) {
    // ...
  }
});
```
This construct is clear and requires no extension to the language or runtime.
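To underline that point, such a helper fits in a few lines. This is a simplified sketch in the spirit of the `series` function from caolan's async library (only sequencing and error short-circuiting are modeled here; the real library does more):

```javascript
// Minimal `series` control-flow helper: runs the steps one after
// another, stopping at the first error, then calls `done` exactly once.
function series(steps, done) {
  let i = 0;
  function next(err) {
    if (err || i === steps.length) {
      done(err);          // the "finally"-style completion callback
      return;
    }
    const step = steps[i++];
    step(next);           // each step calls next() when it finishes
  }
  next();
}

// Usage mirroring the snippet above, with synchronous steps so the
// example is deterministic:
let busy = true;
const order = [];
series([
  function (callback) { order.push(1); callback(); },
  function (callback) { order.push(2); callback(); },
  function (callback) { order.push(3); callback(); }
], function (err) {
  busy = false;
  if (err) {
    order.push('error');
  }
});
```

The entire mechanism is ordinary application code: no new keywords, no compiler transformation, just a function that threads a continuation through a list.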
This is fundamentally a matter of opinion based on differing values. I just want to point out that there's a tradeoff to expanding the language, and to dispel the myth that callbacks necessarily sacrifice readability when control flow gets complex.
I still don't agree though, I edited my post as well to explain why I think this is exactly the moment you need to tweak the language, and not the libraries. (And this is the point Miguel was trying to make when he differentiated `async` from “futures” libraries, even from the one `async` uses, because they are irrelevant to the discussion.)
While that can be true, I feel like the author inadvertently overstated the amount of work that the compiler is doing here. This is really more of a case of syntactic sugar and not heavy-duty code reordering.