I call shenanigans. Maybe you don't intentionally throw exceptions very often, but that's hardly the point of exceptions.
Javascript is an interpreted language. Everything is an exception. Do you ever fat-finger a function name during development? That's an exception. Ever accidentally try to use "this" when it's pointing at "window"? Exception.
Those simple, natural mistakes can be lurking anywhere. They're far easier to detect and correct when the failure is propagated backward correctly to the right callers.
Also, the DOM. It's a sort of black box that has a mind [implementation] of its own, and if you don't treat it as such, and/or if your code isn't bullet-proof (and honestly, most code isn't), then you'll get exceptions from even routine and mundane operations.
You guys are talking about things that should be discovered during development, in which case I'd rather have the exception thrown and showing in the console than try to write a function to handle them.
Even in that case, it's easier to understand the exception if you have a meaningful stack trace. A good promise implementation gives you that.
Also, I guarantee you will always get exceptions in production that you didn't see in development. Browsers are different, users do things you didn't think to test. If you care about quality, you need a system in place for reporting those exceptions back to the mothership, again including meaningful traces.
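Even something this small does the job; just a sketch, and the endpoint and function names here are made up:

// Sketch only: ship unhandled promise failures back to the server.
// loadUser, renderProfile, and the "/errors" endpoint are invented.
function reportError(err) {
  var xhr = new XMLHttpRequest();
  xhr.open("POST", "/errors");
  xhr.setRequestHeader("Content-Type", "application/json");
  // err.stack carries the long stack trace if the promise library
  // provides one (e.g. Q with Q.longStackSupport = true)
  xhr.send(JSON.stringify({ message: err.message, stack: err.stack }));
}

loadUser()
  .then(renderProfile)
  .fail(reportError)   // Q-style; native promises spell this .catch()
  .done();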
For a more in-depth example of the way that Promises/Futures should work, check out the implementations in Scala or Haskell. The concept of a delayed computation is actually nothing new - the idea has been around for quite some time. I'm actually a little surprised that it's just becoming popular in the Javascript world. From what little experience I've had with reading and writing javascript code, it seems like you are forced by the language into heavy usage of callbacks/continuations. Promises are really the only way to elegantly manage and reason about heavily async code.
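To make that concrete, here's a rough sketch (all the function names are invented) of the same three-step flow, first with nested callbacks and then with a Q-style promise chain:

// Callback style: each async step nests inside the previous one.
getUser(id, function (err, user) {
  if (err) return handleError(err);
  getOrders(user, function (err, orders) {
    if (err) return handleError(err);
    render(orders);
  });
});

// Promise style: the same flow reads as a flat, sequential pipeline,
// and a single handler catches a failure from any step.
getUser(id)
  .then(getOrders)
  .then(render)
  .fail(handleError);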
So I wasn't clear what implementation in Haskell you were talking about, since Googling haskell and promise didn't come up with much. I did not realize that "IVars" and "MVars" are considered types of promises until hitting the Wikipedia Futures and Promises page [1]. While I can see how they are promises, that's not how I thought of them.
Also note that .NET's Task is also a promise, and .NET 4.5 added some powerful syntactic sugar (async/await) that lets you build sequences of async actions in a very clear way.
Not exclusively, since a Future doesn't necessarily have to be monadic to be useful - the Scala version of a Future is (of course) monadic, but you don't necessarily need to know about that to use it.
1) Yeah, I guess that's what semaphores are for, but the way to mitigate that is to block in a different queue. :-) Not saying this is the right way; I'm saying this is part of me refactoring deeply nested blocks. It's the middle ground before doing it properly.
2) Not sure what you mean here. A semaphore has a counter. You can signal for one run and only one if you want.
3) Not sure what this means; I suppose I gotta read up.
They are, but in a similar way to how objects are basically just structs in disguise. You can use semaphores to build promises, but like all other abstractions it's better to have it be shared amongst all the code you use. The "get the result of this computation or tell me when it ends and maybe tell me about progress" pattern is common enough that it might as well get written once and be a generally available tool.
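As an illustration only, a toy version of that shared tool might look something like this (no chaining or real error semantics, just the bare pattern):

// Toy "deferred": resolve once, notify progress, register callbacks.
function defer() {
  var callbacks = [], progressCbs = [], result, done = false;
  return {
    resolve: function (value) {
      if (done) return;
      done = true;
      result = value;
      callbacks.forEach(function (cb) { cb(result); });
    },
    notify: function (progress) {
      progressCbs.forEach(function (cb) { cb(progress); });
    },
    then: function (cb) {
      if (done) { cb(result); } else { callbacks.push(cb); }
    },
    progress: function (cb) {
      progressCbs.push(cb);
    }
  };
}

// Usage: write the pattern once, reuse it for any async operation.
var d = defer();
d.progress(function (pct) { console.log(pct + "% done"); });
d.then(function (value) { console.log("result:", value); });
setTimeout(function () { d.notify(50); }, 100);
setTimeout(function () { d.resolve("hello"); }, 200);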
Unfortunately, due to JavaScript's single-threaded nature, you don't get what I consider to be the most useful part of Futures on other languages: ability to block on async calls while work happens in a different thread. In JS, if an async operation rears its ugly head into your synchronous code, you're still in for some rewriting.
I'm not sure the two are related. The reason you need to rewrite your code in JS is because it lacks coroutines of any sort. That's not related to blocking using threads.
As a concrete demonstration: in ECMAScript 6, which has generators (= shallow coroutines), you'll be able to do transforms like this:
function logUserName() {
  var user = getUserSync();
  console.log(user.name);
}

// oh no, getting a user became async!
var logUserName = Q.async(function *() {
  var user = yield getUserAsync();
  console.log(user.name);
});
Note that the language is still single-threaded, and `getUserAsync()` doesn't block execution.
Generators haven't bought you much here. This example isn't realistic because logUserName() is screwing over its caller. It has no way to report errors, nor can the caller tell when the operation has completed. It's just fire and forget (and hope it was successful).
A better logUserName() would let the caller handle that either by taking a callback or returning a promise. But now you've passed the buck to the caller. And now it has to be async too, all the way up the callstack.
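Just as a sketch, since Q.async already returns a function that yields a promise, the call site could at least do this (onError is a made-up handler):

// Sketch: the caller consumes the promise that Q.async hands back,
// so it can observe both completion and failure.
logUserName()
  .then(function () { /* logging finished */ })
  .fail(onError);   // onError is a hypothetical error handler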
This is the worst thing about async code, regardless of whether it's callbacks or promises: going from sync to async requires you to transform all code all the way up the callstack.
And yes, you always have to keep inserting `yield` all the way up the stack; you can't "hide" the asynchronicity.
I think this is good. It's important to explicitly call out the asynchronous points of execution in your code. But at the same time, the only transformation you end up needing to do is inserting `yield`.
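In practice that just means the caller becomes a generator too; a sketch building on the example above (renderFooter is made up):

// The caller wraps itself in Q.async as well and marks the async point with yield.
var showPage = Q.async(function *() {
  yield logUserName();   // explicit suspension point
  renderFooter();        // runs only after the user name has been logged
});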
YES. Can we just give these poor js programmers threads and queues and concurrency and be done with it. Not for their sake, but so that I can finally stop reading strange rants about strange semantics every time a new half-feature makes it into webkit/firefox.
Also, fix the semicolons. Apparently that's a thing.
Promises and Deferreds are anti-patterns, in my book. They pollute the call stack and explicitly introduce uncertainty. Why would you want to code against an object representing a thing that may or may not be complete? Good software strives for determinism. Passing around "maybes" is the opposite of that.
Promises are a poor solution for people not willing to properly model their tasks.
A far better way is to actually think about what you are trying to achieve and stick with a simple sequential control flow, rather than using some sort of overwrought API hammer to bash nails into your codebase everywhere.
Don't nest inline callback functions. When you pass callbacks, pass references to object functions. Model these asynchronous flows as objects. Don't pass around uncertainty. Embrace events.
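A rough sketch of what that style might look like with a Node-style EventEmitter (fetchUser and reportError are placeholders):

var EventEmitter = require("events").EventEmitter;

// Model the flow as an object that emits named events,
// instead of handing back a "maybe finished" value.
function UserLoader(id) {
  EventEmitter.call(this);
  this.id = id;
}
UserLoader.prototype = Object.create(EventEmitter.prototype);

UserLoader.prototype.load = function () {
  var self = this;
  fetchUser(self.id, function (err, user) {   // fetchUser is hypothetical
    if (err) return self.emit("error", err);
    self.emit("loaded", user);
  });
};

var loader = new UserLoader(42);
loader.on("loaded", function (user) { console.log(user.name); });
loader.on("error", reportError);   // reportError: hypothetical error handler
loader.load();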
All your criticisms can apply to events as well, except for the "model" comment.
On the model comment, modularity definitely makes writing code easier, but it reduces locality of code. It makes control flow harder to understand (and thus code harder to read). Promises 'solve' that by being explicit about the callflow schematic.
Hmm came here thinking we were going to talk about the philosophical implications of keeping/breaking promises and how deals are still made with handshakes over Ommegang Iron Thrones.