
I honestly find "callback hell" a lot easier to follow and understand than the vast majority of fixes everyone is coming up with.

They're just continuations, seriously, what's everyone's problem? You define a function, it gets access to the current scope, it defines the rest of the program flow.

If you feel like your code is nesting too deep, you define the function elsewhere and just reference it by name. Then you don't get access to the current scope.

Why is this so difficult for people?

It's not difficult, it's just gross. And these promises don't solve the main problem, which is that a synchronous function returns its result until, at some point, you add some IO, and now it takes a callback. And everything that calls it now has to take a callback. And hours later all you've done is add a network call to some basic function, but your diff looks like a total rewrite.
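
The ripple looks something like this (a minimal sketch; all names are made up):

```javascript
// Before: a plain synchronous call chain.
function getConfig()  { return { retries: 3 }; }
function getRetries() { return getConfig().retries; }

// After: getConfig now does IO, so it takes a callback -- and every
// caller up the chain has to grow one too, even though the logic is
// unchanged.
function getConfigAsync(cb) {
  setTimeout(function () { cb(null, { retries: 3 }); }, 0); // fake IO
}
function getRetriesAsync(cb) {
  getConfigAsync(function (err, config) {
    if (err) return cb(err);
    cb(null, config.retries);
  });
}
```

Multiply this by every call site and the "total rewrite" diff follows.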

A function that doesn't do IO and a function that does IO sound like totally different functions to me. Why aren't you writing a new function?

That isn't the point. It's that you have some nice code that does |var x = someBigFunction()|, and at some point in the program's life something inside someBigFunction turns async, and you have a whole restructuring to do.

Then that's a breaking change of someBigFunction()'s interface, and it should be treated with care, not just quickly patched over with a promise while hoping for the best. Javascript is single-threaded, so when you wrote someBigFunction() you didn't have to think about parallel side effects. If you suddenly pause the execution of someBigFunc() and allow the user to mess around with the interface, the global state that someBigFunc() is working on might change.

Simply slapping a promise onto such a function only works if the function is pure to start with. Usually that's not the case with slow functions in javascript, since the slowest thing in javascript is messing with the DOM, i.e. global state.

The fact that I changed a calculation from a constant to reading its value from indexeddb shouldn't need to be an intrusive change. It just makes things more likely to break; there isn't some "oh, you decided to do IO, so you deserve a big refactor" point.

And this is why handling state in imperative programs is such a mess. You are fundamentally changing the way state is treated in that application - why shouldn't it require a massive refactor?

Functional programs don't make this easier.

And don't say "monad transformer stacks" somehow solve this problem in an easier way.

Not necessarily because it is functional, but Erlang makes this vastly easier.

I find the attitude of "oh, you changed a tiny piece of logic, you should totally have to refactor every line of code that follows it" very strange. It obviously sucks; there are far better ways to handle it, and they are slowly making their way into the language.

Erlang makes it easier because it enforces no shared state between actors, and each actor has its own independent thread of control.

Haskell definitely makes it easier. You can use preemptive green threads with almost all the ease of cooperatively multitasked code because of the prevalence of immutability.

You get the performance benefits of non-blocking code. The simplicity benefits of blocking code. And (almost) none of the threading hell you get in imperative languages.

Are they really preemptive? How does the scheduler decide when to switch?

Yeah, they are preemptive (though there was a long-standing bug where threads that perform no allocations never get preempted; I believe it is fixed now).

This is some documentation of the scheduler: http://blog.ezyang.com/2013/01/the-ghc-scheduler/

Fair enough. It might be a breaking change, but wouldn't it be nice if, instead of having to rewrite everything into continuation-passing style (a boring and error-prone process!), you could just do something simple like

    var a = await someBigFunc()

or

    a <- someBigFunc()

Javascript in its current state is not built to support CPS programming very well. You often end up needing to reimplement control flow primitives like while loops, break statements and exception handling, since those don't play nice with async code.
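
One concrete example of the mismatch: a try/catch around an async call never sees an error thrown later in the callback, because the try block has already exited by the time the callback runs. A minimal sketch (the event loop is simulated with a plain queue so the behaviour is visible):

```javascript
var queued = [];                          // stand-in for the event loop's queue
function asyncOp(cb) { queued.push(cb); }

var caughtAtCallSite = false;
try {
  asyncOp(function () { throw new Error("boom"); });
} catch (e) {
  caughtAtCallSite = true;                // never reached: nothing has thrown yet
}

// Later the "event loop" runs the callback; the throw happens with no
// try block from the call site anywhere on the stack.
var seenLater = null;
try {
  queued.forEach(function (cb) { cb(); });
} catch (e) {
  seenLater = e.message;                  // caught only because we wrapped the loop
}
```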

This syntax already exists in LiveScript (http://livescript.net/#backcalls) which compiles to straightforward JS.

    a <- doFirstThing()
    b, c <- doSecondThing(a)
    doFinalThing(b + c)
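
For reference, backcalls are pure syntax sugar: the snippet above compiles to ordinary nested callbacks, roughly like this (stub implementations are made up so the example runs):

```javascript
// Hypothetical stubs standing in for real async work.
function doFirstThing(cb)     { cb(2); }
function doSecondThing(a, cb) { cb(a, a + 1); }
function doFinalThing(x)      { return x; }

// What the backcall version desugars to: each `<-` becomes one more
// level of callback nesting.
var result;
doFirstThing(function (a) {
  doSecondThing(a, function (b, c) {
    result = doFinalThing(b + c);
  });
});
```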

This is just not something that happens in real life. You know up-front which functions are going to be async, and you likely make them async even if you don't know, just in case. Since all IO in js is async by default, having code that does a simple calculation and introducing IO into it is actually a very big change that warrants the refactoring you'll be required to do.

> This is just not something that happens in real life.

You never change code in real life? You don't always know what functions are going to be async up-front. They might become async later, 4 levels down, when someone decides to do some IO. What happens then? Exactly. You have to ripple that code all the way up to the top of the API.

This is definitely something that happens to me a lot in real life.

And there isn't some magical point at which a change becomes large enough that it warrants a refactoring of unrelated code just because of the control flow.

It's happened to me in real life. When you want to go from keeping a value in external storage to keeping it in memory, you've gone from async to sync. The reverse happens too. These are not huge conceptual changes. When they require restructuring an entire program, it's reasonable to wonder why that program's structure is so brittle. "Likely make them async even if you don't know, just in case" sounds to me like an admission of this problem. Why would I want to make an in-memory lookup async, thus forcing it to wait in the event queue and defeating the purpose of RAM? The only reason to do that is that the programming model imposes a large complexity tax for not doing it.

Consider the simple case where one wants to look up a value synchronously if one has it in memory, and go get it from storage asynchronously if one doesn't. That's a natural thing to want, but it's problematic in Node. The problem is not syntactic—you can easily write a function that calls back immediately in the one case and asynchronously in the other. It's that sync and async semantics don't compose well, so when you start to do anything a little complicated (e.g. launch N requests and call back when all N have either returned or failed), the two trip over one another. Working in a Lisp that compiles to JS, I had to write some surprisingly complex macros in order to get correct behaviour. I wouldn't dream of writing that JS code by hand.
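
The "launch N requests" case can be hand-rolled over plain callbacks along these lines (a sketch; the helper and op names are invented). Note that the first op calls back before all() even returns while the second calls back later, which is exactly the sync/async mixing described above:

```javascript
// Call cb(errors, results) once every op has reported success or failure.
function all(ops, cb) {
  var results = [], errors = [], pending = ops.length;
  ops.forEach(function (op, i) {
    op(function (err, value) {
      if (err) errors[i] = err; else results[i] = value;
      if (--pending === 0) cb(errors, results);
    });
  });
}

var fromMemory  = function (done) { done(null, "cached"); };   // calls back immediately
var fromStorage = function (done) {                            // calls back later
  setTimeout(function () { done(null, "fetched"); }, 0);
};

all([fromMemory, fromStorage], function (errors, results) {
  // results arrive in op order regardless of completion order
});
```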

Are you serious? This happens in real life all the time.

doing/not doing IO may be an implementation detail for a function. For example, say you have a function isPrime(x) that you implement using some primality test, with no IO. At some point in the future, you may notice that computing the primality test takes a long time, so instead you decide to submit the number to the server which will then return the answer. The function itself remains the same, it takes in a number and returns its primality, however the function is now asynchronous and performs IO.
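
Sketched out (the "server" is faked with setTimeout; in reality it would be an HTTP request):

```javascript
// Synchronous implementation: pure trial division, no IO.
function isPrime(x) {
  if (x < 2) return false;
  for (var i = 2; i * i <= x; i++) {
    if (x % i === 0) return false;
  }
  return true;
}

// Same question, same answer -- but the result now arrives via a
// callback, so every caller has to change shape to match.
function isPrimeAsync(x, cb) {
  setTimeout(function () { cb(null, isPrime(x)); }, 0);
}
```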

The function isn't the same; it now has side effects.

Not conceptually. From the perspective of the rest of the program, isPrime(x) has no side effect regardless of which method you use to implement it. If you are working in a purely functional language, then it makes sense to have introducing IO be a major change, as it has huge potential to make a function impure, and assuming IO is a pure function is roughly the equivalent of unsafePerformIO. However, in this case, we are not even working in a pure functional language. In concept, there is no reason that implementing a function with IO should require writing code outside of that function differently than implementing it with CPU.

Of course it's different. What if the IO fails?

Then the program dies with an exception. What if a memory allocation in a Haskell program fails? It can happen just as well, because allocating memory is an impure operation which you still have to perform from pure functions. Haskell just happens to pretend it has infinite memory available, because forcing the programmer to handle out-of-memory conditions would be impractical and annoying. Haskell's "purity" is just a chimera, albeit a useful one. And it's just as useful to be able to pretend network operations are pure too.

So your whole program just crashes because the prime server was not responding? I think that changes the semantics of isPrime quite a bit.

What if pure code throws an Exception?

If every function is effectively impure, it still sucks.

Generally you are right; it would be far easier to hide these kinds of choices if the language allowed it.

However, javascript currently has a purely event-driven model, which means you have trouble fitting in even a purely CPU-bound computation, as it would block the responsiveness of your UI (or other concurrent requests, if you are doing server-side JS). That's why there was a need to create something like WebWorkers.

This is an example of the "stack ripping" problem, described here (section 3.2): http://www.stanford.edu/class/cs240/readings/usenix2002-fibe...

That is not the main problem, that is you making a change with huge implications.

The big problem with callbacks is that they hold dynamic state and behaviour but, unlike other dynamic (and many static) objects in most languages, do not offer any interfaces to manipulate and reason about them. That's what higher level abstractions like promises provide.

> but, unlike other dynamic (and many static) objects in most languages, do not offer any interfaces to manipulate and reason about them

Sure they do. You just need to start thinking of javascript as a functional language and all this stuff becomes much ... simpler.

First of all, why would you even need to reason about a function's internal state? That's a sign of a leaky abstraction.

Furthermore, every time you want to manipulate the internal state of a function, what you're really after is defining a better API to provide arguments to said function.

And if you still for some reason need to dick around with a function's internal state, just use partial function application.
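
For instance, with Function.prototype.bind you can bake configuration into a function up front instead of reaching into its internals (a sketch; request and its parameters are made up):

```javascript
// A function whose first two parameters are really configuration.
function request(timeoutMs, retries, url, cb) {
  // imagine real IO here; we just echo the settings back
  cb(null, { url: url, timeoutMs: timeoutMs, retries: retries });
}

// Partial application: a pre-configured variant with the timeout and
// retry count fixed, exposing only the arguments callers care about.
var quickRequest = request.bind(null, 500, 1);
```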

You need to reason about async operations when for example you need to do something when multiple async operations have completed; even more often if some may have failed.

If all you have to implement async operations are plain callbacks then yeah, they are not really an abstraction of anything, and you can probably call them 'leaky'. Which is why you need to create real abstractions around them, like promises.

I think we basically agree.

If they were really continuations they'd be fine. But they're not. They're a broken, terrible approximation of continuations.

A real continuation captures the entire execution context including the current call stack.

Instead what you get in javascript is a stack that's completely meaningless. There's no way to automatically route your exceptions to the real calling context, and there's no automatic way for you to be sure you're seeing all of your callee's exceptions.

If you really want to be sure you'll hear about the ultimate success or failure of an asynchronous call, every single asynchronous step needs to manually install its own exception handler, trap any exceptions, and pass them back via a failure callback. You're basically doing by hand what the runtime environment is supposed to do for you (that being the whole point of exceptions).
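
Concretely, the by-hand routing looks like this (error-first callbacks in the Node style; the step functions are invented):

```javascript
var failure = null;
function handleError(e) { failure = e.message; }  // the hand-rolled "catch" block

function step1(cb)    { setTimeout(function () { cb(null, 1); }, 0); }
function step2(n, cb) { setTimeout(function () { cb(new Error("step2 failed")); }, 0); }

// Each step traps its own failure and forwards it manually; by the time
// a callback runs there is no caller stack left for an exception to
// unwind through.
step1(function (err, a) {
  if (err) return handleError(err);       // manual routing after step 1
  step2(a, function (err2, b) {
    if (err2) return handleError(err2);   // manual routing after step 2
    console.log("result", b);             // never reached in this sketch
  });
});
```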

And you do this when you have 10 different steps, each of which should also have an error case. And you do this for hundreds of resources, and you have callback hell.

If you start seeing functions like do1(), do2(), or cb1(), cb2(), and so on, that is the "hell" everyone is talking about. Of course you should name your functions better, but that's not the point. Logically you might not need an extra function, but you are forced to add one because of the way your framework forces you to handle IO.

>They're just continuations, seriously, what's everyone's problem?

That they're badly made continuations, without support from the language.

>If you feel like your code is nesting too deep, you define the function elsewhere and just reference it by name. Then you don't get access to the current scope.

At the cost of moving stuff out of where it's invoked, so making code harder to read.

The problem with callback hell is that it pushes the programmer to write FOR the machine, in the way the machine likes it. Those things should be an implementation detail, and in good languages, they are.

My favourite example of this is loops. It's very annoying to put an async call inside a loop, since you need to rewrite the loop as a recursive function.
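
A sketch of that rewrite (forEachAsync is a made-up helper; this is essentially what libraries like async provide):

```javascript
// The synchronous loop `for (var i = 0; i < items.length; i++) ...`
// becomes a hand-rolled recursion once the body is async: each
// iteration only starts after the previous one's callback fires.
function forEachAsync(items, processAsync, done) {
  (function next(i) {
    if (i >= items.length) return done(null);  // loop finished
    processAsync(items[i], function (err) {
      if (err) return done(err);               // "break" reimplemented by hand
      next(i + 1);                             // the loop's back-edge
    });
  })(0);
}
```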


Once I started using the async library, all of these problems everyone is illustrating basically disappeared.

Something like this? http://taskjs.org/

Unfortunately task.js requires javascript generators which are only implemented in Firefox right now.

I mostly agree, every time I see one of these I find trying to read it as a transformed version of the vanilla callback version is the easiest way to understand it. After all, when callbacks do get out of hand, approximating one of these is exactly how I end up dealing with it.

I wish `let` was a little different, syntactically. With something closer to what OCaml does, you don't need to surround everything in parens and braces. This would take all the visual grossness out of it (which I think is the biggest problem people really have):

    let (cb = function(x) { 
    }) {

versus

    let cb = function(x) { 
    } in

It doesn't look like much, but when you have a number of nested callbacks to define, the latter syntax keeps everything on the same level, whereas the former nests just as poorly as defining them in the parameter list. (I also hate seeing '})' anywhere, but not many people seem to care about that so much. It's why I switch to Monaco if I am doing JS.)

[Edit: formatting.]

This is what I have been thinking. These proposals for ways to refactor always back themselves up by pitting actual use-case examples of callbacks against toy examples of their fix.

"there! See how much cleaner that is?"

No, I don't, and this story's solution in particular is ugly.
