Escape from Callback Hell: Callbacks are the modern goto (elm-lang.org)
247 points by wheatBread on Nov 2, 2012 | 147 comments



If you don't want to use another language and compile down to JavaScript--which is what Elm offers--there are some interesting options that are just JavaScript libraries.

The one I've personally played with is called Arrowlets[1], which introduces a control structure called an arrow that lets you abstract over callbacks and event handling (among other things). Using that style of programming can significantly simplify some fairly common tasks in JavaScript; the drag-and-drop demo on their site is a good motivating example. However, unless you are already familiar with functional programming and arrows, you should probably read some background before diving into the examples.

[1]: http://www.cs.umd.edu/projects/PL/arrowlets/

Another interesting option I've toyed with is RX.js[2]. This is a JavaScript version of C#'s Reactive Extensions (Rx). If you are familiar with LINQ, then this library's style should feel natural immediately. The core idea here is to abstract over events as streams that can be composed and manipulated conveniently.

[2]: http://rxjs.wikidot.com/
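To make the "events as streams" idea concrete, here is a toy sketch (this is not the real Rx API, just the core concept it is built on): a stream is a value you can subscribe to, and combinators like map build derived streams without touching the original handlers.

```javascript
// Toy event stream, illustrative only: `push` fires an event, `subscribe`
// registers a handler, and `map` returns a new, derived stream.
function stream() {
  var subs = [];
  return {
    push: function (v) { subs.forEach(function (f) { f(v); }); },
    subscribe: function (f) { subs.push(f); },
    map: function (g) {
      var out = stream();
      this.subscribe(function (v) { out.push(g(v)); });
      return out;
    }
  };
}

var clicks = stream();                                  // pretend: click events
var labels = clicks.map(function (n) { return 'click #' + n; });
var seen = [];
labels.subscribe(function (s) { seen.push(s); });
clicks.push(1);
clicks.push(2);
// seen is now ['click #1', 'click #2']
```

Real libraries add many more combinators (filter, merge, throttle, and so on), but the composition style is the same.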

If you don't mind using a different language, but want something that mostly looks like JavaScript, another option is FlapJax[3]. I haven't tried it myself, but I've certainly heard good things about it.

[3]: http://www.flapjax-lang.org/

There are probably more options in the same vein that I forgot or don't know about. However, I think these three are a good starting point and could help clean up much of your event-driven JavaScript code in the short term.

Of course, if you are willing to use a language radically different from JavaScript, then Elm is a great option. Once you get used to functional languages with good type systems, there is really no going back ;). The syntax is also simpler and more minimalistic than JavaScript's, which leads to more readable code.


Bacon.js[1] is another functional reactive programming library for JavaScript. It shares many concepts with RX.js, but the library is much smaller. While Bacon.js might not be the most mature library around, I've used it successfully on an (almost) daily basis for a few months.

[1] https://github.com/raimohanska/bacon.js


Promises are a good option as well: https://gist.github.com/3889970

Node.js had promises early on, but they were removed; they're slowly gaining traction again.


My understanding is not that promises weren't valued, but that there were conflicting opinions on how to implement them, so promises were left in user land, in hopes that the community would vet a better solution than could be prescribed by the node team. https://groups.google.com/forum/#!msg/nodejs/jaufClrXU9U/ov5...

q seems to be a popular choice. https://github.com/kriskowal/q


It would be interesting to see the different libraries/languages side by side solving a benchmark problem like the drag and drop example.


Callback hell is certainly a real thing, but that Javascript snippet is a poor example for a goto comparison, since it's pretty much as linear as you can get.

The problems with Javascript and callbacks are usually (in reverse order of importance): noisy verbosity (all those "function()"s), the deeper and deeper indentation, and ensuring execution order on interdependent async steps while keeping it readable. In the blog post's example, you pretty much have a serial chain of dependent steps, and the only thing really wrong with it is that it's ugly and approaching unreadable (syntax highlighting will help quite a bit, though).

I think most people heavily involved with Javascript recognize those problems, though. Promises/deferreds have entered mainstream js usage. They can be somewhat confusing for newcomers, but several libraries can help, as others have pointed out. Language support is evolving: "let" as an option for more control over scoping, the arrow syntax for simpler function expressions, yield for shallow continuations, etc. These will in turn feed back into making libraries smaller and easier to use (I'm really looking forward to when I can use http://taskjs.org/ for all my async needs. Combined with the arrow syntax, I feel like I can pretty much always avoid a callback mess and retain clarity of (high-level) flow at a glance).

This isn't a knock on elm (this article is the extent of my knowledge of it), and it isn't a dismissal of the problem, but it isn't clear to me from this article what is broken in JS that is fixed in elm. In other words, this could be another tutorial on promises in Javascript and make the same points about excessive callbacks being poor coding style and bad news for readability and maintainability.

Syntax that makes clear code the lowest energy state is a feature, but (if we limit our discussion to callbacks) in JS it's partly solved, partly being worked on, and it's not clear to me yet what the energy differential is in typical elm usage between this code and the nasty spaghetti code you can always write if you try.


Personally, I think any solution to callback hell will also need to be in a language that supports returning multiple values. The following convention is simply too useful to me:

  foo.doSomethingAsync(function(err, result) {
    if (err) {
      ...
    }
    ...
  });
You can obviously accomplish this with exceptions, but then you have a million try/catch blocks floating around all over the place and the code becomes even harder to read (and more verbose to boot).


There's a way without exceptions. Use an ErrorT[Promise[A]] monad. I've done this in Scala before and it is so much simpler than the JavaScript convention you pointed out.

It allows you to write code like this:

    val query = "Brian"
    val result = for {
      db <- connectToDatabase
      user <- userFromDatabase(db, query)
      friends <- friendsFromDatabase(db, user.friends)
    } yield friends
Whenever one of the functions above returns an error, the whole expression evaluates to that error. The outcome is either an error (if there was any) or the final value, a Set[User]. No need to manually handle errors until you get to the end result.


Similar to this is the Scala fold convention.

An example from the Play! Framework for form submissions:

    def submit = Action { implicit request =>
      contactForm.bindFromRequest.fold(
        errors => BadRequest(html.contact.form(errors)),
        contact => Ok(html.contact.summary(contact))
      )
    }


That's basically what the promises spec for Javascript gives you.


Exactly. The promises spec defines the Promise monad. The problem is that JavaScript doesn't have monadic syntax, which would make the code a lot more readable.
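For comparison, here is roughly what the Scala-style chain upthread looks like when written with then-chained promises in JavaScript. The three step functions are hypothetical stubs standing in for real async calls; the point is that a rejection anywhere in the chain short-circuits to the single error handler, just like the monadic version, but the plumbing is explicit.

```javascript
// Hypothetical stand-ins for the async steps in the Scala example.
function connectToDatabase() { return Promise.resolve({ name: 'db' }); }
function userFromDatabase(db, query) { return Promise.resolve({ friends: [1, 2] }); }
function friendsFromDatabase(db, ids) {
  return Promise.resolve(ids.map(function (i) { return 'user' + i; }));
}

var result = connectToDatabase().then(function (db) {
  return userFromDatabase(db, 'Brian').then(function (user) {
    return friendsFromDatabase(db, user.friends);
  });
});

// One error handler covers the whole chain; an inner rejection skips here.
result.then(function (friends) { console.log(friends); },
            function (err) { console.log('failed:', err); });
```

The nesting of the inner `then` (needed to keep `db` in scope) is exactly the kind of noise that monadic syntax, or later async/await, removes.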


I've always seen this as an odd convention. Why not

  if (result instanceof Error) {
    ...
  }


There's a very good reason for returning a result as (error_code, resulting_data) instead of just returning a single value of resulting_data and then having the caller check to see if resulting_data is an error.

The problem is that if you count on the caller to check the error value on the data, he might forget to do that. Then the code works just fine in testing and seems correct, but in the error case we merrily continue on and try to pass the error around as if it were data.

When we return two values, error and data, the developer is forced to think about the fact that the result could be an error, and to decide right then what should happen. This is a good place to be thinking about it.

Also, the developer may not even know at a glance that the function could sometimes return an error, without consulting the documentation.

Or more succinctly: it's not about making the code easier to write (which the version you mentioned does); it's instead about making it easier to do the right thing.


  foo.doSomethingAsync(function(result) {
    ...
  }, function(err) {
    ...
  });


Yeah, but try to chain three async functions in that style and it becomes evident what pufuwozu meant when they said monads made the code simpler.


Deferreds in Twisted make it easy to chain asynchronous code with error handling together:

    def parseData(data):
        return doManyFancyThingsWith(data)
    
    def parsedAsyncResult():
        return asyncResult().addCallback(parseData)

    def callback(data):
        print 'Parsed data:', data
    def errback(err):
        print 'Oh no, something went wrong!'
    parsedAsyncResult().addCallbacks(callback, errback)
If an error is raised by either asyncResult() or parseData(), it will be caught by the errback function, even though parsedAsyncResult() didn't explicitly do anything to pass errors along.


That looks like an unnecessarily confusing way of writing this code:

    try:
        data = doManyFancyThingsWith(asyncResult())
        print 'Parsed data:', data
    except:
        print 'Oh no, something went wrong!'
That's the code I would write if I were using Eventlet or Gevent, and it would work just fine. Why settle for less? It's 2012, damn it; we shouldn't have to dick around with deferreds except under exotic circumstances.


This is possible in Twisted, using inline callbacks:

  try:
      data = doManyFancyThingsWith(yield asyncResult())
      print 'Parsed data:', data
  except:
      print 'Oh no, something went wrong!'
I find that I use inline callbacks for the common case and work with deferreds directly only when it makes more sense to handle callbacks explicitly.


I don't know, I find this quite readable:

  foo.doSomethingAsync(function(result) {
    ...
  }, {
  error: function(err) {
    ...
  },
  afterward: function(then) {
    ...
  },
  evenLater: function(wat) {
    ...
  }});
Which is not meant in any way to imply that there's nothing better :) Just that I see no reason to make the comparison to an abnormally-bad version.


When would the "evenLater" callback be called?

Besides, that's still only one function call. When you need to call many async things one after the other (that's what I meant by "chaining"), that nested callback-based code becomes much more convoluted than the analogous "normal" return-based code.

With promise monads [1] or some sort of language transformation that lets you write code that looks sequential but is then transformed into chained callbacks, you can basically get rid of all those extra callback functions and their associated complexity (i.e. all the mangled "success" and "error" callbacks from chained async functions) and have code that looks like normal function calls one after the other. Notice that pufuwozu's example had three consecutive async calls, but it didn't need to create all the callback functions to pass between the calls.

[1]: This is a nice little spec for chainable promises: http://wiki.commonjs.org/wiki/Promises/A. And here's a post that explains how they are used: https://gist.github.com/3889970


The "callback hell" example is a rather tame one, since the code is readable in a single location, just with some nesting. So when I see that the FRP solution is the same amount of code, I'm not certain it actually solves the problem in a complex example. You can still have lift statements scattered around a program just like you can have callbacks to various functions scattered around.

The solution to GOTO was to remove it and replace it with a few control structure forms that enforced locality. I remember converting someone's GOTO-laced code once and basically everything could be re-written with some clever use of do-while with break statements and an occasional flag variable. do-while, while, for, etc. replace GOTO in 99% of cases and enforce the desired code locality for readability.

So what syntactical structure could enforce locality of time-connected variables?

E.g. some idea like this:

    data, err <- $.ajax(requestUrl1)
    if( err ) {
      console.log(err)
      return
    }
    data2, err <- $.ajax(makeRequestUrl2(data))
Where the <- syntax could be like an = statement but say that the assignment and all following statements are placed in a callback.
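Concretely, each `<-` would just be sugar for pushing the rest of the function into a callback. Here is a hand-desugared version of the sketch above; a stub `ajax` (which calls back synchronously, unlike the real thing) stands in for $.ajax so the shape is runnable, and `makeRequestUrl2` is an illustrative helper:

```javascript
// Stub standing in for $.ajax with an error-first callback, for illustration.
function ajax(url, cb) { cb('response for ' + url, null); }
function makeRequestUrl2(data) { return '/second?from=' + data.length; }

var finalData;
ajax('/first', function (data, err) {
  if (err) {
    console.log(err);
    return;
  }
  // everything after the second `<-` becomes the body of this callback
  ajax(makeRequestUrl2(data), function (data2, err) {
    if (err) {
      console.log(err);
      return;
    }
    finalData = data2; // "...rest of the program..." would live here
  });
});
```

Each `<-` costs one nesting level, which is exactly the pyramid the proposed syntax would flatten away.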



Async looks nice, but this (http://www.jayway.com/2012/10/07/asyncawait-in-c-a-disaster-...) makes me wonder whether it is the right solution for this problem. Breaking what most programmers think are the 'laws' of refactoring is not a good thing.


The problem in the linked article is not async/await, but rather specifically async void, which you don't get by following the straightforward sync->async refactoring rules. A normal void method would start returning Task when turned async (and a non-void method returning T would start returning Task<T>). Async void is a special, distinct beast which is fire-and-forget by definition, and it only exists in async land. Perhaps they should have used a special new keyword for that instead, but in any case, it's a completely orthogonal problem.


I think there are a couple of async Javascript dialects that already do this:

http://tamejs.org/

The main advantage that not many people see is that you get backwards compatibility with sync code (if statements, while loops, etc.) basically for "free".


Yes, there are numerous libraries that do this. For instance, Tamejs is now rolled into CoffeeScript, creating IcedCoffeeScript. I didn't think CoffeeScript could get better, but it does.

In addition, there is:

Q (implementation of the CommonJS Promises spec) https://github.com/kriskowal/q

Streamline.js https://github.com/Sage/streamlinejs

Await.js (New, inspired by Tame/IcedCS) https://github.com/greim/await.js

jQuery.Deferred http://api.jquery.com/category/deferred-object/

...And less current libs like Node-promises and FuturesJS.

Here's a comparison from last year: http://www.infoq.com/articles/surviving-asynchronous-program...

And IcedCS's debut on HN: http://news.ycombinator.com/item?id=3522839

In JS devs seem to like to reinvent the wheel. I'd love to see the community rally around one of these solutions, instead of creating new half-finished ones and abandoning them. I'm partial to IcedCS because it's the least verbose, and it also works as a lib outside of CS. The Promises spec has been (partially) implemented by jQuery and I'm sure is here to stay.

Elm's syntax won't easily translate to any existing solution I'm aware of. It is a nice research project, but I will be sticking with established, well-tested methods instead of risking my business on another half-finished wheel.


More than 99% of cases.

If you have functions, loops, and nested loop control, in 100% of cases you can replace GOTO code with code that is equally efficient. With flag variables and if checks you can likewise always replace the code, BUT efficiency and verbosity suffer because of the need to check the flag variable repeatedly in the loop.

As for your syntax suggestion, I am reminded of http://www.chiark.greenend.org.uk/~sgtatham/coroutines.html which manages to use a preprocessor trick to generate working coroutines in C. (This trick is actually used in PuTTY.) I also think that the strategy that you describe would work well enough for my tastes.


Callbacks are different than gotos in that they aren't even remotely close to gotos.

With a callback, you can get into 'callback hell,' however the root cause of that is that you probably don't understand the nuances of properly architecting a solution that involves the power of first-class functions.

JavaScript is nice because the scoping of the callback is easily controllable through your invocation method, and if you've created a good object model then it's relatively easy to maintain an understandable state.

When you explicitly define callbacks like in the examples, you're tightly coupling the response handlers to their requests, which is a relatively poor implementation and will bite you in the ass later on.


Callbacks are different than gotos in that they aren't even remotely close to gotos.

The analogy is that callbacks create non-linear control flow. Using a monadic syntax like in Roy, we can easily have callbacks without having them look non-linear:

    let deferred = {
      return: \x ->
        let d = $.Deferred ()
        d.resolve x
        d.promise ()
      bind: \x f -> x.pipe f
    }

    let v = do deferred
      hello <- $.ajax 'examples/helloworld.roy'
      alias <- $.ajax 'examples/alias.roy'
      return (hello ++ alias)

    v.done console.log
Which compiles into continuation passing:

    var deferred = {
        "return": function(x) {
            var d = $.Deferred();
            d.resolve(x);
            return d.promise();
        },
        "bind": function(x, f) {
            return x.pipe(f);
        }
    };
    var v = (function(){
        var __monad__ = deferred;
        return __monad__.bind($.ajax('examples/helloworld.roy'), function(hello) {
            return __monad__.bind($.ajax('examples/alias.roy'), function(alias) {
                return __monad__.return((hello + alias));
            });
        });
    })();
    v.done(console.log);
Anyway, continuations (with call/cc) are definitely a controlled form of goto. Take a look at an example from Paul Graham:

http://lib.store.yahoo.net/lib/paulgraham/cint.lisp

    ((call/cc
      (lambda (goto)
        (letrec ((start
                  (lambda ()
                    (print "start")
                    (goto next)))
                 (froz
                  (lambda ()
                    (print "froz")
                    (goto last)))
                 (next
                  (lambda ()
                    (print "next")
                    (goto froz)))
                 (last
                  (lambda ()
                    (print "last")
                    (+ 3 4))))
          start))))


All of structured programming consists of various controlled forms of goto.


Yesterday, I found an email from Jon Callas where he was giving advice on where SAML falls in the Chomsky hierarchy of languages:

"If it has backwards gotos in any form, it's Turing-complete. Loops, recursion, etc. are backwards gotos. If-then-else is a forward goto."

I hadn't ever thought of things that way, but I thought it was interesting.


Once you've worked in assembly language, you realize everything is just another form of jmp.


That's the point of the article, though: callbacks don't offer enough structure to effectively reason about.


> Callbacks are different than gotos in that they are aren't even remotely close to gotos.

Well, continuations are the functional equivalent of GOTOs, and callbacks and continuations are closely related, so I don't know how you can make that statement.

Even if you want to argue that they avoid some of the pitfalls of GOTOs (which I'd still contest), you still can't say that they're "not even remotely close to gotos".


Along with a trampoline, callbacks can be used to implement continuation passing style (CPS), which is more or less a fully generalized goto - not just static goto, but computed goto. CPS is sometimes called "goto with parameters".
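A minimal sketch of that idea (all names here are illustrative): each step returns the next thunk to run instead of calling it directly, and the trampoline loop dispatches them, so chained callbacks act like computed gotos without growing the call stack.

```javascript
// Run thunks until a step returns a non-function value (the final result).
function trampoline(thunk) {
  while (typeof thunk === 'function') {
    thunk = thunk();          // "jump" to the next step
  }
  return thunk;
}

function countdown(n, done) {
  if (n === 0) return done('liftoff');               // "goto" the continuation
  return function () { return countdown(n - 1, done); };
}

var result = trampoline(function () {
  return countdown(100000, function (msg) { return msg; });
});
console.log(result); // 'liftoff' -- 100k jumps, no stack overflow
```

Without the trampoline, the same 100,000-deep callback chain would blow the stack in any non-tail-call-optimizing runtime, which is why the loop matters.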


"When you explicitly define callbacks like in the examples, you're tightly coupling the response handlers to their requests, which is a relatively poor implementation and will bite you in the ass later on."

I agree, if anything this is an argument about notification vs delegation. Callbacks should be used when you need to delegate a feature, like when an array object needs to call outside its scope to ask for a sorting function. Notifications should be used when the calling object doesn't care who handles the information, like in the case of an asynchronous load, and in a proper notification environment you wouldn't have the spaghetti code this blog is illustrating.


One way to escape callback hell is to use async.js (https://github.com/eligrey/async.js), which uses yield to abstract away callbacks. It's Firefox-only (JS 1.7+), but that could probably be resolved by using a JS parser and replacing every yield with JS 1.5 callbacks.

Full disclosure: I wrote async.js.


There's also this library: https://github.com/caolan/async


That library uses callbacks and isn't really related other than the name. I don't think that the author googled async.js to see if the library name was already taken.


I think it's more likely he found it was not taken on npm so he took it. Async is, these days, one of the most popular libs on npm.


https://github.com/caolan/async: 3293 stars on Github

https://github.com/eligrey/async.js: 15 stars on Github

Sorry, you lost this one.


Sorry, but what do stars have to do with it? That's like saying DuckDuckGo should be allowed to rename itself as Google if they become more popular than Google. https://github.com/caolan/async had zero stars (i.e. didn't exist) back when I released async.js.


I'm just saying you're going to have a hard time convincing everyone that your project is the "async" JavaScript library. I presume you don't plan on suing them for trademark infringement.

It's annoying (it's happened to me too) but I don't think there's much you can do about it. Especially when the name is incredibly generic.


The problem is not callbacks; the problem is how callbacks exist in Javascript.

Callbacks themselves, when used wisely, can often enhance code readability; hell, LISP has had function references since forever. I think most complaints about callbacks are actually complaints about callbacks in noisy languages, most likely languages with noisy syntaxes like Javascript and Java. Read that way, the disgust towards callbacks does seem to have merit. As the author has pointed out, the two getPhoto() functions at the end express and do exactly the same thing, but obviously the CoffeeScript version reads better.

Callbacks have been around a long time, and I've never heard people complain about them as much as they do about Javascript's. I conjecture the reasons are as follows:

1) There are no named parameters (keyword arguments) in Javascript, so people pass object literals into functions to emulate them.

2) Making lambdas in JS is too easy, but the syntax is too noisy.

3) Oh so many aliasings of this.

4) Self-chainable JS libraries like jQuery make calling multiple functions too easy. But lines can only get so long before becoming unwieldy, so people tend to indent chained method calls multiple times.

5) There are no modules, and global namespace pollution is frowned upon, so people are hesitant to flatten deeply nested callback chains.

6) There are a dozen ways to make a JS class and/or object depending on frameworks, and they are not at all compatible.

All of these "features" coagulate in JS into giant blobs of snot like this:

  <script>
    $(document).ready(function() {
      var $main = $("#main");
      $main.
        hide().
        click(function(e) {
          $.ajax({
            type: "POST",
            dataType: "json",
            contentType: "application/json",
            success: function(data, textStatus, jqXHR) {
               data['rows'].forEach(function(line) {
                   $main.append($("<p>", {
                      className: "row"
                   }).append(line));
               });
            }
          });
        }).
        show();
    });
  </script>
Word to the wise: when you see a shiny new jQuery plugin, stop, think for 3 minutes, and then put it inside a Backbone View or whatever your favorite framework is other than jQuery. If you don't know anything other than jQuery, now is probably the best time to learn a few.


This reminds me of the pain of dealing with python's twisted library, albeit before inline callbacks were implemented.

Inline callbacks as implemented in python can make asynchronous code a lot easier to read: http://hackedbellini.org/development/writing-asynchronous-py...


And using something like Eventlet or Gevent can make asynchronous code downright pleasant. Of the two, Eventlet has the better docs, and both are easy to get started with, and stable enough for production use:

http://eventlet.net/doc/

Give it a try! You almost definitely won't regret it!


Isn't `yield` the solution to all the problems? It makes things responsive and avoids the callbacks entirely.

For example: http://www.tornadoweb.org/documentation/gen.html


It is a solution, kind of, but annoying on a few accounts:

0. it plays hell with threadlocals-as-dynamic-scoping, which is the only way most "modern" languages permit dynamically scoped variables

1. it needs to be correctly passed along to callers, and given it's usually used in dynamically typed languages there's a high risk of forgetting and dropping a yield

2. since yield is also used for iteration, it can be confusing to keep the two uses straight.

It's definitely a better solution than callback hell, though. Another approach is runtime support as in gevent, where the "yielding" is done by the library/IO code and is invisible to the caller. The final two I know of are baking lightweight concurrency and communication into the language itself (Erlang, and to a lesser extent Go and Rust) or monadic systems (Haskell).


You've got to ask, why is async programming used at all? The reason is twofold: first, the C10K problem, where too many threads kill performance, and second, sometimes you want to have multiple tasks run in parallel.

There are fairly simple syntactical solutions to both problems.

  result = doSomeAsyncTask()
  result.yield() // drops the thread, picks it up on response
  // do stuff with result here
This magic yield() doesn't exist (to my knowledge), but if it did, it would preserve linear code and also solve the C10K problem.
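Something very close to this can be sketched with generators driving promises (in the spirit of task.js; the `spawn` driver and all names below are illustrative, not an existing API). Each `yield` drops out of the function until the async result arrives, which is essentially the magic yield() described above:

```javascript
// Drive a generator: each yielded promise's value is fed back in via
// next(), and rejections are thrown back in via throw(), so the generator
// body reads like linear, synchronous code.
function spawn(genFn) {
  return new Promise(function (resolve, reject) {
    var gen = genFn();
    function step(method, arg) {
      var next;
      try { next = gen[method](arg); } catch (e) { return reject(e); }
      if (next.done) return resolve(next.value);
      Promise.resolve(next.value).then(
        function (v) { step('next', v); },
        function (e) { step('throw', e); }
      );
    }
    step('next');
  });
}

// Usage: each `yield` suspends until the promise settles.
spawn(function* () {
  var a = yield Promise.resolve(20);   // stands in for doSomeAsyncTask()
  var b = yield Promise.resolve(22);
  return a + b;
}).then(function (sum) { console.log(sum); }); // logs 42
```

This preserves linear code on top of an event loop, which is the same trade the C10K-motivated async frameworks are after.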

You could have similar code to solve the multiple task problem:

  result0 = doSomeAsyncTask0();
  result1 = doSomeAsyncTask1();

  while (result = getNextCompletedTask(result0, result1)) {
    // do something to result
  }

A Future in Java does something like this, but it doesn't drop threads.


The author offers an alternative that would require a change to the language. Callbacks and their use in "callback hell" are a little different from the use of "goto": "goto" had an obvious alternative, already implemented in the language, that was more logical to use. For Javascript, none of the nice syntactic sugar the author suggests (it reminds me a lot of C#'s recent async changes) exists, and it is not even being proposed for ECMAScript 6.

I agree it would be nice to have that stuff, and that callbacks can get a little hairy, but they are the best solution available at present. Shall we stop developing applications in the meantime while the language catches up, or, even worse, while browsers get around to consistently implementing the changes?


No, we should stop writing javascript and start writing elm - a language that compiles to javascript, works like described in that page, and is the subject generally of the site hosting the article.

At least, that's what the article is saying. There are a few interesting things on the horizon there, and I've been watching elm with some interest.


If you had a program back then full of gotos and an assembly language that didn't support structured blocks, your problem was just as bad.

At its core, FRP only requires higher-order functions to work. (And nearly every modern language supports them to some degree.)

The things that Elm provides are additional niceties:

* A type system with parametric polymorphism (aka generics) helps you spot otherwise nasty runtime errors ("expected a function, got a signal").

* Abstract data types - The only way to create a signal is through the API. The only thing you can do with a signal is pass it around and feed it back into the API.

* Language purity - This one is probably the hardest sell for average languages, since every modern language (save Haskell) allows for unrestricted side-effects. However, as long as you don't bypass the API and update the UI directly, you don't actually NEED purity.

The nice thing about Elm is that it compiles directly to Javascript. You can integrate it into new pages on your existing site without giving up anything. I think the language, and more generally FRP as a basic tool in your toolkit, has a lot of potential.


There are alternatives that don't require extending the language or having to work directly with a mess of nested callbacks. A bunch of flow-control libraries here: http://dailyjs.com/2012/02/20/new-flow-control-libraries/

This is definitely more hairy to use than if the language supported it natively, but not so bad, and can be implemented in a very lightweight way (this approach http://daemon.co.za/2012/04/simple-async-with-only-underscor... is a few lines of code on top of Underscore.js which a lot of sites are using already).

(PS: sorry if OP's article already mentioned this stuff, I can't load it at the moment)


This kind of confuses two important ideas, both discussed by Dijkstra.

The most popular was his article about gotos.

Another idea in his writings was that time-dependent programming was dangerous. He was talking about interrupt based programming specifically, and also addressed the common practice of some hardware to have asynchronous IO. You would start an IO operation, and go on and do other things, come back later and see the values there.

So these two things are not alike. They both cause confusion about what the program is doing, but they are not "like" each other.

To be a better programmer, it is good to read Dijkstra. It is really all about avoiding errors in programming.


As someone who writes C code for a distributed system that uses event-driven callbacks (Zscaler; yes, the binding is at compile time), I was aghast when I saw gotos in the codebase. I mean, I believed programmers were indoctrinated with "using goto = goto hell". I have since realized that gotos, used smartly, cause no problem, say in error handling. I can confidently say I have not seen a single bug caused by improper use of goto in the last 1.7 years. And we do a lot of interesting things in C, including talking to a Postgres database, having a messaging protocol layer, and doing shared-memory manipulation.


One thing I'd like to see from languages that compile to js: Some kind of evidence that output readability is a concern. You can make some beautiful abstractions, but if I can't debug it when things go wrong, then there's no way I would use it.

Not making any comments about Elm's output, but the author clearly doesn't consider it a priority in the post.


But how? Does your C code compile to readable assembly? Does Haskell compile to readable C? The high-level language is created precisely because the low-level language is unreadable for the problems the high-level language solves.


While I agree with your premise, the thing that is driving the parent comment is the fact that when debugging a language such as Elm, there is no tooling to make sense of the output code. This forces the user to use standard JS debugging tools on a pile of JS that was not written to be debugged. This is a contrast to Java or C, where debugging tools and hooks have been built to indicate where the low-level code is mapped to high-level code.

At the end of the day this is more a call for language authors and the people around them to develop tools to debug the language.


This seems like a bad analogy. Dijkstra's paper was in favor of "structured programming", and the problem was that goto was too unstructured. If anything, callbacks are excessively structured.

Also, why is nonlinear code a bad thing? If the program's behavior is nonlinear, the code should be too.


Because people have trouble reasoning non-linearly.


Two observations. First, great, but how do we debug it? How can we see our signals between each step? How about something beyond simple print/logging?

And second, I like his contrast between async and synchronous flows, and his recognition that synchronous-style programming has many benefits that CPS doesn't. However, I think even this style still hasn't solved the bigger problem with asynchronous-style programming: the ability to reuse it easily. In synchronous-style programming I can reuse code and add to that block by calling the method, then adding my code after that method is done.

   ... my code before ...
   var result = someMethod()
   ... my code after ...
It's just that simple with synchronous style. With async style, the author has to provide you a hook onto the end or beginning of this flow (adding a callback param, returning a promise, etc.). I think even with signals you have the same issue. Without explicit hooks you can't hook more code onto it like you can with good old-fashioned synchronous programming. Not to mention that error control flow is turned upside down too.
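To make the contrast concrete, here is a minimal sketch of the hook the async author has to provide. `someMethodAsync` and `reuse` are made-up names, and the "async" call invokes its callback synchronously only to keep the sketch self-contained; a real API would call it later:

```javascript
// Hypothetical async API: the author must accept a callback so callers
// can attach code to the end of the flow.
function someMethodAsync(done) {
  done(null, 42); // a real API would invoke `done` asynchronously
}

function reuse(done) {
  // ... my code before ...
  someMethodAsync(function (err, result) {
    if (err) return done(err); // error flow must be threaded by hand
    // ... my code after ...
    done(null, result + 1);
  });
}
```

Note how even the error path has to be forwarded manually at every level, which is part of what "turned upside down" means here.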

I'm intrigued by the ideas of signals over callbacks, but I don't know if they fix enough problems with callbacks yet.


The following is a dead comment by seanmcdirmid... reposting since it seems perfectly legitimate and I have no idea why this would be voted down. (Glitch?)

Debugging is one of the problems with FRP/signals, or functional code in general. No one has come up with a good dataflow debugger yet, and it might not even be viable. The best you can do is interpose "probes" on your code like you would take measurements with an oscilloscope. Disclosure: I did my dissertation on signals (object-oriented ones to be precise), and am a bit disillusioned with it.

On the other hand, the argument from the declarative community is that you don't need to debug your code.

A better alternative to FRP/signals might be immediate mode user interfaces. Since they are conceptually called on every frame, you get the benefits of FRP while still being able to debug in the old way. On the other hand, they are quite inefficient, though I think we could play some tricks with technology to make them better (memoize, refresh blocks of computations only as needed via dependency tracing).
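A toy sketch of the immediate-mode idea (my own illustration, not from any real UI library): the entire UI is re-described from state on every "frame", so an ordinary breakpoint inside `render` shows the whole picture at once:

```javascript
// Each frame, describe the full UI from current state; `emit` stands in
// for whatever actually draws widgets.
function render(state, emit) {
  emit('label: count = ' + state.count);
  if (state.count < 3) emit('button: increment');
}

// One "frame": rebuild the widget list from scratch.
const widgets = [];
render({ count: 1 }, function (w) { widgets.push(w); });
```

The inefficiency the comment mentions is visible here: everything is rebuilt each frame, which is what memoization and dependency tracing would mitigate.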


So, this Functional Reactive Programming stuff compiles to Javascript, right?

Is the resultant Javascript just a bunch of nested callbacks, as in the example the blog post uses to illustrate spaghetti code?


The 'big problem' with callbacks isn't the callbacks themselves - it's the resulting difficulty for the developer of keeping track of the flow of control.

This is analogous to gotos - programming with goto is a nightmare, but all code 'compiles to' some sort of intermediate form using gotos. The goto is not the problem; programming with goto is.


Yes. Elm is a FRP language that compiles to JavaScript. View source on some of the examples:

http://elm-lang.org/Examples.elm


Actually, callbacks are Intercal's COME FROM instruction (http://en.wikipedia.org/wiki/COME_FROM).

So, it is even worse!


Callback Hell is certainly a real thing. I decided 12 years ago that I would never use callbacks if I could avoid it (the only way you can't avoid it is if an API forces you to use them); I have never looked back. Regular, simple, straightforward imperative flow control is a very powerful thing, and any time you give it up or make it more squishy and indirect, you had better be getting something big in return. Usually you aren't.

That said, what the article proposes as a solution is bananas. You don't need to do crazy functional acronym things; just don't use callbacks. Good C/C++ programmers in the field where I work (video games) do this all the time. It's not hard except that it requires a little bit of discipline toward simplicity (which is not something exhibited by this article!)


Actually, you're wrong. This is bananas (based on the very closely related FRP concept):

http://www.haskell.org/haskellwiki/Reactive-banana

Secondly, you come off sounding defensive and ignorant. This is a new programming paradigm. Hopefully it will give people new ways to approach the same difficult problems. (And I really hope you believe GUIs are inherently difficult...)

No one is twisting your arm to learn FRP. If callbacks work for you in your job, then stick with what works.


When I was in college, and shortly afterward, I was very much into "new programming paradigms" and would get excited about lazy evaluation or continuations or whatever was the new cool idea going around. I have designed and implemented several programming languages built around new / wacky features; the most recent of these was ten years ago.

What you are hearing now is not ignorance, it is experience. I am a tremendously better programmer than I was in those days, and the way I got better was not by getting excited about wacky ideas; it was by really noticing what really works, and what doesn't; by noticing what are the real problems that I encounter in complicated programming projects, rather than what inexperienced / pundit / academic programmers tell me the problems are.

Clearly you didn't really read my comment, though, since you are saying "If callbacks work for you in your job..." and my entire point is that callbacks are terrible.


Also, no, I don't believe GUIs are inherently difficult. I do think most GUI libraries are just terrible though, because they have bought into bad GUI paradigms.

If a GUI is your example of something that is difficult, we are just living in different worlds and it's a challenge to have a productive conversation. I think a difficult task is something like "make this ambitious AAA game run on the PlayStation 3 performantly". That is pretty hard.


FRP isn't new. It is almost as old as, or older than, JavaScript.


Ok, I understand that you don't use callbacks. But you forgot to mention what you use instead of them, in situations such as described in the article. Polling?


It depends on what the application looks like. The most straightforward and robust thing is to block on events. But if you are doing tons of this kind of thing, and the data is relatively self-contained and packageable, then I would do something like spawn a worker thread that gets the data and then puts the data into a result list (that, again, the main program blocks on).


If you are coding for the browser, neither of the options you suggest are available to you. Now what?

Answer: you do it with callbacks, because they are literally the only mechanism available. Welcome (back) to callback hell.


Hence my caveat about not being able to avoid it if an API forces you to use them.

But if I were making a replacement language that runs in the browser, among the highest priorities would be to make it not work via callbacks.


Boost bind and fast delegates do certainly suck. (I used to work in games in a past lifetime.) But this is most certainly a C/C++ problem.

In Objective-C, the @protocol keyword gives the language first class delegation and works really, really well. More details here: http://developer.apple.com/library/ios/#documentation/Cocoa/...

As for the original article, he's talking about callbacks in the context of Node.js. That's not a callback issue. Async is unnatural for the mind to grasp. What did he expect?


I don't see it as fundamentally different. Callback means some or all of: "I don't know when or where I am being called from, or what the state is of the rest of the program at this time." All of those are bad things if you are trying to write robust software, so you want to avoid them unless there's a really good reason.


> Callback means some or all of: "I don't know when or where I am being called from, or what the state is of the rest of the program at this time."

This sounds a lot like what function means.


Nope, because you can do static analysis (a.k.a. searching through your program text) to find out who calls a function and when. The whole point of a callback is that this doesn't work.


How does that help with program state, which changes based on user input?


"I don't know when or where I am being called from, or what the state is of the rest of the program at this time."

Gotcha.

A parent object should own a child object. The parent can directly call a method on a child. The child object shouldn't really know about the parent. Hence, it uses a callback/delegate/protocol.

Callbacks are a mess if there isn't a clear parent to child relationship.


I don't believe that parent/child relationships are a good way to structure programs. I use them sometimes, but very rarely. (Current codebase is 180k lines of C++).


What do you do for networking, though? That seems to be the motivation to introduce callbacky code in web programming.

(FWIW my own architectures tend to turn callbacks into queues and polling.)
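A minimal sketch of that callbacks-into-queues-and-polling architecture, with invented names (the comment doesn't show code). The callback does nothing except record the event; the main loop drains the queue with ordinary linear control flow:

```javascript
const queue = [];

// Registered as the network callback: it only enqueues the message.
function onData(msg) {
  queue.push(msg);
}

// Called from the main loop: drain the queue with linear, easy-to-trace
// handling instead of running logic inside the callback itself.
function pollOnce(handle) {
  while (queue.length > 0) {
    handle(queue.shift());
  }
}
```

The payoff is that all the interesting logic lives in one place with a known call site, rather than being scattered across callbacks.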


Just like a C++ program would, you have a loop that blocks on network input. This has been solved since the 1970s.


And then how do you use the other 99% of your CPU? Just buy extra machines every time a network call hangs? Add threads, and then you have a whole new can of expensive worms.


Here are the slides from Evan's talk at Strange Loop:

https://github.com/strangeloop/strangeloop2012/blob/master/s...


@mpolun - It appears your account has been hell-banned. You need to create a new account so I can upvote your comments:

mpolun> I agree that raw callbacks can get out of hand, but the typical solution in js is to use an event emitter (http://nodejs.org/api/events.html) or promises (like https://github.com/kriskowal/q), the latter of which seems to be pretty close to what this article is talking about. Is there a fundamental difference, or are promises an example of functional reactive programming in a language without direct support for it?


Voting up just for not using a "...Considered Harmful" headline, particularly since the author obviously is familiar with the idea.


Another approach to async IO is CPS (continuation-passing style), in which you write imperative-style code. This imperative-style code is then compiled such that the blocking IO operations are called with callbacks, which are the remainder of that block of code (the continuation) - allowing the calling thread to be reused while blocking for IO. Relies on the continuation having access to the outer/parent/previous-part function via closures.

It'll be interesting to see if people start doing this. Requires people to understand continuations and closures (which more people have exposure to now via JavaScript), and library support.
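A hand-written illustration of the transform described above. `get` and `pick` are made-up primitives, and they invoke their continuations synchronously only so the sketch is self-contained; a real runtime would invoke them later:

```javascript
// Simulated async primitives that take a continuation `k`.
function get(url, k) { k([url + '/1', url + '/2']); }
function pick(list, k) { k(list[0]); }

// Blocking style the programmer writes:
//   var list  = get(url);
//   var photo = pick(list);
//   return photo;
//
// CPS form a compiler would emit: everything after each blocking call
// becomes a closure passed in as the continuation.
function getPhotoCPS(url, k) {
  get(url, function (list) {
    pick(list, function (photo) {
      k(photo); // the original `return`
    });
  });
}
```

The nesting that people write by hand in "callback hell" is exactly this compiler output, written manually.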


Hey, that was my idea too, see my own comment [1]. I've actually developed a (very simplistic) programming language utilizing this approach. I even compiled the interpreter to JavaScript so you can try it out in the browser.

[1] http://news.ycombinator.com/item?id=4736630


Agreed. It's one of the reasons why I wrote lthread. Implementing any non-trivial protocol over a socket, for example, will lead to callback hell. There are plenty of states to transition from/to and it's very easy to get it wrong no matter how well the code is structured.

Forget Javascript for a second, and take a look at a typical http proxy written in C using callbacks to see what a callback hell looks like. If 5 developers are working on such a project, it will require a lot of mental effort from each developer to keep the callback flow up-to-date in their head.


Having programmed in many languages, but most recently Node.js for the last two years, I don't think "callback hell" is as big a problem as the OP makes it out to be.

Debugging huge, complicated, and even poorly written Node applications doesn't feel much different to me than debugging huge, complicated, or poorly written Java applications. Sometimes it's a pain, that's unavoidable. You can prevent it to an extent by writing clean, tested code.

I don't see a strong resemblance between goto and callbacks. The resemblance is just as strong between goto and any function or class.


I've been a JS (which is the land of callbacks) programmer for only a few months now, and I would disagree. Yes you can write deeply nested callback chains, but you don't have to most of the time. There are a couple of ways to avoid it.

* The async library mentioned by other posters helps a lot.

* Libraries like backbone make writing event-driven software easier.

But to sum it up: it's like anywhere else, bad programmers write "callback hell" code, and good programmers don't.


This excuse is as old as programming languages.

* Programmers who need a higher level language (e.g. C instead of assembly) are just bad programmers.

* Programmers who can't manage manual memory allocation are just bad programmers.

* etc


I'm not convinced.

I'm not familiar with the last approach but it seems to me that with a couple of higher-order functions in JavaScript, the code will quickly become more manageable.

  function getPhoto(tag, handlerCallback) {
     asyncChain(requestTag, requestOneFrom)(tag, function(photoSizes) {
       handlerCallback(sizesToPhoto(photoSizes));
     });
  }
     
  getPhoto('tokyo', drawOnScreen);
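The comment leaves `asyncChain` undefined; here is one plausible sketch of such a higher-order helper, assuming each step has the shape `step(input, callback)`:

```javascript
// Compose callback-taking steps left to right; the composed function has
// the same (input, callback) shape as each step.
function asyncChain(...steps) {
  return function (input, done) {
    let i = 0;
    (function next(value) {
      if (i === steps.length) return done(value);
      steps[i++](value, next); // feed each result into the next step
    })(input);
  };
}
```

Usage mirrors the comment's example: `asyncChain(requestTag, requestOneFrom)(tag, cb)` would run the two steps in sequence and pass the final result to `cb`.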


I used a functional-reactive-language-that-compiles-to-javascript for a web app, in a project that lasted about 3 years. It solved callback hell, and solved some UI problems, but created some hard UI problems as well. I'm not sure how this would translate to a server, but some examples anyway.

It seemed impossible to completely escape imperative programming. Mouse click handlers for example were much more natural to write imperatively; changes made in the imperative code would propagate as signals in a reactive way.

Reasoning about what happened around the boundaries of imperative and reactive code was hard, especially as the application grew in complexity. If I have a UI element that depends on two other signals - think of a spreadsheet cell with a formula that depends on two other calculations - do I want to update it as soon as one signal changes? Do I wait for both to change? Do I want different behaviors in different circumstances? It often led to excessive layout changes as values passed through intermediate states, or code being executed multiple times unnecessarily.


Is non-reactive code any better at solving those problems? Those are hard problems.


The number of languages that compile to JavaScript is starting to become disconcerting.

How long until we get tired of adding epicycles and just specify a VM and bytecode standard that all the browsers can implement and all the client-side languages can compile to?


Who would implement in their browser a VM no website used, and who would use a VM no browser implemented?

I'm not sure who you think "we" are, who should specify a VM and bytecode standard. If we wanted a bytecode standard, there doesn't seem anything deeply wrong with Java, and we seem to be in the process of phasing that out of every browser.


It'd have to be bootstrapped the same way other Web standards such as CSS had to be, which admittedly might take a while.

Java and Flash are no-gos because they're not open enough - they're owned by companies that want to maintain tight control over the platforms. They're also too heavily built around their own private APIs; what would really be needed is something that sticks to the same APIs and DOM that JavaScript interacts with, in order to ensure a reasonable migration path for existing technologies.

My guess is that the easiest option would be to base such a standard on a VM and bytecode format that a major browser already does implement: the one from the Spidermonkey Javascript engine.


So alternative VMs and VM progress can go die in a fire, one runtime to rule them all and in the darkness bind them? No Carakan, no JavaScriptCore/SquirrelFish, no Chakra, no V8?

Here's an idea: javascript is your bytecode, and you have every single javascript VM as your runtime.


Perhaps not all of those specific ones, at least not without modification. But you're being hyperbolic; there's absolutely no reason you couldn't have multiple implementations of the VM standard. In fact, there should be - in the grandparent I pointed out that single-owner standards like Java are a no-go in my opinion.

My only thought here is that having a bytecode standard to work from might give a little more flexibility and power to folks who want to experiment with alternative client-side languages.


A byte-code based on Javascript VMs would be less useful than you might think.

For many years people who have tried to build dynamic languages on Java have had to go through horrible pain to build their languages, and have not gained the performance they would want -- only the recent addition of invokedynamic has finally allowed fast implementations.

Personally, I would hope any browser bytecode would have big integers as a primitive, as implementing them in a dynamic language is very slow. However, no Javascript bytecode would have big integers, as they aren't in Javascript!


What Javascript's capable of would really only need to be a starting point.

I certainly wouldn't want to limit what's possible to what Java or Javascript are capable of. Certainly not Java. Java is (IMO) practically defined by stagnation as a result of poor management by the companies that have owned it.

And part of what I'm thinking here is getting away from the limitations imposed by Javascript. For example, Javascript lacks big integers, and is a dynamic language. My thought is that decoupling the HLL from the runtime would allow for faster evolution, because it's hypothetically easier to add new opcodes to a bytecode and VM than it is to make large shifts to a high-level language. Consider the whole dynamic thing in particular - .NET has a great VM with pretty good support for both static and dynamic languages. Java didn't, but again we aren't strictly required to repeat the mistakes of the past. And by just using "Javascript as the bytecode" we commit the reverse sin - that's a 'bytecode' with poor support for static languages.


> But you're being hyperbolic

Not really.

> there's absolutely no reason you couldn't have multiple implementations of the VM standard.

If you define a bytecode standard, you greatly restrict the flexibility of VM implementors and the possible outputs. A stack-based bytecode will preclude register-based VMs, V8 doesn't even use bytecodes, it does all translations straight from source via two different JITs.

> My only thought here is that having a bytecode standard to work from might give a little more flexibility and power to folks who want to experiment with alternative client-side languages.

It doesn't. It might provide them a little more simplicity, because they merely need to generate binary bytecode streams rather than text, but it adds no expressive power, and as I noted it severely limits the flexibility of the runtime implementors.


As a creator of an "altJS" language, I've been thinking about this for a long time. I wanted a VM in the browser for years and was a huge fan of Silverlight/Moonlight but I recently changed my mind.

Higher level languages are easier to optimise for. Creating a bytecode would also mean another backwards compatibility hell and another format is not as general as people would like.


The people behind the standards will tell you that JavaScript is the VM and bytecode standard.



It [synchronous call] basically dodges the issue of time-dependence by just freezing if it is waiting for a value. This blocks everything in the program. Mouse and keyboard input just piles up, waiting to be processed, presenting the user with an unresponsive app. This is not an acceptable user experience.

It depends on what else your user can realistically do before the call completes. In many cases the answer is "nothing." He needs the result of the call before he can proceed in his task. In simple web apps this happens a lot. In those cases I will often just make a synchronous call and avoid all the callback complexity.


What about auto save in gmail, prefetching, loading resources in parallel, updating buddy chat status, realtime stock prices, ...


Callbacks do lead to hell. The backtraces half the time lead you nowhere. I have been writing a Scrapy crawler and sometimes when an exception happens it takes some grepping around to figure out where the value that is wrong was actually generated.

Has anyone touched the Google Chrome code base? It is quite difficult to start debugging problems because of the sheer volume of callbacks. Add to that the stack-traces are massive because of the use of templates and other C++ language features.

Async coding needs to be an abstraction within the language. I am curious how languages manage the shared memory. What about the risk of deadlocks?


> The backtraces half the time lead you nowhere.

Yes. This can be solved by maintaining your own backtrace of callbacks (say, as part of your request object/structure), but it's really something that should be implemented in the language or framework.
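One way to implement the comment's suggestion in plain JavaScript, with invented names (no real framework): carry a trace array on the request object and have each step record itself before running, giving a hand-maintained "backtrace" of callbacks:

```javascript
// Wrap a step so it records its name on req.trace before running.
function traced(name, fn) {
  return function (req, next) {
    req.trace.push(name);
    fn(req, next);
  };
}

// Run a list of (req, next)-style steps in order, then call done(req).
function run(req, steps, done) {
  let i = 0;
  (function next() {
    if (i === steps.length) return done(req);
    steps[i++](req, next);
  })();
}
```

When something blows up mid-flow, `req.trace` tells you which callbacks the request passed through, which the native stack trace can't.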


It is crazy. This is a huge problem that is so easy for the language writers to solve (add a history object to each function call, when compiling/running in debug mode), but no one solves it.


"It is pretty much the same as using goto to structure your programs."

I don't see how a self contained block of code can be equated to goto where the flow can bounce around all over the place.

The example callback "hell" code doesn't look any more complicated than the Elm solution code to me. Maybe the improvement is going over my head and I need to read it again. I just don't see it. Then again, I feel the same way about CoffeeScript. These JavaScript helper languages just seem like an unnecessary added level of complication and cognitive load.


This is the typical "blog sample" complaint: "Four lines of code doesn't prove your conjecture". Of course, putting an entire application in would make for a challenging read.

While I'm not sure I would go as far as saying goto==callback, after wading through a fairly extensive browser extension, I know that callbacks are inherently more difficult to reason about than imperative code.

I haven't decided if the tradeoff of more flexible code versus more difficult reasoning is a good one. In some ways, I think the article got it right: we are expecting the programmer to do a lot of book-keeping that a compiler could do more cleanly. After all, it is possible to write goto-laden code that is just as readable as function-laden code. It's just nice to let the compiler manage all the baggage of frames, stack pointers and jumps for us.


With continuations (I cannot read the Elm page right now as it is stuck), you can write something like:

  return doItWith(getSomething());
where getSomething() does something async. This is very clear and readable. In JS, by contrast:

  var result = function() { ... handle the returned value in here }
  var self = this; 
  this.getSomething(function(something){
   result(self.doItWith(something));
  });
When you are wading through large JS chunks this really gets hard. I do code reviews for my employees and there is a lot of time wasted on fixing bugs related to these callback structures. It's just too easy to make mistakes. I'm not sure if Elm or Opa or ... are final solutions, but they do improve working with JavaScript, IMHO.


Does anyone recall the article a few months back that sort of dealt with callback hell? It was a concept from a different language (which one, I don't recall) that worked basically with returning 2 values and formatting your functions according to a general model.

Yes, that's a bit vague, but that's all I've got to go on. :)


Marginally related but IYI, I recently wrote a JS lib for writing loops with a delay, to avoid one instance of callback hell:

http://brandon.si/code/dilly-dot-js-a-library-for-loops-with...


I think the author is missing the obvious and natural solution: let the programmer write code in a completely synchronous (blocking) style, but have the programming language execute it in an asynchronous and concurrent fashion. Something like this:

  # this appears very synchronous
  function getPhoto(tag) {
      var photoList  = syncGet(requestTag(tag));
      var photoSizes = syncGet(requestOneFrom(photoList));
      return sizesToPhoto(photoSizes);
  }

  # Two getPhoto() "processes" are spawned. After this,
  # the language multiplexes between them via the (single) event loop,
  # in a single OS thread.
  job1 = spawn getPhoto('tokyo');
  job2 = spawn getPhoto('tokyo');

  # Wait for both of them to finish. This too happens in an asynchronous
  # fashion, i.e. calling job1.join() does not prevent the two jobs from
  # running. In effect at this point we have three "processes" running
  # (the main process doing the joins, the job1 process and the job2 process).
  photo1 = job1.join();
  photo2 = job2.join();
  drawOnScreen(photo1);
  drawOnScreen(photo2);
Yes, I know this may be very hard to implement in Javascript/Node, because it fundamentally changes the way the JS engine needs to work.

NOTE: It seems this approach is not new; "green threads" seems to be the right term, and there seem to be a lot of Python-based implementations. Go's goroutines also appear similar (but you can have them run truly in parallel).

BUT note a crucial difference from the "green threads" approach - in my suggested design, there would be no real scheduling. If you perform a sequence of operations and they are all guaranteed not to block, this sequence is automatically atomic, and cannot be interrupted by another "process".

I should also mention this programming language I'm developing, called NCD [1], which employs the same idea. See the in-browser demo [2], click on the Spawn example.

Note that NCD implements a unique extension of imperative programming. Statements in general persist even after they have "returned", and they get the chance to do stuff when they are about to be "deinitialized" (see what happens when you click "Request termination" in the Spawn example). Plus, any statement can trigger "backtracking" to its point within the process, causing automatic deinitialization of any statements that follow (try Count example).

Also, IMO promises [3] are just a hack around the fact that the language is not inherently asynchronous. Seriously, who would prefer:

  doFoo()
      .then(function (foo) {
          return doBar(foo);
      })
      .then(function (bar) {
          return doBaz(bar);
      })
      .then(function(baz) {
          console.log(baz);
      });
Over this?

  function myWork () {
      foo = doFoo();
      bar = doBar(foo);
      baz = doBaz(bar);
      console.log(baz);
  }
  spawn myWork();
[1] http://code.google.com/p/badvpn/wiki/NCD [2] http://badvpn.googlecode.com/svn/wiki/emncd.html [3] https://gist.github.com/3889970


Green threads are indeed very useful, and probably the right way to go. In Javascript, see Tamejs: http://tamejs.org/

NCD looks interesting, in particular the backtracking feature. But I'm a bit concerned by your choice of developing a new language from scratch, with a very awkward syntax (at least at first sight). Why not extend an existing language?

See CPC for instance, which extends the C language with a spawn primitive (disclaimer - this is my PhD thesis project): http://www.pps.univ-paris-diderot.fr/%7Ekerneis/software/cpc... and http://www.pps.univ-paris-diderot.fr/%7Ekerneis/research/ for more details.


Yes, Tamejs and CPC indeed do exactly what I had in mind. CPC is particularly amazing for doing this to C.

Considering NCD: you might have noticed that NCD was never meant to be a general-purpose language, but rather a simple scripting language for controlling other programs and OS configuration. My thinking was that extending a language with the features that define NCD (asynchronous execution, backtracking and extended statement lifetime) would essentially require a complete rewrite of a language implementation. However after seeing tamejs and your CPC I'm not so sure anymore :)

P.S. Check out the Read File example I just added to the NCD demo page; it shows how backtracking can be used to handle errors elegantly.


An alternative method of decoupling callbacks is to make heavy use of events within your program. However, perhaps an event is even more like a goto because it doesn't encapsulate state.


What is the difference between events and callbacks? I thought an event triggers a callback.


Can anybody recommend a way to avoid Callback Hell in Objective-C?


I've been quite happy with Reactive Cocoa: https://github.com/blog/1107-reactivecocoa-for-a-better-worl...


Blocks. But they are dangerous when dealing with threading.


Huh? As far as I can tell, blocks are callback hell in this case, certainly the nested callback lambdas in the JavaScript examples are essentially equivalent to blocks.


Blocks are callbacks in Objective-C


Amen brotha! I totally agree. Callbacks, especially nested callbacks are a nightmare to debug, and especially to refactor.


The solution to all of this is incredibly simple but for some reason very few people seem to utilize it: State machines.


The reason people don't use explicit state machines more is because it's kind of a painful way to write code. Suppose that I want to do a bunch of HTTP GETs, munge the data, and then fire off corresponding HTTP PUTs. I would like to be able to write something like:

    def process(job):
        http.put(job.destUrl, munge(http.get(job.srcUrl)))
    
    print "Processing some jobs"
    workerPool(MAX_CONNECTIONS).map(process, jobs)
    print "All done!"
Wouldn't that be great? And in fact, you can do this easily with something like Eventlet, so I'm not asking too much here.


Amusingly, in an environment with any level of parallelism, the above code poses few problems even if http.get/put are synchronous and blocking. A scheduler handles a bunch of blocked threads perfectly fine, and the series of calls per job isn't inefficient when executed in series.

We hit pain points when we can't have parallelism in our programs, and are forced to make elegant the work-arounds intended to unclog those programs.


Link seems broken, page fails to load.


It seems that web site does not like IE9. I can view the source, but nothing shows up.

Thankfully other browsers work fine. Makes me wonder what they are doing to completely break rendering on IE though...


So essentially, everything about node.js is wrong? Hmm. Didn't see that coming!


To be less inflammatory: using callbacks to specify those low-level APIs certainly makes sense, since the main desire here is to make them simple and interoperable, not necessarily human-friendly.

You can then layer your favourite libraries or tools on top of that to make things better to live with. (For example, a promise-based wrapper for the low level libs or a compiles-to-js dialect)

I do agree that all those posts with people writing callback-based code as though it were the future were kind of sad though.


goto is sweet


Why is it that nearly every time some coding blog writes about something interesting, you've got lots and lots of people criticizing the entry and saying, basically, that "the old way of doing things is just fine, it's just you who are too stupid to understand it"?

I'm sorry but that's not how it works. I've been coding for 20 years or so and I'm always open to new things. The first time I heard about using immutable objects in Java nearly everybody was laughing at the idea, making fun of it. Nowadays it's the contrary that is true. Same thing for using composition instead of concrete inheritance: everybody was there, ten years ago, saying that there was no problem with Java's concrete (implementation) inheritance. Or checked exceptions. Etc.

It's nearly always the same thing: a concept that is not mainstream but that looks very promising is explained in great details and yet people come here, bragging: "You're too stupid to understand ${common way of doing things}, there's no need for ${less commonly used technology}".

This really saddens me.


This is exactly what pg calls "Blub programmer" syndrome:

Programs that write programs? When would you ever want to do that? Not very often, if you think in Cobol. All the time, if you think in Lisp. It would be convenient here if I could give an example of a powerful macro, and say there! how about that? But if I did, it would just look like gibberish to someone who didn't know Lisp; there isn't room here to explain everything you'd need to know to understand what it meant.

http://www.paulgraham.com/avg.html

Hopefully though, people who are open to new ways to do things can achieve harder things.


Thank you. It's really quite depressing to visit Hacker News and see it filled with people who are closed to intelligent ideas for making things better.


This is what I personally call 'Bandwagon Threshold Theory'.

People resist new ideas until a critical mass of people start admiring them.

Only then will they 'jump on the bandwagon' and tell everyone all about how cool this new idea is.


I've been coding for 20 years too and I'm also open to new things. But no one can be proficient in everything anymore so there is a cost to learning something new: we have to forget something we previously spent time to become familiar with.

Don't be sad, it's clearly a natural reaction that has some utility. I agree that there's a better way to phrase it, usually something like "is it really that much better than what we have now?"


"we have to forget something we previously spent time to become familiar with."

http://www.youtube.com/watch?v=8dbDJzDV1CM


Well, there are many existing and more complete solutions to "callback hell", many JS devs worth their salt already use futures/promises/defers in one way or another. The author seems to ignore existing solutions completely.
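An illustrative sketch of the "futures/promises/defers" style the comment refers to: chaining keeps a sequence of async steps flat where callbacks would nest. `step1` and `step2` are made-up stand-ins, not from any particular library.

```javascript
// Two hypothetical async steps, each returning a promise.
const step1 = (x) => Promise.resolve(x + 1);
const step2 = (x) => Promise.resolve(x * 2);

// Callback style would nest: step1(x, (r1) => step2(r1, (r2) => ...)).
// Promise chaining keeps the sequence flat:
step1(1)
  .then(step2)
  .then((result) => console.log(result)); // logs 4
```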

Whether this project will even be finished -- much less be better than other solutions -- remains to be seen. Any features that make Elm's callback solution stand out from the crowd are not being advertised.


I've been following Elm with interest for a while now, and the author's general blogging style is "here are some reasons you'll be happy using Elm" rather than "here is why Elm is better than all the other altjs languages out there". Elm doesn't need to stand out; it just needs to be a pleasant, productive way to do things.


Controversy and strong negative opinions on HN also correlate with cheap attention-grabbing titles. If you're going to use an age-old programming catchphrase to get pageviews, you'd better have something really substantial to say.


I don't get it. His example just uses "let ... in" syntax instead of passing the next function as an argument to the first. It seems to be exactly the same thing: deeply nested functions. He just chose not to indent after the "in", and he has a nicer syntax for nested functions.



