
That isn't the point. It's that you have some nice code that does |var x = someBigFunction()|, and at some point in the program's life, something inside someBigFunction turns async, and you have a whole restructuring to do.



Then that's a breaking change of someBigFunction()'s interface and it should be treated with care, not by quickly patching on a promise and hoping for the best. JavaScript is single-threaded, so when you wrote someBigFunction() you didn't have to think about parallel side effects. If you suddenly pause the execution of someBigFunc() and allow the user to mess around with the interface, the global state that someBigFunc() is working on might change underneath it.

Simply slapping a promise onto such a function only works if the function is pure to start with. Usually that's not the case with slow functions in JavaScript, since the slowest thing in JavaScript is messing with the DOM, i.e. global state.
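To make the hazard concrete, here's a minimal sketch (names hypothetical; an array stands in for shared mutable state like the DOM). Turning a synchronous loop into an async one opens a window at every `await` where that state can change under it:

```javascript
// Hypothetical example: `items` stands in for shared mutable state (think: the DOM).
let items = [1, 2, 3];

// Synchronous version: runs to completion; nothing can interleave.
function totalSync() {
  let sum = 0;
  for (const x of items) sum += x;
  return sum;
}

// "Quickly patched" async version: every await is a point where other
// code (say, a user's click handler) can mutate `items` mid-computation.
async function totalAsync() {
  let sum = 0;
  for (const x of items) {
    sum += x;
    await Promise.resolve(); // yields control to the event loop
  }
  return sum;
}
```

If a handler runs `items.push(...)` while totalAsync is suspended, the result can include elements the sync version could never have seen; the invariant the original code relied on is gone.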

-----


The fact that I changed a calculation from a constant to reading its value from IndexedDB shouldn't need to be an intrusive change. It makes things more likely to break; there isn't some 'oh, you decided to do IO, so you deserve to do a big refactor' point.
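A sketch of the ripple being described (all names hypothetical; a plain async function stands in for the IndexedDB read):

```javascript
// Before: the value is a constant, so everything is synchronous.
function taxRate() { return 0.2; }
function priceWithTax(p) { return p * (1 + taxRate()); }
function renderCart(items) { return items.map(priceWithTax); }

// After: taxRate moves to async storage (IndexedDB, a server, ...) and the
// change ripples -- every function in the call chain must become async,
// and every caller of *those* functions must change too.
async function taxRateAsync() { return 0.2; } // stand-in for an IndexedDB read
async function priceWithTaxAsync(p) { return p * (1 + await taxRateAsync()); }
async function renderCartAsync(items) {
  return Promise.all(items.map(priceWithTaxAsync));
}
```

Nothing about the pricing logic changed, yet every signature from the leaf to the top of the call chain did.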

-----


And this is why handling state in imperative programs is such a mess. You are fundamentally changing the way state is treated in that application - why shouldn't it require a massive refactor?

-----


Functional programs don't make this easier.

And don't say "monad transformer stacks" somehow solve this problem in an easier way.

-----


Not necessarily because it is functional, but Erlang makes this vastly easier.

I find the "oh, you changed a tiny piece of logic, so you should totally have to refactor every line of code that follows it" attitude very strange. It obviously sucks; there are far better ways to handle it, and they are slowly making it into the language.

-----


Erlang makes it easier because it enforces no shared state between actors, and each actor has its own independent thread of control.

-----


Haskell definitely makes it easier. You can use preemptive green threads with almost all the ease of cooperatively multitasked code because of the prevalence of immutability.

You get the performance benefits of non-blocking code. The simplicity benefits of blocking code. And (almost) none of the threading hell you get in imperative languages.

-----


Are they really preemptive? How does the scheduler decide when to switch?

-----


Yeah, they are preemptive (though there was a long-standing bug where threads that perform no allocations don't get preempted; I believe it is fixed now).

This is some documentation of the scheduler: http://blog.ezyang.com/2013/01/the-ghc-scheduler/

-----


Fair enough. It might be a breaking change, but wouldn't it be nice if, instead of having to rewrite everything into continuation-passing style (a boring and error-prone process!), you could just do something simple like

    var a = await someBigFunc()
or

    do
      a <- someBigFunc()
JavaScript in its current state is not built to support CPS programming very well. You often end up needing to reimplement control-flow primitives like while loops, break statements, and exception handling, since those don't play nice with async code.
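For instance, a retry loop: trivial with await, but the loop, the early return, and the error handling all have to be rebuilt by hand in callback style (hypothetical sketch; assumes `op` eventually succeeds):

```javascript
// With await, the ordinary while loop and try/catch just work:
async function retryAwait(op) {
  while (true) {
    try { return await op(); }
    catch (e) { /* loop and try again */ }
  }
}

// In callback style, `while`, `try/catch`, and `return` must all be
// re-encoded as a recursive helper with an error-first callback:
function retryCps(op, done) {
  op(function (err, result) {
    if (err) return retryCps(op, done); // the "loop"
    done(null, result);                 // the "return"
  });
}
```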

-----


This syntax already exists in LiveScript (http://livescript.net/#backcalls) which compiles to straightforward JS.

    a <- doFirstThing()
    b, c <- doSecondThing(a)
    doFinalThing(b + c)
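For reference, those backcalls compile to roughly the ordinary nested callbacks below (stub implementations are hypothetical, added only so the sketch runs):

```javascript
// Stubs so the sketch is runnable (names mirror the LiveScript example):
let final;
function doFirstThing(cb)     { cb(2); }
function doSecondThing(a, cb) { cb(a, a + 1); }
function doFinalThing(x)      { final = x; }

// Roughly what the backcalls compile to: the usual callback pyramid,
// which the LiveScript source gets to write flat.
doFirstThing(function (a) {
  doSecondThing(a, function (b, c) {
    doFinalThing(b + c);
  });
});
```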

-----


This is just not something that happens in real life. You know up front which functions are going to be async, and you likely make them async even if you don't know, just in case. Since all IO in JS is async by default, taking code that does a simple calculation and introducing IO into it is actually a very big change, one that warrants the refactoring you'll be required to do.

-----


> This is just not something that happens in real life.

You never change code in real life? You don't always know up front which functions are going to be async. They might become async later, four levels down, when someone decides to do some IO. What happens then? Exactly. You have to ripple that change all the way up to the top of the API.

-----


This is definitely something that happens to me a lot in real life.

And there isn't some magical point at which a change becomes large enough that it warrants refactoring unrelated code just because of the control flow.

-----


It's happened to me in real life. When you want to go from keeping a value in external storage to keeping it in memory, you've gone from async to sync. The reverse happens too. These are not huge conceptual changes. When they require restructuring an entire program, it's reasonable to wonder why that program's structure is so brittle. "Likely make them async even if you don't know, just in case" sounds to me like an admission of this problem. Why would I want to make an in-memory lookup async, thus forcing it to wait in the event queue and defeating the purpose of RAM? The only reason to do that is that the programming model imposes a large complexity tax for not doing it.

Consider the simple case where one wants to look up a value synchronously if one has it in memory, and go get it from storage asynchronously if one doesn't. That's a natural thing to want, but it's problematic in Node. The problem is not syntactic—you can easily write a function that calls back immediately in the one case and asynchronously in the other. It's that sync and async semantics don't compose well, so when you start to do anything a little complicated (e.g. launch N requests and call back when all N have either returned or failed), the two trip over one another. Working in a Lisp that compiles to JS, I had to write some surprisingly complex macros in order to get correct behaviour. I wouldn't dream of writing that JS code by hand.
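A minimal sketch of that sometimes-sync, sometimes-async lookup (hypothetical names; `setImmediate` stands in for the real async storage read):

```javascript
// Calls back synchronously on a cache hit, asynchronously on a miss --
// exactly the mixed semantics described above.
const cache = new Map();
function getValue(key, cb) {
  if (cache.has(key)) {
    cb(null, cache.get(key));        // synchronous path
  } else {
    setImmediate(() => {             // stand-in for an async storage read
      const value = 'from-storage:' + key;
      cache.set(key, value);
      cb(null, value);
    });
  }
}
```

Callers can no longer rely on ordering: the code after a `getValue()` call may run before or after the callback, depending on cache state, which is precisely why composing N such calls gets hairy.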

-----


Are you serious? This happens in real life all the time.

-----



