Hacker News new | comments | show | ask | jobs | submit login

A function that doesn't do IO and a function that does IO sound like totally different functions to me. Why aren't you writing a new function?

That isn't the point. The point is that you have some nice code that does |var x = someBigFunction()|, and at some point in the program's life something inside someBigFunction turns async, so you have a whole restructuring to do.

Then that's a breaking change to someBigFunction()'s interface, and it should be treated with care, not handled by quickly patching on a promise and hoping for the best. JavaScript is single-threaded, so when you wrote someBigFunction() you didn't have to think about parallel side effects. If you suddenly pause the execution of someBigFunc() and allow the user to mess around with the interface, the global state that someBigFunc() is working on might change underneath it.
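A minimal sketch of that hazard (all names here are hypothetical stand-ins, and a microtask stands in for "the user messing around"): once an await point exists inside the function, other code can run in the gap and mutate the shared state the function is mid-way through reading.

```javascript
// Shared global state that someBigFunction reads.
let items = [1, 2, 3];

async function someBigFunction() {
  const countBefore = items.length; // reads global state
  await Promise.resolve();          // suspension point: other code runs here
  const countAfter = items.length;  // may no longer match!
  return countBefore === countAfter;
}

// The synchronous version could never be interleaved with this push;
// the async version is paused when it runs.
const consistent = someBigFunction();
items.push(4); // runs while someBigFunction is suspended at the await
```

Here `consistent` resolves to false: the push happens between the two reads, something that was impossible before the function turned async.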

Simply slapping a promise onto such a function only works if the function is pure to start with. Usually that's not the case with slow functions in JavaScript, since the slowest thing in JavaScript is messing with the DOM, i.e. global state.

The fact that I changed a calculation from a constant to reading its value from IndexedDB shouldn't need to be an intrusive change. It makes things more likely to break, and there isn't some 'oh, you decided to do IO, so you deserve a big refactor' point.
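A sketch of the change being described, with a stand-in async store instead of IndexedDB (the store and all names here are hypothetical; the shape of the problem is the same):

```javascript
// Before: the value is a constant, and callers are plain functions.
function getThresholdSync() {
  return 42;
}
function isOverLimitSync(x) {
  return x > getThresholdSync();
}

// After: the value comes from async storage, so the function -- and
// every function that calls it, all the way up -- must change shape.
const fakeDb = { get: async (key) => (key === 'threshold' ? 42 : undefined) };

async function getThreshold() {
  return fakeDb.get('threshold');
}
async function isOverLimit(x) {
  return x > (await getThreshold()); // the caller is forced async too
}
```

The calculation itself is unchanged; only where the number comes from changed, yet every caller's signature and control flow has to follow.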

And this is why handling state in imperative programs is such a mess. You are fundamentally changing the way state is treated in that application - why shouldn't it require a massive refactor?

Functional programs don't make this easier.

And don't say "monad transformer stacks" somehow solve this problem in an easier way.

Not necessarily because it is functional, but Erlang makes this vastly easier.

I find the "oh, you changed a tiny piece of logic, so you should totally have to refactor every line of code that follows it" attitude very strange. It obviously sucks; there are far better ways to handle it, and they are slowly making their way into the language.

Erlang makes it easier because it enforces no shared state between actors, and each actor has its own independent thread of control.

Haskell definitely makes it easier. You can use preemptive green threads with almost all the ease of cooperatively multitasked code because of the prevalence of immutability.

You get the performance benefits of non-blocking code. The simplicity benefits of blocking code. And (almost) none of the threading hell you get in imperative languages.

Are they really preemptive? How does the scheduler decide when to switch?

Yeah, they are preemptive (though there had been a long-standing bug where threads that have no allocations don't get preempted, I believe it is fixed now).

This is some documentation of the scheduler: http://blog.ezyang.com/2013/01/the-ghc-scheduler/

Fair enough. It might be a breaking change, but wouldn't it be nice if, instead of having to rewrite everything into continuation-passing style (a boring and error-prone process!), you could just do something simple like

    var a = await someBigFunc()

or

    a <- someBigFunc()
JavaScript in its current state does not support CPS programming very well. You often end up needing to reimplement control-flow primitives like while loops, break statements, and exception handling, since those don't play nicely with async code.
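For instance, here is roughly what "reimplementing a while loop" looks like in callback style (the async step function is a hypothetical stand-in for real IO): the loop becomes recursion, the exit condition becomes an early callback, and try/catch becomes manual error routing.

```javascript
// Pretend this is some async IO step.
function asyncStep(n, cb) {
  setImmediate(() => cb(null, n - 1));
}

// A CPS "while (n > 0)" loop:
function countDown(n, done) {
  if (n <= 0) return done(null, 'done'); // loop condition / "break"
  asyncStep(n, (err, next) => {
    if (err) return done(err);           // manual exception routing
    countDown(next, done);               // manual back-edge of the loop
  });
}
```

With await, the same logic would just be `while (n > 0) { n = await asyncStep(n); }` inside a try/catch.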

This syntax already exists in LiveScript (http://livescript.net/#backcalls) which compiles to straightforward JS.

    a <- doFirstThing()
    b, c <- doSecondThing(a)
    doFinalThing(b + c)
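For illustration, each `<-` backcall corresponds to roughly one level of callback nesting in the compiled JS. The stub implementations below are hypothetical, added only so the shape is runnable:

```javascript
// Hypothetical stubs standing in for the three functions above.
function doFirstThing(cb)     { cb(1); }
function doSecondThing(a, cb) { cb(a, 2); }
function doFinalThing(x)      { return x; }

// Roughly what the backcalls compile to:
let result;
doFirstThing(function (a) {
  doSecondThing(a, function (b, c) {
    result = doFinalThing(b + c);
  });
});
```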

This is just not something that happens in real life. You know up front which functions are going to be async, and you likely make them async even if you don't know, just in case. Since all IO in JS is async by default, taking code that does a simple calculation and introducing IO into it is actually a very big change that warrants the refactoring you'll be required to do.

> This is just not something that happens in real life.

You never change code in real life? You don't always know up front which functions are going to be async. They might become async later, four levels down, when someone decides to do some IO. What happens then? Exactly: you have to ripple that change all the way up to the top of the API.

This is definitely something that happens to me a lot in real life.

And there isn't some magical point at which a change becomes large enough that it warrants a refactoring of unrelated code just because of the control flow.

It's happened to me in real life. When you want to go from keeping a value in external storage to keeping it in memory, you've gone from async to sync. The reverse happens too. These are not huge conceptual changes. When they require restructuring an entire program, it's reasonable to wonder why that program's structure is so brittle. "Likely make them async even if you don't know, just in case" sounds to me like an admission of this problem. Why would I want to make an in-memory lookup async, thus forcing it to wait in the event queue and defeating the purpose of RAM? The only reason to do that is that the programming model imposes a large complexity tax for not doing it.

Consider the simple case where one wants to look up a value synchronously if one has it in memory, and go get it from storage asynchronously if one doesn't. That's a natural thing to want, but it's problematic in Node. The problem is not syntactic—you can easily write a function that calls back immediately in the one case and asynchronously in the other. It's that sync and async semantics don't compose well, so when you start to do anything a little complicated (e.g. launch N requests and call back when all N have either returned or failed), the two trip over one another. Working in a Lisp that compiles to JS, I had to write some surprisingly complex macros in order to get correct behaviour. I wouldn't dream of writing that JS code by hand.
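A sketch of the hazard described above (cache and storage are hypothetical stubs): a lookup that calls back synchronously on a cache hit but asynchronously on a miss gives callers an inconsistent ordering, which is exactly why sync and async semantics compose badly.

```javascript
const cache = new Map([['a', 1]]);

// Problematic: callback timing depends on the cache state.
function getUnpredictable(key, cb) {
  if (cache.has(key)) return cb(null, cache.get(key)); // synchronous path
  setImmediate(() => cb(null, null));                  // async path (storage stub)
}

// Common fix: always defer, even on a hit, so the callback runs on
// the same schedule in both cases.
function getConsistent(key, cb) {
  if (cache.has(key)) return queueMicrotask(() => cb(null, cache.get(key)));
  setImmediate(() => cb(null, null));
}
```

With `getUnpredictable`, code after the call may run before or after the callback depending on data; `getConsistent` pays a small deferral cost for a predictable ordering.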

Are you serious? This happens in real life all the time.

Doing or not doing IO may be an implementation detail of a function. For example, say you have a function isPrime(x) that you implement using some primality test, with no IO. At some point in the future, you may notice that the primality test takes a long time to compute, so instead you decide to submit the number to a server, which will then return the answer. The function itself remains the same: it takes in a number and returns its primality. However, the function is now asynchronous and performs IO.
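The isPrime example, sketched out. The server call is stubbed with a local async function, since the point here is the changed shape of the function, not the IO itself:

```javascript
// First implementation: pure, synchronous trial division.
function isPrimeLocal(x) {
  if (x < 2) return false;
  for (let d = 2; d * d <= x; d++) {
    if (x % d === 0) return false;
  }
  return true;
}

// Second implementation: same question, same answer, but now async --
// and that leaks into every caller's control flow.
async function isPrimeRemote(x) {
  // stand-in for something like: fetch('/isPrime?x=' + x)
  return isPrimeLocal(x);
}
```

A caller of isPrimeLocal writes `if (isPrime(7)) ...`; a caller of isPrimeRemote must write `if (await isPrime(7)) ...` and itself become async, even though the contract "number in, primality out" never changed.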

The function isn't the same; it now has side effects.

Not conceptually. From the perspective of the rest of the program, isPrime(x) has no side effects regardless of which method you use to implement it. If you are working in a purely functional language, then it makes sense for introducing IO to be a major change, as it has huge potential to make a function impure, and assuming IO is pure is roughly the equivalent of unsafePerformIO. However, in this case we are not even working in a purely functional language. In concept, there is no reason that implementing a function with IO should require writing the code outside that function any differently than implementing it with local computation would.

Of course it's different. What if the IO fails?

Then the program dies with an exception. What if a memory allocation in a Haskell program fails? It can happen just as well, because allocating memory is an impure operation that you still have to perform from pure functions. Haskell just happens to pretend it has infinite memory available, because forcing the programmer to attempt to handle out-of-memory conditions would be impractical and annoying. Haskell's "purity" is just a chimera, albeit a useful one. And it's just as useful to be able to pretend network operations are pure too.

So your whole program just crashes because the prime server was not responding? I think that changes the semantics of isPrime quite a bit.

What if pure code throws an Exception?

If every function is effectively impure, it still sucks.

Generally you are right: it would be far easier to hide these kinds of choices if the language allowed it.

However, JavaScript currently has a purely event-driven model, and this means that you have trouble fitting in even a purely CPU-bound computation, as it would block the responsiveness of your UI (or other concurrent requests if you are doing server-side JS). That's why there was a need for something like Web Workers.
