That isn't the point. It's that you have some nice code that does |var x = someBigFunction()|, and at some point in the program's life something inside someBigFunction turns async, and you have a whole restructuring to do.
Then that's a breaking change to someBigFunction()'s interface, and callers should expect to change along with it.
The fact that I changed a calculation from a constant to reading its value from IndexedDB shouldn't need to be an intrusive change. It makes things more likely to break, and there isn't some "oh, you decided to do IO, so you deserve a big refactor" point to it.
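For concreteness, here's a sketch of the kind of change I mean (the function name and the 'config' store are made up):

    // Before: callers just use the return value.
    function getThreshold() {
        return 42;
    }
    var limit = getThreshold() * 2;

    // After: the value now comes from IndexedDB, so the function has
    // to hand back a Promise, and every caller gets rewritten.
    function getThreshold(db) {
        return new Promise(function (resolve, reject) {
            var req = db.transaction('config')
                        .objectStore('config')
                        .get('threshold');
            req.onsuccess = function () { resolve(req.result); };
            req.onerror = function () { reject(req.error); };
        });
    }
    getThreshold(db).then(function (threshold) {
        var limit = threshold * 2; // the rest of the caller moves in here
    });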
Not necessarily because it is functional, but Erlang makes this vastly easier.
I find the "oh you changed tiny piece of logic, you should totally have to refactor every line of code that follows it" very strange. It obviously sucks, there are far better ways to handle it and they are slowly making it into the language.
Fair enough. It might be a breaking change, but wouldn't it be nice if, instead of having to rewrite everything into continuation-passing style (a boring and error-prone process!), you could just do something simple like
var a = await someBigFunc()
or
a <- someBigFunc()
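rather than rewriting it into the continuation-passing shape, where everything after the call has to move inside a callback (handleError and doSomethingWith are just stand-ins):

    someBigFunc(function (err, a) {
        if (err) return handleError(err);
        // Everything that used to follow the assignment now lives
        // here, and the caller's caller gets restructured the same way.
        doSomethingWith(a);
    });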
This is just not something that happens in real life. You know which functions are going to be async up-front, and you likely make them async even if you don't know, just in case. Since all IO in JS is async by default, taking code that does a simple calculation and introducing IO into it is actually a very big change, one that warrants the refactoring you'll be required to do.
> This is just not something that happens in real life.
You never change code in real life? You don't always know which functions are going to be async up-front. They might become async later, 4 levels down, when someone decides to do some IO. What happens then? Exactly: you have to ripple that change all the way up to the top of the API.
It's happened to me in real life. When you want to go from keeping a value in external storage to keeping it in memory, you've gone from async to sync. The reverse happens too. These are not huge conceptual changes. When they require restructuring an entire program, it's reasonable to wonder why that program's structure is so brittle. "Likely make them async even if you don't know, just in case" sounds to me like an admission of this problem. Why would I want to make an in-memory lookup async, thus forcing it to wait in the event queue and defeating the purpose of RAM? The only reason to do that is that the programming model imposes a large complexity tax for not doing it.
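To put that tax in concrete terms, here's a sketch with a made-up cache:

    var cache = new Map([['answer', 42]]);

    // Synchronous: the value is usable on the very next line.
    var hit = cache.get('answer');

    // "Async just in case": the same lookup now resolves on a later
    // microtask, after the current call stack unwinds, even though
    // nothing actually had to wait.
    function getAsync(key) {
        return Promise.resolve(cache.get(key));
    }
    getAsync('answer').then(function (value) {
        // only here can the rest of the computation continue
    });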
Consider the simple case where one wants to look up a value synchronously if one has it in memory, and go get it from storage asynchronously if one doesn't. That's a natural thing to want, but it's problematic in Node. The problem is not syntactic—you can easily write a function that calls back immediately in the one case and asynchronously in the other. It's that sync and async semantics don't compose well, so when you start to do anything a little complicated (e.g. launch N requests and call back when all N have either returned or failed), the two trip over one another. Working in a Lisp that compiles to JS, I had to write some surprisingly complex macros in order to get correct behaviour. I wouldn't dream of writing that JS code by hand.
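Here is a sketch of the function in question (store.fetch stands in for whatever async storage API is available):

    var cache = new Map();

    function lookup(key, cb) {
        if (cache.has(key)) {
            cb(null, cache.get(key)); // calls back synchronously
        } else {
            store.fetch(key, function (err, value) { // calls back later
                if (!err) cache.set(key, value);
                cb(err, value);
            });
        }
    }

Writing lookup is easy; composing N such calls is where the sync and async paths trip over each other, because any caller that does work after lookup() returns behaves differently depending on which branch ran.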
Doing or not doing IO may be an implementation detail of a function. For example, say you have a function isPrime(x) that you implement using some primality test, with no IO. At some point in the future, you may notice that computing the primality test takes a long time, so instead you decide to submit the number to a server, which will then return the answer. The function itself remains the same: it takes in a number and returns its primality. However, the function is now asynchronous and performs IO.
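Sketched out (the endpoint is made up), the question and the answer are unchanged, but the signature is not:

    // Local, synchronous: trial division, no IO.
    function isPrime(x) {
        if (x < 2) return false;
        for (var i = 2; i * i <= x; i++) {
            if (x % i === 0) return false;
        }
        return true;
    }

    // Remote, asynchronous: same input, same answer, but the result
    // is now a Promise, so every caller has to be rewritten.
    function isPrime(x) {
        return fetch('/isPrime?x=' + x)
            .then(function (res) { return res.json(); });
    }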
Not conceptually. From the perspective of the rest of the program, isPrime(x) has no side effects regardless of which method you use to implement it. If you are working in a purely functional language, then it makes sense for introducing IO to be a major change, as it has huge potential to make a function impure, and assuming IO is a pure function is roughly the equivalent of unsafePerformIO. However, in this case we are not even working in a purely functional language. In concept, there is no reason that implementing a function with IO should require writing the code outside of that function differently than implementing it with the CPU.
Then the program dies with an exception. What if a memory allocation in a Haskell program fails? That can happen just as easily, because allocating memory is an impure operation which you still have to perform from pure functions. Haskell just happens to pretend it has infinite memory available, because forcing the programmer to attempt to handle out-of-memory conditions would be impractical and annoying. Haskell's "purity" is just a chimera, albeit a useful one. And it's just as useful to be able to pretend network operations are pure too.
Generally you are right: it would be far easier to hide these kinds of choices if the language allowed it.