As far as I can tell, the argument is that allowing side effects inside functions allows programmers to make mistakes, and therefore "mostly functional" programming is broken.
I'll happily extend this: "fully functional" programming does not work either, because I can define a mathematical function that divides by 0, and that function does not work. I'd even go a step further: all programming does not work, because I can solve a problem the wrong way if I make a mistake.
1/0 is not infinity, it is undefined. The limit of 1/x as x tends to 0 from above is infinity, but there is no value for 1/0, which is why doing a calculation that involves a divide by 0 is mathematically invalid. That Haskell lets you do it and gives you an answer of Infinity is OK, but if it were in a calculation for the number of apples you can eat, it would be a useless answer. You could end up with that if you made a mistake in defining your logic. In the same way, you can make a mistake when doing all manner of other things, such as using a non-pure function or adding instead of subtracting. They're all forms of logic error that are either allowed or disallowed by tools. The tool isn't broken if it lets you make mistakes, though.
A power drill will let you drill a nail into your leg. That doesn't make power drills broken. A smart drill that turned itself off if your leg was in front would be better - but we still use drills without that feature because there are trade-offs when you make your drill do that. There are trade-offs in functional languages too. That doesn't make C# "broken" for being a tool that lets you drill into your leg if you point it there.
The standard for floating point, IEEE 754, defines X/0, where X != 0, as Infinity; 0/0 is defined as NaN. I disagree quite a bit with the philosophy of a lot of functional programming, but this is an area where they're right.
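You can see exactly that with Doubles in GHCi:

λ 1/0
Infinity
λ (-1)/0
-Infinity
λ 0/0
NaN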
Well then your formula which contains a division by zero is mathematically invalid as well. What do you think is valid - a floating point exception? That's how the hardware handles it.
I think you're being overly pedantic on this specific example; division by zero is a special case. Floating point numbers on all architectures are only approximate - 1/7 times 7 is seldom exactly equal to 1, in any language - but that doesn't mean floating point math on a computer is useless.
Sure, but that wasn't my point - my point is that if you're calculating the number of apples you can eat and you make an error in the logic and end up with a calculation of x/y with x=22.9 and y=0, you get an exception or an answer of 'infinity'.
My point was: either answer is obviously wrong. You can't eat infinity apples any more than you can eat undefined apples. It wasn't the real answer to the question you were trying to solve. Haskell (or whatever language) allowed you to enter logic that gave an invalid answer. You used the mathematical and functional operators to blow off your leg. You could probably use a more constrained language dealing specifically in "apples and eating" that would prevent you from getting an answer that wrong. Obviously you'd have trade offs in using that language though.
So let me try again: Haskell isn't any more a broken language because it lets you calculate apples incorrectly than C# is a broken language because you can print things multiple times.
You can blow up the runtime with integer division by zero, however.
λ 1 `quot` 0
*** Exception: divide by zero
With `1/0`, the literals default to `Double` because `(/)` requires a `Fractional` type, so you get Infinity rather than an exception.
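You can check in GHCi:

λ :t (/)
(/) :: Fractional a => a -> a -> a
λ :t 1/0
1/0 :: Fractional a => a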
Also, partial functions like `head` or `fromJust` can burn you:
λ head []
*** Exception: Prelude.head: empty list
The latter is a legacy behavior that is generally considered undesirable. The former is more debatable; some would say that you should avoid integer division by an unknown value as much as possible.
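A common way to avoid both is to make the failure explicit in the type rather than crash - a rough sketch (the names safeHead and safeDiv are just illustrative; libraries like safe offer similar helpers):

safeHead :: [a] -> Maybe a
safeHead []    = Nothing
safeHead (x:_) = Just x

safeDiv :: Integral a => a -> a -> Maybe a
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `quot` y)

The caller is then forced to deal with the Nothing case instead of discovering the exception at runtime.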
In Haskell, exceptions in non-IO code (other than deliberate hard crashes) are generally considered a wart to be avoided and mitigated as much as possible (because they interfere with functional programming).
His point was that you cannot guard against bad programmers. While I'm sure we'd all agree it's preferable that languages enable developers by throwing up the fewest hidden traps, at some point we need to stop blaming our tools like the proverbial bad workman.
Mostly functional programming is broken because it cannot solve the problem it sets out to, namely controlling side effects. Because the language type system does not control them, "mostly functional" programming is no more useful than standard imperative programming.
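In a language that does control them, the difference is right there in the types; e.g. in Haskell:

reverse  :: [a] -> [a]             -- pure: no side effects possible
readFile :: FilePath -> IO String  -- effectful: IO shows up in the type

In a "mostly functional" language nothing in the signature tells you which kind of function you're calling.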
I feel like the author makes it easy for himself by defining both what other languages try to achieve by being mostly functional and how they fail to achieve it. I don't believe his assumptions are right.
It's not like the designers of LINQ, Erlang, F#, OCaml, or even Python's functools/itertools said: we're going to be as pure as possible and functional to get that almost-purity. Some problems are just easier to solve with a functional-like approach. Whether you're doing it in a pure way or not doesn't matter. Some languages give you a way to write that functional-like solution. I doubt many people think of purity while using them, but problems get solved. And we're still better off than writing nested loops instead.
For example, I'm happy to write:
items.filter(f1)
     .map(f2)
     .map(f3)
     .reduce(...
and I really really don't need or want an IO monad to be returned from f2 if it happens to need to log some warnings. It would be completely counterproductive.
> and I really really don't need or want an IO monad to be returned from f2 if it happens to need to log some warnings. It would be completely counterproductive.
Are you sure you don't? This means that anything calling your function now produces log entries, but it's not obvious without reading its code.
Allowing side-effects saves you time now, but you might hate yourself later when trying to figure out why code in one part of your codebase is magically affecting code in another part.
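For what it's worth, the pure alternative doesn't have to mean IO everywhere. A rough sketch in Haskell using Writer from the mtl package (f2's body here is invented just for illustration):

import Control.Monad.Writer  -- mtl

-- hypothetical step that may produce warnings alongside its result
f2 :: Int -> Writer [String] Int
f2 x
  | x < 0     = tell ["negative input: " ++ show x] >> return 0
  | otherwise = return (x * 2)

-- the warnings come out with the results, instead of as a hidden effect
pipeline :: [Int] -> ([Int], [String])
pipeline items = runWriter (mapM f2 (filter even items))

Anything calling pipeline can now see from the type that warnings may be produced, without reading its code.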
Why should anything calling f2 care whether it can produce a warning or not? If the contract is simple - return a value - then that's all we normally care about.
If logs are not convincing, then think about profiling. One day you want to know what kind of values are computed by some function. Same story: the impure version just adds stats.record(x); the pure version requires redesigning around IO.
> Why should anything calling f2 care whether it can produce a warning or not? If the contract is simple - return a value, then that's all we normally care about.
What if you call f2 multiple times? What if you call f2 an unpredictable number of times? What if you call f2 in a situation where the logging system isn't initialised?
The parameters and return value are not the only things that matter.
These aren't situations specific to the pure/impure issue. You have to know how you deal with them in either case. The uninitialised case is a specific case of: what if your logging target goes away / never existed. Signatures won't help here, because that's a dynamic property of the system.
Who says that "Mostly functional programming" sets out to control side effects?
How the hell do you get to the conclusion that it's no more useful than standard imperative programming? That implies that e.g. first class functions do not have merit, which is, excuse me, horseshit.
Lisp doesn't do that and it works wonderfully. Same for ML/F#, unsafePerformIO, OCaml, you name it.
Programmers can control side effects. While mistakes can and will occur, lessening the benefits compared to language-enforced control over side effects, there's a big difference between lessening and eliminating. As such, I'd hesitate to say it's "no more useful".
Mostly functional programming is good enough for cooperative multitasking and asynchronous programming, which makes for faster servers and easier-to-reason-about user interfaces.
You just have to remember a few globals and try to keep things uncomplicated. Also, having tests helps a lot.