(1) uses logging in the middle of an otherwise pure function,
(2) is criticized for the unpredictability of a bounded/well-defined instance of laziness, namely iteration.
The author's proposed solution is Haskell, which
(1) makes it difficult to add logging at all to pure code without refactoring to add monads, unless you subvert the type system by using functions like 'trace'...
(2) ...in which case behavior will be significantly less predictable than C# due to Haskell's pervasive laziness - something which also comes up in other situations, like reasoning about performance/memory use.
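To illustrate the 'trace' point (a minimal sketch, not from the article): Debug.Trace.trace lets you log from pure code, but laziness decides when, or whether, the message appears:

```haskell
import Debug.Trace (trace)

double :: Int -> Int
double x = trace ("doubling " ++ show x) (2 * x)

main :: IO ()
main = do
  let y = double 21   -- nothing logged yet: y is an unevaluated thunk
  print (y + y)       -- forcing y logs "doubling 21" exactly once (sharing)
```

If y were never forced, the message would never print at all, which is exactly the unpredictability being described.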
I find it hard to take this article seriously.
That kind of mixing is common in languages that are not completely pure, because people do not have a good idea of which code depends on the real world, and tend to end up with "pure" code that cannot be relied on to actually be pure.
(0) The fact you need to modify value-level code to put it in a monadic context is a defect of Haskell's design, not of monads per se. See: http://www.cs.umd.edu/~mwh/papers/monadic.pdf
(1) Laziness has nothing to do with being purely functional.
See http://haskell.cs.yale.edu/wp-content/uploads/2011/02/histor... (§3.2)
(0) You can have a pure strict language just fine. You can also have a lazy impure language just fine.
(1) Nontermination is an effect anyway, so Haskell isn't pure unless you wish nontermination out of existence. (Which the Haskell community is in the habit of doing.)
No, you can't have a non-strict impure language just fine; the semantics would make it too difficult to reason about side effects. I am not aware of any such language. According to Wikipedia, Miranda and Clean are also non-strict, but they are also purely functional.
Haskell allows you to create non-strict impure code using functions like unsafeInterleaveIO, this is generally regarded as a bad idea for technical reasons except in very special cases. See for example Data.ByteString.Lazy.readFile, which is more or less obsolete since the creation of iteratees.
On your second point, nontermination is treated semantically as a value in Haskell. It is an error to consider nontermination an "effect". See:
In short, a function which does not terminate is said to return ⊥ from a semantic perspective. The monotonicity of Haskell functions is what makes this work as you'd expect without violating purity, since ⊥ is the least element.
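A one-line example (hypothetical, for illustration) shows why this works: a non-strict function can return a result without ever forcing an argument that denotes ⊥:

```haskell
-- 'undefined' denotes ⊥ semantically; a non-strict function may ignore it.
alwaysOne :: a -> Int
alwaysOne _ = 1

main :: IO ()
main = print (alwaysOne undefined)  -- prints 1; ⊥ is never forced
```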
I've read it before, and not precisely yesterday or today.
> No, you can't have a non-strict impure language just fine, the semantics would make it too difficult to reason about side effects.
Sure, such a language wouldn't be very usable, but it's very much definable.
> See for example Data.ByteString.Lazy.readFile, which is more or less obsolete since the creation of iteratees.
You don't need to convince me that lazy I/O is a bad idea. If it were up to me, the type of a lazy stream would be:
data Stream m a = Nil | Cons a (m (Stream m a))
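As a rough sketch of how that type behaves (the helper names here are made up): every tail is a monadic action, so the consumer decides when each effect runs, one element at a time:

```haskell
data Stream m a = Nil | Cons a (m (Stream m a))

-- Each tail is an action in m, so effects are interleaved with
-- consumption rather than performed lazily behind your back.
countdown :: Int -> Stream IO Int
countdown 0 = Nil
countdown n = Cons n (pure (countdown (n - 1)))

drain :: Stream IO Int -> IO ()
drain Nil           = pure ()
drain (Cons x rest) = print x >> rest >>= drain

main :: IO ()
main = drain (countdown 3)  -- prints 3, 2, 1
```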
> On your second point, nontermination is treated semantically as a value in Haskell.
Nothing is a value in Haskell. (If you work out the operational semantics of a non-strict language, you'll see that you don't need to “carve out” a subset of terms that must be considered “values”, unlike the case for a strict language.) All you have is computations, some terminating, some not.
It's hard to come to any other conclusion here—other than the conclusion that the author of these comments wishes to argue rather than discuss.
There's a lot to learn from the Haskell community if you're willing. For example, there's probably about two decades of work that's been done past the definition of lazy streams which you provide. Conduit, iteratee, pipes, etc. Really cool stuff with stream fusion rules and bounded resource usage.
There has just been a big debate in the Haskell community over extending the do notation to work with applicatives, instead of only monads. The pro side won, and it's already in GHC 8. It's too early to be sure, but it doesn't seem to have broken anything yet.
The article suggests extending it into pure values too (so that you don't actually need the "do" part). I think all the same arguments apply, so it may be a good move.
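A minimal sketch of what the extension buys (assuming GHC 8 with ApplicativeDo enabled): a do-block whose bindings don't depend on each other only needs Applicative:

```haskell
{-# LANGUAGE ApplicativeDo #-}

-- Neither binding depends on the other, so with ApplicativeDo this
-- desugars to (,) <$> fa <*> fb and only requires Applicative.
pair :: Applicative f => f a -> f b -> f (a, b)
pair fa fb = do
  a <- fa
  b <- fb
  pure (a, b)
```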
"The slightest implicit imperative effect erases all the benefits of purity, just as a single bacterium can infect a sterile wound."
This is just not true. It's a fair rule of thumb that the more pure a function is, the easier it is to reason about its behaviour in a larger program.
If the article had stuck to a survey of common non-pure programming style and its pitfalls, I'd be much happier, but then it started on the "Informal Introduction to Monads", a topic which requires more ink and motivation than it was given here.
> the easier it is to reason about its behaviour in a larger program.
Yes, but there are a LOT of other factors that contribute to reasoning about functions (or methods). Though purity helps, naming (an intention-revealing selector) is actually more important, at least to me, and function chaining with unnamed intermediary results actually reduces readability significantly (though it seems cool).
For me it's the reverse: having assignment (to verbose and often redundant named identifiers) interrupting the flow adds a lot of visual noise compared to an expression chaining functions (using, say, the |> operator in F#).
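For what it's worth, the same pipeline style is available in Haskell via (&) from Data.Function (a made-up example):

```haskell
import Data.Char (toUpper)
import Data.Function ((&))

-- Pipeline style with (&), Haskell's analogue of F#'s |>:
-- each intermediate result flows into the next function unnamed.
shout :: String -> String
shout s = s
  & words
  & map (map toUpper)
  & unwords

-- shout "hello there" == "HELLO THERE"
```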
Anyway, you might as well say that every programmer should have a modicum of understanding of chip design, since every language is run on chips; but while it may make one a better programmer, it's not necessary in many higher-level languages.
(0) For every type X in the language, there is an object X of values (not arbitrary terms!) of that type.
(1) For every type X in the language, there is an object TX of closed programs (containing no free variables) of type X. T is the monad in question.
(2) A morphism “f : X -> TY” is an arbitrary program of type Y containing a single free variable of type X.
(3) (This is the part where I'm the least sure!) A morphism “f : X -> Y” is a pure terminating program containing a single free variable of type X.
Also, I should've been more careful. If the language is pure and terminating, the monad in question is an equivalence of the category in question with itself. Not necessarily the identity functor.
To paraphrase: "English, mofo. Do you speak it?!?" :-)
I have been programming for over 30 years (which I suppose means nothing if it's just doing the same year over 30 times, but bear with me). Every other language (at the risk of being largely self selected, I know) I have ever worked on seems to incrementally add things or shift ideas around a little bit.
Some communities might try to helpfully explain things to newcomers. But the perception of the Haskell community is that they revel in being as cryptic as possible in their purity.
I'm not even sure if the above comment was serious (it was probably deadly serious), or absolute troll fodder nonsense.
I know (more or less???) what a functor is, and I know what IFF means. Unfortunately, most of what I remember from mathematics comes down to a function has a domain of inputs and a range of outputs, and being a function implies you can predict the output from the input; a "type" is a rule for describing a set, possibly infinite (not much use for computing), of values; values can be "scalar", or multidimensional such as a vector/matrix/tensor. I know that "category" means something very specific to you guys, but not what. You lost me.
For much, much more, this might be a good fit for you: https://bartoszmilewski.com/2014/10/28/category-theory-for-p...
You might compare it to checked exceptions in Java, which are actually a total pain in the butt in practice.
Even if you did need to "rewrite" the caller—which is rare, as I mentioned—the change is usually pretty trivial. You might change the type and stick an fmap, <$>, <*>, liftM2, or whatever in the caller, but it won't be a big deal.
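A hypothetical before/after (all names invented): if lookupAge goes from returning Int to Maybe Int, the caller adapts with a single fmap:

```haskell
import qualified Data.Map as Map

type DB = Map.Map String Int

-- was: lookupAge :: DB -> String -> Int
lookupAge :: DB -> String -> Maybe Int
lookupAge db name = Map.lookup name db

-- The caller's only change: one fmap to work inside Maybe.
greeting :: DB -> String -> Maybe String
greeting db name =
  fmap (\age -> name ++ " is " ++ show age) (lookupAge db name)
```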
I guess my point is just to add some counterweight to this myth that once you grok pure FP you never come back to the "dirty" old world...and also to give some color to the particular pain points that arose. Haskell, I think, exposes both a lot of the potential in pure FP and several of its pitfalls.
I personally haven't had problems with long type signatures in my code. Typically, they'll disappear when you define the right monad for your application. But I don't have much experience solving "toy" problems in Haskell, I've leaned more towards production code, so my experiences are different.
That's one of the main problems with Haskell—so many people learn it to get better at programming rather than to use it in earnest, write blog posts about monads, and solve toy problems like Project Euler. Like other languages, it takes a different form when you're managing larger projects. Just my 2¢.
Haskell forces you into specializing your code into pure and impure parts. You can go deep on any of those, and use one to interpret the other, but mixing them is bad.
1 - One of the many, many features that I consider misfeatures in other languages, but in Haskell it interacts well with the rest of the language and becomes empowering.
That said, you learn to predict where monads will be needed over time, and it's absolutely worth it to be able to use the type system to reason about side effects.
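A minimal sketch of that pure/impure split (the file name and helper are made up): keep the logic pure and confine I/O to a thin shell, so the types document the boundary:

```haskell
-- Pure core: trivially testable, no hidden dependencies on the world.
parseTotal :: String -> Int
parseTotal = sum . map read . lines

-- Impure shell: all I/O is visible here, in one place.
main :: IO ()
main = do
  contents <- readFile "numbers.txt"  -- hypothetical input file
  print (parseTotal contents)
```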
Not sure that threading is so ordinary considering that the whole argument is about how to make parallel programming safe.
There may be good reasons to keep as much code as possible purely functional. Even my impure imperative mind finds it comforting not to have to worry about the ground shifting beneath me. Impurity affects the purity of the caller - the paper has a valid point - but then it was never considered a good idea to manipulate state hidden deep down in a remote function. In fact, I would argue that the place for manipulating state is front and center, where it can be seen and reasoned about clearly.
I find it at times surprising how far people can get with purely functional programming. It may require black-and-white thinking when deciding what to allow in the purely functional space. But then, for any program affecting the real world, there is going to be some I/O. The physics of the real world has state, as do ledgers. Going to extremes is likely not a good communication strategy when reaching out to the imperative world.
FYI: Link to the author's Wikipedia page: https://en.wikipedia.org/wiki/Erik_Meijer_(computer_scientis...
Really??? The author sees no point having an explicit mechanism through which updated values are obtained, and considers it the same as PERFORMing an update on something in the DATA-DIVISION at any point in time???