The Curse of the Excluded Middle (2014) (acm.org)
71 points by tosh on Dec 26, 2016 | 37 comments



So the first example, in C#,

(1) uses logging in the middle of an otherwise pure function,

(2) is criticized for the unpredictability of a bounded/well-defined instance of laziness, namely iteration.

The author's proposed solution is Haskell, which

(1) makes it difficult to add logging at all to pure code without refactoring to add monads, unless you subvert the type system by using functions like 'trace'...

(2) ...in which case behavior will be significantly less predictable than C# due to Haskell's pervasive laziness - something which also comes up in other situations, like reasoning about performance/memory use.
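For reference, the escape hatch mentioned in (1) is Debug.Trace; a minimal sketch of how it interacts with laziness:

    import Debug.Trace (trace)

    -- 'trace' logs from inside pure code, but it fires only when (and if)
    -- the traced value is actually forced, so laziness decides the timing
    -- and even whether the message appears at all.
    double :: Int -> Int
    double x = trace ("double called with " ++ show x) (x * 2)

    main :: IO ()
    main = do
      let y = double 21  -- nothing printed yet: y is an unevaluated thunk
      print y            -- now the trace fires, then 42 is printed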

I find it hard to take this article seriously.


There is also the empirical evidence that "mostly functional" C# has successfully been used in perhaps thousands of projects around the world. It's hard to take "it doesn't work" seriously when it has clearly worked for me and many others.


How do you do functional programming in C#? It has lambda functions, but it lacks sum types, pattern matching, and immutable collections (they exist, but they're not much use if none of the libraries use them).


None of these things are essential for functional programming in the sense of pure functions free of side effects.


In that sense, no programming language feature is essential; you can do pure functions in assembler.


In my experience, it is rare to want to put logging in pure code in the first place. You tend to already be using some kind of monad or IO facility in the places where you would naturally want to put logging. This is from my experience with Haskell.
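For example, when a pure computation genuinely needs a log, a minimal Writer sketch (assuming the mtl Writer monad) keeps it pure:

    import Control.Monad.Writer

    -- The log is part of the function's result, so purity is preserved:
    -- same input, same output, same log.
    step :: Int -> Writer [String] Int
    step x = do
      tell ["step: " ++ show x]
      return (x * 2)

    runPipeline :: Int -> (Int, [String])
    runPipeline x = runWriter (step x >>= step)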


That's my experience too. The only reason you would want to log something is that there's something that may change across different executions of the same code, which means it is not pure.

That kind of mixing is common in languages that are not completely pure, because people do not have a clear idea of which code depends on the real world, and end up with "pure" code that cannot be relied upon to actually be pure.


Yes, Haskell is a very flawed language, however:

(0) The fact that you need to modify value-level code to put it in a monadic context is a defect of Haskell's design, not of monads per se (see the sketch after this list). See: http://www.cs.umd.edu/~mwh/papers/monadic.pdf

(1) Laziness has nothing to do with being purely functional.
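To illustrate (0), a minimal sketch (names made up): the same logic must be syntactically rewritten to live in a monadic context, a rewrite the linked paper proposes to automate:

    -- Plain value-level code:
    addOne :: Int -> Int
    addOne x = x + 1

    -- The same logic in a monadic context: Haskell forces a rewrite,
    -- even though nothing about the computation itself changed.
    addOneM :: Monad m => m Int -> m Int
    addOneM mx = do
      x <- mx
      return (x + 1)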


I'm not even trying to criticize Haskell, really; Haskell has plenty of great attributes (though the paper you linked is interesting). I just think this particular argument advocating for it is poorly constructed, basically pointing to a weakness as if it were a strength.


On the second point, laziness and purity are very closely related.

See http://haskell.cs.yale.edu/wp-content/uploads/2011/02/histor... (§3.2)

See http://stackoverflow.com/questions/31477074/how-lazy-evaluat...


Maybe for social reasons. From a purely technical standpoint:

(0) You can have a pure strict language just fine. You can also have a lazy impure language just fine.

(1) Nontermination is an effect anyway, so Haskell isn't pure unless you wish nontermination out of existence. (Which the Haskell community is in the habit of doing.)
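One way to make "nontermination is an effect" concrete is Capretta's delay monad; a sketch:

    -- Nontermination as an explicit, inspectable effect: a diverging
    -- computation is an infinite tower of Later.
    data Partial a = Now a | Later (Partial a)

    instance Functor Partial where
      fmap f (Now a)   = Now (f a)
      fmap f (Later p) = Later (fmap f p)

    instance Applicative Partial where
      pure = Now
      Now f   <*> p = fmap f p
      Later q <*> p = Later (q <*> p)

    instance Monad Partial where
      Now a   >>= f = f a
      Later p >>= f = Later (p >>= f)

    -- The diverging computation:
    never :: Partial a
    never = Later never

    -- Run with a step budget instead of looping forever:
    runFor :: Int -> Partial a -> Maybe a
    runFor _ (Now a)   = Just a
    runFor n (Later p) = if n <= 0 then Nothing else runFor (n - 1) p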


I think you may have glossed over that paper I linked.

No, you can't have a non-strict impure language just fine: the semantics would make it too difficult to reason about side effects. I am not aware of any such language. According to Wikipedia, Miranda and Clean are also non-strict, but they are also purely functional.

https://en.wikipedia.org/wiki/Strict_programming_language

Haskell allows you to create non-strict impure code using functions like unsafeInterleaveIO; this is generally regarded as a bad idea for technical reasons, except in very special cases. See for example Data.ByteString.Lazy.readFile, which is more or less obsolete since the creation of iteratees.

http://stackoverflow.com/questions/5892653/whats-so-bad-abou...
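The classic failure mode looks like this (a sketch; the file name is made up):

    import System.IO (IOMode (ReadMode), withFile)
    import qualified Data.ByteString.Lazy as BL

    -- Lazy I/O pitfall: withFile closes the handle when the block exits,
    -- but hGetContents has only *promised* to read. Forcing the contents
    -- afterwards hits an already-closed handle.
    main :: IO ()
    main = do
      contents <- withFile "input.txt" ReadMode BL.hGetContents
      print (BL.length contents)  -- typically fails: delayed read on closed handle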

On your second point, nontermination is treated semantically as a value in Haskell. It is an error to consider nontermination an "effect". See:

https://en.wikibooks.org/wiki/Haskell/Denotational_semantics

In short, a function which does not terminate is said to return ⊥ from a semantic perspective. The monotonicity of Haskell functions is what makes this work as you'd expect without violating purity, since ⊥ is the least element.
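To illustrate (in GHC, 'undefined' denotes ⊥):

    -- A non-strict function maps ⊥ to an ordinary result, because it
    -- never forces its argument; forcing ⊥ itself diverges or errors.
    constFive :: a -> Int
    constFive _ = 5

    main :: IO ()
    main = print (constFive undefined)  -- prints 5: ⊥ is never evaluated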


> I think you may have glossed over that paper I linked.

I've read it before, and not precisely yesterday or today.

> No, you can't have a non-strict impure language just fine: the semantics would make it too difficult to reason about side effects.

Sure, such a language wouldn't be very usable, but it's very much definable.

> See for example Data.ByteString.Lazy.readFile, which is more or less obsolete since the creation of iteratees.

You don't need to convince me that lazy I/O is a bad idea. If it were up to me, the type of a lazy stream would be:

    data Stream m a = Nil | Cons a (m (Stream m a))

where m is Identity when you have a strict (and guaranteed finite!) list. Of course, laziness is a monadic effect just like any other.
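To make that concrete, a sketch of the two instantiations (helper names made up):

    import Data.Functor.Identity (Identity (..))

    data Stream m a = Nil | Cons a (m (Stream m a))

    -- Strict, finite list: the tail is available immediately.
    type StrictList a = Stream Identity a

    -- Effectful stream: producing the next cell may perform I/O.
    type IOStream a = Stream IO a

    fromList :: [a] -> StrictList a
    fromList = foldr (\x s -> Cons x (Identity s)) Nil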

> On your second point, nontermination is treated semantically as a value in Haskell.

Nothing is a value in Haskell. (If you work out the operational semantics of a non-strict language, you'll see that you don't need to “carve out” a subset of terms that must be considered “values”, unlike the case for a strict language.) All you have is computations, some terminating, some not.


These comments seem to be redefining words each time they are used. A parent comment refers to "value-level code" in Haskell and now this comment says that "nothing is a value in Haskell". This seems to be contradictory.

It's hard to come to any conclusion here other than that the author of these comments wishes to argue rather than discuss.

There's a lot to learn from the Haskell community if you're willing. For example, there's probably about two decades of work that's been done past the definition of lazy streams which you provide. Conduit, iteratee, pipes, etc. Really cool stuff with stream fusion rules and bounded resource usage.
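For a taste, a minimal conduit sketch (assuming the conduit package's Conduit module):

    import Conduit

    -- Elements flow through the pipeline one at a time, so memory use
    -- stays bounded regardless of the source's length.
    main :: IO ()
    main = do
      total <- runConduit $ yieldMany [1 .. 1000000 :: Int] .| mapC (* 2) .| sumC
      print total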


Errr, oops, yes, my bad. I should've said term-level code.


About #0, it would be very, very interesting to see how that works in practice.

There has just been a big debate in the Haskell community over extending the do notation to work with applicatives instead of only monads. The pro side won, and it's already in GHC 8. It's too early to be sure, but it doesn't seem to have broken anything yet.

The article suggests extending it into pure values too (so that you don't actually need the "do" part). I think all the same arguments apply, so it may be a good move.
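For the curious, a sketch of what the extension buys (requires GHC 8's ApplicativeDo):

    {-# LANGUAGE ApplicativeDo #-}

    -- Because the two bindings are independent and the block ends in
    -- 'pure', this do-block desugars to Applicative combinators, so it
    -- needs only an Applicative constraint, not Monad.
    pair :: Applicative f => f a -> f b -> f (a, b)
    pair fa fb = do
      a <- fa
      b <- fb
      pure (a, b)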


I like the article as a roundup of some ways in which programming is usually done in a non-pure way. However, this early quote is an example of why I disagree with the article:

"The slightest implicit imperative effect erases all the benefits of purity, just as a single bacterium can infect a sterile wound."

This is just not true. It's a fair rule of thumb that the more pure a function is, the easier it is to reason about its behaviour in a larger program.

If the article had stuck to a survey of common non-pure programming style and its pitfalls, I'd be much happier, but then it started on the "Informal Introduction to Monads", a topic which requires more ink and motivation than it was given here.


> It's a fair rule of thumb that the more pure a function is, the easier it is to reason about its behaviour in a larger program.

Yes, but there are a LOT of other factors that contribute to reasoning about functions (or methods). Though purity helps, naming (an intention-revealing selector) is actually more important, at least to me, and function chaining with unnamed intermediary results actually reduces readability significantly (though it seems cool).


> and function chaining with unnamed intermediary results actually reduces readability significantly

For me it's the reverse: having assignment (to verbose and often redundant named identifiers) interrupting the flow adds a lot of visual noise compared to an expression chaining functions (using, say, the |> operator in F#).
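The Haskell analogue of F#'s |> is & from Data.Function; a sketch of the two styles side by side (example names made up):

    import Data.Char (toUpper)
    import Data.Function ((&))

    -- Named intermediaries:
    shoutNamed :: String -> String
    shoutNamed s =
      let trimmed = filter (/= ' ') s
          upped   = map toUpper trimmed
      in upped ++ "!"

    -- Chained, no intermediate names:
    shoutChained :: String -> String
    shoutChained s = s & filter (/= ' ') & map toUpper & (++ "!")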


I'll agree with the article here. In a mixed setting, it is very hard to estimate how pure a function is. There are all kinds of interactions that you don't expect but will bite you if you assume they don't exist.


The type structure of call-by-value languages is inherently monadic. The monad in question is the identity functor iff the language is pure and terminating. Given the predominance of call-by-value languages, I'd expect every programmer to have a modicum of understanding of monads, if only to have the faintest idea how popular languages (yes, popular languages, not Haskell) work.
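One way to see the claim (a sketch, with made-up type aliases): a source-level function from A to B in a call-by-value language denotes a Kleisli arrow A -> T B for the language's implicit effect monad T:

    import Data.Functor.Identity (Identity)

    -- Impure CBV language: calling a function may perform effects, so its
    -- denotation lands in an effect monad such as IO.
    type ImpureFn a b = a -> IO b

    -- Pure, terminating CBV language: T collapses to Identity and the
    -- monadic structure becomes invisible.
    type PureFn a b = a -> Identity b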


But I argue that this is not the place to do it (since it requires more ink and motivation than there is space left in the article).

Anyway, you might as well say that every programmer should have a modicum of understanding of chip design, since every language is run on chips; but while it may make one a better programmer, it's not necessary in many higher-level languages.


By the way, it's very early on the morning after Christmas Day for me, so I'm struggling to load the structures into my mind properly. What category is the functor going from and to? The Precise Monadicity Theorem states that a functor G is monadic iff it has a left adjoint and it creates coequalizers of G-split pairs; what is the left adjoint of this functor? What does a coequalizer look like in the category? (I suspect I can work these out myself given a good statement of what the category is.)


If I understand correctly, the category in question can be characterized as follows:

(0) For every type X in the language, there is an object X of values (not arbitrary terms!) of that type.

(1) For every type X in the language, there is an object TX of closed programs (containing no free variables) of type X. T is the monad in question.

(2) A morphism “f : X -> TY” is an arbitrary program of type Y containing a single free variable of type X.

(3) (This is the part where I'm the least sure!) A morphism “f : X -> Y” is a pure terminating program containing a single free variable of type X.

Also, I should've been more careful. If the language is pure and terminating, the monad in question is an equivalence of the category in question with itself. Not necessarily the identity functor.
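A rough Haskell rendering of that structure (base's Control.Arrow already defines an equivalent Kleisli wrapper; restated here for clarity):

    import Control.Monad ((<=<))

    -- A morphism X -> TY from the list above: a program of type Y with one
    -- free variable of type X, modeled as a function into the effect monad t.
    newtype Prog t x y = Prog { runProg :: x -> t y }

    idP :: Monad t => Prog t x x
    idP = Prog return

    -- Composition is Kleisli composition; identity and associativity
    -- follow from the monad laws.
    compP :: Monad t => Prog t y z -> Prog t x y -> Prog t x z
    compP (Prog g) (Prog f) = Prog (g <=< f)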


Holy crap. This.

To paraphrase: "English, mofo. Do you speak it?!?" :-)

I have been programming for over 30 years (which I suppose means nothing if it's just doing the same year over 30 times, but bear with me). Every other language (at the risk of being largely self-selected, I know) I have ever worked on seems to incrementally add things or shift ideas around a little bit.

Some communities might try to helpfully explain things to newcomers. But the perception of the Haskell community is that they revel in being as cryptic as possible in their purity.

I'm not even sure if the above comment was serious (it was probably deadly serious), or absolute troll fodder nonsense.

I know (more or less???) what a functor is, and I know what IFF means. Unfortunately, most of what I remember from mathematics comes down to: a function has a domain of inputs and a range of outputs, and being a function implies you can predict the output from the input; a "type" is a rule for describing a set of values, possibly infinite (not much use for computing); values can be "scalar", or multidimensional such as a vector/matrix/tensor. I know that "category" means something very specific to you guys, but not what. You lost me.


A category is a collection of objects and a collection of arrows between those objects, together with an identity arrow on each object and an operator that composes arrows associatively.

For much, much more, this might be a good fit for you: https://bartoszmilewski.com/2014/10/28/category-theory-for-p...
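Haskell ships a near-verbatim rendering of that definition; restating base's Control.Category class:

    {-# LANGUAGE NoImplicitPrelude #-}

    -- Objects are types; an arrow from a to b is a value of type `cat a b`.
    class Category cat where
      id  :: cat a a                        -- identity arrow on each object
      (.) :: cat b c -> cat a b -> cat a c  -- associative composition

    -- Ordinary functions form a category:
    instance Category (->) where
      id x      = x
      (g . f) x = g (f x)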


It was intended seriously, as a question between mathematicians, though I realise the language looks pretty intimidating :)


The problem with making effects explicit is that effects pollute: a callee doing IO means that all its callers also acquire the IO effect. Which is as it should be, except that this makes code modifications tedious if effects have to be explicitly mentioned. Haskell makes it harder still by using completely different syntax for monadic and pure code, which means that introducing an effect in a callee requires rewriting the caller.


This is not a problem in practice, having worked on decent size (10s of kloc, so not huge) Haskell projects. It just doesn't really come up. You end up with the outer bits of your program written in some monad, and then smaller, self-contained modules will be pure. So if you need to rewrite some pure code and have it use IO, it's usually called from a place that already uses IO to begin with.

You might compare it to checked exceptions in Java, which are actually a total pain in the butt in practice.

Even if you did need to "rewrite" the caller—which is rare, as I mentioned—the change is usually pretty trivial. You might change the type and stick an fmap, <$> <*>, liftM2 or whatever in the caller, but it won't be a big deal.
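A sketch of that kind of trivial rewrite (function names made up):

    -- Before: a pure callee and caller.
    lookupAge :: String -> Int
    lookupAge name = length name  -- placeholder logic

    describe :: String -> String
    describe name = name ++ " is " ++ show (lookupAge name)

    -- After: the callee becomes effectful; the caller needs only an fmap.
    lookupAgeIO :: String -> IO Int
    lookupAgeIO name = return (length name)  -- imagine a DB call here

    describeIO :: String -> IO String
    describeIO name = fmap (\age -> name ++ " is " ++ show age) (lookupAgeIO name)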


Agreed. That's not where I felt pain... it's swerving in and out of pure/impure/different-monad code when I really just want to speak my logic outright, without threading it through the type system with a pipe cleaner. I always loved pure Haskell code like I'd use if I was solving Project Euler problems or other toy logic things... but when I had to go into a monad with the lifts and *M functions and do notation, it felt like tramping around in a suit of armor. Not to mention hideous type signatures a mile long that you sweep under the rug with type synonyms. I grinned and bore it for a while in the name of "purity", but it sucked.

I guess my point is just to add some counterweight to this myth that once you grok pure FP you never come back to the "dirty" old world...and also to give some color to the particular pain points that arose. Haskell, I think exposes both a lot of the potential in pure FP, and several pitfalls.


The "suit of armor" may be an apt metaphor, clumsy but safe.

I personally haven't had problems with long type signatures in my code. Typically, they'll disappear when you define the right monad for your application. But I don't have much experience solving "toy" problems in Haskell; I've leaned more towards production code, so my experiences are different.

That's one of the main problems with Haskell—so many people learn it to get better at programming rather than to use it in earnest, write blog posts about monads, and solve toy problems like Project Euler. Like other languages, it takes a different form when you're managing larger projects. Just my 2¢.


Haskell does force an organization into your code¹. If you try to write it in a way that the language does not "like", you will suffer. The logic you want to speak is probably not suitable for expression in Haskell, but there is very probably an equivalent logic that is suitable.

Haskell forces you into specializing your code into pure and impure parts. You can go deep on any of those, and use one to interpret the other, but mixing them is bad.

1 - One of the many, many features that I consider misfeatures in other languages, but in Haskell it interacts well with the rest of the language and becomes empowering.


I've run into the problem when writing an interpreter. If you're not already using a monad, you basically need to change every line of code in the program. Trivial perhaps, but quite tedious.

That said, you learn to predict where monads will be needed over time, and it's absolutely worth it to be able to use the type system to reason about side effects.
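A sketch of the interpreter situation with a made-up mini-language: once evaluation can fail, every equation changes shape, even the ones that can't fail:

    data Expr = Lit Int | Add Expr Expr | Div Expr Expr

    -- Pure version (cannot report division by zero):
    eval :: Expr -> Int
    eval (Lit n)   = n
    eval (Add a b) = eval a + eval b
    eval (Div a b) = eval a `div` eval b

    -- Monadic version: every line is touched by the change.
    evalE :: Expr -> Either String Int
    evalE (Lit n)   = Right n
    evalE (Add a b) = (+) <$> evalE a <*> evalE b
    evalE (Div a b) = do
      x <- evalE a
      y <- evalE b
      if y == 0 then Left "division by zero" else Right (x `div` y)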


Local effects are possible, e.g., algebraic effects (until a handler is reached) and local imperative state (ST). Your second complaint is valid, of course.
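For the ST case, the standard sketch: mutation inside, observable purity outside:

    import Control.Monad.ST
    import Data.STRef

    -- Internally imperative, externally pure: runST's type guarantees the
    -- mutable state cannot escape.
    sumST :: [Int] -> Int
    sumST xs = runST $ do
      acc <- newSTRef 0
      mapM_ (\x -> modifySTRef' acc (+ x)) xs
      readSTRef acc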


Haskell : side effects :: Java : exceptions


> Operations as ordinary as exceptions, threading, and I/O all cause as much hardship as simple mutable state.

Not sure that threading is so ordinary considering that the whole argument is about how to make parallel programming safe.

There may be good reasons to keep as much code as possible purely functional. Even my impure imperative mind finds it comforting not to have to worry about the ground shifting below me. Impurities affect the purity of the caller; the paper has a valid point, but then it was never considered a good idea to manipulate state hidden deep down in a remote function. In fact, I would argue that the place for manipulating state is exactly in the center, where it can be seen and reasoned about clearly.

I find it at times surprising how far people can get with purely functional programming. It may require black-and-white thinking when deciding what to allow in the functional space. But then, for any program affecting the real world, there is going to be some I/O. The physics of the real world has state, as do ledgers. Going to extremes is likely not a good communication strategy when reaching out to the imperative world.

FYI: Link to the author's Wikipedia page: https://en.wikipedia.org/wiki/Erik_Meijer_(computer_scientis...


E.g. - "Erlang, you suck because you allow message queues. You might as well just make empty Java Beans and mutate them whenever you want". (OK, I embellished a bit).

Really??? The author sees no point in having an explicit mechanism through which updated values are obtained, and considers it the same as PERFORMing an update on something in the DATA-DIVISION at any point in time???



