
First, FP is not defined at all. Pure FP (PFP) has a clear definition, but I challenge you to find a definition for FP that includes Clojure, Erlang, F#, and OCaml but excludes Java, JS, Ruby, Python, and C++. All of these languages are imperative with varying amounts of reliance on higher-order functions and emphasis on immutability (for example, my favorite so-called-FP language, Clojure, allows global mutation all over the place, but provides some transactional assurances).

Second, even if we consider PFP alone (which is, indeed, different from imperative programming), I often find that some of its proponents are lacking in understanding of the theoretical underpinnings of imperative programming. For example, the day before yesterday I gave a talk at Curry On/ECOOP about why monads are harmful in imperative languages (i.e. any language that isn't PFP), and the PFP experts present weren't aware (although, TBH, neither was I, but I'm not a PL researcher) of this proof[1] that imperative continuations are at least as powerful and easy to express as monads (I later claimed and provided evidence that they're more powerful).

There is actually much to debate about the relative merits of PFP and imperative, but I certainly wouldn't say that PFP is "avoiding badness". It's avoiding some badness while introducing other kinds of badness (monads have some serious disadvantages), while imperative programmers can avoid the same badness without resorting to PFP (e.g. as Clojure and Erlang do).

[1]: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.43.8...




I think it's well known that Cont is the "mother of all monads" which gives you the embedding. Then, anywhere you have global continuations you can have delimited continuations with one global variable. This is less well known, but I know Oleg writes about it a lot and Filinski's paper may indeed be the origin.
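
For concreteness, a minimal Haskell sketch of that embedding (my own paraphrase and my own names, not Filinski's construction): any monad m can be run through Cont by picking the answer type to be m r.

    import Control.Monad (ap, liftM)

    -- The plain continuation monad.
    newtype Cont r a = Cont { runCont :: (a -> r) -> r }

    instance Functor (Cont r) where
      fmap = liftM

    instance Applicative (Cont r) where
      pure a = Cont ($ a)
      (<*>)  = ap

    instance Monad (Cont r) where
      Cont c >>= f = Cont (\k -> c (\a -> runCont (f a) k))

    -- "Mother of all monads": embed any monad m into Cont by choosing the
    -- answer type m r, and recover the original monad at the end.
    toCont :: Monad m => m a -> Cont (m r) a
    toCont m = Cont (m >>=)

    fromCont :: Monad m => Cont (m a) a -> m a
    fromCont c = runCont c return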

I'm not confident that this shows that monads are bad or harmful though. I'd like to see the larger argument.

I also wouldn't say that PFP is "avoiding badness" as much as it's avoiding "everything" in the name of explicitness. Over-explicitness can be interpreted as a problem if you want—I'm very willing to take this argument—but I'm not sure how to paint it in broad good-versus-evil terms in the same way that I am with something like, I dunno, the COME FROM statement.


Well, the argument against monads I made in my talk wasn't really against monads in general but against the use of monads in impure languages (in short: they lose stack context for post-mortem debugging and profiling, and require re-implementation of built-in control flow and exception handling).

But there is much to be said against PFP itself (and much to be said in favor), like the lack of clear execution models, the lack of clear thread context (which is, of course, related), the difficulty in analyzing computational complexity, in debugging and profiling, and in implementing many simple algorithms. Many of those are related to laziness more than referential transparency itself, but RT necessitates laziness in some cases, whether the language itself is strict or not.

And, of course, there is the big question of "wherefore referential transparency". If it is to avoid problems caused by shared mutable state, then there might be better ways, like making shared state transactional to some degree (Clojure, Erlang). If it is to make automatic verification better, then there might be easier ways (like some of the many model-checking methods), and if it is to make human reasoning better, one should ask whether equational thinking is really the best way for people to think about programs (I think the human mind is much more adept at simulating state than at math, and that an essential, "basic" abstraction such as monad transformers is already too high-order for people to wrap their heads around). Then you have all the details, like how far you can take Curry-Howard-style, type-based proofs while maintaining inference and how far you can go while keeping type errors understandable.

I think that because much of PL research in the last decade has been focused on PFP, people are under the impression that it is somehow more "mathematical" or theoretically sound (not in the logical sense) than imperative, completely forgetting that fully verifiable, theory-supported imperative languages (like Esterel and its descendants) have been used much more successfully in the industry than Haskell and friends.

The argument of PFP vs. imperative is important and interesting, and there are both mathematical as well as psychological arguments to be made in favor of each. However, in recent years, many have come to see imperative as pedestrian and PFP as mathematically-advanced, which is completely false.

EDIT: I fully recognize that there are also cultural differences here. PFP emphasizes abstraction while imperative emphasizes algorithms. Personally, I think algorithms are more important than abstractions, but I realize that depends on the domain.


> RT necessitates laziness in some cases

I would be interested to see your proof of that.


Lazy evaluation performs implicit side effects, which in some cases (and with RT as a premise) allows for better asymptotic complexity than eager evaluation.

For an example see this paper:

"More haste, less speed: lazy versus eager evaluation" by Richard Bird, Geraint Jones, Oege De Moor

http://journals.cambridge.org/action/displayAbstract?aid=441...

ftp://ftp.comlab.ox.ac.uk/pub/Documents/techpapers/Geraint.Jones/FP-1-96.ps.Z
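
A much smaller illustration of the same phenomenon (my toy example, not the paper's): under lazy evaluation, asking for only part of a result can change the asymptotics, because the unneeded work is never forced.

    -- Naive quicksort on lists.
    qsort :: Ord a => [a] -> [a]
    qsort []     = []
    qsort (p:xs) = qsort [x | x <- xs, x < p] ++ (p : qsort [x | x <- xs, x >= p])

    -- Lazily, only the left-hand partitions are ever forced, so this runs in
    -- O(n) expected time on random input; a fully eager evaluator would pay
    -- for the whole O(n log n) sort before taking the head.
    smallest :: Ord a => [a] -> a
    smallest = head . qsort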


Any side effect must be lazily performed by the runtime at a level above the program's entry point (like main returning IO) or break RT. I don't mean that a PFP language must have non-strict evaluation.


It's not that the effect is lazily performed; it's a representation of that computation as a value. Most concretely by an interpreter. That's not really lazy evaluation and suffers from almost none of the ailments.


> and suffers from almost none of the ailments.

I'd say that's subjective, and that it suffers from enough ailments: no stack context for post-mortem debugging and profiling. I know Haskell has this "cost center" thing, but I don't know enough about them to tell whether they are truly sufficient. Anything that harms posthoc reasoning about a program with a debugger/profiler/log can be plenty painful.


You can write an interpreter in any language. That's what I'm angling for here. There's no relationship to laziness except that if Haskell is your only example implementation then you're stuck with it.


> But there is much to be said against PFP itself (and much to be said in favor) like the lack of clear execution models, the lack of clear thread context (which is, of course, related), the difficulty in analyzing computational complexity, in debugging and profiling, and implementing many simple algorithms. Many of those are related to laziness more than referential transparency itself, but RT necessitates laziness in some cases, whether the language itself is strict or not.

Don't blame purely functional programming and referential transparency, which are unquestionably good ideas, for the downsides of laziness, which is indeed a bad default. Purely functional programming doesn't need laziness, and, in fact, is much better off with a formal distinction between values and computations, as in the call-by-push-value paradigm [0]. Pervasive laziness actually has negative consequences for equational reasoning, because it forces all free variables to stand for potentially nonterminating computations. To restore the practicality of equational reasoning, Haskellers often have to pretend free variables only stand for productive computations (that big-step-reduce to a term in WHNF), but this assumption is completely unwarranted by the language itself.
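
To make that concrete (a toy example of my own): a law that's provable by induction over finite, fully defined lists can fail once variables may also denote partial or infinite values.

    -- By induction over finite, total lists:  reverse (reverse xs) == xs.
    -- But in Haskell a free variable xs may also stand for a partial or
    -- infinite list, where the law breaks down:
    ex1 :: Int
    ex1 = head (reverse (reverse (1 : undefined)))  -- bottoms out, although head (1 : undefined) == 1

    ex2 :: [Int]
    ex2 = take 1 (reverse (reverse [1 ..]))          -- diverges, although take 1 [1 ..] == [1]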

> "wherefore referential transparency". If it is to avoid problems caused by shared mutable state, then there might be better ways, like making shared state transactional to some degree (Clojure, Erlang).

Although it doesn't exactly have transactions, Rust's strict aliasing and mutability controls are a much better argument for "there are better ways to avoid the problems of shared mutable state". Clojure and Erlang take the easy route of making the consequences of your bugs less catastrophic [1], but do very little to actually avoid concurrency bugs.

> If it is to make automatic verification better, then there might be easier ways (like some of the many model-checking methods)

Model-checking doesn't scale to truly large state spaces. Typeful purely functional programming is, to a large extent, about trimming the state space to exclude nonsensical stuff.

> and if it is to make human reasoning better, one should ask whether equational thinking is really the best way for people to think about programs

Yes, equational reasoning is the best way for people to think about programs. Equational reasoning, being barely more complex than high school algebra [2], is so easy that a normal programmer can do it casually. The most complicated operation you ever have to perform is substitution: replacing all free occurrences of a variable by an expression.
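
A trivial worked example of what I mean by substitution (my own):

    double :: Int -> Int
    double x = x + x

    quad :: Int -> Int
    quad x = double (double x)

    -- Reasoning is just repeated substitution of definitions:
    --   quad 3
    --     = double (double 3)     -- unfold quad
    --     = double (3 + 3)        -- unfold double
    --     = (3 + 3) + (3 + 3)     -- unfold double
    --     = 12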

> (I think the human mind is much more adept at simulating state than at math, and that an essential, "basic" abstraction such as monad transformers is already too high-order for people to wrap their heads around)

The untrained human mind might be more adept at simulating a single sequence of states than at math. However, even the best trained human mind is comically bad at analyzing classes of sequences of states, formally or informally. In particular, in the formal case, this kind of analysis often requires using logical relations, such as weakest preconditions [3], whose bodies can contain arbitrary quantifiers. Now, as anyone who's used formal methods can attest, rigorously manipulating quantifiers is a bitch - you often have to refer to their formal rules of inference, because it's so easy to use quantifiers wrong if you rely on intuition alone.
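
For a concrete, textbook-style example (my own) of where the quantifiers come from: verifying even a tiny array-zeroing loop already means carrying a quantified invariant through every weakest-precondition step.

    program:        i := 0; while i < n do { a[i] := 0; i := i + 1 }
    postcondition:  forall j. 0 <= j < n  ==>  a[j] = 0
    invariant:      0 <= i <= n  /\  (forall j. 0 <= j < i ==> a[j] = 0)

    Each proof obligation (the invariant holds initially, is preserved by the
    body, and implies the postcondition once i >= n) is a substitution into
    that quantified formula, e.g. preservation requires
        invariant /\ i < n  ==>  invariant[a := a[i := 0], i := i + 1]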

> I think that because much of PL research in the last decade has been focused on PFP, people are under the impression that it is somehow more "mathematical" or theoretically sound (not in the logical sense) than imperative, completely forgetting that fully-verifiable, and theory-supported imperative languages (like Esterel and its descendants) have been used much more successfully in the industry than Haskell and friends.

You're right that it's perfectly feasible to ground imperative programming in a formal foundation. However, as I explained in my previous points, in practice, the formal techniques for reasoning about functional programs are straightforward enough to learn and use by anyone straight out of high school, whereas the techniques for formally reasoning about imperative programs are both cumbersome and subtle - that's precisely why they aren't used except when incorrect programs are completely unacceptable.

> The argument of PFP vs. imperative is important and interesting, and there are both mathematical as well as psychological arguments to be made in favor of each. However, in recent years, many have come to see imperative as pedestrian and PFP as mathematically-advanced, which is completely false.

Again, I partially agree. Imperative programming is far from pedestrian. It's too complicated, in fact, so complicated that the only way we can cope with it is by handwaving away our proof obligation.

---

[0] http://www.cs.bham.ac.uk/~pbl/cbpv.html

[1] Erlang's marketing is very honest about this: Let it crash.

[2] The extra complexity comes primarily from the fact that high school algebra is single-sorted (all free variables are real, or at most complex), whereas equational reasoning in Haskell is multi-sorted (because Haskell has a rich type system).

[3] https://en.wikipedia.org/wiki/Predicate_transformer_semantic...


> Don't blame purely functional programming and referential transparency, which are unquestionably good ideas

I think many people question that they are good ideas. Not pure functions -- but forcing all functions to be pure. If you do that, you are basically forced to use monads for all effects, and that is very questionable.

> Rust's strict aliasing and mutability controls is a much better argument for "there are better ways to avoid the problems of shared mutable state". Clojure and Erlang take the easy route of making the consequences of your bugs less catastrophic [1], but do very little to actually avoid concurrency bugs.

Yeah, Rust does something nice, too, but every operation allowed by Clojure is also allowed by Rust. I don't see how it's any better. Erlang's "let it crash" has nothing to do with data-race bugs. Once you have a GC everything is a lot easier when it comes to concurrency. Rust works very hard to ensure safety without a GC, which is necessary for domains that can't use GCs (like programs in constrained environments). And in any case, I think most people would consider Rust to be even less functional or more imperative (whatever that means) than Clojure.

> Yes, equational reasoning is the best way for people to think about programs.

It's good that you're so certain about this, but I can just as easily say "running your program in a debugger and a profiler is the best way for people to think about programs". How do you know? And nobody has even shown that a priori reasoning is any better than posthoc reasoning (debugger, profiler etc.) Deciding that we must reason completely about our programs before we ever run them completely ignores a whole set of useful tools that we have at our disposal.

I also challenge you to program the world's most used sorting algorithm, TimSort, with equational reasoning. And consider this -- Haskell is grossly insufficient to really reason about programs in this way. Idris is the very minimum to even start, and even that doesn't begin to scratch the surface of temporal logic.

> replacing all free occurrences of a variable by an expression.

That is a mathematical argument. Not a psychological one. You could use the same argument to claim that teaching a person to catch a ball is easier if you teach them Newton's equations than let them practice. I mean, we know simply "replacing all free occurrences of a variable" is Turing complete, right[1]? Who says people are good at that?

> However, even the best trained human mind is comically bad at analyzing classes of sequences of states, formally or informally.

And yet 100% of production software in the world, including avionics, medical devices and the like, is written in imperative languages. I am not saying this makes your claim outright wrong, but it certainly challenges it, especially considering that little evidence exists that PFP actually makes things globally better (i.e. not by fixing one undesirable property and introducing another).

> that's precisely why they aren't used except when incorrect programs are completely unacceptable.

But if you want to be honest, you must admit that PFP isn't used anywhere. Imperative verification is much more pervasive. And nobody said people writing programs are willing to invest any extra effort to make them more correct than they believe those programs need be. Programs need to be as correct as they are now, only developed at a lower cost (although, if you could offer more correctness completely free -- without any negative consequence -- I don't think people would object).

> It's too complicated, in fact, so complicated that the only way we can cope with it is by handwaving away our proof obligation.

Again, you're thinking about it from a mathematical viewpoint, but mathematics has absolutely zero relevance to whether a programming language is a "good" one or not. It can only promise certain properties, not their desirability. Yes, imperative programming is too complicated for machines to verify in every circumstance, yet claiming it is too complicated for humans pretty much ignores reality (or, at the very least, requires some serious explaining). And where proofs are required, imperative PLs can be just as rigorously proven -- e.g. Esterel, which has been used successfully by the industry much more than Haskell. In other cases, I don't see that we have any obligation for proof, and when we do, imperative languages can do just fine.

What is the source of that obligation? Are, say, provably correct programs more important than fast programs? Than programs shipped sooner that are only correct "enough"? Who says correctness is the most important concern? I can tell you that software users certainly don't think so. We had a correctness crisis in the nineties, but the transition to memory-safe languages and wide adoption of engineering practices have pretty much resolved it. It's gotten good enough that continuous deployment, or "how do we ship our software in ever shorter cycles", is now a much bigger concern for most companies than "how do we ensure our software doesn't have bugs". This isn't handwaving but what people actually want. I don't think math can tell them to want something else. It's great that you can build a perfect table, but it doesn't help me if what I need is a chair.

There is one domain, though, where correctness is important even in the 99% of software which doesn't require absolute correctness: security. But even with security, memory safety has done a lot to make things better, and where it hasn't, dependent types would be necessary to prove correctness, anyway. You can also get help from richer type systems in imperative languages, like Java 8's pluggable type systems[2].

To summarize, I think we know precious little about the relative merits of PFP vs imperative (and by imperative I also mean functional-imperative). Unfortunately, no one has ever used PFP to write any large software other than compilers, so we don't even have the beginning of empirical evidence to even start an educated argument on the subject.

[1]: Well, sort of -- the equivalence between LC and UTM is something many misunderstand. There are plenty of computations that can be expressed by a UTM and not in LC. LC is equivalent because it is powerful enough to simulate a TM (but most certainly not to directly represent any TM program), but from that point on your computational model is no longer simply substituting free variables.

[2]: E.g.: http://types.cs.washington.edu/checker-framework/current/che...


I'm interested in what you mean by [1]. The rigorous part of Church's Thesis says exactly that the sets of LC-computable, TM-computable, and general recursive functions (nat -> nat) are all identical. This is somewhat weaker than "express", sure, but I'm very curious what you are pointing at when you invoke that.


Parallel or is a classic example. BTW, I'm not implying it's a severe limitation, it's just part of what I see as PFP valuing abstractions over algorithms, which -- to my personal taste -- seems like misplaced priorities. The fact that LC has no good ways of expressing computational complexity is a much bigger limitation, IMO, and another symptom of the same prioritization.


I'm curious to see how parallel or is implemented on a TM. It's not a function from (nat -> nat) of course since we're talking about something other than extensional equality... but even then I tend to find these things not terrifically far out of whack.


You're right, it's not a function nat -> nat because it depends on program representation. And it's implemented in a universal TM by dovetailing: running one instruction from one input program and then one instruction from the other program and so on back and forth until one of them terminates.
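
If you want it in code, here's a rough Haskell sketch of mine (using two threads rather than literal instruction-by-instruction dovetailing): return a result as soon as either argument finishes with True, even if the other never terminates.

    import Control.Concurrent (forkIO)
    import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

    -- "Parallel or" over two possibly nonterminating Boolean computations:
    -- it yields True if either computation yields True (even if the other
    -- loops forever), and False only once both have yielded False.
    por :: IO Bool -> IO Bool -> IO Bool
    por a b = do
      result <- newEmptyMVar
      _ <- forkIO (a >>= putMVar result)
      _ <- forkIO (b >>= putMVar result)
      r <- takeMVar result
      if r then pure True else takeMVar result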

Parallel or is important because it uncovers something interesting: LC works with high-order functions but it can't do high-order programs, namely representing the computation process itself as a first-class entity. LC is not introspective in that sense, and most of its limitations flow from that (including the difficulty describing complexity, which is a feature of the computation process, not of the functions). Again, I'm not well versed in PL theory (or hardly at all), but I think that Lisp's macros therefore actually make it strictly more powerful than LC[1] (I think I may have seen this mentioned somewhere).

Indeed, macros have allowed Clojure to implement stackless continuations, and bytecode manipulation has allowed us to implement full continuations on the JVM without any JVM hacking. Similarly, this kind of direct computation representation allows all sorts of instrumentations and various interesting transformations that are not representable in LC proper (without macros). This isn't at all catastrophic -- you can live without those transformations -- but they can be very powerful and very useful.

[1]: And by "strictly" I mean with direct representation rather than by simulating a TM.


Ah, yes. Generally you're right: on 2nd order functions ((nat -> nat) -> nat) TMs and LC differ where TMs have strictly more functionality. But then it flips again at 3rd order (((nat -> nat) -> nat) -> nat) and so on and so forth.

Additionally, you can always just Gödel encode a program, pass it in, get all of this functionality. That might be considered cheating (simulating a TM, although you could just as well simulate LC in both models). This is not actually any different from what the UTM is doing... it's just possible to ignore the difference between interpretation and normal operation because it's all just tape.


Once you have fixed an evaluation strategy, you can talk about time complexity just fine. And, with a substructural type system, you can also talk about space complexity just fine.

So it's not abstraction vs. algorithms. The lambda calculus gives you both.


You're probably right -- I don't have the tools to argue with you -- but you know, as an algorithms person I view this whole abstraction discussion as revolving around UI. To me, UI should be intuitive and never get in your way. I'm appalled to think that I need to spend time learning what a substructural type system is in order to analyze the complexity of my algorithms. I can verify the correctness of my algorithms just fine (to the extent I need them verified, which usually isn't that much, at least not upfront), and so far I have seen zero evidence to support the claim that PFP makes developing software any cheaper. But I see more and more concepts going into abstractions and CH verification, and I just can't see the justification for putting such mental overhead between me and my algorithms. I'm perfectly happy with a model checker in the very rare cases I need one, and it doesn't require any substructural type systems, or even changing my choice of programming language (there are model checkers that work at the Java bytecode level and verify any program targeting the JVM regardless of the language).


> And yet 100% of production software in the world, including avionics, medical devices and the like is written in imperative languages

Huh?

> But if you want to be honest, you must admit that PFP isn't used anywhere

Eh?

> Unfortunately, no one has ever used PFP to write any large software other than compilers

Whatnow?

Are you exaggerating for rhetorical effect, or do you really believe that?


Sorry, I meant 99.999% and rounded up.

As to the compiler thing -- that is the sad reality. In its 20 years of existence, the largest Haskell program ever written is the Haskell compiler itself. The largest corporate Haskell codebase is only a few million lines of code, and it comprises many programs, including a modified Haskell compiler (which, I am willing to bet, is their biggest program). Just for comparison, a medium-sized company codebase is around 50 MLOC. More production code is being produced in 2015 in Delphi, COBOL and Fortran than in Haskell.

This is my biggest problem with PFP: all discussions about its effectiveness are completely theoretical. Calling pertinent evidence "anecdotal" is an insult to anecdotal evidence. There are between 0 and 2 (much closer to 0) data points of large, complex, long-maintained Haskell programs outside the compiler itself (out of thousands of such projects that are started each year). I honestly don't know how effective PFP is, but we can be certain of one thing -- no one else does either[1]. Maybe it's the next big thing as everyone has been saying for twenty years (I remember around that time the talk at my University about how Haskell was about to take over the world) and maybe it's a total dud. Nobody has a clue (and that you've managed to write your API endpoint in two hours instead of two months or whatever isn't evidence of anything; evidence would be writing an air-traffic control system in one year instead of five, and maintaining it for a few years with a changing team of 3 instead of ten).

[1]: I remember that for a good few years most of the industry thought C++ was the best thing that had ever happened to it -- until projects started requiring maintenance. I do not for one second think Haskell is like C++, but vast, undeniably pertinent evidence proved completely misleading because it wasn't longitudinal enough -- let alone shallow, scattered evidence, which is borderline relevant.


> I think many people question that they are good ideas. Not pure functions -- but forcing all functions to be pure. If you do that, you are basically forced to use monads for all effects, and that is very questionable.

You aren't forced to use monads specifically. What you're forced to do is distinguish pure from impure computations somehow. In the CBPV papers, it's clearly explained that functions are just one particular kind of computation. In CBPV, there's an adjunction between the categories of value types and computation types, and it's this adjunction that gives rise to a monad [corresponding to the effect(s) that computations might exhibit]. If we consider pure computations only [no effects, not even nontermination], this adjunction is actually an equivalence of categories.

---

> Yeah, Rust does something nice, too, but every operation allowed by Clojure is also allowed by Rust. I don't see how it's any better. Erlang's "let it crash" has nothing to do with data-race bugs.

I'm not talking about data races. Of course the absence of sharing in Erlang frees you from data races. But that doesn't free you from breaking transactional invariants, such as "this doesn't even begin happening unless that condition is met". In spite of the absence of data races, Erlang is still about coping with failure, not ruling it out. In Rust, you can use a nonforgeable witness that the precondition holds, and consume it when the operation begins. It's rare that a non-dependently typed language is better than Haskell at something other than performance, but that's exactly what's happening here.

> Once you have a GC everything is a lot easier when it comes to concurrency. Rust works very hard to ensure safety without a GC, which is necessary for domains that can't use GCs (like programs in constrained environments).

Garbage collection makes sharing easy. However, concurrency control (i.e., controlling access to shared mutable resources) remains just as hard. Otherwise, nobody would have invented such things as "global interpreter locks".

> And in any case, I think most people would consider Rust to be even less functional or more imperative (whatever that means) than Clojure.

Indeed, Rust isn't terribly functional (though it's more functional than meets the eye), which is exactly why I said it's a better argument for "there are other ways besides functional programming".

---

> I also challenge you to program the world's most used sorting algorithm, TimSort, with equational reasoning.

Well, in order to use equational reasoning alone, I need a purely functional algorithm.

> And consider this -- Haskell is grossly insufficient to really reason about programs in this way. Idris is the very minimum to even start, and even that doesn't even begin to scratch temporal logic.

Haskell's type system is insufficient to verify equational reasoning using the type system, but equational reasoning is so simple that programmers can do it manually anyway. [More on this later.] On the other hand, programming languages based on dependent type theory, like Idris, while often more beautiful, and certainly more powerful, don't yet pass my bang for the buck test for everyday programming: (0) They force me to annotate things that Haskell can infer on its own. (1) It is a royal pain in the ass to manually translate data between representations that are convenient for proving [e.g., Peano naturals] and representations that are efficient for calculating things at runtime [e.g., machine integers, big integers]. (2) Most dependently typed languages often don't have terribly efficient implementations. To the best of my knowledge, ATS is the only honorable exception to the last point.

---

> That is a mathematical argument. Not a psychological one. You could use the same argument to claim that teaching a person to catch a ball is easier if you teach them Newton's equations than let them practice.

Your analogy breaks when you think about it non-superficially. Merely understanding the laws of physics won't make your muscles strong enough, or your reflexes fast enough to go out there and actually catch a ball. On the other hand, when you've written a program alongside its proof of correctness, well, you've written a program!

---

> Yes, imperative programming is too complicated for machines to verify in every circumstance, yet claiming it is too complicated for humans pretty much ignores reality (or, at the very least, requires some serious explaining).

It's too complicated for humans to verify too. If it weren't, more people would be doing it casually. This is unarguable, unless you've tried it yourself.

> even with security, memory safety has done a lot to make things better, and where it hasn't, dependent types would be necessary to prove correctness, anyway.

I never said that everything has to be machine verified. If you give me a proof that I can understand (i.e., I'm confident I can correctly verify), that's more than enough for me. But, if the proof is past a certain complexity threshold, I'll ask you to convince someone else that I can trust (a better mathematician or a proof assistant).


> What you're forced to do is distinguish pure from impure computations somehow.

Imperative languages can do that very, very easily. The definition of PFP, though, is that it is referentially transparent, i.e. no effects whatsoever (other than monads that are returned to the runtime at the top level).

> I need a purely functional algorithm.

Good luck finding one with the same performance... But the imperative one is now proven to be correct with no need for PFP. Writing a similar provably correct PFP one would have been harder (so it seems to me at least).

> but equational reasoning is so simple that programmers can do it manually anyway.

That's a nice claim. What we know for a fact is that programmers are able to write programs that are as correct as they're required to be with imperative reasoning.

> It's too complicated for humans to verify too. If it weren't, more people would be doing it casually.

I don't understand exactly what you're referring to, but most programs are as correct as people want them to be. The correctness crisis is over, mostly thanks to automated testing.

> If you give me a proof

Who says we want correctness proofs to begin with? Most projects require some evidence of some measure of correctness. Certainly not proof of total correctness or even proof of partial correctness.


> Imperative languages can do that [distinguish pure from impure computations] very, very easily.

In an imperative language, can I write a map or filter whose argument is guaranteed to be pure? [Note that my notion of "pure" is stronger than Haskell's: I consider nontermination an effect.] Can you easily convince the type system that an imperative loop will indeed terminate? [Without supplying anything even vaguely resembling a proof.] With pattern matching and recursion, the compiler can check whether recursive calls are always passed smaller arguments.

> Good luck finding one with the same performance...

I'm pretty sure that, with substructural types, it's possible to update a single array element in O(1) in a referentially transparent manner. Conceptually, you're destroying the old array, and creating a new one with the same contents, except for the updated element. But, because the old array won't be used anymore, this can be done in-place. As far as I can tell, no other imperative facilities are needed to sort an array.
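
Haskell doesn't give you substructural types out of the box, but the ST monad already has the "pure outside, in-place inside" flavor of what I mean. A rough sketch of mine, using the standard array package, and paying an O(n) copy at the boundary that the substructural version wouldn't need:

    import Control.Monad.ST (ST, runST)
    import Data.Array.ST (STArray, getElems, newListArray, writeArray)

    -- A referentially transparent "update one element" (index assumed in
    -- range): callers only ever see immutable lists, while the write itself is
    -- an O(1) in-place update on an array that can never escape runST.
    setAt :: Int -> Int -> [Int] -> [Int]
    setAt i x xs = runST update
      where
        update :: ST s [Int]
        update = do
          arr <- fromList xs
          writeArray arr i x        -- the destructive, constant-time write
          getElems arr

        fromList :: [Int] -> ST s (STArray s Int Int)
        fromList ys = newListArray (0, length ys - 1) ys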

> as correct as they're required to be

What does this even mean?


> In an imperative language, can I write a map or filter whose argument is guaranteed to be pure?

Of course. See, e.g., Java's checked exceptions. If you mark every effectful function as throwing a checked exception, the compiler can enforce which effects are allowed.

> Can you easily convince the type system that an imperative loop will indeed terminate? [Without supplying anything even vaguely resembling a proof.] With pattern matching and recursion, the compiler can check whether recursive calls are always passed smaller arguments.

Model checkers can do this just as well.

> What does this even mean?

That programs aren't required to be 100% correct, just as they aren't required to be as fast as possible. It is perfectly acceptable for most programs to fail on rare edge cases. As long as the tolerable probability of a bug manifesting is inversely related to the frequency of the circumstances that cause it, you're fine (put simply: it's OK to fail with ever higher probability on ever rarer cases).


Equational reasoning extends to mutation as long as you track the monad. It also smoothly devolves into Hoare Logic. It's the same tool, just applied in different (monadic) contexts.

Mutation is not disallowed in PFP, by the way! You can write a perfectly great Timsort that's pure. You just can't pretend that your mutation effects are not observable when they are. (And you can use monads as type cues when that occurs).


In all fairness, Hoare logic just happens to be something I want to avoid doing as much as I can.

Also, you can have a pure Timsort with linearly or affinely typed arrays. Only action-at-a-distance truly requires some framework (monads, algebraic effects, whatever) for embedding an effectful language.


Yup, that's another way to do it so long as you're interested in paying the explicitness tax to introduce substructural types. My point about Hoare logic is not to promote it but instead to note that it's not categorically different from equational reasoning except in that it operates over more complex languages. And then that exactly is your tradeoff.


> You can write a perfectly great Timsort that's pure.

I would be interested to learn how (with the same space complexity?), but anyway my point is that writing a provably correct Timsort using CH is harder than simply proving the imperative algorithm with a model checker.


Have a pure model of memory which is implemented directly. Associate this with a proof that the extensional side of it is pure (copying mechanics of substructural types) and you're done.


Right, but proving correctness would still be much harder than with the model checker, I think.


Hm, well, I suppose I'm in agreement about many things, but also a little unsure of how to interpret PFP unless it is meant to be "PFP = Haskell".

I think the primary things I'd ask about or try to answer revolve around the use and interpretation of monads and the meaning of referential transparency.

Firstly, "using monads" is perhaps better read as "being explicit about the design of the monadic context you are in". Their original use is much more about creating nice semantical interpretations of imperative programs than their practical use today in Haskell, and, in this broader context they are tremendously useful in imperative PLs.

Indeed, it's perhaps proper to say that an imperative language is one where its statements can be thought of as living in what I like to call "some, unstated ambient monad" which has a number of control flow and stateful properties. In this sense, a lot of your argument, I believe, boils down to saying that (1) people appear to have intuition for a certain monadic context and (2) designing a language to implicitly live in that context is friendly. I agree strongly with both of those.

Some remaining points include a lot around how monads are implemented/interpreted in Haskell today. That's a fine critique, but not one against the broader idea or against PFP at large.

Finally, wrt referential transparency: this ultimately comes down to statements about what it means to have names and binding in your language. Strangely, such a primitive question spirals out into having enormous consequences. The nice thing about referential transparency is that it is, more or less, simpler than any other kind of name/reference semantics people often pick, and therefore it's a good default IF your goal is to make everyone opt into more sophisticated ideas.

Again, I'm not a personal proponent of saying anything like "100% explicit is always the right answer". As noted, it's annoying, complex, and has external impact outside of mere value-semantics matching. So, in this sense I agree a lot here.

But in a broader sense, I strongly believe that it's no accident that the research behind PFP is as strong as it is. The core idea is not a CS one but instead lies in the free connection to logic provided by the Curry-Howard or BHK interpretations. As we design logics which are increasingly expressive and translate them to PLs we find increasingly that these PLs have properties of extreme regularity. This makes them great foundational models on which other languages can be based. I don't think those forces will be going away.

But languages are not always best suited for extreme explicitness and expressiveness. Metaphor and intuition are powerful tools that languages should take advantage of. These "metaphor langs" can be profoundly nice, have rigorous translation to foundational language, and thus share fantastic semantics. Ultimately today that's not so different from the embedded DSL story.


> but also a little unsure of how to interpret PFP unless it is meant to be "PFP = Haskell".

Right, I don't know what a PFP language that isn't Haskell (or Haskell-like) could look like, but I'd love to hear some ideas (PL is not my field).

> ... being explicit about the design of the monadic context you are in

Obviously you're not advocating 100% abstraction explicitness, either, but the desire for that kind of explicitness -- and this is totally subjective -- seems to me like valuing abstractions over algorithms, which I see as inverted priorities (like I said, PL is not my field :)). I think that the motivation isn't serving a human need but a mathematical need, or, rather a researcher's need (see the end of my comment).

> Indeed, it's perhaps proper to say that an imperative language is one where its statements can be thought of as living in what I like to call "some, unstated ambient monad"

I think we can be more specific. Imperative languages "live" in continuations, and those are at least as expressive as monads. I can't see how monads are more or less explicit than continuations, and why naming the monad somehow conveys intent more than naming the operation (as an example, returning a list from a monadic function in the list monad is no more explicit than calling "produce" or "yield" in a list generator) -- the only difference is whether that name is at the "top" as with the monad, or at the bottom, as with continuations. Related to this, effect systems are completely orthogonal to the question of continuations vs. monads, and restricting a subroutine's choice of what kinds of continuations it can pause on is as easy as restricting a pure-function's return value to restrict what monad it can participate in.

> The core idea is not a CS one but instead in the free connection to logic provided by the Curry-Howard or BHK interpretations... I don't think those forces will be going away.

I agree but see this as a case of searching under the lamppost. Curry-Howard makes things easy to verify (in fact, makes them trivial, as the burden is on the developer -- the language "just" provides soundness and inference). Because it's easy to work with mathematically, that's where a lot of PL research is. However, from a pragmatic perspective, this is mostly moot if the result is psychologically (or economically) incompatible with human developers' preference. No one is debating the properties of PFP. But their desirability -- or lack thereof -- is not a CS question.


> a non-Haskell (or Haskell-like) PFP

I'd immediately suggest that Haskell is not a particularly pure language itself and that you should take a look at { Coq, Agda, Idris } for far more ideas about where PFP ought to go. You can also without too much difficulty imagine an ML derivative which is pure although they are historically not.

In all of these cases, laziness is not a component and many of your "the lack of clear execution models, the lack of clear thread context (which is, of course, related), the difficulty in analyzing computational complexity, in debugging and profiling, and implementing many simple algorithms" arise directly out of laziness alone.

> you're not advocating 100% abstraction explicitness

To put my cards on the table, I am advocating for this at least some of the time. :)

I'm interested in the idea of there being a tension between favoring abstractions versus favoring algorithms. I understand what you mean anecdotally, but am not sure how to put it into a larger context. In many ways I feel abstractions create the hard outlines of algorithms and are indispensable. This I feel is true even if you're, e.g., programming in Forth. The difference is not a lack of (even higher order) abstraction but merely a difference in how much of that structure you communicate to your compiler.

Re: continuations.

Having access to arbitrary continuation semantics is equally powerful to "all monads combined together" in the sense of "what you can encode" but, then, simultaneously, ultimately weak in the sense of "how many ways can you interpret this". On the other hand, restricting the set of continuations you have access to is completely identical to choosing a monad.

You say that the only difference is whether you pick things at the top or bottom. I say that this isn't even an actual difference. In both cases you're effecting the exact same thing and, if you like, it would be fair to analyze it as a monad.

Monads aren't equivalent to Haskell's encoding of them. Even Haskell admits many encodings. The first-order ones like we're used to seeing are popular because they're simple to understand and implement not because they're the only way. As an example, here's the monad stack at the core of Attoparsec, a fast parser combinator library

    Parser a
    ~
    forall r.
      (Buffer, Int, Bool) ->
      ((Buffer, Int, Bool) -> ([String], String) -> Result r) ->
      ((Buffer, Int, Bool) -> a -> Result r) ->
      Result r

    Result r
    ~
      Fail Text [String] String
    | Partial (Text -> Result r)
    | Done Text r
This is unapologetically a monad based on 2 continuations: the second and third arguments to the Parser function are the failure and success continuations, each of which receives the threaded parser state plus either `([String], String)`, a bundle of error messages, or `a`, a final result.
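
To make the "monad from two continuations" point concrete, here's a stripped-down sketch in the same shape (mine, not attoparsec's actual source); `lose` and `win` play exactly the roles of the second and third arguments above.

    {-# LANGUAGE RankNTypes #-}

    -- A cut-down two-continuation parser: it threads an input of type i and
    -- reports errors as plain Strings.
    newtype P i a = P
      { runP :: forall r.
             i                     -- remaining input
          -> (i -> String -> r)    -- failure continuation
          -> (i -> a -> r)         -- success continuation
          -> r
      }

    instance Functor (P i) where
      fmap f p = P (\i lose win -> runP p i lose (\i' a -> win i' (f a)))

    instance Applicative (P i) where
      pure a    = P (\i _ win -> win i a)
      pf <*> pa = P (\i lose win ->
                      runP pf i lose (\i' f ->
                        runP pa i' lose (\i'' a -> win i'' (f a))))

    instance Monad (P i) where
      p >>= f = P (\i lose win ->
                    runP p i lose (\i' a -> runP (f a) i' lose win))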

Re: lamppost

I agree a lot with what you're saying here. In particular, I think there are two ways to attempt to answer the (silly, but then you take it seriously) question "what is the greatest programming language possible?"

First, you can ask for power measured in "How completely does this language serve to articulate and circumscribe any thought which I might, as a human, want to have?". Following this line necessitates following logic carefully not because it's magically better but because mathematicians have spent many centuries delimiting interesting, complex ideas and at least 1 century working dedicatedly on a language for describing them. It's silly not to crib off this effort... or to think any other field is going to surpass it.

Second, you can ask for whatever language best fits the human mind with all of its quirks. This is a question of psychology, of course, and design and I would never claim that it's not incredibly important... although I do often give it short shrift.

The reason being that I have some personal chips on the idea that these two goals actually will wind up in the same place more-or-less. This isn't to claim that mathematics doesn't have massive ergonomics issues today... but instead to claim that it is in many ways an ultimate distillation of the logic all of our minds are marinating in... just as a consequence of living in this universe and partaking in this shared experience.

I feel like Mathematics took on the challenge of mapping out the entire human headspace from the inside out and psychology took on the same challenge but working from the outside in. I have to believe they'll converge someday.

And then, maybe finally, someone will write a language with good error messages.


Thank you for this great comment.

> In both cases you're effecting the exact same thing and, if you like, it would be fair to analyze it as a monad.

Perhaps, but here's the thing -- I don't care. In fact, I am truly and utterly sorry for having taken the time to understand what monads are -- sort of (well, not really, but you get what I mean). I consider it a waste of time that could have been better spent reading another DB or concurrency paper (my current fields of work). As a veteran programmer in the industry (almost twenty years) I really don't care much about programming language concepts or verification methods (I am interested in compilers, especially Graal, which I think is a great breakthrough, but that's a completely different matter). I want to write an algorithm in a way that is straightforward and easily communicable to my colleagues and successors, to be able to read and understand others' code, and to be able to reason about it both before and after it runs. With the recent resurgence of PFP and its (IMO, sometimes unfortunate) influence on imperative languages, I find this emphasis on expression and abstraction very distracting. When programming in a "richly typed" language myself, I often find myself drawn to using clever abstractions that are necessary neither for expressing the algorithm nor for making the code more maintainable or readable, which is why I enjoy using languages with relatively few abstractions (C, Java, Go). I find expressing the algorithm much more straightforward, faster (because I don't waste time coming up with a nifty abstraction and then being pleased with myself about it), and not a bit less readable. Ever since I started using automated tests (along with the rest of the industry) I've never even found correctness to be a problem.

Which leads me to ask, what problem is PFP trying to solve? Now, I am entirely willing to believe that it's not trying to solve an acute crisis, but just to make programming "better" (which hopefully means cheaper). But if that is the case, I have been waiting for convincing evidence for the past 16 or so years -- ever since first hearing of Haskell and its imminent revolution at university -- and so far: nothing. I am not saying that PFP is necessarily a false prophet (although I still think imperative code with even crude effect systems is just as effective, and much clearer than PFP -- and no, I don't think they're the same at all even if they could both be reduced to each other: I am not a PL researcher, nor do I personally care to know much about that CS sub-discipline; from the perspective of the programmer the two are completely different), but I do see it as vaporware of sorts. A lot of hype and no result. That the largest Haskell program -- after twenty years -- is the Haskell compiler is a very bad signal (and yes, it is the biggest program).

As for mathematics and the human mind, well, I think the search space is just too big on both ends. Math can express just about anything (including, probably, types of intelligence very different from the human mind), and the human mind is very flexible, a universal computer, which, in spite of its many constraints, is probably universal enough to not be meaningfully reducible to its physical components (just as software is not meaningfully reducible to the physical processes powering the hardware it runs on). So I don't really know if the human mind or math can converge to anything, other than that our mathematical exploration is guided by our human minds, so we might be subconsciously heading somewhere interesting.


It's late, so while I'd like to bite more, I'll just take this one on:

> other than that our mathematical exploration is guided by our human minds

I think that's exactly it. I'm not honestly a good enough philosopher to be clear on how I feel about the origins of mathematics. So, blurrily, I'll just state immediately that I think they are an embodied thing (I am no Platonic dualist). In that sense they are not constrained by our language but instead our language is continually growing to be so strong as to admit the vagaries of our thoughts.

And this is why I like PFP: because it endeavors to capture a wider world than just algorithms—although it is one which places algorithm at its core by accepting the need to build and witness over placing trust in the completeness of the universe.

This justification is wildly impractical, of course. I can't even from this perspective recognize the smell of practical—so the closest I can come is to say that I think Church's Thesis, if it is even true, taught us something really interesting: that it's quite easy to represent all functions (nat -> nat). Since this encompasses nearly everything we can "practically" witness we're playing in a giant, easy sandbox (this is not me saying that programming is easy, of course!).

So there ends up being a lot of degrees of freedom on how we accomplish our giant, easy tasks. If you want to eliminate them then you must take aim at harder tasks---and that is exactly what PFP does do. Or at least what the research apparatus attached to PFP does. So out here is a wild ride to the core of human expression. That's one that I want to ride.


> and that is exactly what PFP does do

True, but 1/ with so far very little to show for it (it is easy to make easy programs easy; no one has seen PFP make hard programs easier) and 2/ I am far from convinced (although I haven't explored other avenues myself, as PL is not my field) that there aren't other approaches which do the same and are more compatible with how we think.

I think that other than the happy "coincidence" that is Curry-Howard, I see nothing in LC to suggest that it is the "right" way to model computation, and CH itself is just a form of verification (that requires programmers to prove their programs). Maybe it's the right way to do verification and maybe it isn't. Suppose Hoare logic approaches proved to be easy. Would people still see LC as promising? I think it's seen as promising just because verification is easier -- not because there is evidence to suggest it's the right way to express unverified algorithms -- but nobody has quantified the cost/benefit of various verification techniques (both formal and informal).

Other than the mathematical results which are to be expected to be found under this powerful lamppost, I see little evidence to suggest PFP is even on the right track. PFP has had one obvious achievement, though: thanks to PFP, there's a lot of research into type systems, some may prove to be very useful even in imperative languages (like Rust, although that language is also far from being proven, either, but I have a feeling we'll learn of its effectiveness -- or lack thereof -- much sooner than we will of Haskell's :)) But I find that achievement rather disappointing compared to those of the algorithmic CS disciplines (distributed databases, machine learning, JITs).

I do recognize that PL has a much harder time proving the worth of its results. A machine learning algorithm could be implemented in any language. Producing and then getting people to adopt a new language, in contrast, is extremely expensive, as it requires not only implementing the new work, but porting man-decades of work. But sympathy aside, as someone in the industry, I view PL research as that CS discipline that's supposed to concentrate most on the human side, yet has contributed least to our industry's achievements (while making more noise than most other CS disciplines).

And this is where I'd like to circle back to the original post and the discussion of whether PFP people are arrogant or not. I don't know too many PFP people with major achievements in the industry, but I don't suppose they are any more or less arrogant than others. I do, however, find it annoying (and ridiculous) to be accused by some PLers of having an anti-intellectual stance or refusal to try new things simply because I'm very skeptical of new languages. I and others like me are very quick to adopt new algorithms, but new languages are not only extremely costly to adopt -- they have so far delivered precious little more than big promises (but oh, do they love making those promises!). So if I were to describe PL researchers and PFPers in particular, I'd say they overpromise and under-deliver more than the rest of academic CS, and have certainly had the least impact on the industry. I can suggest (with little knowledge of actual facts) that the reason for that is that PL, as a UX discipline, should be most accommodating to human psychology, but in practice it is often as uncompromising as any other pure-theoretical field.


I'm interested as well in alternative ways to do this kind of thing which are more "psych friendly" though I don't have any warm leads at the moment.

I don't think of CH as being a way of doing verification, as you mention it's not even a very good way of doing that necessarily, but instead a note that human reasoning can be accurately transcribed in types. The formal verification angle on this is just window dressing.

Also LC is not even seen as the "right" way. It's obviously too simple. But it is at the core of other more "right" ways because it as a notion is very composable.

PFP is an important part of this research because it's got the most complete type theories. Impure arrows can be modeled too, but they're harder. Nice impure models share more similarity with PFP than with standard imperative programming, too.

I suppose in my mind PFP has had tons of victories... but they're just not in the domain that you're looking in. That's fine, I believe some will exist there someday. I'm not actually much of one to proselytize for PFP in industry ;)

And on arrogance I am completely with you. When people get a little power it goes to their head and if you make a transition from "practical" imperative programming to "mathy" FP you feel like you just leveled up a lot. Then you ignore that "mathy" imperative programming (and "mathy" OO for that matter) exist and are healthy and probably have a lot to share, too. That's sad.

Finally, I agree with the idea that at least somewhere in PL there ought to be a UX discipline (see my writing elsewhere around types as UX for more on this) but I am also pretty convinced that this is not the entirety of PL research.

Types should be studied even if it is conclusively proven that humans cannot stand their UX. Why? Because they are as powerful a language for expressing formal thought as humans have yet invented. We'll figure out the UX someday, but if we step away from the seat of utility then we'll have lost something vital.


> Types should be studied even if it is conclusively proven that humans cannot stand their UX.

I'd like to add that while I agree 100% with this, from the perspective of the industry, abstractions exist to increase code reuse, assist in code maintenance, and help verification: in short they exist to reduce the cost of development, and -- from a non-research perspective -- that is the only metric by which they are judged.

Of course, that is not to say that types shouldn't be studied for pure research, or even that that research might not lead, down the line, to significant cost reductions.


I agree, but am pretty excited by the possibility that such research will have outsize long-term benefits.


> but instead a note that human reasoning can be accurately transcribed in types

Is that human reasoning or logic? :)

> But it is at the core of other more "right" ways because it as a notion is very composable.

It is and it isn't. Where it counts, it's got some composability problems, like with composing monads. Monad transformers are way too high-order for me to follow (or care to learn to follow) -- remember, NASA said first-level and second-level pointer indirection are fine, but not third-level; I think this applies to high-order functions, too. OTOH, "scoped" continuations[1] compose perfectly, while still being statically typed and explicit about which "monad" they're in[2].
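
To put some code behind that complaint, here's roughly the lift-counting I mean with a garden-variety transformers stack (a quick sketch; the App/step names are made up):

    import Control.Monad.Trans.Class (lift)
    import Control.Monad.Trans.Except (ExceptT, runExceptT, throwE)
    import Control.Monad.Trans.State (StateT, evalStateT, get, put)

    -- State over errors over IO: three layers deep.
    type App a = StateT Int (ExceptT String IO) a

    step :: App ()
    step = do
      n <- get                       -- State is the top layer: no lift
      lift (lift (print n))          -- IO is two layers down: lift twice
      if n > 3
        then lift (throwE "too big") -- the error layer is one lift away
        else put (n + 1)

    main :: IO ()
    main = runExceptT (evalStateT (step >> step) 0) >>= print

Reorder the stack and the lift counts change -- that's the third-level-indirection feeling.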

> Nice impure models share more similarity with PFP than with standard imperative programming, too.

I totally believe that, but from the programmer's perspective, that is completely irrelevant. That's something we want to stay hidden, left to those doing PL research. Give us the results -- we really hope we don't need to learn how you got there :)

> Then you ignore that "mathy" imperative programming (and "mathy" OO for that matter) exist and are healthy and probably have a lot to share, too.

Considering that 99% of algorithms are expressed imperatively and invented by imperative programmers, I'd say that's likely true. People doing DB, numerical/scientific computation, machine learning, distributed computing and concurrency research work mostly in C, Java and Matlab -- not Haskell or Idris.

> Types should be studied even if it is conclusively proven that humans cannot stand their UX. Why? Because they are as powerful a language for expressing formal thought as humans have yet invented. We'll figure out the UX someday, but if we step away from the seat of utility then we'll have lost something vital.

I totally agree!

[1]: Scoped continuations are nested continuations that allow functions to pause at each level of the scope (think how nested try-catch blocks work with different exception types) -- I'll post my talk when it's available online.

[2]: Think Java's checked exceptions.
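
To make [1] a bit more concrete, here is the nesting idea rendered with ordinary exception handlers (a made-up Inner/Outer sketch in Haskell; runtime exceptions there aren't reflected in the types, so the checked-exceptions half of the analogy isn't captured):

    import Control.Exception (Exception, handle, throwIO)

    data Inner = Inner deriving Show
    data Outer = Outer deriving Show
    instance Exception Inner
    instance Exception Outer

    -- The body sits inside two handler scopes and chooses which one to
    -- unwind to by the type it throws, which is the rough shape of a
    -- function pausing at a chosen level of its enclosing scopes.
    main :: IO ()
    main =
      handle (\Outer -> putStrLn "unwound to the outer scope") $
        handle (\Inner -> putStrLn "unwound to the inner scope") $ do
          putStrLn "in the body"
          throwIO Outer            -- skips the inner handler entirely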


I don't see any meaningful distinction between human reasoning and logic, tbqh.

As I stated earlier, your stacks of scoped continuations are exactly equivalent to monads but, depending on how you restrict them, a little less flexible. I don't mean this to say something trivial—I mean it to say that every composition problem solved in each place is identically solved in the other.

I'm confident that your 99% claim holds historically to various degrees, but not completely. Plenty of mathematical algorithms were (a) constructive and (b) first formulated declaratively. We have spent the last 30-50 years describing many more algorithms at a much more furious pace, at a time when publicity depended deeply upon using an imperative language... so historical explanations have that going for them.


> As I stated earlier, your stacks of scoped continuations are exactly equivalent to monads but, depending on how you restrict them, a little less flexible.

But they compose better, not requiring monad transformers while still being type-safe. For the continuation to decide which scope to pause on is like a monadic function returning a monadic value of a monad not directly enclosing it, but one or more levels up.


You can compose Monads in this way too. Even in Haskell, mtl-style transformers are very similar.
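
For instance, with mtl-style classes (a small sketch; the step name is mine), the computation just states which effects it needs, and throwError finds the error layer wherever it sits in the stack, with no explicit lifting:

    import Control.Monad.Except (ExceptT, MonadError, runExceptT, throwError)
    import Control.Monad.State (MonadState, StateT, evalStateT, get, put)

    -- The constraints name the "scopes" this code may reach, a bit like
    -- checked exceptions, without fixing where those layers live.
    step :: (MonadState Int m, MonadError String m) => m ()
    step = do
      n <- get
      if n > 3
        then throwError "too big"   -- reaches the error layer, wherever it is
        else put (n + 1)

    main :: IO ()
    main = runExceptT (evalStateT (step >> step) 0) >>= print

Reorder the layers and step doesn't change.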


I would very much like to hear more about this.


I agree; I'm handwaving at the definition. FP as a "thing" doesn't really exist, but we recognize languages that more and more embrace pure FP as an ideal, and require greater and greater concessions away from imperative programming. There is likely a sweet spot in that continuum, but to someone solidly on the imperative side, the people falling somewhere on that continuum can appear arrogant, due to what imperative bits they have given up, even if it's not ~all~ the bits.


Don Syme calls F# a "functional first" programming language, which I think is more accurate. While it supports imperative constructs, they're not enabled by default. The idea is you make a language that makes it difficult for your code to be accidentally imperative.

That's the big difference between (Erlang, F#, OCaml) and (Java, JS, Ruby, Python, C++). I can't speak for Clojure because I don't know it.


> I challenge you to find a definition for FP that includes Clojure, Erlang, F#, and OCaml but excludes Java, JS, Ruby, Python, C++

Challenge accepted: FP languages emphasise restriction of mutation.


That's a good categorization, but it's not a definition. For example, ask whether the distance between Haskell and Clojure, when it comes to allowing (or discouraging) mutation, is bigger than the distance between Clojure and Java -- which, BTW, I think it is. Imperative "FP" languages have more in common with other impure languages that are not commonly categorized as FP than with PFP. Clojure, Erlang and even ML really are closer to Java than they are to Haskell. I'd argue that the line is between using the continuation abstraction (or the thread abstraction) vs. monadic composition to achieve side-effects (and it is ironic that more people -- probably orders of magnitude more -- use monads in Java than in Haskell :)).

Favoring immutability is a great property, but it is not a hard dividing line. The Java community started encouraging immutability no later than 2001, and Scheme was called functional long before anyone categorized its main features as "restricting mutation".


Javascript has Object.freeze and Object.seal built right in! How is that not emphasis? (He asks with a touch of both sarcasm and seriousness.)

(I seriously believe that any attempt to define FP will end in tears. It's a marketing term and has no real likelihood of producing legitimate technical value.)


> I seriously believe that any attempt to define FP will end in tears.

FP as a programming style, to which a programmer may adhere to a greater or lesser degree, is easy to define. FP as a binary is-or-is-not characteristic of programming languages is, well, a misguided concept. To the extent it makes sense to talk about FP -- or OOP, or any other style -- as a feature of a programming language, it is less a binary descriptor and more a matter of the degree to which, or better, the particular manner in which, the language facilitates that style of programming.


I agree, but then I also add to that the roughly four major definitions of what that means today, and the various misinterpretations thereof, and call it a wash. It's a series of cultures all bandied about within a larger cultural phenomenon, and you can take or leave whatever pieces of it you like.


Well, some people write very functional javascript. I've thought of a better definition that circumvents the issue of what constitutes emphasis.

Functional languages are languages used predominantly by functional programmers.

Being functional probably has more to do with how a language is used than the set of features it boasts.


I don't quite agree. Languages push you in different directions. Haskell pushes you toward FP; you can fight the language and "do" (pun intended) imperative programming, but it's going to be painful.

Can you do FP in any language? Maybe not COBOL, but you could do it in C with function pointers. But it's still easier to do it in Haskell than in C, and it's still easier to do imperative programming in C than in Haskell.
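
And for the flip side, here's deliberately imperative Haskell (a throwaway sketch; the sumSquares name is mine): it works, but every mutation is funneled through IO and an IORef rather than written as plain statements:

    import Control.Monad (forM_)
    import Data.IORef (modifyIORef', newIORef, readIORef)

    -- An imperative-style sum of squares: a mutable accumulator
    -- updated inside a loop, all inside IO.
    sumSquares :: [Int] -> IO Int
    sumSquares xs = do
      acc <- newIORef 0
      forM_ xs $ \x ->
        modifyIORef' acc (+ x * x)
      readIORef acc

    main :: IO ()
    main = sumSquares [1 .. 10] >>= print   -- prints 385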

So you have a point. But languages have different characteristics, too, and those differences let us classify languages, if not as "FP" and "non-FP", at least as "more FP" and "less FP".


I'm not saying that every language is the same. I'm saying it's very difficult to define styles of programming, but if you have a particular style (in this case functional), you will gravitate towards languages that facilitate that style. At the boundary, where a language doesn't have a particular leaning, the style that you adhere to is going to be influenced much more by the community around the language than by its features.


That is precisely my favorite definition :)


Also, I'm a fan of saying that "FP" is a community, not a definition.



