The other is to ignore any insult (intended, perceived, or a mix of both), ignore all the hierarchy labeling - 'beginner', 'advanced beginner', 'standard ladder', etc.; who tf cares? - and just see if there are skills you can pick up or explore on your own.
That said, I laughed at 'equational reasoning' being considered an 'expert skill'. It is considered an advanced technique, but imo the basics are trivial to pick up, and I had a lecture/demo on this at a local FP conf. Sure, it can get very hard if you tackle a hard problem, but that is true of everything.
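For anyone curious what "the basics" actually look like, here's a minimal sketch (mine, not from any conference material): proving the map-fusion law by substituting definitions line by line, plus a quick spot check on a sample input.

```haskell
-- Equational reasoning sketch: proving  map f (map g xs) == map (f . g) xs
-- by substituting the definition of map, case by case.
--
--   map f (map g [])     = map f []              = []
--   map (f . g) []       = []                       -- both sides equal
--
--   map f (map g (x:xs)) = map f (g x : map g xs)
--                        = f (g x) : map f (map g xs)
--   map (f . g) (x:xs)   = (f . g) x : map (f . g) xs
--                        = f (g x)   : map (f . g) xs  -- equal by induction
--
-- A spot check of the law on one input:
fusionHolds :: Bool
fusionHolds = map ((+1) . (*2)) [1, 2, 3] == map (+1) (map (*2) [1, 2, 3 :: Int])
```

That's the whole game at the basic level: substitute definitions, use induction on the structure, and check both sides come out the same.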
I think the class (as in hierarchy) and topic-chasing anxiety it will induce in many is counter-productive. It's extremely flawed as a guide for what to learn, or what order to learn things in - one of the worst attempts I've seen on that front, in fact. And yeah, the labels are useless anyway. It doesn't really matter what one thinks a "beginner" is.
It's also just bad optics. I don't know why John does stuff like this.
(A tweet thanking survey respondents: https://twitter.com/lambda_conf/status/803695274896093184)
1. This list is very Haskell-focused: it includes lots of features which only make sense in Haskell or very Haskell-like languages, and it lacks mention of many interesting functional programming concepts which don't appear in Haskell (like ML-style modules and functors, row types, macro systems and homoiconicity, and so forth). There are a lot of functional languages which have very different ideas about how to program, and this list doesn't reflect that.
2. Some of the 'skill hierarchy' choices feel a bit confused and arbitrary. For example, 'Use lenses & prisms to manipulate data' appears as a Competent skill, but 'Use optics to manipulate state' appears as a Proficient skill, despite being slightly different ways to refer to an effectively identical skill. (I assume the latter means "…use lenses & prisms to manipulate data, but in a state monad," which is only a tiny difference.)
3. While I like the idea of a list of a road-map to learning, I feel like this gives the unfortunate impression that many of these are obligatory skills. It calls itself a "standard" hierarchy (which makes it sound like a consensus, rather than just a single person/group's opinion) and has language like "…skills that developers must master on their journey…" (emphasis mine), but the list includes a lot of things that are far from necessary for deeply understanding functional programming. You could lead a long (academic or industry) career in functional programming without a deep understanding of many concepts listed: things like comonads, recursion schemes, finally tagless interpreters, higher-order abstract syntax, and so forth. All of them are useful concepts and deserve study, but you can definitely be a functional programming expert without ever having seen a comonad.
In many ways, I wish this list took a cue from Benjamin Pierce's Types and Programming Languages, which features not an ordered list but a graph of the concepts covered in the book and how they relate to one another. It would be more complicated, true, but also a lot more honest about the academic and intellectual path you might want to take through functional programming, and it wouldn't give the idea that you need to master the vagaries of dozens of Oleg Kiselyov's papers just to be "competent" in functional programming.
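To make point 2 concrete, here's a minimal, self-contained sketch of lenses, hand-rolled in the van Laarhoven style rather than via the `lens` package (the names `_1`, `view`, `over`, `set` mirror that package's conventions but are defined here from scratch). "Using optics to manipulate state" is this exact machinery, just run against a state monad's carrier instead of a plain value.

```haskell
{-# LANGUAGE RankNTypes #-}
-- A minimal van Laarhoven lens, needing only base.
import Data.Functor.Const (Const (..))
import Data.Functor.Identity (Identity (..))

-- A lens is a function polymorphic over a Functor; choosing the
-- functor (Const vs Identity) selects "read" vs "write" behavior.
type Lens' s a = forall f. Functor f => (a -> f a) -> s -> f s

-- Focus on the first component of a pair.
_1 :: Lens' (a, b) a
_1 f (x, y) = fmap (\x' -> (x', y)) (f x)

view :: Lens' s a -> s -> a
view l = getConst . l Const

over :: Lens' s a -> (a -> a) -> s -> s
over l f = runIdentity . l (Identity . f)

set :: Lens' s a -> a -> s -> s
set l x = over l (const x)
```

The "Proficient" variant amounts to running `over _1 (+1)` inside `modify` in a state monad, which is why treating the two as separate rungs is odd.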
I work in FP, programming Clojure for real-world web applications every single day, and 90% of this list is completely meaningless to me.
It's nothing less than more of the Haskellite strain of "everything must be hideously complicated type theory or it's not 'real' FP."
And that shit can fuck right off. It's incredibly hostile and disrespectful both to new programmers and existing ones. It's even harmful to Haskell itself as a language, because this persistent attitude that you have to learn advanced type theory just to get anything done in Haskell is one of the biggest barriers to adoption and learning of the language, especially when even most of the material supposedly for "beginners" insists on thrusting this attitude on the reader.
Once again, just as they did by platforming an open fascist, LC demonstrates a complete cluelessness both of the community it claims to represent, and of the impact their "representation" has on said community.
Your "hideously complicated type theory" is many people's "hideously horrible paren-based syntax" - i.e. Lisp. So, how many people don't learn Lisp only because of its syntax? I'd say a lot.
Many of the concepts make sense when you have an explicit type system, so if you want types you may want to learn them.
Others I suspect you are already using, even if you don't know them by name. Have you ever used streams? Congratulations, this is what codata is.
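To illustrate (a sketch with made-up names like `takeS`): codata is data defined by how you observe it rather than how you build it, which is exactly what a stream is. The "infinite" definition below is fine because consumers only ever demand finitely much of it.

```haskell
-- A stream: always has a head and a tail; there is no "nil" case.
data Stream a = Cons a (Stream a)

-- The stream of integers counting up from n - an infinite value.
from :: Int -> Stream Int
from n = Cons n (from (n + 1))

-- Observation is where finiteness comes in: take the first n elements.
takeS :: Int -> Stream a -> [a]
takeS 0 _           = []
takeS n (Cons x xs) = x : takeS (n - 1) xs
```

`takeS 5 (from 0)` yields `[0,1,2,3,4]`; the rest of the stream is never forced. If you've consumed a lazy sequence in Clojure or an iterator in Python, you've used the same idea.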
But the point is that nobody's putting 'S-expression syntax languages' in a Lisp-centric list of essential functional programming skills. The idea that half this stuff is supposedly mandatory is a skewed perspective.
> Others I suspect you are already using, even if you don't know them by name.
And if the list consists of Haskell-centric jargon for more generic notions, that's obviously grounds for criticism.
They should've probably said "FP-only" (i.e. pure) languages to exclude imperative languages that merely support functional programming constructs, like Clojure, Erlang, etc.
I agree; many of the concepts that are useful in pure languages (e.g. monads) are not that practical in imperative/impure or dynamic languages, which trade away some of the ability to reason about your program.
> And if the list is comprised of Haskell-centric jargon for more generic notions, that's obviously grounds for criticism.
It's the other way around - the items on this list are the more generic concepts. Less expressive languages lack the (practical) ability to reason about those concepts generically, hence the need for more specialized constructs, e.g. streams, transducers, etc.
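A sketch of what I mean (the names `Reducer`, `mapping`, `filtering` are mine): a Clojure-style transducer is, generically, just a transformation of fold steps, and composing transducers is plain function composition.

```haskell
-- A fold step: consume one element, update the accumulator.
type Reducer a r = a -> r -> r

-- A "transducer" transforms one fold step into another.
mapping :: (a -> b) -> Reducer b r -> Reducer a r
mapping f rf = \a r -> rf (f a) r

filtering :: (a -> Bool) -> Reducer a r -> Reducer a r
filtering p rf = \a r -> if p a then rf a r else r

-- Double everything, then keep only results greater than 4,
-- all in a single pass; composition is ordinary (.).
xform :: Reducer Int r -> Reducer Int r
xform = mapping (* 2) . filtering (> 4)

result :: [Int]
result = foldr (xform (:)) [] [1, 2, 3]  -- [6]
```

The specialized construct falls out of the generic one, not the other way around.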
Ahh, and there would be that definition again, the one at the root of the problem.
Please find your nearest whiteboard and write the following 100 times or until it sinks in: "Haskell is not the only Functional Programming language."
FFS, the languages that invented functional programming are not Haskell, and they predate it by some time. Lisp dates back to 1958. Scheme first appeared in 1975. ML dates to 1973. Haskell didn't even exist until 1990.
This weird ahistorical definition of "functional programming" is both useless and inaccurate. Strict typing and purity are not the sole measure of whether a programming language is functional and never have been. It's only been in the last few years that there seems to be this desperate push to define FP as "Haskell", which as far as I can tell is the product more of a desperate kind of evangelism than any sound argument for such a limited definition.
We get it, you like Haskell. But some of us also like Clojure, and Erlang, and Elixir, and F#, and OCaml, etc. etc., and you don't get to just redefine terms to cut out languages you don't like. Reality doesn't work that way.
To avoid confusion, they should have used the term "pure functional programming" or even "typed pure functional programming".
That's what was used when I started learning FP, which was maybe three or four years ago. There was FP, and purity was a secondary characteristic. Haskell was pure and typed, but that didn't make other languages not FP when they still had all the same basic tools and the same idiomatic focus on composition of functions over mutable state and imperative logic. No, you didn't need a special magic word to allow you to have side effects like I/O, and most also allowed for mutability (even Scheme had set!, after all), but they are first and foremost functional languages. In many ways, Haskell was and still kind of is the odd one out, not counting experimental academic languages like Idris and Coq.
This wasn't even a controversial statement just a few years ago. But as Haskell has come more and more into vogue, there's this weird fixation on redefining "FP" to refer only to, well, Haskell. And it's very clearly what the LC list is all about, because so many of the concepts described are both largely unique to Haskell and its closest relatives, and unnecessary for practical, everyday work in the vast majority of FP languages.
And ultimately, attitudes like that hurt the whole field. In trying to fix Haskell's image problem by brute force, they wind up damning the larger domain in the process.
You could use mutable arrays in Clojure the same way you could use immutable arrays in Python (yet nobody is calling Python a functional language).
Still, I agree that functional mostly means "language with closures and immutable data" in the mainstream, but in the context of this post I'd say it's not that hard to guess that the authors meant purely functional.
The strain you're referring to seems like a perception that is somehow self-perpetuating. My experience is that of an incredibly helpful community that gives up its time to help and educate people.
This is anti-intellectualism.
FP is often a sliding scale. You can program with as much or as little type safety, purity or totality as you want. Some folks like to push the boundaries of what is possible without shortcuts, taking inspiration from recent research. Many Haskell, OCaml, Scala and F# libraries will require some of the concepts in this list to understand. I do agree that the list is somewhat Haskell centric though.
His dependency graph for Software Foundations is even cooler, and clickable! http://www.cis.upenn.edu/~bcpierce/sf/current/deps.html
I agree. My preferred language (F#) doesn't even support many of the listed concepts beyond "Advanced Beginner".
A language limited to a level of advanced beginners is not necessarily bad, though.
We could argue all day about whether different 'families' of FP languages, like MLs, Lisps, Joys, etc. are more or less "advanced", so let's avoid any ambiguity and stick to the Haskell family of languages. Let's pick a language which is strictly more advanced and powerful than Haskell, such as Agda or Idris: not only can we implement all of the Haskell concepts in these languages (especially Idris, since we can toggle totality checking, rather than having to work around it), but the inclusion of dependent types lets us implement many more powerful patterns, and even simplifies many of the Haskell patterns (e.g. there's no need for hacks like singletons, normal functions can be used instead of type families, etc.).
So, given this vast landscape of new possibilities, which of the many 'design pattern' concepts have been chosen from these ultra-advanced, super-powerful languages?
> Dependent-Types, Singleton Types
That's it! Apparently the only new concept in these languages is the fact that dependent types exist; but even that's been constrained to a Haskell context, by lumping it in with singletons!
There's no mention of proof objects (de Bruijn criterion, etc.), computational content, erasure or proof irrelevance. No mention of tactic languages. They do give kinds and rank-n types, but they don't mention cumulative universes! No HoTT concepts are mentioned, like types-as-spaces, identity-as-paths or univalence. There's not even a mention of intensionality vs extensionality!
So what about skills? Surely there'd be specifics like modelling divergence co-inductively via Partial/Delay? Maybe propagating invariants with initial algebras, even if it's just a Vector example? What about something trivial, like heterogeneous equality?
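For the record, the Vector example is tiny. Here's a hedged sketch of the Haskell-side encoding (promoted naturals plus a GADT) - which is precisely the machinery that Idris and Agda make native, no singleton-style promotion required:

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE KindSignatures #-}

-- Type-level naturals, via DataKinds promotion.
data Nat = Z | S Nat

-- A vector whose length is tracked in its type.
data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- A total head: calling this on an empty vector is a *type* error,
-- so the "non-empty" invariant propagates through the program for free.
vhead :: Vec ('S n) a -> a
vhead (VCons x _) = x
```

In a dependently typed language the index is just an ordinary `Nat` value; in Haskell you need the promoted-kind encoding above, which is exactly the gap the list papers over by lumping dependent types in with singletons.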
> Use proof systems to formally prove properties of code.
> Use dependent-typing to prove more properties at compile time
Again, they've just listed that "these things exist", rather than giving any specifics whatsoever.
What about Haskell? Does that exist? You bet! It even has this thing called lenses; and not only do they exist, there are all sorts of nuanced proficiencies to them!
... except! I really think lenses are a concept orthogonal to Haskell. They can be implemented anywhere and usually make good sense. I think we have a long way to go before we've come to understand their best place in functional programming -- especially construed as widely as you note.
I like this idea a lot
I suppose I'm slightly bothered by the fetishizing of challenging knowledge for challenge's sake. Most of the people I know who learn about "embedded DSLs with combinators" and set theory seem unable to stop rubbing in how smart they find themselves, and yet strangely they never seem to be the most productive engineers (in terms of delivering useful and reliable code).
I know it can be a little disappointing to feel like there isn't a pot-of-gold at the end of the rabbit-hole, but this is no different than most disciplines. You use arithmetic daily, algebra weekly, calculus monthly, imaginary numbers annually or less.
DSLs are a great example of this long-term dynamic. Compared to throwing together an ad-hoc library, a DSL approach takes a lot more design effort but ultimately results in a system that's more coherent, elegant and expressive.
I've worked on both kinds of projects and the difference is palpable: with the first style, productivity is constant at best—getting things together in the first place is a struggle, and then adding features or fixing bugs continues to be a struggle. On the other hand, the second style of project is even more of a struggle at first, but once it works it's like magic: new features are easier to add than you'd expect, and I've had way more things work after my first attempt than anyone has a right to expect.
I know which style I prefer, and it's definitely not because I "fetishize challenging knowledge for challenge's sake" -- it's because I'm willing to put effort up front for a long-term reward. And it's not even that long-term -- I've found these things pay off over weeks or months, not years, so I'd take the same deliberate approach unless my deadlines were literally days away.
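As a toy illustration of the trade-off (a deliberately trivial sketch, not drawn from any of the projects above): the DSL style gives you one small core, and each new "feature" is an interpreter over that core rather than logic scattered across call sites.

```haskell
-- A tiny embedded DSL: the core syntax...
data Expr
  = Lit Int
  | Add Expr Expr
  | Mul Expr Expr

-- ...and two independent interpreters over it. Adding a third feature
-- (say, simplification or serialization) means adding one more
-- interpreter, not revisiting every place expressions are built.
eval :: Expr -> Int
eval (Lit n)   = n
eval (Add a b) = eval a + eval b
eval (Mul a b) = eval a * eval b

pretty :: Expr -> String
pretty (Lit n)   = show n
pretty (Add a b) = "(" ++ pretty a ++ " + " ++ pretty b ++ ")"
pretty (Mul a b) = "(" ++ pretty a ++ " * " ++ pretty b ++ ")"

example :: Expr
example = Mul (Add (Lit 1) (Lit 2)) (Lit 3)  -- (1 + 2) * 3
```

The upfront cost is designing `Expr` well; the payoff is that everything downstream composes instead of accreting.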
My experience has been (31, SF) that companies are fundamentally disorganized, frequently reinvent things (a behavior seen at all levels), are skewed by politics, and ultimately rarely succeed or fail because of the code. The fact is that for most startups it's not the code that's make-or-break (Twitter, Snapchat, Facebook, Airbnb, Uber); it's the business execution.
It's been my career experience that the only way to be a 10x engineer is to prevent management from engaging in unnecessary projects (Bob wants to rewrite X in Node, Joe wants to make a service whose only responsibility is CRUD to one table, Sally wants to move it all to NoSQL). It's been my career experience that soft skills give the best ROI.
So when I see a list like this, I find it hard to imagine how "Profunctor Optics" is what Zynga (or any company I've worked at) needed to be successful.
What functional programming brings to the table, aside from dense jargon, is the tools to build systems that are _correct by design_ and have certain, provable properties. For businesses this means they can spend less money fixing errors in their software and avoid losing revenue if they gain a reputation for releasing unreliable software. For programmers it means focusing on delivering instead of fussing around with runtime type errors, deadlocks, and the like.
Where this is useful is reducing the risks associated with failure: _when_ your software fails, what is the worst that could happen to your users or your business? If the answer is, "well some people might see the wrong blog article or have to re-submit their comment" then you have your answer. If your system is handling orders on a trading platform where an error could cost someone a few hundred million dollars... well it might be worth the effort to eliminate the possibility of as many errors as possible by using a better tool to help you with that.
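A tiny, hypothetical sketch of what "correct by design" means in practice (the trading-flavored names here are made up for illustration): make invalid values impossible to construct, so the validity check happens exactly once, at the boundary.

```haskell
-- In a real module, the Quantity constructor would not be exported;
-- only mkQuantity would be, so no code can conjure an invalid Quantity.
newtype Quantity = Quantity Int
  deriving (Eq, Show)

-- The single validation boundary (bounds are illustrative).
mkQuantity :: Int -> Maybe Quantity
mkQuantity n
  | n > 0 && n <= 1000000 = Just (Quantity n)
  | otherwise             = Nothing

-- Downstream code takes a Quantity, so "forgot to validate" is not a
-- bug this function can have: the type rules it out.
orderValue :: Quantity -> Int -> Int
orderValue (Quantity n) pricePerUnit = n * pricePerUnit
```

For the blog-comments system this is overkill; for the trading platform it's the cheap end of a spectrum that runs all the way up to machine-checked proofs.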
The success of some companies in spite of not using _strict_ functional programming languages doesn't disprove anything the FP zealots have been saying for years. It only demonstrates how much money and time we invest in absorbing the cost of developing and operating software with innumerable, unknown errors.
I mean, by that criteria, why learn anything?
The world is not just startups, btw. I work on products that have very, very defined requirements. Hell, sometimes I'm implementing an RFC. FP is a huge win for us, and yes, I've used profunctor optics in a high-performance network application.
This might be tongue in cheek, but I agree with the parent. Engineering excellence will not further your career much; companies tend to oversell meritocracy. Salaries flatten out quickly, and there is a huge supply of good-enough developers if you step out of line too much.
What other professions besides lawyers or doctors regularly have a 3 or 4x range for salaries of people more or less with the same job description?
The business itself was oblivious to why the project was successful. Or how we were able to extend it so quickly and retain stability.
Even on that project, 90% of the difficulty was social/political. Just as Java allowed average programmers to write above average code, the same thing can be said of FP.
I would like to challenge that by pointing out that not a single piece of large software (except maybe a compiler or two) has ever been written using the skills/concepts of the higher levels of this chart. What you say may be true if generalized to better upfront design etc., but there's also no evidence that this is better done using the approaches discussed. It's not even about measuring productivity; it's about arguing over the properties of an empty set. When it comes to small (<100KLOC) and non-large (<1MLOC) programs, the difficulties are not that great to begin with, and depend mostly on the essential complexity of the problem rather than on abstraction/code organization etc.
> it's because I'm willing to put effort up front for a long-term reward.
You're equating upfront effort with a particular choice of technique, and one that has never been put to the test.
"I would like to challenge that by pointing out that not a single piece of large software ... has ever been written using the skills/concepts of the higher levels of this chart."
To which I gave two counterexamples (today and on other occasions).
All software systems should be modular to some degree, so I am not clear exactly what your criteria for a large software system are. In both my examples, the Haskell codebases are monolithic repositories where everything is type-checked and built together.
You keep asking for quantitative data for a comparison with other languages. But my answer is the same as last week. It's good people that make software efficiently and cheaply. Give engineers PHP and they'll still manage to build something good.
Haskell is just a tool, but it's a tool that, increasingly, good people are asking to use.
The systems at Barclays and Standard Chartered were built and are currently maintained very cheaply because good people were hired. Haskell just happened to be their preferred tool.
Let a "software system" mean any assembly of processes that communicate to provide some shared functionality, with the components being coupled to one another in some non-trivial way (i.e., there are correctness conditions that cross process boundaries, so that changes to one program may necessitate changes in others). Excluding the compiler, how many lines of Haskell code do you have in your largest system (using my definition)?
Oh, thank god, finally we get a number. It turns out that I was completely wrong, and that in its 20 years of hyped existence, someone has built something big with Haskell once. You may think I'm sneering, but only a little bit, because while one is almost nothing, it is much better than actually nothing, because at least it is a first anecdote. Now, who do I have to pester to get some more metrics?
> More details here:
Sadly, there are no more relevant details in that talk.
I am, however, very skeptical of the interpretation some people, especially in typed FP, give to types and the reasons they believe types provide a benefit. For example, the relationship between useful type systems and software correctness is not direct. I am currently using a formal verification tool that is completely logic-based (it has an interactive theorem prover and a model checker, and is backed by decades of careful mathematical analysis of its soundness) which is completely untyped, and yet it is as powerful as Coq for proving correctness of algorithms. If direct, formal proofs of correctness are what you're after, types might not be the best solution. Types, however, have other clear benefits that are not related to formal correctness, or, at least, not to correctness of global program properties.
Low cost of adoption + clear minor tooling benefits is usually enough evidence to adopt Java/C# style static types.
Haskell style static typing has a very high cost of adoption and so must conclusively show a strong benefit in order to be adopted by industry.
At the moment I feel Haskell companies can get away with using Haskell because PL enthusiasts are willing to absorb the training costs.
What are you talking about? I know for a fact this is untrue. How do you continually justify making up things and passing them off as facts?
Why? Because I missed a point of data in the ocean of hype, and there is one? Two? Please, set the record straight and provide us with some facts. Maybe in Haskell's 20-year history as the world's most hyped language there were one or two or maybe three non-compiler programs written in it that aren't very small. Maybe there is even one anecdote out there with some actual information in it, although I've looked for one - a lot - and couldn't find any. So right now, the people who are making stuff up and passing it off as fact are those who claim significant impact over and over and over without a shred of even the tiniest anecdotal data. How is that justified?
I have absolutely no problem saying that Haskell works better for large interactive software than other languages, once there are a couple of anecdotes around, but the fact is that currently there aren't any (at least none that I could find). What we have at the moment is a lot of vague claims with zero metrics.
But perhaps I should explain the source of my skepticism. First, it stems from the almost unprecedented gap between hype and evidence. I would imagine that after so many years, given the grand claims, there would at least be some good anecdotes. That there aren't any inspires skepticism. Someone here likened Haskell's abstractions to load-bearing materials as opposed to other languages' mud. If that were true, the reasonable prediction would be to see Haskell skyscrapers towering over a sea of mud huts; that we actually observe the opposite inspires skepticism. Second, it stems from my general skepticism (based on CS theory and 20 years of experience) towards the impact any language can make. I.e., I have not encountered a case where the choice of language was the determining factor. Haskell is perhaps the most notable example of a language that claims to make a significant difference by virtue of language-level features (as opposed to runtime features, like GC). I would very much like to see how big that contribution is, if it exists at all.
BTW, "extremely large projects" are anything above, say 20MLOC. What projects of that size have been written in pure functional style? What large projects (>5MLOC) have?
I wonder: to what extent is this because those higher-level concepts tend to shrink programs in the first place? I mean, big is bad; there's no debating this. Big programs can only arise from necessity or stupidity. And if those fancy features are any use, they must be reducing the need for big.
I would ask myself a slightly different question: what kinds of programs still have to be big, even if you have all these fine concepts at your disposal?
And of course, big is risky, and that tends to make us choose more conservative options. Why use Haskell when C++ has shown in the past that it could do that kind of job? Not to mention the network effects: even if Haskell as a language were better than C++ at some specific job, you might still choose C++ because of libraries, commercial support, and developer availability.
> When it comes to small (<100KLOC) and non-large (<1MLOC) programs
Oh, so that's how you're calibrated… My. To me, small means <10KLOC. 100KLOC is already big, and 1MLOC is gigantic. Besides, I've seen a couple of multi-million-line programs, and I don't believe for a second they had to exceed 100KLOC. Such programs are more about piling historical accident on top of historical accident than about encoding a complex problem domain.
A small compiler takes a couple thousand lines of code. With the proper tools, it can often be squeezed into 2KLOC (Source: the STEPS project from http://vpri.org). It won't optimize like GCC, but you rarely have to anyway. Now you have a DSL for writing an executable specification. What kind of specification is so complex that it cannot even fit in 50 books?
Of course you have to keenly understand the problem domain to pull that off, and that often means solving it the crappy way the first time around. Which means you're never going to rewrite it the proper way: it would be way too risky. So of course it has seldom been put to the test. One does not just hinge an entire business on original research.
> You're equating upfront effort with a particular choice of technique, and one that has never been put to the test.
No, not never. I have at least one example: one of my uncles once had to write a number of database transactions. It was one of his first jobs, and he had 1 year to do it. Seeing how tedious it would be, he first thought about the problem, then devised a DSL in which he could write the damn transactions. Since he wasn't exactly senior, and had crappy tools (he used B), the DSL took him about 6-7 months to perfect. (During which management was on the verge of panic, because no transactions had been done yet.)
At the 8-month mark, all transactions were done and tested, and had a surprisingly low bug count. The client was very pleased, and the contract was renewed for another batch of transactions - same size, but to be done in 8 months this time. That batch was given to a co-worker, who used my uncle's DSL to perform 8 months' worth of contracted work in 1 month.
DSLs sometimes work.
None at all. No one, AFAIK, claims a reduction of even a single order of magnitude in code size.
> I would ask myself a slightly different question: what kinds of programs still have to be big, even if you have all these fine concepts at your disposal?
All those that are big now. I'm not talking about a single executable, but about an entire system (specification, development and debugging have always been done modularly, regardless of whether there's a single process or a distributed system). I see no way to implement the requirements of, say, an air-traffic control system, complex avionics or a banking system in software that isn't very big.
> 100KLOC is already big
A mid-sized enterprise software system is ~5MLOC. Google and Facebook have codebases that are measured in the hundreds of millions LOC. You can use whatever definitions, but if you look at software actually being constructed, much (if not most) of development effort in the industry is systems that are about ~5MLOC.
> What kind of specification is so complex that it cannot even fit in 50 books?
My guess? The majority of software written today is part of such specifications. I once worked on a medium-sized air-traffic control system designed for a relatively small area and number of planes, whose informal functional specification was ~10 books. The specifications for the avionics software of a fighter jet developed in the 80s were ~2000 pages of structured natural language (source: http://www.wisdom.weizmann.ac.il/~harel/papers/Statecharts.H...)
http://vpri.org claims about 3 orders of magnitude, though most of it is not because of the languages but because of the removed redundancies. The languages do seem to be responsible for at least 1 order of magnitude.
As for the rest of what you say, I won't claim anything. It just makes me feel… uneasy. Okay, those specs are that big. Do they have to, though? I saw a big fat list of requirements in my last gig, and many were duplicated into 2 slightly different versions, as were some pieces of the code. And that's the obvious stuff. There was more subtle waste, where simpler alternatives would have fit the bill but weren't applied for historical reasons (I asked the architect; there was a reason for everything).
Almost everywhere I look, I see a wasteland of useless code, and even the specs aren't that clean to begin with. It feels like proper DRY alone would have reduced the size of this stuff by a factor of 2 or 3. Maybe I was unlucky enough to work in especially crappy environments, but from what I hear, that's the norm. So far, you're the only one I've met who challenges that perception.
Compared to what? C? I'm not talking about C, but about any modern language.
> Do they have to, though?
Yes. Or, at least, everybody (including me when I first saw them) says "this cannot possibly be this complicated", and after understanding them, everybody says, "oh, OK". But let me put it another way: if the modern world really only requires simple software, then our work is nearly done. Some would use Python, some would use Java, some would use Haskell -- but if the largest software system needs to be ~100KLOC, then none of it matters too much. Writing such a piece of software isn't hard regardless of what language you use (or, rather, the difficulty is in the essential complexity; there's not much accidental complexity in 100KLOC). Even assuming one methodology were 15% better than another, it wouldn't make much difference to the bottom line, because producing 100KLOC software is cheap anyway. And if that were the case, investing in programming languages would be an even bigger waste, as investing in simplifying specifications would have a much bigger impact and would cost a lot less.
But just to get a sense, GHC -- which is a compiler, and, as you've noted, compilers tend to be small -- is about 400KLOCs of Haskell. The Linux kernel is over 15MLOC of C. Reduce that by an order of magnitude, you still get 1.5MLOC, and that's just for an OS kernel.
> So far, you're the only one I've met who challenges that perception.
I don't argue that there isn't a lot of waste. I argue that even without all that waste, we'd still need software that's very large (or that, alternatively, waste is unavoidable). There are no signs that Haskell reduces this waste at all, or that it dramatically reduces the size of programs.
The domain they tackle is personal computing, which means Kernel, Windowing system, remote communication (web/mail), multimedia… So, yeah: mostly C and C++, by the look of currently popular programs.
That said, much of the (apparently) needlessly complex stuff I have seen was written in C++, and it did look like they didn't have the real-time requirements or resource constraints that would justify the use of such a monster of a language.
> Or, at least, everybody (including me when I first saw them) says "this cannot possibly be this complicated", and after understanding them, everybody says, "oh, OK".
I have yet to reach the second stage. And on one occasion, I did reach a reasonable understanding of the whole system. It definitely had to be very complex to match the specification, but the specifications themselves didn't match the end user's needs.
> (or that, alternatively, waste is unavoidable)
That's the alternative I'm most scared of. I cannot comprehend unavoidable waste, but I can't rule it out either.
Alternatively, I've come across the idea of not solving some problems¹, because the return on investment is just crap. Okay, when safety is involved, you probably cannot do that. Still, the idea that the 80/20 rule is sometimes more like 99.9/0.1 is enticing. Sometimes, full automation is not best. For instance, last I checked, the best chess player ever is a human-computer team, not a computer.
¹ Stop Over-Engineering: https://www.youtube.com/watch?v=GRr4xeMn1uU
That, too, cannot be solved by a programming language :)
But much of complex software deals with things that could not possibly have been solved more efficiently (or even efficiently enough) by humans (like sensor fusion, package tracking, manufacturing control etc.). Also, we're pretty far from full automation. Nobody trusts computers to make the decisions in air-traffic control systems, power plant control software, or even in ERP systems.
The reason we keep building large software is that -- in spite of many problems -- they really do work (in the sense of achieving their goal of higher-throughput; whether or not a higher throughput of flights or of business deals is good or bad for humanity is an entirely separate question).
I think this is absolutely the difference between you and proponents of Haskell. Personally I have found working on 100k LOC codebases very hard in Python and easy in Haskell.
But other than that: yes! It is trivial to verify that Rust indeed fulfills its claims, but it is not trivial to verify that that's enough, namely, that the overall cost of developing in Rust is lower than in C++, which is why I wouldn't switch from C++ to Rust without seeing that at least the costs are comparable. I'm also eagerly waiting for data about Rust's concurrency approach.
I would also tell you this: while I find the pure-functional style aesthetically unattractive for interactive programs and am very much impressed by Clojure's and Erlang's approaches to state, and while I've personally written a semi-popular Clojure library, I have expressed skepticism about Clojure's suitability for large-scale projects. How well a language works in big software is something that you simply cannot extrapolate from experience in small projects, and as much as I like Clojure, I have serious doubts about its applicability, and I would not use it to write a large system without the kind of data I expect from Haskell.
But all those languages (Erlang and Clojure included) have social advantages that make this data much more easy to come by: they are used by people who aren't enamored with the language itself, who aren't PL enthusiasts, and are much more goal-oriented. Their judgment stems mostly from how well things work, not how interesting the language is. So articles with pertinent data (maybe not enough to risk a large project, but certainly more than those you find for Haskell) are more abundant.
So I don't need to beg for technical reports as much because 1. no wild claims that aren't supported by data are made, and 2. there's data out there (for Go, Erlang, Clojure) or it's coming (Rust).
By the way "if it compiles it works" is supposed to be somewhat tongue in cheek, as is "avoid success at all costs".
And we're not saving the industry; the industry as a whole isn't suicidal and will never bet too much money on unproven technologies. The industry isn't so fragile, but research is. Given the amount of money in the industry, even small parts that do bet on unproven tech can cause a funding surge followed by a research winter when the disappointment hits. We're saving Haskell.
If those who adopt Haskell do it with clear vision and not by believing some messianic claims, there would be no great disappointment and no research winter. Nothing is more dangerous to research than wild claims. You need to promise less and deliver more, not the other way around. And because the industry is competitive, it is actually quick to adopt ideas once they show actual significant benefit -- i.e., once they're ready. The industry adopted garbage collection almost overnight; it was almost 40 years after it had been invented, but almost immediately after it had been productized well enough to provide a significant advantage.
If pure-FP is really the way to go in general-purpose programming, eventually the industry will adopt it in some well-productized form. Getting there does require some early adopters, but they're already here and overselling doesn't help to get more good ones. It helps to get precisely those adopters who will end up causing a research winter.
What bothers me in terms of harm to the industry is the waste caused by people switching from one language to another, rewriting libraries over and over and thinning out the effort. I think we're losing years of progress. But there's really nothing I can do about that because even if I can convince someone that a language that is 1% "better" than another does not justify the effort of porting libraries over, there are already too many languages and platforms around, and there's always the question of, so which few languages should we pick, and why should it be the ones you like and not the ones I like. Haskell actually does not contribute much to this problem because its adoption rate doesn't yet make a dent.
Opaleye, an Arrow-based DSL for PostgreSQL that allows the user to create type-safe, composable, and (generally) optimally fast SQL queries.
Halogen, a type-safe frontend framework for PureScript that models view component interaction as an algebraic datatype containing query actions (among other fairly advanced concepts).
If someone is used to a certain level of abstraction, then the higher levels might seem unnecessary until they invest the time into learning them. The only thing unique to functional programming is that it has inherited a lot of terms from theory (profunctor, monad, algebra) rather than inventing friendlier ones.
How are those counterexamples? Do they provide benefits that significantly outweigh their cost?
> the higher levels might seem unnecessary until they invest the time into learning them
Except that this can't go on forever. We know that programming productivity has a theoretical upper bound, therefore there must be diminishing returns, and at some point the cost of more abstraction would outweigh the benefit. We just don't know where that point is. Finding out is not a matter of faith or even personal feeling, but of actual results.
Now, I know that collecting meaningful productivity data is hard, but the fact that evidence for a claim is hard to come by doesn't make that evidence any less necessary. You're free to say that you like FP because you enjoy it more, or even that it makes you feel more productive. But you can't make actual empirical claims without actual evidence.
> The only thing unique to functional programming is that it has inherited a lot of terms from theory (profunctor, monad, algebra) rather than inventing friendlier ones.
I'm not entirely sure that's the case. FP is often its own theory. I'm not sure many of those concepts were studied heavily outside the context of FP, but I could be wrong about that.
But this is just a side point to what you're saying.
Hopefully the smarter bits can be factored out as easier-to-use libraries.
I agree that this wording makes it sound pretentious, but combinators are actually a very useful, practical technique. Parser combinators, in particular, are awesome.
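To give a flavor of why people rave about them, here is a minimal toy parser-combinator sketch in Haskell (my own illustration, not any particular library): a parser is just a value, so small parsers compose into bigger ones with ordinary function application.

```haskell
import Data.Char (isDigit)

-- A parser is just a function from input to (result, leftover input).
newtype Parser a = Parser { runParser :: String -> Maybe (a, String) }

instance Functor Parser where
  fmap f (Parser p) = Parser $ \s -> fmap (\(a, rest) -> (f a, rest)) (p s)

instance Applicative Parser where
  pure x = Parser $ \s -> Just (x, s)
  Parser pf <*> Parser pa = Parser $ \s -> do
    (f, s')  <- pf s
    (a, s'') <- pa s'
    Just (f a, s'')

-- Consume one character satisfying a predicate.
satisfy :: (Char -> Bool) -> Parser Char
satisfy p = Parser $ \s -> case s of
  (c:cs) | p c -> Just (c, cs)
  _            -> Nothing

-- Zero-or-more repetition: a bigger parser built from a smaller one.
many0 :: Parser a -> Parser [a]
many0 p = Parser go
  where
    go s = case runParser p s of
      Just (x, s') -> case go s' of
        Just (xs, s'') -> Just (x : xs, s'')
        Nothing        -> Just ([x], s')
      Nothing -> Just ([], s)

digits :: Parser String
digits = many0 (satisfy isDigit)

main :: IO ()
main = print (runParser digits "123abc")  -- prints Just ("123","abc")
```

Real libraries (parsec, megaparsec, attoparsec) add error reporting, backtracking control, and performance, but the compositional core is essentially this.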
It's not even biased towards Haskell; it's biased towards working at Slamdata, where they abuse free monads for everything.
Tired of this elitism.
Some of this stuff is cool and interesting, even to me -- but please don't rank people based on their inferiority of knowledge with respect to you, as you are explicitly doing.
Sorry but no. Functional programming requires a whole set of new skills that most programmers don't have.
Learning the basics: pure functions, higher-order functions, immutability, second-order functions, first-class functions, lambdas, partial application, currying, and point-free style. That's functional programming.
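For anyone unfamiliar with those terms, a minimal Haskell sketch of a few of them:

```haskell
-- A higher-order function: it takes another function as an argument.
twice :: (a -> a) -> a -> a
twice f = f . f

-- Functions are curried, so partial application is just supplying
-- fewer arguments than the function expects.
add :: Int -> Int -> Int
add x y = x + y

addTen :: Int -> Int
addTen = add 10  -- partial application of add

-- Point-free style: define a function by composing others,
-- without ever naming its argument.
sumOfSquares :: [Int] -> Int
sumOfSquares = sum . map (^ 2)

main :: IO ()
main = print (twice addTen 1, sumOfSquares [1, 2, 3])  -- prints (21,14)
```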
I know I have a long way to go before I'm competent with functional programming, but I thought I at least understood the basic landscape, even if I don't know how to apply it properly yet.
No one is interested in type-safety or purity for their own sake (except for aesthetic reasons). I'm interested in better, cheaper software. If Haskell's purity (there are simpler languages with similar type systems) provides that then there should be evidence of that before the claim is made. You have not provided any such evidence, and even implied that you're not even carefully collecting metrics that can be used to construct such evidence.
I can assure you our software is not at all simple. Standard Chartered has industry leading portfolio compression and XVA compute. Barclays has industry leading pricing of exotic derivatives (including pricing on GPUs). Both industry leading solutions built in Haskell on top of frameworks/APIs in Haskell.
> No one is interested in type-safety or purity for their own sake (except for aesthetic reasons).
So are all forms of static verification just aesthetics to you? Or just those offered in Haskell?
> I'm interested in better, cheaper software.
Then hire good people and let them use the tools they want.
The only evidence I could ever hope to offer is that good people build good software. But that would never make an interesting management report.
As someone who makes use of formal methods regularly, I am aware of how weak Haskell's static guarantees are. That is not to say they may not be useful in practice, but that's an empirical question, and the one that I'm desperately waiting for answers to.
> The only evidence I could ever hope to offer is that good people build good software. But that would never make an interesting management report.
It also doesn't support the claims made that Haskell makes a significant contribution.
That's excellent. So please publish some numbers so we could estimate the contribution. With so much hype and zero data I think that the skepticism is very well justified.
> The person responsible for our industry leading portfolio compression joined us because we use Haskell.
I have no doubt that some people really like Haskell, but that's not data.
For my own curiosity, to which languages do you allude in this statement?
That said, functional programming is incredibly difficult to do well without years of practice and the languages still suffer from limitations with performance that are hard to debug. While I still feel that functional programming is the "future" of software engineering, I feel it is quite a ways in the future.
http://debug.elm-lang.org/ as one example.
Elm's debugger (and similar projects like the Pux debugger from PureScript-land) are live systems. They are replaying all events deterministically but you can edit your code, reload it and then replay the exact same series of events or toggle events on/off and see what the resulting state is. rr is super cool and I'm very glad it exists but it's a static analysis tool where the Elm debugger is almost REPL-like in terms of how you can play with things and see the results live.
1. Writes simple programs
2. Writes more complex programs
3. Writes programs that are maintainable
4. Writes programs that are maintainable by others, and communicate module intent clearly to team members who interact with it
5. Writes programs that are maintainable etc., and also make good use of resources, both computational and human
The emphasis on language concepts seems so distracting that I wonder how much time people have left to make their program externally better (better functionality, better performance, more maintainable by others) rather than internally better (make use of clever abstractions). That the two are related is a hypothesis which doesn't seem to be supported by evidence, and is believed by those who enjoy thinking about the latter more than about the former and want to justify their focus.
I'll give you rigorous, but how is it proven? I think it is a fair guess that the number of non-tiny programs (>100KLOC) making use of the "expert" concepts and skills is ~0. The number of large programs (>1MLOC) at the proficient level is also ~0.
(This is intended as a serious question, not a troll. It often seems that in discussions of FP, and particularly of Haskell, someone will suggest that there are few widely-known, large-scale projects written in this style that can be used to evaluate its effectiveness, and someone else will then reply with one or more of the same very small collection of larger projects or high profile organisations using FP/Haskell that are publicly known.)
Please don't talk about what you don't know. Standard Chartered's Haskell compiler is written completely from scratch and is not based on any existing compiler.
(Also, while it has little to do with my point, I've also heard that the person behind SC's Haskell compiler is the one who'd written the first ever Haskell compiler, long before he started working for SC; is that true?)
: Please don't take this too literally. In the two decades that have passed since Haskell was declared the language to end world hunger, someone may have written a bigger program. Maybe even two.
It is also clear that there is not just one FP.
The smugness and self-importance displayed in there is quite the opposite of the virtue of code simplicity put forward by e.g. Don Stewart at HX this year.
Can you give some link? I wasn't able to google it.
In my opinion, if you refactor a Java class to change a setter into something which returns an altered copy, you've just made your code a little bit 'more functional'; if you use a list comprehension in Python instead of a for-loop, your code's become 'more functional'; and so on. The nice thing is that many of these small changes are also good practices to be following regardless of whether you want to be 'functional' or not.
Regarding the 'advanced' concepts, I'd say they exist mostly for library/DSL authors who don't want their users to have to care about those concepts.
Using libraries based on these ideas is great, as the author may have chosen these techniques in order to give the library correctness guarantees, or efficient resource usage, or good error messages, etc.
Using these ideas to write a library can, occasionally, provide a solution to an otherwise tricky situation, e.g. if you need to maintain state, whilst offering the users a pure interface.
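A concrete Haskell instance of that "internal state, pure interface" situation is `runST`: a function can mutate local references internally while exposing a completely pure type to its callers (a standard-library technique, shown here with a trivial example of my own).

```haskell
import Control.Monad.ST (runST)
import Data.STRef (modifySTRef', newSTRef, readSTRef)

-- Pure from the caller's point of view: the type is [Int] -> Int,
-- with no IO or state visible. Internally it uses a mutable
-- accumulator; runST guarantees the mutation cannot leak out.
sumST :: [Int] -> Int
sumST xs = runST $ do
  acc <- newSTRef 0
  mapM_ (\x -> modifySTRef' acc (+ x)) xs
  readSTRef acc

main :: IO ()
main = print (sumST [1 .. 10])  -- prints 55
```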
Using these ideas to write a library for your own usage changes the cost/benefit calculation considerably, since you're not limited to providing an API: if you want efficient resource usage, can you achieve that by keeping resources in mind when writing the application code? If so, you probably don't need to over-engineer one particular component to enforce this for you. And so on.
Another thing I'd change about this ordering is to put dependent types much earlier, e.g. at "advanced beginner". Dependent type systems are very simple; they're just a slightly more complex version of lambda calculus. Sure you can use dependent types in complicated ways, just as you can write complicated lambda functions. They're certainly easier to grasp, and strictly more powerful, than Haskell's complicated mess of monomorphism restrictions, singletons, kinds, type families, etc.
Even "just" refactoring Java or Python code to be more functional is a huge win. Even "just" using Scala in a more or less FP way is a huge win. I don't fret being unfamiliar with Lenses or whatnot yet.
The Reddits for the different functional programming languages are a good place to hangout (and a frequent source of blogs and videos on these topics), and for FP in non-FP languages, there are good Github communities (e.g. http://github.com/fantasyland/, no association with FIOL).
I'd also humbly suggest that LambdaConf 2017 (May 25-27) is a great place to learn more about functional programming. There will be a special two-day LambdaConf workshop prior to the conference that introduces the basics of functional programming (no background knowledge), another that is aimed at a slightly more experienced audience, and then at the subsequent conference, plenty of workshops and sessions to learn many of these topics (and others).
It's a journey, but everyone can get there if they have the interest. Most of the resources out there (blogs, videos, even e-books) are free, and the remainder are low-cost if you are already working in tech.
Good luck and please just let any of us lurker functional programmers know if you need a hand. :)
It's terse but comprehensive, and the exercises have been a pleasant source of a-ha moments. There's a more recent version of the course offered, but I haven't been through it.
Singletons are a way to emulate dependent types with typeclasses, datakinds, and GADTs
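To make that concrete, here is a minimal sketch of the pattern (toy definitions of my own, not the `singletons` library): DataKinds promotes a `Nat` type to the kind level, a GADT indexes vectors by their length, and a singleton type gives runtime code a handle on a type-level number.

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- Type-level natural numbers, via DataKinds promotion.
data Nat = Z | S Nat

-- A GADT indexed by its length: the type tracks the element count.
data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- Total head: the type only accepts non-empty vectors, so there is
-- no runtime failure case at all.
vhead :: Vec ('S n) a -> a
vhead (VCons x _) = x

-- The singleton: exactly one runtime value per type-level Nat,
-- which is how runtime code can "see" a type-level number.
data SNat (n :: Nat) where
  SZ :: SNat 'Z
  SS :: SNat n -> SNat ('S n)

-- A replicate whose result length is fixed by the singleton argument.
vreplicate :: SNat n -> a -> Vec n a
vreplicate SZ     _ = VNil
vreplicate (SS n) x = VCons x (vreplicate n x)

main :: IO ()
main = print (vhead (vreplicate (SS SZ) 'x'))  -- prints 'x'
```

In a truly dependently-typed language the `SNat` boilerplate disappears, since values can appear in types directly; the `singletons` library mostly automates generating this kind of code.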
It didn't get much traction back then.
But overall I like the idea as a roadmap for learning a particular language or skill set. I'd like to see something like that, but with references to material where each particular skill can be learned, etc.
I think it's a better idea to separate skills required to use most of the language, and the ones required to create libraries.
Here are a couple of threads: https://twitter.com/bitemyapp/status/803720075702255616
The rest is just icing on the cake, and you don't have to deal with it if you don't find it interesting.