One way to process this list is to see it as an elitist construct intended to put you down, degrade you, or extract money from you for training, etc. (which might even be true, but generally it is better, IME, to assume that people are genuinely trying to provide what they see as value).
The other is to ignore any insult, intended, perceived, or a mix of both, ignore all the hierarchy labeling - 'beginner', 'advanced beginner', 'standard ladder', etc. (who tf cares?) - and just see if there are skills you can pick up or explore on your own.
That said, I laughed at 'equational reasoning' being considered an 'expert skill'. It is considered an advanced technique, but IMO the basics are trivial to pick up, and I gave a lecture/demo on this at a local FP conf. Sure, it can get very hard if you tackle a hard problem, but that is true for everything.
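To show how small the basics are, here's a sketch of the classic map-fusion law, proved in comments using nothing but map's defining equations (the concrete definitions at the bottom are just a spot check, not the proof):

```haskell
-- Map fusion:  map f (map g xs) == map (f . g) xs
-- The only facts used are map's two defining equations:
--   map _ []     = []                           -- (1)
--   map h (y:ys) = h y : map h ys               -- (2)
--
-- Base case, xs = []:
--   map f (map g []) = map f []                 -- by (1)
--                    = []                       -- by (1)
--                    = map (f . g) []           -- by (1), read backwards
--
-- Inductive case, xs = y:ys (assume the law holds for ys):
--   map f (map g (y:ys)) = map f (g y : map g ys)        -- by (2)
--                        = f (g y) : map f (map g ys)    -- by (2)
--                        = (f . g) y : map (f . g) ys    -- by IH and def. of (.)
--                        = map (f . g) (y:ys)            -- by (2), read backwards

-- Spot check on one concrete input (not a proof, just a sanity check):
unfused, fused :: [Int]
unfused = map (+ 1) (map (* 2) [1, 2, 3])
fused   = map ((+ 1) . (* 2)) [1, 2, 3]
```

Both evaluate to [3,5,7]; the proof above is what tells you they agree on every input, not just this one.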
I saw this pdf before all the angry comments, and was very excited to see a list of fun topics to explore and learn. I love skill trees in Skyrim etc, and to me this looked like a real life version! Time to level up!
I definitely agree that the most difficult part of learning something new, especially something as esoteric as FP, is knowing where to start and then where to go from there. I also benefited from and highly recommend functional-programming-jargon (https://github.com/hemanth/functional-programming-jargon).
I have offered paid training at Lambda Conf in the past with my coauthor Julie and I am opposed to this ladder for _many_ reasons.
I think the class (as in hierarchy) and topic-chasing anxiety it will induce in many is counter-productive. It's also extremely flawed as a guide for what to learn or in what order to learn it. One of the worst attempts I've seen on that front, in fact. And yeah, the labels are useless anyway. It doesn't really matter what one thinks a "beginner" is.
It's also just bad optics. I don't know why John does stuff like this.
I'm not sure I would have called it a 'ladder', and the fine-grained tiering into buckets seems overly ambitious, but I'm guessing you'd end up agreeing that there are topics that most beginners will find confusing/unfamiliar. If the end goal is to make it easier to sort talks into tracks for conferences, what's the issue? Do you really think Profunctor Optics should be a beginner-level talk, or that using "functors" should be advanced? I'm sure you could argue specifics or make a good case for mistakes made in this chart, but to say "I don't know why John does stuff like this" seems unnecessarily harsh.
In response only to the third point, there are some very nice examples of "advanced" equational reasoning in the work of Richard Bird. Pearls of Functional Algorithm Design [0] is a great book for exploring this and demonstrates a very "expert" nature.
I don't think it's mentioned anywhere in the poster, but it's also perhaps worth noting that this came out of a survey, it wasn't just dreamed up by one person / a few people.
1. This list is very Haskell-focused: it includes lots of features which only make sense in Haskell or very Haskell-like languages, and lacks mention of many interesting functional programming concepts which don't appear in Haskell (like ML-style modules and functors, row types, macro systems and homoiconicity, and so forth.) There are a lot of functional languages which have very different ideas about how to program, and this list doesn't reflect that.
2. Some of the 'skill hierarchy' choices feel a bit confused and arbitrary. For example, 'Use lenses & prisms to manipulate data' appears as a Competent skill, but 'Use optics to manipulate state' appears as a Proficient skill, despite being slightly different ways to refer to an effectively identical skill. (I assume the latter means "…use lenses & prisms to manipulate data, but in a state monad," which is only a tiny difference.)
3. While I like the idea of a list of a road-map to learning, I feel like this gives the unfortunate impression that many of these are obligatory skills. It calls itself a "standard" hierarchy (which makes it sound like a consensus, rather than just a single person/group's opinion) and has language like "…skills that developers must master on their journey…" (emphasis mine), but the list includes a lot of things that are far from necessary for deeply understanding functional programming. You could lead a long (academic or industry) career in functional programming without a deep understanding of many concepts listed: things like comonads, recursion schemes, finally tagless interpreters, higher-order abstract syntax, and so forth. All of them are useful concepts and deserve study, but you can definitely be a functional programming expert without ever having seen a comonad.
In many ways, I wish this list took a cue from Benjamin Pierce's Types and Programming Languages which features not an ordered list but a graph of the concepts related in the book, and how they relate to other concepts. It would be more complicated, true, but also a lot more honest about the academic and intellectual path you might want to take through functional programming, and without giving the idea that you need to master the vagaries of dozens of Oleg Kiselyov's papers just to be "competent" in functional programming.
This is ... stated much more politely and in far more detail than I managed on Twitter.
I work in FP, programming Clojure for real-world web applications every single day, and 90% of this list is completely meaningless to me.
It's nothing but more of the Haskellite strain of "everything must be hideously complicated type theory or it's not 'real' FP."
And that shit can fuck right off. It's incredibly hostile and disrespectful both to new programmers and existing ones. It's even harmful to Haskell itself as a language, because this persistent attitude that you have to learn advanced type theory just to get anything done in Haskell is one of the biggest barriers to adoption and learning of the language, especially when even most of the material supposedly for "beginners" insists on thrusting this attitude on the reader.
Once again, just as they did by platforming an open fascist, LC demonstrates a complete cluelessness both of the community it claims to represent, and of the impact their "representation" has on said community.
The fact that it's meaningless to you doesn't mean it doesn't have any meaning :)
Your "hideously complicated type theory" is many people's "hideously horrible paren-based syntax" - i.e. Lisp. So, how many people don't learn Lisp only because of its syntax? I'd say a lot.
Many of the concepts make sense when you have an explicit type system, so if you want types you may want to learn them.
Others I suspect you are already using, even if you don't know them by name. Have you ever used streams? Congratulations, this is what codata is.
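To make the stream/codata connection concrete: a stream is just a list with the nil case removed, so every value is necessarily infinite. A minimal sketch in Haskell (the names natsFrom, takeS, and mapS are mine for this example, not from any library):

```haskell
-- Codata in miniature: Stream has no nil constructor, so every value
-- is necessarily infinite. Laziness keeps the definitions productive;
-- consumers only ever observe a finite prefix.
data Stream a = Cons a (Stream a)

-- Corecursion: build the stream of integers counting up from n.
natsFrom :: Integer -> Stream Integer
natsFrom n = Cons n (natsFrom (n + 1))

-- Observe a finite prefix of a stream as an ordinary list.
takeS :: Int -> Stream a -> [a]
takeS n (Cons x xs)
  | n <= 0    = []
  | otherwise = x : takeS (n - 1) xs

-- Pointwise transformation, the stream analogue of map on lists.
mapS :: (a -> b) -> Stream a -> Stream b
mapS f (Cons x xs) = Cons (f x) (mapS f xs)
```

For example, takeS 4 (natsFrom 0) is [0,1,2,3]. Anyone who has consumed a lazy sequence element by element has been doing this, whatever their language calls it.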
> "hideously horrible paren-based syntax" - i.e. Lisp. So, how many people don't learn Lisp only because of its syntax? I'd say a lot.
But the point is that nobody's putting 'S-expression syntax languages' in a Lisp-centric list of essential functional programming skills. The idea that half this stuff is supposedly mandatory is a skewed perspective.
> Others I suspect you are already using, even if you don't know them by name.
And if the list is comprised of Haskell-centric jargon for more generic notions, that's obviously grounds for criticism.
> But the point is that nobody's putting 'S-expression syntax languages' in a Lisp-centric list of essential functional programming skills.
They should've probably said "FP-only" (i.e. pure) languages to exclude imperative languages that merely support functional programming constructs, like Clojure, Erlang, etc.
I agree; many of the concepts that are useful in pure languages (e.g. monads) are not that practical in imperative/impure or dynamic languages (though forgoing them reduces your ability to reason about your program).
> And if the list is comprised of Haskell-centric jargon for more generic notions, that's obviously grounds for criticism.
It's the other way around - the items on this list are the more generic concepts [0]. Less expressive languages lack the (practical) ability to reason about those concepts generically, hence the need for more specialized constructs, e.g. streams, transducers, etc.
> They should've probably said "FP-only" (i.e. pure) languages to exclude imperative languages that merely support functional programming constructs, like Clojure, Erlang, etc.
Ahh, and there would be that definition again, the one at the root of the problem.
Please find your nearest whiteboard and write the following 100 times or until it sinks in: "Haskell is not the only Functional Programming language."
FFS, Haskell did not invent functional programming; the languages that did predate it by some time. Lisp dates back to 1958. Scheme first appeared in 1975. ML dates to 1973. Haskell didn't even exist until 1990.
This weird ahistorical definition of "functional programming" is both useless and inaccurate. Static typing and purity are not the sole measure of whether a programming language is functional, and never have been. It's only been in the last few years that there seems to be this desperate push to define FP as "Haskell", which as far as I can tell is the product more of a desperate kind of evangelism than of any sound argument for such a limited definition.
We get it, you like Haskell. But some of us also like Clojure, and Erlang, and Elixir, and F#, and OCaml, etc. etc., and you don't get to just redefine terms to cut out languages you don't like. Reality doesn't work that way.
But Haskell does stand out among your list as being a pure functional programming language (it is not the only example, of course). Whether this is a good or bad thing, I will not get into. When programming in a pure language you typically have to use some quite recent/advanced ideas such as monads and lenses, just to get practical work done. This is not the case with Clojure, F#, Scala, Erlang, etc., where side-effects are idiomatic. In fact, if you tried to go fully pure in these languages, you would run into problems with performance.
To avoid confusion, they should have used the term "pure functional programming" or even "typed pure functional programming".
> To avoid confusion, they should have used the term "pure functional programming" or even "typed pure functional programming".
That's what was used when I started learning FP, which was maybe three or four years ago. There was FP, and then purity was a secondary characteristic. Haskell was pure and typed, but that didn't make other languages not FP, when they still had all the same basic tools and the same idiomatic focus on composition of functions over mutable state and imperative logic. No, you didn't need a special magic word to allow you to have side-effects like I/O, and most also allowed for mutability (even Scheme had set! after all), but they are first and foremost functional languages. In many ways, Haskell was and still kind of is the odd one out, not counting experimental academic languages like Idris and Coq.
This wasn't even a controversial statement just a few years ago. But as Haskell has been more and more in vogue, there's this weird fixation on narrowing the window of what "FP" is to refer only to, well, Haskell. And it's very clearly what the LC list is all about, because so many of the concepts described are both largely unique to Haskell and its closest relatives, and unnecessary for practical, everyday work in the vast majority of FP languages.
And ultimately, attitudes like that hurt the whole field. In trying to fix Haskell's image problem by brute force, they wind up damning the larger domain in the process.
If your language is not purely functional it is imperative. There is really no conceptual difference between Clojure and Python, except for syntax/macros and that Clojure has immutable data structures in clojure.core.
You could use mutable arrays in Clojure the same way you could use immutable arrays in Python (yet nobody is calling Python a functional language.)
Still, I agree that functional mostly means "language with closures and immutable data" in the mainstream, but in the context of this post I'd say it's not that hard to guess that the authors meant purely functional.
I really don't understand your rage about this, you could offer up constructive criticism and suggest an alternative that more closely suits Clojure/Lisp development.
The strain you're referring to seems like a perception that is somehow self-perpetuating. My experience is that of an incredibly helpful community that gives up its time to help and educate people.
I think this list is not reflective of a general Haskell outlook. It's reflective of an outlook of people standing _outside_ Haskell (LC is historically a Scala-heavy conference) and projecting onto it a certain structure of expectations that isn't actually how Haskellers in general view things. I agree that this misperception isn't a good thing for Haskell -- but it's a misperception imposed from the _outside_.
FP is often a sliding scale. You can program with as much or as little type safety, purity or totality as you want. Some folks like to push the boundaries of what is possible without shortcuts, taking inspiration from recent research. Many Haskell, OCaml, Scala and F# libraries will require some of the concepts in this list to understand. I do agree that the list is somewhat Haskell centric though.
I think that Haskell's takeover of the term "functional programming" in certain circles/contexts is both inevitable and nothing you should care about (I, too, program in Clojure, an imperative-functional language, and I don't mind it not being perceived as "true FP"). See this relevant discussion: https://www.reddit.com/r/programming/comments/59hubc/happine...
> In many ways, I wish this list took a cue from Benjamin Pierce's Types and Programming Languages which features not an ordered list but a graph of the concepts related in the book
I disagree. This list contains lots of "Haskellisms" like monad transformers and lenses, which are effectively design patterns that have emerged over time as powerful ways to work within and around the idiosyncrasies of Haskell.
We could argue all day about whether different 'families' of FP languages, like MLs, Lisps, Joys, etc. are more or less "advanced", so let's avoid any ambiguity and stick to the Haskell family of languages. Let's pick a language which is strictly more advanced and powerful than Haskell, such as Agda or Idris: not only can we implement all of the Haskell concepts in these languages (especially Idris, since we can toggle totality checking, rather than having to work around it), but the inclusion of dependent types lets us implement many more powerful patterns, and even simplifies many of the Haskell patterns (e.g. there's no need for hacks like singletons, normal functions can be used instead of type families, etc.).
So, given this vast landscape of new possibilities, which of the many 'design pattern' concepts have been chosen from these ultra-advanced, super-powerful languages?
> Dependent-Types, Singleton Types
That's it! Apparently the only new concept in these languages is the fact that dependent types exist; but even that's been constrained to a Haskell context, by lumping it in with singletons!
There's no mention of proof objects (de Bruijn criterion, etc.), computational content, erasure or proof irrelevance. No mention of tactic languages. They do give kinds and rank-n types, but they don't mention cumulative universes! No HoTT concepts are mentioned, like types-as-spaces, identity-as-paths or univalence. There's not even a mention of intensionality vs extensionality!
So what about skills? Surely there'd be specifics like modelling divergence co-inductively via Partial/Delay? Maybe propagating invariants with initial algebras, even if it's just a Vector example? What about something trivial, like heterogeneous equality?
> Use proof systems to formally prove properties of code.
> Use dependent-typing to prove more properties at compile time
Again, they've just listed that "these things exist", rather than giving any specifics whatsoever.
What about Haskell? Does that exist? You bet! It even has this thing called lenses; and not only do they exist, there are all sorts of nuanced proficiencies to them!
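For what it's worth, the Vector example mentioned above (propagating a length invariant through the types) can be sketched even in GHC Haskell with GADTs and DataKinds, no Agda or Idris required. This is a minimal illustration of the idea, not something taken from the poster:

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- Type-level naturals used to index vector length.
data Nat = Z | S Nat

-- A length-indexed vector: the length is part of the type.
data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- Total head: the type rules out empty vectors at compile time,
-- so there is no runtime "empty list" error to handle.
vhead :: Vec ('S n) a -> a
vhead (VCons x _) = x

-- Zipping demands equal lengths; a length mismatch cannot arise.
vzip :: Vec n a -> Vec n b -> Vec n (a, b)
vzip VNil         VNil         = VNil
vzip (VCons x xs) (VCons y ys) = VCons (x, y) (vzip xs ys)

-- Forget the index when handing the data back to list-land.
toListV :: Vec n a -> [a]
toListV VNil         = []
toListV (VCons x xs) = x : toListV xs
```

Writing `vhead VNil` simply fails to type-check, which is exactly the kind of concrete skill ("propagate an invariant through an index") the list could have named.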
.... except! I really think Lenses are a concept orthogonal to Haskell. They can be implemented anywhere and usually make good sense. I think we have a long way to go before we've come to understand their best place in functional programming—especially construed as widely as you note.
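To back up the claim that lenses need nothing Haskell-specific beyond parametric polymorphism: a minimal van Laarhoven lens fits in a few lines of plain Haskell with no library at all. The names mkLens, view, set, and over are mine, chosen to echo the lens library's vocabulary:

```haskell
{-# LANGUAGE RankNTypes #-}

import Data.Functor.Const    (Const (..))
import Data.Functor.Identity (Identity (..))

-- A van Laarhoven lens: one polymorphic function that becomes a
-- getter or a setter depending on which Functor you instantiate.
type Lens s a = forall f. Functor f => (a -> f a) -> s -> f s

-- Build a lens from a getter and a setter.
mkLens :: (s -> a) -> (s -> a -> s) -> Lens s a
mkLens get put f s = put s <$> f (get s)

-- Instantiate f ~ Const a to read...
view :: Lens s a -> s -> a
view l = getConst . l Const

-- ...and f ~ Identity to write or modify.
set :: Lens s a -> a -> s -> s
set l a = runIdentity . l (const (Identity a))

over :: Lens s a -> (a -> a) -> s -> s
over l g = runIdentity . l (Identity . g)

-- Example: focus on the first component of a pair.
_1 :: Lens (a, b) a
_1 = mkLens fst (\(_, b) a -> (a, b))
```

Here view _1 (1, 'x') gives 1 and set _1 9 (1, 'x') gives (9, 'x'); nothing beyond Functor and rank-2 polymorphism was needed, which is why the idea ports to other languages.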
I get the sense that most engineers would almost never use, and never need to use, most things beyond advanced-beginner in this sheet. It may be fun to brag about knowing how to use "embedded DSL with combinators," but is that really the best thing to help your startup succeed?
I suppose I'm slightly bothered by the fetishizing of challenging knowledge for challenge's sake. Most of the people I know who learn about "Embedded DSLs with combinators" and set theory can't seem to stop rubbing in how smart they find themselves, and yet strangely they never seem to be the most productive engineers (in terms of delivering useful and reliable code).
I know it can be a little disappointing to feel like there isn't a pot-of-gold at the end of the rabbit-hole, but this is no different than most disciplines. You use arithmetic daily, algebra weekly, calculus monthly, imaginary numbers annually or less.
A lot of these techniques are things that take a bit more up-front effort but pay off over the long (or even medium) term. I think that's why it might make people seem less productive—although of course a lot depends on your perspective and experience. Measuring productivity is an open problem in software engineering and, in my experience, people's intuitions about it are all over the board, which means that nobody's any good at it.
DSLs are a great example of this long-term dynamic. Compared to throwing together an ad-hoc library, a DSL approach takes a lot more design effort but, ultimately results in a system that's more coherent, elegant and expressive.
I've worked on both kinds of projects and the difference is palpable: with the first style, productivity is constant at best—getting things together in the first place is a struggle, and then adding features or fixing bugs continues to be a struggle. On the other hand, the second style of project is even more of a struggle at first, but once it works it's like magic: new features are easier to add than you'd expect, and I've had way more things work after my first attempt than anyone has a right to expect.
I know which style I prefer, and it's definitely not because I "fetishize challenging knowledge for challenge sake"—it's because I'm willing to put effort up front for a long-term reward. And it's not even that long-term—I've found these things pay off over weeks or months, not years, so I'd take the same deliberate approach unless my deadlines were literally days away.
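As a concrete illustration of the "DSL approach" being contrasted with an ad-hoc library: a toy deep-embedded combinator DSL, where adding a capability means adding a new interpreter over the same syntax tree rather than touching existing code. The combinator names are made up for this example:

```haskell
-- A tiny embedded DSL for arithmetic: combinators build a syntax
-- tree, and independent interpreters give it meaning.
data Expr
  = Lit Int
  | Add Expr Expr
  | Mul Expr Expr

-- The combinators users compose.
lit :: Int -> Expr
lit = Lit

(.+.), (.*.) :: Expr -> Expr -> Expr
(.+.) = Add
(.*.) = Mul
infixl 6 .+.
infixl 7 .*.

-- One interpreter evaluates the tree...
eval :: Expr -> Int
eval (Lit n)   = n
eval (Add a b) = eval a + eval b
eval (Mul a b) = eval a * eval b

-- ...and a second pretty-prints it. New features arrive as new
-- interpreters; the combinators and old interpreters are untouched.
render :: Expr -> String
render (Lit n)   = show n
render (Add a b) = "(" ++ render a ++ " + " ++ render b ++ ")"
render (Mul a b) = "(" ++ render a ++ " * " ++ render b ++ ")"
```

So eval (lit 2 .+. lit 3 .*. lit 4) gives 14 (the fixity declarations make .*. bind tighter), and the upfront cost is the data type and combinators; the payoff is that every later feature composes against the same small core.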
Well, I can't speak to your career experience, but I can speak to mine.
My experience has been (31, SF) that companies are fundamentally disorganized, frequently reinvent things (a behavior seen at all levels), are skewed by politics, and ultimately are rarely successful due to the code. The fact is, that for most startups it's not the code that's make-or-break (twitter, snapchat, facebook, airbnb, uber), it's the business execution.
It's been my career experience that the only way to be a 10x engineer is to prevent management from engaging in unnecessary projects (Bob wants to rewrite X in node, Joe wants to make a service that only has the responsibility of CRUD to 1 table, Sally wants to move it all to noSql). It's been my career experience that soft skills give the best ROI.
So when I see a list like this, I find it hard to imagine how "Profunctor Optics" is what Zynga (or any company I've worked at) needed to be successful.
My experience has been (34, Earth) that companies fail to succeed for a variety of reasons. Some of those reasons are poor technical choices. And some companies succeed in spite of said choices.
What functional programming brings to the table, aside from dense jargon, is the tools to build systems that are _correct by design_ and have certain, provable properties. For businesses this means they can spend less money fixing errors in their software and avoid losing revenue if they gain a reputation for releasing unreliable software. For programmers it means focusing on delivering instead of fussing around with runtime type errors, deadlocks, and the like.
Where this is useful is reducing the risks associated with failure: _when_ your software fails, what is the worst that could happen to your users or your business? If the answer is, "well some people might see the wrong blog article or have to re-submit their comment" then you have your answer. If your system is handling orders on a trading platform where an error could cost someone a few hundred million dollars... well it might be worth the effort to eliminate the possibility of as many errors as possible by using a better tool to help you with that.
The success of some companies in spite of not using _strict_ functional programming languages doesn't disprove anything the FP zealots have been saying for years. It only demonstrates how much money and time we invest in absorbing the cost of developing and operating software with innumerable, unknown errors.
> So when I see a list like this, I find it hard to imagine how "Profunctor Optics" is what Zynga (or any company I've worked at) needed to be successful.
I mean, by that criteria, why learn anything?
The world is not just startups btw, I work on products that have very very defined requirements. Hell, sometimes I'm implementing an RFC. FP is a huge win for us, and yes i've used profunctor optics in a high performance network application.
This might be tongue in cheek, but I agree with the parent. Engineering excellence will not further your career much; companies tend to oversell meritocracy. Salaries flatten out quickly, and there is a huge supply of good-enough developers if you step out of line too much.
In addition to those? Actors, musicians, sports people. In the office? Managers. Hence the advice: focus on soft skills. I would wager the top 1% on the developer track earn nowhere near the top 10% on the managerial track.
And your best bet is to regularly change jobs otherwise good luck getting anything more than a 5-10% increase, while managerial salaries and bonuses scale with the size of the organization. Also, there is a lot to be said about the difficulty of the work.
I have had a number of projects that were absolutely successful from a technical standpoint because of FP methods. No one knew except team members that were unfamiliar with the techniques. While they were not full converts, many "saw the light" and reduced mutation and wrote purer functions after that.
The business itself was oblivious to why the project was successful. Or how we were able to extend it so quickly and retain stability.
Even on that project, 90% of the difficulty was social/political. Just as Java allowed average programmers to write above average code, the same thing can be said of FP.
> with the first style, productivity is constant at best—getting things together in the first place is a struggle, and then adding features or fixing bugs continues to be a struggle. On the other hand, the second style of project is even more of a struggle at first, but once it works it's like magic: new features are easier to add than you'd expect, and I've had way more things work after my first attempt than anyone has a right to expect.
I would like to challenge that by pointing out that not a single piece of large software (except maybe a compiler or two) has ever been written using the skills/concepts of the higher levels of this chart. What you say may be true if generalized to better upfront design etc., but there's also no evidence that this is better done using the approaches discussed. It's not even about measuring productivity; it's about arguing over the properties of an empty set. When it comes to small (<100KLOC) and non-large (<1MLOC) programs, then the difficulties are not that hard to begin with, and depend mostly on the essential complexity of the problem, rather than abstraction/code organization etc..
> it's because I'm willing to put effort up front for a long-term reward.
You're equating upfront effort with a particular choice of technique, and one that has never been put to the test.
Again, I answered this for you last week. Both Standard Chartered Bank and Barclays have many millions of lines of very successful production Haskell code. Both codebases are solving complex problems not solved at other banks and both make extensive use of types and purity. We do not regard pure functional programming as "aesthetics".
You are not addressing my point at all, and millions of lines of code (that include a Haskell compiler) do not necessarily make a large software system. If you are building software that is significantly more complex than could be achieved by other, simpler means, or you're building comparable large software systems for significantly cheaper, you'd only do Haskell and the world a great service by collecting and publishing relevant data. Without it, I have no idea how big your systems are, how complex they are compared to other projects, and how costly they are to build and maintain. I have no doubt you're happy with Haskell, but there are plenty of far more mainstream languages that people are happy with. Without any data, we can't make any comparison, so claims of superiority are nothing more than just claims. I find it puzzling that you don't see my reaction as a reasonable one for a marketing campaign that has been going on for nearly two decades, that has so far produced no evidence and negligible adoption. I truly and honestly want to be convinced. I am familiar with the theory of FP (up to about monad transformers, but not profunctor optics) and have programmed in SML. I just don't see any signs whatsoever that adopting Haskell can improve the bottom line significantly enough to be worth the cost and the risk, so the most logical thing to do is to wait and beg people for data.
"I would like to challenge that by pointing out that not a single piece of large software ... has ever been written using the skills/concepts of the higher levels of this chart."
To which I gave two counterexamples (today and on other occasions).
All software systems should be modular to some degree, so I am not clear exactly what your criteria for a large software system are. In both my examples, the Haskell codebases are monolithic repositories where everything is type-checked and built together.
You keep asking for quantitative data for a comparison with other languages. But my answer is the same as last week. It's good people that make software efficiently and cheaply. Give engineers PHP and they'll still manage to build something good.
Haskell is just a tool, but it's a tool that increasingly good people are asking to use.
The systems at Barclays and Standard Chartered were built, and are currently maintained, very cheaply because good people were hired. Haskell just happened to be their preferred tool.
> I am not clear exactly what your criteria for a large software system is.
Let a "software system" mean any assembly of processes that communicate to provide some shared functionality, with the components being coupled to one another in some non-trivial way (i.e., there are correctness conditions that cross processes boundaries, so that changes to one program may necessitate changes in others). Excluding the compiler, how many lines of Haskell code do you have in your largest system (using my definition)?
You just described the entire bank! I prefer my single codebase/single build definition. I forget the current figure, but StanChart has at least 3M lines of (dense) Haskell.
More details here: https://skillsmatter.com/skillscasts/9098-haskell-in-the-lar...
> I forget the current figure, but StanChart has at least 3M lines of (dense) Haskell
Oh, thank god, finally we get a number. It turns out that I was completely wrong, and that in its 20 years of hyped existence, someone has built something big with Haskell once. You may think I'm sneering, but only a little bit, because while one is almost nothing, it is much better than actually nothing: at least it is a first anecdote. Now, who do I have to pester to get some more metrics?
> More details here:
Sadly, there are no more relevant details in that talk.
Just out of interest, are you skeptical of static type systems in general? Because AFAIK there has never been a quantitative study proving their effectiveness, only empirical studies.
I am not skeptical of type systems in general and TBH I've never worked on a large piece of software written in an untyped language (I often program in Clojure, but so far never a large system -- and yeah, I am skeptical about Clojure too even though I love it). But you're mistaken if you think I'm looking for quantitative studies; that's far too high a bar. I am looking for well-researched anecdotes, and there are plenty of them for type systems in general.
I am, however, very skeptical of the interpretation some people, especially in typed FP, give to types and the reasons they believe types provide a benefit. For example, the relationship between useful type systems and software correctness is not direct. I am currently using a formal verification tool that is completely logic-based (it has an interactive theorem prover, a model checker, and is backed by decades of careful mathematical analysis of its soundness) and completely untyped, and yet it is as powerful as Coq for proving correctness of algorithms. If direct, formal proofs of correctness are what you're after, types might not be the best solution. Types, however, have other clear benefits that are not related to formal correctness, or, at least, not correctness of global program properties.
Why, because I missed a point of data in the ocean of hype and there is one? Two? Please, set the record straight, and provide us with some facts. Maybe in Haskell's 20 year history as the world's most hyped language there was one or two or maybe three non-compiler programs written in it that aren't very small. Maybe there is even one anecdote out there with some actual information in it, although I've looked for one -- a lot -- and couldn't find any. So right now, the people who are making stuff up and passing them off as fact are those who claim significant impact over and over and over without a shred of even the tiniest of anecdotal data. How is that justified?
I have absolutely no problem saying that Haskell works better for large interactive software than other languages, once there are a couple of anecdotes around, but the fact is that currently there aren't any (at least none that I could find). What we have at the moment is a lot of vague claims with zero metrics.
But perhaps I should explain the source of my skepticism. First, it stems from the almost unprecedented gap between hype and evidence. I would imagine that after so many years, given the grand claims, there would at least be some good anecdotes. That there aren't any inspires skepticism. Someone here likened Haskell's abstractions to load-bearing materials as opposed to other languages' mud. If that were true, the reasonable prediction would be to see Haskell skyscrapers towering over a sea of mud huts; that we actually observe the opposite inspires skepticism. Second, it stems from my general skepticism (based on CS theory and 20 years of experience) toward the impact any language can make. I.e., I have not encountered a case where the choice of language was the determining factor. Haskell is perhaps the most notable example of a language that claims to make a significant difference by virtue of language-level features (as opposed to runtime features, like GC). I would very much like to see how big that contribution is, if it exists at all.
I'm not talking about Haskell, I'm talking about Scala written in a functional style where we've used a lot of the concepts from Haskell. We're using it at Verizon for extremely large projects and it's working quite well. I know others have pointed that out to you, so when you say there's no anecdotal data, how can you justify that?
I've talked to others who work at extremely large corporations finding success with it too in extremely large projects.
I am not looking for anecdotes that it is possible to write large programs in a pure functional style. I know it is possible. But, given the high cost of the approach (training, new libraries, maybe a new language and even a new platform) I am looking for anecdotes that the approach provides benefits that significantly outweigh its cost. I have not found any.
BTW, "extremely large projects" are anything above, say, 20MLOC. What projects of that size have been written in pure functional style? What large projects (>5MLOC) have?
> not a single piece of large software (except maybe a compiler or two) has ever been written using the skills/concepts of the higher levels of this chart.
I wonder: to what extent is this because those higher-level concepts tend to shrink programs in the first place? I mean, big is bad, there's no debating this. Big programs can only arise from necessity or stupidity. And if those fancy features are any use, they must be reducing the need for big.
I would ask myself a slightly different question: what kinds of programs still have to be big, even if you have all these fine concepts at your disposal?
And of course, big is risky, and that tends to make us choose more conservative options. Why use Haskell when C++ has shown in the past it can do that kind of job? Not to mention the network effects: even if Haskell as a language were better than C++ at some specific job, you may still choose C++ because of libraries, commercial support, and developer availability.
> When it comes to small (<100KLOC) and non-large (<1MLOC) programs
Oh, so that's how you're calibrated… My, to me, small means <10KLOC. 100KLOC is already big, and 1MLOC is gigantic. Besides, I've seen a couple multi-million lines programs, and I don't believe for a second they had to exceed 100KLOC. Such programs are more about piling historical accident on top of historical accident than about encoding a complex problem domain.
A small compiler takes a couple thousand lines of code. With the proper tools, it can often be squeezed into 2KLOC (Source: the STEPS project from http://vpri.org). It won't optimize like GCC, but you rarely have to anyway. Now you have a DSL for writing an executable specification. What kind of specification is so complex that it cannot even fit in 50 books?
Of course you have to keenly understand the problem domain to pull that off, and that often means solving it the crappy way the first time around. Which means you're never going to rewrite it the proper way: it would be way too risky. So of course it has seldom been put to the test. One does not just hinge an entire business on original research.
> You're equating upfront effort with a particular choice of technique, and one that has never been put to the test.
No, not never. I have at least one example: one of my uncles once had to write a number of database transactions. It was one of his first jobs. He had 1 year to do it. Seeing how tedious it would be, he first thought about the problem, then devised a DSL in which he could write the damn transactions. Since he wasn't exactly senior, and had crappy tools (he used B), the DSL took him about 6-7 months to perfect. (During which management was on the verge of panic, because no transactions had been completed yet.)
At the 8-month mark, all transactions were done, tested, and had a surprisingly low bug count. The client was very pleased and the contract was renewed for another batch of transactions. Same size, but to be done in 8 months this time. That was given to a co-worker, who used my uncle's DSL to perform 8 months' worth of contracted work in 1 month.
> to what extent this is because those higher level concept tend to shrink programs in the first place?
None at all. No one, AFAIK, even claims a reduction of even a single order of magnitude in code size.
> I would ask myself a slightly different question: what kind of program still have to be big, even if you have all these fine concepts at your disposal?
All those that are big now. I'm not talking about a single executable, but about an entire system (specification, development and debugging have always been done modularly, regardless of whether there's a single process or a distributed system). I see no way to implement the requirements of, say, an air-traffic control system, complex avionics or a banking system in software that isn't very big.
> 100KLOC is already big
A mid-sized enterprise software system is ~5MLOC. Google and Facebook have codebases that are measured in the hundreds of millions of LOC. You can use whatever definitions you like, but if you look at software actually being constructed, much (if not most) of the development effort in the industry goes into systems of about ~5MLOC.
> What kind of specification is so complex that it cannot even fit in 50 books?
My guess? The majority of software written today is part of such specifications. I once worked on a medium-sized air-traffic control system designed for a relatively small area and number of planes, whose informal functional specification was ~10 books. The specifications for the avionics software of a fighter jet developed in the 80s were ~2000 pages of structured natural language (source: http://www.wisdom.weizmann.ac.il/~harel/papers/Statecharts.H...)
> No one, AFAIK, even claims a reduction of even a single order of magnitude in code size.
http://vpri.org claims about 3 orders of magnitude, though most of it comes not from the languages but from the removed redundancies. The languages do seem to be responsible for at least 1 order of magnitude.
About the rest of what you say, I won't claim anything. It just makes me feel… uneasy. Okay, those specs are that big. Do they have to, though? I've seen a big fat list of requirements in my last gig, and many were duplicated into 2 slightly different versions. As were some pieces of the code. And that's the obvious stuff. There was more subtle waste, where simpler alternatives would have fit the bill, but weren't applied for historical reasons (I asked the architect; there was a reason for everything).
Almost everywhere I look, I see a wasteland of useless code, and even the specs aren't that clean to begin with. It feels like proper DRY alone would have reduced the size of this stuff by a factor of 2 or 3. Maybe I was unlucky enough to work in especially crappy environments. But from what I hear, that's the norm. So far, you're the only one I've met who challenges that perception.
> The languages do seem to be responsible for at least 1 order of magnitude.
Compared to what? C? I'm not talking about C, but about any modern language.
> Do they have to, though?
Yes. Or, at least, everybody (including me when I first saw them) says "this cannot possibly be this complicated", and after understanding them, everybody says, "oh, OK". But let me put it another way: if the modern world really only requires simple software, then our work is nearly done. Some would use Python, some would use Java, some would use Haskell -- but if the largest software system needs to be ~100KLOC, then none of it matters too much. Writing such a piece of software isn't hard regardless of what language you use (or, rather, the difficulty is in the essential complexity; there's not much accidental complexity in 100KLOC). Even assuming one methodology were 15% better than another, it wouldn't make much difference to the bottom line, because producing 100KLOC software is cheap anyway. And if that were the case, investing in programming languages would be an even bigger waste, as investing in simplifying specifications would have a much bigger impact, and would cost a lot less.
But just to get a sense, GHC -- which is a compiler, and, as you've noted, compilers tend to be small -- is about 400KLOCs of Haskell. The Linux kernel is over 15MLOC of C. Reduce that by an order of magnitude, you still get 1.5MLOC, and that's just for an OS kernel.
> So far, you're the only one I met that challenges that perception.
I don't argue that there isn't a lot of waste. I argue that even without all that waste, we'd still need software that's very large (or that, alternatively, waste is unavoidable). There are no signs that Haskell reduces this waste at all, or that it dramatically reduces the size of programs.
> Compared to what? C? I'm not talking about C, but about any modern language.
The domain they tackle is personal computing, which means Kernel, Windowing system, remote communication (web/mail), multimedia… So, yeah: mostly C and C++, by the look of currently popular programs.
That said, much of the (apparently) needlessly complex stuff I have seen was written in C++, and it did look like they didn't have the real-time requirements or resource constraints that would justify the use of such a monster of a language.
> Or, at least, everybody (including me when I first saw them) says "this cannot possibly be this complicated", and after understanding them, everybody says, "oh, OK".
I have yet to reach the second stage. And on one occasion, I did reach a reasonable understanding of the whole system. It definitely had to be very complex to match the specification, but the specifications themselves didn't match the end user's needs.
> (or that, alternatively, waste is unavoidable)
That's the alternative I'm most scared of. I cannot comprehend unavoidable waste, but I can't rule it out either.
Alternatively, I've come across the idea of not solving some problems, because the return on investment is just crap. Okay, when safety is involved, you probably cannot do that. Still, the idea that the 80/20 rule is sometimes more like 99.9/0.1 is enticing. Sometimes full automation is not best. For instance, last I checked, the best chess player ever is a human-computer team, not a computer.
> I've came across the idea of not solving some problems
That, too, cannot be solved by a programming language :)
But much of complex software deals with things that could not possibly have been solved more efficiently (or even efficiently enough) by humans (like sensor fusion, package tracking, manufacturing control etc.). Also, we're pretty far from full automation. Nobody trusts computers to make the decisions in air-traffic control systems, power plant control software, or even in ERP systems.
The reason we keep building large software is that -- in spite of many problems -- they really do work (in the sense of achieving their goal of higher-throughput; whether or not a higher throughput of flights or of business deals is good or bad for humanity is an entirely separate question).
> if the largest software system needs to be ~100KLOC, then none of it matters too much. Writing such a piece of software isn't hard regardless of what language you use
I think this is absolutely the difference between you and proponents of Haskell. Personally I have found working on 100k LOC codebases very hard in Python and easy in Haskell.
I wouldn't know, because I've never written anything in Python. How many 100KLOC non-compiler programs would you say have been written in Haskell? It would be great if the team behind one of them would write a technical report, so we'd at least slowly find out whether it's actually easier to work with Haskell than with Python, and if so, by how much.
I agree that would be great. I can't help thinking that you're holding Haskell up to a standard to which you do not hold Rust, Go or Julia. Have you asked their proponents for technical reports on Hacker News threads?
It is Haskell that holds itself to a higher standard. Rust, Julia and Go don't make claims that are anywhere near as extreme as Haskell's fans do (e.g., see on this page the claim that Haskell's high abstractions -- profunctor optics or whatever -- are "load-bearing materials" to other languages' "mud"). Rust has presented itself as a safe alternative to C/C++; Go presents itself as a high-performance language with simple concurrency (or Java without the JVM); Julia presents itself as a high-performance alternative to Matlab, R or NumPy. But Haskell has sort of painted itself into a corner. Because the approach is so foreign and the learning curve so steep -- i.e., the adoption cost is high -- if it didn't make outlandish claims (like, if it compiles, it works) then no one in industry would consider using it. I don't need to know by how much Go lowers development costs because it makes no claims that it does, so I simply assume that it doesn't. That it is faster than Python, easy to learn in a day or two, and that it compiles down to a native executable -- all are trivial to verify. If you want, all of its claims are supported by plentiful data.
But other than that: yes! It is trivial to verify that Rust fulfills its claims, but it is not trivial to verify that that's enough -- namely, that the overall cost of developing in Rust is lower than in C++ -- which is why I wouldn't switch from C++ to Rust without seeing that at least the costs are comparable. I'm also eagerly waiting for data about Rust's concurrency approach.
I would also tell you this: while I find the pure-functional approach aesthetically unattractive for interactive programs and am very much impressed by Clojure's and Erlang's approaches to state, and while I've personally written a semi-popular Clojure library, I have expressed skepticism about Clojure's suitability for large-scale projects. How well a language works in big software is something that you simply cannot extrapolate from experience in small projects, and as much as I like Clojure, I have serious doubts about its applicability, and I would not use it to write a large system without the kind of data I expect from Haskell.
But all those languages (Erlang and Clojure included) have social advantages that make this data much more easy to come by: they are used by people who aren't enamored with the language itself, who aren't PL enthusiasts, and are much more goal-oriented. Their judgment stems mostly from how well things work, not how interesting the language is. So articles with pertinent data (maybe not enough to risk a large project, but certainly more than those you find for Haskell) are more abundant.
So I don't need to beg for technical reports as much because 1. no wild claims that aren't supported by data are made, and 2. there's data out there (for Go, Erlang, Clojure) or it's coming (Rust).
I think the simplest solution to all of this is for you to ignore the outlandish claims, or at the very least pay attention only to the claims of senior and respectable Haskell proponents, such as Simons Peyton Jones and Marlow, Don Stewart, Lennart Augustsson, Neil Mitchell etc. Otherwise you're simply allowing yourself to be gently trolled.
By the way "if it compiles it works" is supposed to be somewhat tongue in cheek, as is "avoid success at all costs".
That's not a solution because my problem isn't with Haskell itself (and I do familiarize myself with the theory when I find it interesting, and I'm well aware that most of the researchers aren't like that), but with the tendency of people in the industry to market things so enthusiastically, that they basically encourage suspension of critical thinking (and this is doubly annoying when what they evangelize isn't something as cheap as a new profiler, but something as expensive as a whole new language with a new, unproven, programming paradigm). Ignoring it wouldn't solve the problem. And in case you think this isn't an actual problem, consider that over-excitement about promises that couldn't be kept has caused at least two "research winters" in CS: one in AI and one in formal methods. The industry started believing its own hype, academia didn't mind the extra funding, but when the industry ended up disappointed, funding dried completely and research slowed to a crawl.
Singlehandedly? Absolutely not. Why, am I the only Haskell skeptic you know? There are more on this page alone. There are even PLT researchers who warn against overselling results in PLT, and especially typed FP.
And we're not saving the industry; the industry as a whole isn't suicidal and will never bet too much money on unproven technologies. The industry isn't so fragile, but research is. Given the amount of money in the industry, even small parts that do bet on unproven tech can cause a funding surge followed by a research winter when the disappointment hits. We're saving Haskell.
If those who adopt Haskell do it with clear vision and not by believing messianic claims, there will be no great disappointment and no research winter. Nothing is more dangerous to research than wild claims. You need to promise less and deliver more, not the other way around. And because the industry is competitive, it is actually quick to adopt ideas once they show actual significant benefit -- i.e., once they're ready. The industry adopted garbage collection almost overnight; it was almost 40 years after it had been invented, but almost immediately after it had been productized well enough to provide a significant advantage.
If pure-FP is really the way to go in general-purpose programming, eventually the industry will adopt it in some well-productized form. Getting there does require some early adopters, but they're already here and overselling doesn't help to get more good ones. It helps to get precisely those adopters who will end up causing a research winter.
----
What bothers me in terms of harm to the industry is the waste caused by people switching from one language to another, rewriting libraries over and over and thinning out the effort. I think we're losing years of progress. But there's really nothing I can do about that because even if I can convince someone that a language that is 1% "better" than another does not justify the effort of porting libraries over, there are already too many languages and platforms around, and there's always the question of, so which few languages should we pick, and why should it be the ones you like and not the ones I like. Haskell actually does not contribute much to this problem because its adoption rate doesn't yet make a dent.
Two potential counter-examples to your productive software claim:
Opaleye [0], an Arrow-based [1] DSL for Postgres SQL that allows the user to create type-safe, composable, and (generally) optimally fast SQL queries.
Halogen [2], a type-safe frontend framework for PureScript [3] that models view component interaction as an algebraic datatype containing query actions (among other fairly advanced concepts).
If someone is used to a certain level of abstraction, then the higher levels might seem unnecessary until they invest the time into learning them. The only thing unique to functional programming is that it has inherited a lot of terms from theory (profunctor, monad, algebra) rather than inventing friendlier ones.
> Two potential counter-examples to your productive software claim
How are those counterexamples? Do they provide benefits that significantly outweigh their cost?
> the higher levels might seem unnecessary until they invest the time into learning them
Except that this can't go on forever. Programming productivity has a theoretical upper bound, so there must be diminishing returns, and at some point the cost of more abstraction will outweigh the benefit. We just don't know where that point is. Finding out is not a matter of faith or even personal feeling, but of actual results.
Now, I know that collecting meaningful productivity data is hard, but that evidence of a claim is hard to come by doesn't make that evidence any less necessary for the claim. You're free to say that you like FP because you enjoy it more, or even that it makes you feel more productive. But you can't make actual empirical claims without actual evidence.
> The only thing unique to functional programming is that it has inherited a lot of terms from theory (profunctor, monad, algebra) rather than inventing friendlier ones.
I'm not entirely sure that's the case. FP is often its own theory. I'm not sure many of those concepts were studied heavily outside the context of FP, but I could be wrong about that.
Profunctor and Monad are heavily studied in category theory and in many related mathematical fields that use it -- for instance, algebraic geometry and logic. Algebra is incredibly well studied under the name "abstract algebra" and has been one of the core concepts in modern mathematics since Bourbaki.
But this is just a side point to what you're saying.
I don't understand why this doesn't have more votes. Asking for evidence is good, not bad. It isn't in any way self-evident that FP increases productivity. That's not to say that I don't like it, on the contrary.
I agree that this wording makes it sound pretentious, but combinators are actually a very useful, practical technique. Parser combinators, in particular, are awesome.
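To show why they're practical rather than pretentious, here's a toy parser-combinator sketch in Python (all names are my own illustration, not from any real library): a parser is just a function from an input string to a result plus the remaining input, and small parsers compose into bigger ones.

```python
# A parser is a function: str -> (value, rest) on success, None on failure.
def char(c):
    def parse(s):
        return (c, s[1:]) if s.startswith(c) else None
    return parse

def seq(p, q):
    """Run p, then q on the remainder; pair up both results."""
    def parse(s):
        r1 = p(s)
        if r1 is None:
            return None
        v1, rest = r1
        r2 = q(rest)
        if r2 is None:
            return None
        v2, rest2 = r2
        return ((v1, v2), rest2)
    return parse

def alt(p, q):
    """Try p; if it fails, try q on the same input."""
    def parse(s):
        r = p(s)
        return r if r is not None else q(s)
    return parse

# Combinators compose like ordinary values:
ab = seq(char('a'), char('b'))
a_or_b = alt(char('a'), char('b'))

print(ab("abc"))      # (('a', 'b'), 'c')
print(a_or_b("bcd"))  # ('b', 'cd')
print(ab("xyz"))      # None
```

The whole "library" is just higher-order functions; real parser-combinator libraries add error reporting and many more combinators on top of this same shape.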
The real point behind this is so programmers who understand trivia like singletons and recursion schemes can walk around during conferences and feel good about themselves for being "experts".
Some of this stuff is cool and interesting, even to me -- but please don't rank people by how inferior their knowledge is to yours, as you are explicitly doing.
> When you write Javascript and use first class functions (ideally without side-effects, but who the fuck really cares), that's functional programming.
Sorry but no. Functional programming requires a whole set of new skills that most programmers don't have.
Learning the basics: pure functions, higher-order functions, immutability, second-order functions, first-class functions, lambdas, partial application, currying and point-free style. That's functional programming.
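For what it's worth, several of those basics fit in a few lines of Python (the `compose` helper below is my own illustration, not a standard library function):

```python
from functools import partial, reduce

# Pure function: output depends only on inputs, no side effects.
def add(x, y):
    return x + y

# Partial application: fix the first argument, get a new function.
increment = partial(add, 1)

# Manual currying: a function that returns a function.
def curried_add(x):
    return lambda y: x + y

# Point-free style: define a new function by composing existing ones,
# without ever naming its argument in the definition.
def compose(*fns):
    return reduce(lambda f, g: lambda x: f(g(x)), fns)

double_then_increment = compose(increment, lambda x: x * 2)

print(increment(41))              # 42
print(curried_add(40)(2))         # 42
print(double_then_increment(20))  # 41
```

None of this needs a new language; it's the same vocabulary the ladder assumes, expressed in plain Python.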
I'm glad that's the general attitude. I've been learning Elixir for a couple of months now and my first thought was: "crap! I've never even heard of most of this stuff, never mind understanding it."
I know I have a long way to go before I'm competent with functional programming, but I thought I at least understood the basic landscape, even if I don't know how to apply it properly yet.
Elitism is learning more powerful constructs? There's a lot of elitism in the construction industry then with people checking the load bearing of materials rather than building everything from mud.
Load bearing materials actually carry significantly higher loads than mud; those Haskell constructs have not even been used to build software as intricate as common industry practice, let alone break new ground. Are they truly more powerful? That's a popular hypothesis among those who are aesthetically drawn to those concepts.
I gave you an example last week, which you are conveniently ignoring. We have millions of lines of Haskell code at Standard Chartered bank. The type safety and purity have helped us build a more reliable and maintainable codebase compared to previous efforts. Ultimately the language is just a tool. If you are not interested in type safety, purity or expressivity then it isn't the tool for you.
I am not ignoring anything. On the contrary: you yourself have said that the kind of software you write isn't any bigger or more complex than similar software written in more mainstream languages, and that you have no evidence of significant bottom-line benefit (except the claim that your software is more maintainable, but without any metrics). I never claimed that Haskell can't be used to write simple software as well as other languages, or even marginally better (for some definition of better). But the commenter I responded to claimed that those advanced abstractions are analogous to building materials that are used to build much bigger buildings than those possible without them. You have not even made that claim.
No one is interested in type-safety or purity for their own sake (except for aesthetic reasons). I'm interested in better, cheaper software. If Haskell's purity (there are simpler languages with similar type systems) provides that then there should be evidence of that before the claim is made. You have not provided any such evidence, and even implied that you're not even carefully collecting metrics that can be used to construct such evidence.
> I never claimed that Haskell can't be used to write simple software
I can assure you our software is not at all simple. Standard Chartered has industry leading portfolio compression and XVA compute. Barclays has industry leading pricing of exotic derivatives (including pricing on GPUs). Both industry leading solutions built in Haskell on top of frameworks/APIs in Haskell.
> No one is interested in type-safety or purity for their own sake (except for aesthetic reasons).
So are all forms of static verification just aesthetics to you? Or just those offered in Haskell?
> I'm interested in better, cheaper software.
Then hire good people and let them use the tools they want.
The only evidence I could ever hope to offer is that good people build good software. But that would never make an interesting management report.
> So are all forms of static verification just aesthetics to you? Or just those offered in Haskell?
As someone who makes use of formal methods regularly, I am aware of how weak Haskell's static guarantees are. That is not to say they may not be useful in practice, but that's an empirical question, and the one that I'm desperately waiting for answers to.
> The only evidence I could ever hope to offer is that good people build good software. But that would never make an interesting management report.
It also doesn't support the claims made that Haskell makes a significant contribution.
That's excellent. So please publish some numbers so we could estimate the contribution. With so much hype and zero data I think that the skepticism is very well justified.
> The person responsible for our industry leading portfolio compression joined us because we use Haskell.
I have no doubt that some people really like Haskell, but that's not data.
Any of the MLs, really. Their type systems aren't necessarily equivalent to Haskell's, but given the lack of evidence for Haskell's superiority over pretty much any modern language, I doubt those differences make for even a marginal difference. The big thing about Haskell is that it's pure functional. Rust's type system is definitely in that ballpark, too, and maybe even Kotlin's.
When working with functional languages (my current choice is Scala), I do feel that they require a level of mastery that "traditional" imperative languages do not. I also feel that if computer science/software engineering is ever going to gain any sort of repeatability it will be through functional design.
That said, functional programming is incredibly difficult to do well without years of practice and the languages still suffer from limitations with performance that are hard to debug. While I still feel that functional programming is the "future" of software engineering, I feel it is quite a ways in the future.
From the rr site: Remember, you're debugging the recorded trace deterministically; not a live, nondeterministic execution. The replayed execution's address spaces, register contents, syscall data etc are exactly the same in every run.
Elm's debugger (and similar projects like the Pux debugger from PureScript-land) are live systems. They replay all events deterministically, but you can edit your code, reload it, and then replay the exact same series of events, or toggle events on/off and see what the resulting state is. rr is super cool and I'm very glad it exists, but it replays a fixed recording, whereas the Elm debugger is almost REPL-like in terms of how you can play with things and see the results live.
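The record-and-replay idea behind that workflow can be sketched in a few lines of Python (the event shapes and function names here are made up for illustration): record a list of events, then fold any version of the update function over the same trace.

```python
from functools import reduce

# The app's state transition. In Elm this would be the `update` function;
# here it's just a pure (state, event) -> state function.
def update(state, event):
    kind, amount = event
    if kind == "add":
        return state + amount
    if kind == "reset":
        return 0
    return state

# A recorded trace of events, captured once while the app ran.
recorded_events = [("add", 3), ("add", 4), ("reset", 0), ("add", 10)]

# Deterministic replay: same events + same update function = same state.
def replay(update_fn, events, initial=0):
    return reduce(update_fn, events, initial)

print(replay(update, recorded_events))  # 10

# "Edit the code" (here: double every addition) and replay the identical
# trace to see how the edited logic would have behaved.
def update_v2(state, event):
    kind, amount = event
    if kind == "add":
        return state + 2 * amount
    return 0 if kind == "reset" else state

print(replay(update_v2, recorded_events))  # 20
```

Because `update` is pure, replaying is just a fold; that purity is what makes the live edit-and-replay loop cheap.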
4. Writes programs that are maintainable by others, and communicate module intent clearly to team members who interact with it
5. Writes programs that are maintainable etc., and also make good use of resources, both computational and human
The emphasis on language concepts seems so distracting that I wonder how much time people have left to make their programs externally better (better functionality, better performance, more maintainable by others) rather than internally better (making use of clever abstractions). That the two are related is a hypothesis which doesn't seem to be supported by evidence, and is believed by those who enjoy thinking about the latter more than the former and want to justify their focus.
It says something slightly worrisome about the functional paradigm that "profile, debug, and optimize purely functional code with minimal sacrifice" is considered an expert-level skill.
Oh come on. Please do not troll. That says absolutely nothing about functional programming in general and absolutely everything about the publisher. Not only could this not be further from the truth, but it's a cheap and lazy attack on a rigorous and proven paradigm. Give it a try. You might just surprise yourself.
I'll give you rigorous, but how is it proven? I think it is a fair guess that the number of non-tiny programs (>100KLOC) making use of the "expert" concepts and skills is ~0. The number of large programs (>1MLOC) at the proficient level is also ~0.
What proportion of the entire global population of functional programmers who qualify as proficient or better on this scale is employed by Standard Chartered?
(This is intended as a serious question, not a troll. It often seems that in discussions of FP, and particularly of Haskell, someone will suggest that there are few widely-known, large-scale projects written in this style that can be used to evaluate its effectiveness, and someone else will then reply with one or more of the same very small collection of larger projects or high profile organisations using FP/Haskell that are publicly known.)
What I heard is that all of their Haskell software combined is about ~1MLOC (and they don't comprise a single system), and a huge chunk of that (a couple 100KLOCs at least) is the Haskell compiler itself, which they've modified and consider a part of their codebase. The Haskell compiler is still the largest Haskell program, and if it isn't, there are no more than a couple programs that are larger.
> a huge chunk of that (a couple 100KLOCs at least) is the Haskell compiler itself, which they've modified and consider a part of their codebase
Please don't talk about what you don't know. Standard Chartered's Haskell compiler is written completely from scratch and is not based on any existing compiler.
Sorry, that's what I'd heard (or I may have misinterpreted "a variant of Haskell" as "a variant of the Haskell compiler" all on my own); thank you for correcting me. So what you're saying is that the biggest Haskell program isn't the Haskell compiler, but that the two biggest[1] Haskell programs are two completely different Haskell compilers.
(Also, while it has little to do with my point, I've also heard that the person behind SC's Haskell compiler is the one who'd written the first ever Haskell compiler, long before he started working for SC; is that true?)
[1]: Please don't take this too literally. In the two decades that have passed since Haskell was declared the language to end world hunger, someone may have written a bigger program. Maybe even two.
So in that case, what's important about your 1MLOC requirement? I can't even begin to imagine that much Haskell code versus a similar line count in say C.
Modern programming languages do not vary in their line-count by too much. If you want to compare to C, you may be able to get a 1 order-of-magnitude reduction in size if you're lucky, but 10MLOC C programs are common (Linux kernel is >15MLOC; MS Office is 30MLOC; LibreOffice is 12.5MLOC), as are 1MLOC programs in more modern programming languages.
It's the 80/20 rule in effect. Most optimization happens either by choosing a more efficient implementation or by making better use of IO (see Efficient Persistent Data Structures). By the time you're putting your own rewrite rules into the optimization pass, you've gone a long way into the weeds, and it's a pretty advanced skill.
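To make that 80/20 point concrete with a hypothetical micro-benchmark (not from the thread): simply choosing the right data structure usually dwarfs anything you'd gain from low-level tuning.

```python
import timeit

# Membership tests: a list scans linearly, a set hashes.
# Choosing the right structure is the "80" of the 80/20 rule;
# micro-tuning the list scan would be the "20".
haystack_list = list(range(10_000))
haystack_set = set(haystack_list)

slow = timeit.timeit(lambda: 9_999 in haystack_list, number=1_000)
fast = timeit.timeit(lambda: 9_999 in haystack_set, number=1_000)

print(f"list: {slow:.4f}s  set: {fast:.4f}s")
```

On any reasonable machine the set lookup wins by orders of magnitude, with no change to the surrounding code.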
I am generally a pragmatic person, but some of these arguments are wandering into a defense of ignorance. I am hoping that learning about these things will improve my problem-solving in general, and I don't mind being labeled a beginner.
Even being a Haskell enthusiast and loving playing with stuff in the list, this does not reflect well on Lambda conf and, unfortunately, on Haskell.
The smugness and self-importance displayed in there is quite the opposite of the virtue of code simplicity put forward by e.g. Don Stewart at HX this year.
I agree with others that there's far too much Hasochism in this list. I do think of FP as being like a ladder or spectrum, but rather than progressing towards some elitist niche, I see it as more empowering and anti-elitist.
In my opinion, if you refactor a Java class to change a setter into something which returns an altered copy, you've just made your code a little bit 'more functional'; if you use a list comprehension in Python instead of a for-loop, your code's become 'more functional'; and so on. The nice thing is that many of these small changes are also good practices to be following regardless of whether you want to be 'functional' or not.
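As a concrete sketch of those two small refactors (hypothetical names, shown here in Python):

```python
from dataclasses import dataclass, replace

# Instead of a mutating setter, return an altered copy.
@dataclass(frozen=True)
class Account:
    owner: str
    balance: int

    def with_balance(self, new_balance: int) -> "Account":
        # replace() builds a new frozen instance; the original is untouched.
        return replace(self, balance=new_balance)

a = Account("alice", 100)
b = a.with_balance(150)
print(a.balance, b.balance)  # 100 150

# Instead of an imperative for-loop that appends, use a list comprehension.
squares = [n * n for n in range(5) if n % 2 == 0]
print(squares)  # [0, 4, 16]
```

Neither change requires an FP language; both just remove mutation from the reader's list of things to worry about.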
Regarding the 'advanced' concepts, I'd say they exist mostly for library/DSL authors who don't want their users to have to care about those concepts.
Using libraries based on these ideas is great, as the author may have chosen these techniques in order to give the library correctness guarantees, or efficient resource usage, or good error messages, etc.
Using these ideas to write a library can, occasionally, provide a solution to an otherwise tricky situation, e.g. if you need to maintain state, whilst offering the users a pure interface.
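A tiny sketch of that "state inside, pure interface outside" idea (a hypothetical memoized function, not from any particular library): the closure below mutates a cache internally, but callers only ever see a function where the same input gives the same output.

```python
def make_fib():
    cache = {0: 0, 1: 1}  # mutable state, hidden inside the closure

    def fib(n: int) -> int:
        # Same input always yields the same output, so the interface
        # behaves as if it were pure, despite the cache underneath.
        if n not in cache:
            cache[n] = fib(n - 1) + fib(n - 2)
        return cache[n]

    return fib

fib = make_fib()
print(fib(30))  # 832040
```

In Haskell the same trick is done more rigorously (e.g. with the `ST` monad), where the type system checks that the mutation cannot leak out.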
Using these ideas to write a library for your own usage changes the cost/benefit calculation considerably, since you're not limited to providing an API: if you want efficient resource usage, can you achieve that by keeping resources in mind when writing the application code? If so, you probably don't need to over-engineer one particular component to enforce this for you. And so on.
Another thing I'd change about this ordering is to put dependent types much earlier, e.g. at "advanced beginner". Dependent type systems are very simple; they're just a slightly more complex version of lambda calculus. Sure, you can use dependent types in complicated ways, just as you can write complicated lambda functions. They're certainly easier to grasp, and strictly more powerful, than Haskell's complicated mess of monomorphism restrictions, singletons, kinds, type families, etc.
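For a feel of how small the core idea is, here is a minimal length-indexed vector in Lean (a sketch; `Vec` and `head` are illustrative names, not standard-library ones):

```lean
-- The type of a vector carries its length, so `head` needs no
-- runtime emptiness check: `Vec α (n + 1)` is non-empty by construction.
inductive Vec (α : Type) : Nat → Type
  | nil  : Vec α 0
  | cons : α → Vec α n → Vec α (n + 1)

def Vec.head : Vec α (n + 1) → α
  | .cons x _ => x
```

The `nil` case simply cannot occur at type `Vec α (n + 1)`, so the compiler accepts the single-pattern definition.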
I agree with you about the gradual refactoring. I love FP and according to that list, I'm mostly an "advanced beginner"... and I see nothing wrong with that.
Even "just" refactoring Java or Python code to be more functional is a huge win. Even "just" using Scala in a more or less FP way is a huge win. I don't fret being unfamiliar with Lenses or whatnot yet.
HaskellBook.com will teach you quite a number of them. :)
The Reddits for the different functional programming languages are a good place to hang out (and a frequent source of blogs and videos on these topics), and for FP in non-FP languages, there are good Github communities (e.g. http://github.com/fantasyland/, no association with FIOL).
I'd also humbly suggest that LambdaConf 2017 (May 25-27) is a great place to learn more about functional programming. There will be a special two-day LambdaConf workshop prior to the conference that introduces the basics of functional programming (no background knowledge required), another that is aimed at a slightly more experienced audience, and then at the subsequent conference, plenty of workshops and sessions to learn many of these topics (and others).
It's a journey, but everyone can get there if they have the interest. Most of the resources out there (blogs, videos, even e-books) are free, and the remainder are low-cost if you are already working in tech.
Good luck and please just let any of us lurker functional programmers know if you need a hand. :)
Is there anywhere to preview some content from HaskellBook.com? It's a little pricey to just take a chance without seeing any sample content. The TOC looks good, but that's not much indication of writing style.
It is pricey, but it is also very good. The content is quite up-to-date (e.g. covers Foldable/Traversable, teaches the Functor-Applicative-Monad progression) and working through the exercises made a lot of things click that I didn't understand before.
It's terse but comprehensive, and the exercises have been a pleasant source of a-ha moments. There's a more recent version of the course offered, but I haven't been through it.
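As a rough sketch of that Functor → Applicative → Monad progression (a hypothetical `Maybe` type in Python; Haskell's real type classes are far more general than this):

```python
from dataclasses import dataclass
from typing import Callable, Generic, Optional, TypeVar

A = TypeVar("A")
B = TypeVar("B")

@dataclass(frozen=True)
class Maybe(Generic[A]):
    value: Optional[A]  # None models Haskell's Nothing

    # Functor: apply a plain function inside the context.
    def map(self, f: Callable[[A], B]) -> "Maybe[B]":
        return Maybe(None) if self.value is None else Maybe(f(self.value))

    # Applicative: apply a *wrapped* function to a wrapped value.
    def ap(self, wrapped_f: "Maybe[Callable[[A], B]]") -> "Maybe[B]":
        if wrapped_f.value is None or self.value is None:
            return Maybe(None)
        return Maybe(wrapped_f.value(self.value))

    # Monad: sequence a computation that itself returns a Maybe.
    def bind(self, f: Callable[[A], "Maybe[B]"]) -> "Maybe[B]":
        return Maybe(None) if self.value is None else f(self.value)

def safe_half(n: int) -> Maybe[int]:
    return Maybe(n // 2) if n % 2 == 0 else Maybe(None)

print(Maybe(10).map(lambda x: x + 1).value)        # 11
print(Maybe(10).ap(Maybe(lambda x: x * 2)).value)  # 20
print(Maybe(10).bind(safe_half).value)             # 5
print(Maybe(3).bind(safe_half).value)              # None
```

Each step strictly generalizes the previous one, which is why the book-teaching order matches the interface hierarchy.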
I love this list. Sure, it has its shortcomings, but trying to create a list of actual topics and ranking them is not a bad thing. It could really help folks at either end of the spectrum explore new topics on their own.
It's funny to see that singleton types are within the highest level, 'expert', while in TypeScript they're, together with union types, the most intuitive thing ever.
I guess that's because expressing them in a language that doesn't have native support requires advanced deftness in type-level programming, like, for instance, implementing them yourself in Scala 2.x.
So I expect the coolness factor of singleton types and union types to fall precipitously as soon as Scala 3.x/Dotty becomes prevalent. They'll be 'just' a powerful feature that everybody understands and uses all the time.
But overall I like the idea as a roadmap for learning a particular language or skill set. I'd like to see something like that, but with references to material where each particular skill can be learned, etc.
Best way to convince someone that they will never, ever have any chance of understanding functional programming and therefore should not even bother trying it.
Why? Even at the level of Advanced Beginner (which is pretty concrete and easily attainable), FP is already hugely useful. Everything at that level is both useful and relatively easy to understand. It's going to improve your code even if you use a language not particularly tailored for FP.
The rest is just icing on the cake, and you don't have to deal with it if you don't find it interesting.