Functional programming discussions on HN are pretty depressing. Many of the statements about FP that I see here right now are the same old shit I heard about Java in the mid-00s. You just need to mentally translate some buzzwords, but the essence is the same. It seems like the software industry is just running in circles. Something gets hyped, people jump on it, fail, then search for the next bandwagon.
1. Endless yammering about low-level correctness. As if it were the biggest problem in software engineering right now. In reality, most domains don't need perfection. They just need a reasonably low defect rate, which is not that hard to achieve if you know what you're doing.
2. Spewing of buzzwords, incomprehensible tirades about design patterns. FP people don't use the term "design pattern" often, but that's what most monadic stuff really is. Much of it is rather trivial stuff once you cut through the terminology. (Contrast this with talks by someone like Rich Hickey, who manages to communicate complex and broad concepts with no jargon.)
3. People who talk about "maintainability" of things while clearly never having to maintain a large body of someone else's code.
The #1 problem in software right now is not correctness or modularity or some other programming buzzword. It's the insane, ever-growing level of complexity and the resulting lack of human agency affecting both IT professionals and users.
The way I've dealt with complexity in large code bases is through being fearless about refactoring. Refactoring may not reduce complexity in terms of what the software does, but it reduces the complexity of understanding the code base tremendously by realigning the structure of the code with the actual problems being solved.
Refactoring gets a lot less scary when you have greater confidence in the low level correctness of the code.
On your second point, yes, I have found that FP has some, shall we say, interesting jargon. But I have trouble thinking of succinct names for a lot of FP constructs that are nonetheless useful, such as monads. A lot of more colloquial terms that come to mind in brainstorm sessions might even undermine understanding by providing a false equivalence. I think the same argument can be made for mathematical notation.
In summary I'd turn around your last sentence a bit. Yes, the #1 problem is complexity, but you can reduce complexity significantly by applying correctness and modularity and other programming 'buzzwords'.
You can rail against complexity itself, but I think we're probably on the bottom end of a very large complexity slope over the next decades. So we'll need better and better constructs to deal with it.
At that point you're talking multiple codebases and the complexities become managing transactions, data transformations, and contracts across discrete processes.
I'm not sure how that's germane to the discussion at hand. In fact, to the opposite point, I've found that in multi organization refactors and designs functional programming continues being a useful mine for concepts to simplify thinking around data transformations, immutability, and data contracts.
When I think about an algorithm like merge sort, minimax decision trees, or other low-level algorithms, the concept of modularity doesn't even enter my head. It doesn't make any sense to modularize an algorithm, because it is an implementation detail; not an abstraction and not a business concern.
Modularity should be based on high level business concerns and abstractions. The idea that one should modularize low-level algorithms shows a deep misunderstanding of what it means to write modular software in a real-life context outside of academia.
It seems that FP induces confusion in the minds of its followers by blurring the boundary between implementation details and abstractions. OOP, on the other hand, makes the difference absolutely clear. In fact, the entire premise of OOP is to separate abstraction from implementation.
Referential transparency is not abstraction, in fact, it goes against abstraction. If your black box is transparent in terms of how it manages its state, then it's not really a black box.
I've developed and maintained projects that are more than a million lines of code (probably much more), and I've also written large Haskell programs (5,000 lines is large, since it encompasses what would have taken me maybe 30,000 lines in C++). I can say that the maintenance time and error rate of my Haskell programs are dwarfed by those of any C++ program I've written or maintained.
We've also learned in software engineering that the defect rate correlates mainly with code size, i.e. the complexity of the code and how much of it there is, or simply the entropy of the code. With functional abstraction, the abstractions aren't "leaky" and actually allow you to reduce complexity and forget about the lower-level details entirely.
Think of it this way. In order to make something as modular as possible you must break it down into the smallest possible unit of modularity.
State and functions are separate concepts that can be modularized. OOP is an explicit wall that stops users from modularizing state and functions, by forcing them to fuse state and functions into a single entity.
Merge sort is a good example. It can't be broken down into smaller modules in either OOP or functional programming. The problem exists at a higher level.
In FP, mergeSort can be composed with any other function with compatible types. In OOP, mergeSort lives in the context of an object and theoretically relies on the instantiated state of that object to work. So to reuse mergeSort in another context, a MergeSort object must be instantiated and passed along to another ObjectThatNeedsMergeSort. ObjectThatNeedsMergeSort has an explicit dependency on another object and is useless without the MergeSort object. Remember, modules don't depend on one another, so this isn't modularity; this is dependency injection, a pattern that promotes the creation of objects that rely on one another rather than objects that are modular.
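A minimal Python sketch of the contrast being described, with free-standing functions standing in for the FP side (function names here are illustrative, not from any particular codebase):

```python
def merge_sort(xs):
    """Pure merge sort: takes a list, returns a new sorted list."""
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

def median(xs):
    """Composes with merge_sort directly; no MergeSort object is instantiated
    or passed along to enable the reuse."""
    s = merge_sort(xs)
    return s[len(s) // 2]

print(median([5, 1, 4, 2, 3]))  # 3
```

The point being: `median` needs only `merge_sort`'s type (list in, sorted list out), not an object carrying it.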
I know there are "design patterns" and all sorts of garbage syntax like static objects that are designed to help you get around this. However, the main theoretical idea still stands: I have a function that I want to reuse, and everything is harder for me in OOP because all functions in OOP are methods on an object, and to use a method you have to drag along the entire parent object with it.
Modularity in functional programming languages penetrates to the lowest level. Functional programming encourages the composition of powerful, general functions to accomplish a task, as opposed to the accretion of imperative statements to do the same. With currying, a function that takes four arguments is trivially also four separate functions that can be further composed. The facilities for programming in the large are also arguably more general and expressive than in OOP languages: take a look at a Standard ML-style module system, where entire modules can be composed almost as easily as functions.
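The currying point can be sketched in Python via partial application, which is how one would approximate it outside of an ML-family language (the function here is made up for illustration):

```python
from functools import partial

def blend(a, b, c, d):
    """An ordinary four-argument function."""
    return a + 2 * b + 3 * c + 4 * d

# Each partial application yields a smaller function that composes on its own,
# so the four-argument function is also, in effect, a family of smaller ones.
f1 = partial(blend, 1)   # now a three-argument function
f2 = partial(f1, 2)      # two arguments
f3 = partial(f2, 3)      # one argument

print(f3(4))  # 1 + 2*2 + 3*3 + 4*4 = 30
```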
>It seems that FP induces confusion in the minds of its followers by blurring the boundary between implementation details and abstractions. OOP, on the other hand, makes the difference absolutely clear. In fact, the entire premise of OOP is to separate abstraction from implementation.
I'm not sure I understand you here entirely, but implementation details matter. Is this collection concurrency safe? Is this function going to give me back a null? Is it dependent on state outside its scope that I don't control? Etcetera. Furthermore, when it's necessary to hide implementation details, it's still eminently possible. Haskell and OCaml support exporting types as opaque except for the functions that operate on them in their own module, which is at least as powerful as similar functionality in OOP languages.
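Python can only approximate the opaque-type idea by convention rather than enforce it the way Haskell/OCaml module signatures do, but a rough sketch of the shape (names invented for illustration):

```python
class _NonEmpty:
    """Internal representation; the leading underscore marks it as private
    by convention, so callers shouldn't touch it directly."""
    def __init__(self, items):
        self._items = items

def make_non_empty(first, *rest):
    """Smart constructor: the only sanctioned way to build the type,
    which is how emptiness is made unrepresentable."""
    return _NonEmpty([first, *rest])

def head(ne):
    """A total 'head': safe because an empty value cannot be constructed."""
    return ne._items[0]

print(head(make_non_empty(3, 1, 2)))  # 3
```

In Haskell or OCaml the module system would make `_NonEmpty` genuinely inaccessible outside its module, which is the stronger guarantee being referred to above.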
>Referential transparency is not abstraction, in fact, it goes against abstraction. If your black box is transparent in terms of how it manages its state, then it's not really a black box.
Yeah, I've lost you here. Would you mind clarifying?
I've written plenty of very short OOP programs. They don't have to be huge to be effective. The reason you sometimes see very large OOP software and rarely see large FP software is not that FP makes code shorter; it's that FP logic would become impossible to follow beyond a certain size.
My point about black boxes and referential transparency is that a black box hides/encapsulates state changes (mutations) by containing the state. Referential transparency prevents your function from hiding/encapsulating state changes, and thus prevents functions from containing the state that is relevant to them; instead, the relevant state needs to be passed in from some (usually) far-flung outside part of the code... a part of the code that has nothing to do with the business domain that state is about. To make proper black boxes, state needs to be encapsulated by the logic that mutates it.
Keep in mind that OOP is essentially forced currying. A method that returns an object full of other methods is identical to currying, except that the method isn't returning a single function... it's returning a group of functions that all rely on shared state. Way more complicated.
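The "forced currying" claim above can be made concrete with a small Python sketch (names invented for illustration): a closure returns exactly one function, while the OO equivalent returns the same capability bundled with shared mutable state.

```python
def make_adder(start):
    """Curried style: returns a single function closing over its argument."""
    def add(n):
        return start + n
    return add

class Adder:
    """OO style: the same capability arrives as a method on an object,
    alongside other methods that share (and can mutate) its state."""
    def __init__(self, start):
        self.start = start
    def add(self, n):
        return self.start + n
    def reset(self):
        self.start = 0

print(make_adder(10)(5))   # 15
print(Adder(10).add(5))    # 15
```

Whether the extra bundled state is "way more complicated" or useful encapsulation is, of course, exactly what this thread is arguing about.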
I haven't found a single large line-of-business OOP program that was easy to understand. Quite the contrary of my experience with large FP code bases, of which many exist, to be clear. They are just a lot smaller than what equivalent OOP code would look like, and I challenge you to refute that with evidence.
I completely disagree about black boxes and think they are actually a complete scourge on software engineering. I should know everything that is relevant to me from a function’s type signature. In languages with pervasive side effects, this is not possible.
For me, the most important principles of software engineering are:
1. Black boxing (in terms of exposing a simple interface for achieving some results and whose implementation is irrelevant).
2. Separation of concerns (in terms of business concerns; these are the ones that can be described in plain language to a non-technical person)
You need these two principles to design effective abstractions. You can design abstractions without following these principles, but they will not be useful abstractions.
Black boxes are a huge part of our lives.
If I want to go on a holiday to a different country, I don't need to know anything about how the internet works, how houses are built or how airplanes work in order to book a unit in a foreign country on AirBnB and fly there. The complexity and amount of detail which is abstracted is unfathomable but absolutely necessary to get the desired results. The complexity is not just abstracted from the users, but even the engineers who built all these different components knew literally nothing about each other's work.
As a user, the enormous complexity behind achieving my goal is hidden away behind very simple interfaces such as an intuitive website UI, train tickets, plane tickets, passport control, maps for location, house keys. These interfaces are highly interoperable and can be combined in many ways to achieve an almost limitless number of goals.
I couldn't explain to anyone anything about how airplanes work but I could easily explain to them how to use a plane ticket to go to a different country.
With programming, it should be the same. The interfaces should be easy to explain to any regular junior developer.
I generally think of spaghetti code as code that has unclear control flow (e.g. GOTOs everywhere, too many instance variables being used to maintain global state, etc.) Currying, plainly, does not cause this.
>1. Black boxing (in terms of exposing a simple interface for achieving some results and whose implementation is irrelevant).
Sure, completely possible in ML-family languages and Haskell. Refer to what I said about opaque types earlier.
>2. Separation of concerns (in terms of business concerns; these are the ones that can be described in plain language to a non-technical person)
Again, nothing in functional languages betrays this. You are talking about code organization at scale, and none of what you have said so far is precluded by using pure functions and modules and such.
>If I want to go on a holiday to a different country, I don't need to know anything about how the internet works, how houses are built or how airplanes work in order to book a unit in a foreign country on AirBnB and fly there. The complexity and amount of detail which is abstracted is unfathomable but absolutely necessary to get the desired results. The complexity is not just abstracted from the users, but even the engineers who built all these different components knew literally nothing about each other's work.
I do not like analogies in general, though for this one I will suggest that you should at least know what the baseline social expectations are of the place you are traveling to. That is, plainly, what I am arguing that functional programming makes clearer and easier to deal with.
>As a user, the enormous complexity behind achieving my goal is hidden away behind very simple interfaces such as an intuitive website UI, train tickets, plane tickets, passport control, maps for location, house keys. These interfaces are highly interoperable and can be combined in many ways to achieve an almost limitless number of goals.
Yes, and underneath that program in a functional programming language are lots of small, carefully composed functions that are often just as applicable to many other problems and problem domains.
>I couldn't explain to anyone anything about how airplanes work but I could easily explain to them how to use a plane ticket to go to a different country.
This is why I don't like analogies. I have no idea what you are talking about here.
>With programming, it should be the same. The interfaces should be easy to explain to any regular junior developer.
What makes functionally-styled APIs hard to explain to a junior developer?
Some time after leaving university, when your statements are no longer graded as correct or incorrect, people like the GP start strongly believing that any nonsense that enters their head is a fundamental truth. The GP's statement is an example of the above.
This is a good thing, because I've found in my own experience that it's hard to say what level you're at when implementing something. Implementing a standard deviation function? Is it happening over in-memory data or persistent data? Is it going to happen in parallel when it can? Is it going to be distributed across servers? Suddenly you're back at the high level.
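The standard deviation example above illustrates the point nicely: written as a pure fold over (count, sum, sum of squares), the same function works whether the data is in memory, on disk, or partitioned across servers, because each source just contributes the same three sums. A hedged sketch:

```python
import math

def summarize(xs):
    """One pass over any iterable: count, sum, and sum of squares."""
    n = s = s2 = 0
    for x in xs:
        n += 1
        s += x
        s2 += x * x
    return n, s, s2

def combine(a, b):
    """Merge summaries from independent partitions (e.g. separate servers)."""
    return tuple(x + y for x, y in zip(a, b))

def stddev(summary):
    """Population standard deviation from a summary triple."""
    n, s, s2 = summary
    return math.sqrt(s2 / n - (s / n) ** 2)

# Same answer whether computed in one pass or merged from two partitions:
whole = summarize([2, 4, 4, 4, 5, 5, 7, 9])
parts = combine(summarize([2, 4, 4, 4]), summarize([5, 5, 7, 9]))
print(stddev(whole), stddev(parts))  # 2.0 2.0
```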
This is the point from your rant which bugs me the most.
Quite literally due to maintainability my team uses an FP language. It is so easy to pick up code you wrote 3 years ago, code someone else wrote 6 months ago, code the guy who left 4 weeks ago wrote, code 10+ people are working on at the same time, and then continue to maintain that code without fear of misunderstanding the deep complexity that is also associated with it on top of the additional complexity you are about to add to it.
My team doesn't need to waste days on archeological digs through the 18+ projects we now maintain. We simply grab the code base, modify the code (which follows the idiomatic "write small components, build bigger components from those" mentality that comes with an FP language), and then fix the chain of compiler errors along the way until our new feature works.
By attempting to be more precise (through either terminology or code correctness), introducing higher confidence levels (programmer confidence, code operability confidence), and wrangling complexity through well designed idioms (monads, proper effect handling), you end up delivering a large amount of value to your customers which impact their bottom line. Faster feature delivery, lower bug rates, nearly zero risk of data leaks, uncrashable software, maintainable custom software over 5+ year timelines.
I've been in this industry for 20 years now and have seen a wide spectrum of good and bad. Using FP over the last 3 years has definitely moved the bar in the "good" direction a lot further than I originally anticipated, but I suppose YMMV.
"We don't need to read and understand stuff. We just grab the code by the horns and spur it with changes until the compiler stops thrashing. Yeehaw!"
There are myriad things that distinguish long-term maintenance from greenfield development. Like transferring application ownership from one team to another, reverse engineering, adapting to the changes in external systems you integrate with, investigating bug reports and performance issues, doing monitoring, etc. etc. If your only concern in "maintenance" is making some changes while avoiding the kinds of bugs that can be prevented by a static type checker, then something seriously does not add up.
Again, the vast majority of claims made about FP on Hacker News right now (even in this thread) were made about Java in the early 00s. Almost word-for-word, except for some terminology. Unfortunately, back then I didn't have enough experience to spot the issues with those claims, and the people who did have real experience were mostly silent.
At the end of the day, sure, we all have opinions on tools and how they make our lives better. You and I may not agree that FP is the right tool, but I'm not going to make sweeping generalizations that you know nothing about large-scale development and maintenance solely based on your language paradigm choice and a few focused HN comments.
This is exactly backwards. A monad is not a design pattern - a design pattern is an awkward manual reimplementation of a monad (or another category). In OO design patterns the structure behind what you're actually doing is buried under both the clunky type system of most OO languages and the arcane memorization of patterns and their names.
The whole reason design patterns exist is that in C++/Java/Smalltalk/etc. the type system is not quite good enough to enforce consistency with certain complex designs - failure management in concurrent systems being an excellent tricky example. In imperative/OO-dominant languages there is inevitably a huge amount of boilerplate around checking nulls, lots of wrapping things in try/catches, and so on. Design patterns are a useful abstraction of this boilerplate in a way that's easy to maintain (and, more importantly, they're an intuitive common language for many programmers). But they are no substitute for categories, which allow the compiler to make sure the design pattern is actually properly implemented.
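To make the null-checking boilerplate point concrete, here is a minimal Maybe-style `bind` sketched in Python (illustrative only; a real Maybe type distinguishes "absent" from a legitimate `None` value, which this simplification does not):

```python
def bind(value, fn):
    """Apply fn only if value is present; otherwise propagate the None.
    This one combinator replaces a ladder of 'if x is not None:' checks."""
    return None if value is None else fn(value)

def lookup(key):
    """Returns a function that looks up key in a dict, or None if absent."""
    return lambda d: d.get(key)

users = {"alice": {"address": {"city": "Oslo"}}}

# Equivalent to three nested null checks, collapsed into a chain of binds:
city = bind(bind(bind(users, lookup("alice")), lookup("address")), lookup("city"))
print(city)  # Oslo

# A missing key anywhere in the chain short-circuits to None safely:
missing = bind(bind(bind(users, lookup("bob")), lookup("address")), lookup("city"))
print(missing)  # None
```

In Haskell the compiler enforces that every consumer of a `Maybe` handles the absent case, which is the "no substitute for categories" point above.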
But this stuff is quite complicated. It really does sound like a lot of navel-gazing mathy jargon. But the way you've phrased this makes me wonder - and I am aware this is condescending - if you haven't actually used categories in practice.
My thinking on this has been strongly influenced by this excellent blog series from Mark Seemann: https://blog.ploeh.dk/2017/10/04/from-design-patterns-to-cat...
This blog series is a very very good lower-level introduction from the same blog, with examples in C#/F#/Haskell: https://blog.ploeh.dk/2018/03/22/functors/
That said, you can get pretty fancy in template C++ with category-level type programming.
Tbh I think the whole “category theory” obsession one sees in some parts of online FP evangelism needs to die. Haskell has (endo)functors which are a useful concept for which one needs to know zero category theory (similarly for monads). But otherwise, FP basically only ever has one category which has all the nice properties one could want (ok if you have a weird type system based on a weird logic you might have a slightly different category), so the thing people call category theory is just the theory of the one category you live in. One doesn’t say group theory is category theory just because there is a category of groups.
Actual category theory isn’t much about specific categories so much as it is about the relationships between them, and ideas common to many categories (natural transformations, limits, terminal objects, etc).
I’m not saying category theory is bad, but I think to get much out of it one needs more than one category (and hopefully one can think of a category which isn’t a topos too), and one doesn’t tend to come across categories in day to day functional programming. Some definitions and constructions from category theory may be useful in constructing a type system for a new ML-family programming language.
This isn't correct either. Design patterns are simply cocategories.
2. Yeah, monads and applicatives aren't too hard, but really understanding monad transformers well is challenging.
3. I mean, yes, there are a lot of junior-ish devs who see the potential of FP and then spout how much better it is, but isn't that just like complaining about how annoying people are on Twitter? I'm unsure it's meaningful to make this criticism, or at least I'd like an example.
It appears in thousands of talks and blog posts. It’s completely subjective and unquantifiable what you or I think is easier to reason about. It’s largely (entirely?) about aesthetics.
The entire function is way harder to reason about, because I can't tell from the outside what other parts of the world it may have accidentally modified. The lack of basic encapsulation really turns me off to FP.
Supposedly the best FP answer to the problem was lenses, which would drive the complexity of my code up into the stratosphere. Shortly after trying those out, I migrated my project to imperative OO and haven't looked back since.
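For context on the lenses mentioned above, a minimal sketch of the idea (illustrative Python, not any real lens library's API): a lens is a getter/setter pair, and composing two lenses gives a lens into a nested immutable structure.

```python
def lens(key):
    """A lens for one dict key: (getter, copy-on-write setter)."""
    get = lambda d: d[key]
    set_ = lambda d, v: {**d, key: v}  # returns a copy, original untouched
    return get, set_

def compose(outer, inner):
    """Compose two lenses into a lens for the nested field."""
    og, os = outer
    ig, is_ = inner
    return (lambda d: ig(og(d)),
            lambda d, v: os(d, is_(og(d), v)))

get_city, set_city = compose(lens("address"), lens("city"))

person = {"name": "Ada", "address": {"city": "London", "zip": "N1"}}
print(get_city(person))                   # London
moved = set_city(person, "Oslo")
print(get_city(moved), get_city(person))  # Oslo London (original unchanged)
```

Whether this machinery is elegant or "complexity up into the stratosphere" is precisely the disagreement here; the sketch just shows what is being argued about.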
Anyway beyond that the relevant part is not that one function call in isolation but wherever it’s getting called from and why it is getting called there. Without more context I don’t know how I would solve that but suffice it to say getting into lenses and the entire world and such sounds massively unnecessary.
"easier to reason about" is not someone's opinion, it's math. Please educate yourself on formal verification methods before drawing broad conclusions including substrings like "no idea" or "mindless repeating".
Possibly interesting: https://overreacted.io/the-bug-o-notation/
Typed Functional programming is part of the solution to this. It's not just correctness and modularity (Both of which reduce complexity btw).
Also you're right about the whole monad thing, it is a pattern, and like patterns in OOP, doesn't necessarily reduce complexity.
Maybe that's what the FP people mean by DDD and bounded contexts. If you constrain the complexity into something bite-size that in turn has well-documented interface points, then it all starts getting easier to manage again.
It really seems to me that a big reason Rich Hickey seems so profound is that what he's saying means 10 different things to 10 different people.