It's applying Functional Programming to plain old Enterprise line-of-business apps in F#. The author argues why it's so much simpler and walks through a worked example. It's hard not to be persuaded. It leaves behind all of the lofty maths you see in a lot of presentations of FP and just shows how it's a simpler technique for solving real-world problems.
Domain Modeling Made Functional -
https://www.youtube.com/watch?v=Up7LcbGZFuo&feature=youtu.be... (Watch on 1.25x speed)
It's really impressive, and also surprising that more people promoting FP don't show its tangible benefits and simplicity versus the theoretical/abstract way it's usually presented.
It's also a great introduction to 'Domain Driven Design' and data modeling regardless of the language you use.
I encourage anyone who wants to learn functional programming to pick up the classic "Little Schemer" by Friedman and Felleisen. It's a charming book, designed for undergraduates, that teaches you all the important and appealing aspects of FP. And you won't be made to feel like an idiot because you don't know what a monoid is.
Where can I learn more about this magic called functional programming? Is there a good course? I want to do it for BLOBAs (boring line-of-business apps, as he calls them), as I'm not too math-heavy.
Also, to what extent does JS support FP? I use anonymous functions/callbacks (I mean red functions, pun intended) and closures (even as poor man's objects, lol) but I don't really know why that is FP and what makes it FP.
This is a bit too big of an ask, now that I think about it. I sound like a person who has never programmed before asking: so what programming language should I use?
Is there anything good around for JS to create BLOBAs?
Additionally, he does a really great job of walking through two things that I don't think are covered well enough at all for beginning and intermediate programmers (and even experienced ones like myself may have missed parts along the way).
1. How to effectively gather customer requirements, what questions to ask, what things to dig into, how to organize them. A simple walk through, better than the hand-wavy stuff most people do when requirements gathering.
2. How to model a domain / problem space before implementing. How to effectively reason about your entities, services and actions. And iterate on these with your customer before coding.
I seriously wish I had run across this when I first started coding. A really great collection of tangible, actionable advice.
Domain Modeling Made Functional - Scott Wlaschin
His blog https://fsharpforfunandprofit.com/
For languages that compile to JS, Elm looks a bit more like the F# in the video (https://elm-lang.org/), and Typescript (https://www.typescriptlang.org/) is much closer to standard JS.
To really go whole hog, try setting one constraint for yourself: only use immutable data. Make all your variables `const`s, and only use immutable data structures (Immutable.js is a useful library for this).
You'll discover that it's basically impossible to write code in the familiar imperative style – everything needs to be a pure function, that just takes an input and returns a consistent output, without mutating state. This really is the "secret sauce" of FP.
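As a minimal sketch of that constraint (in TypeScript, since it came up above; the cart domain and names here are hypothetical), note how every function takes input and returns a new value rather than mutating anything:

```typescript
// A hypothetical cart: pure functions return new values instead of
// mutating shared state. `readonly` makes the compiler enforce it.
type Item = { readonly name: string; readonly price: number };

// Pure: same input always yields the same output, nothing is mutated.
const addItem = (cart: readonly Item[], item: Item): readonly Item[] =>
  [...cart, item]; // spread copies instead of push-ing in place

const total = (cart: readonly Item[]): number =>
  cart.reduce((sum, item) => sum + item.price, 0);

const cart0: readonly Item[] = [];
const cart1 = addItem(cart0, { name: "book", price: 25 });
console.log(total(cart1)); // 25
console.log(cart0.length); // 0 -- the original is untouched
```

Once you commit to this, "what state am I in?" becomes "which value am I holding?", which is exactly the shift the constraint is meant to force.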
The other domain-modelling-in-FP feature missing from JS is discriminated unions (aka sum types or enums in some languages).
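Worth noting that TypeScript (mentioned above) can emulate these with tagged unions plus exhaustive `switch`. A hypothetical payment-method sketch:

```typescript
// Hypothetical domain model as a tagged ("discriminated") union:
// each variant carries exactly the data valid for that case.
type Payment =
  | { kind: "cash" }
  | { kind: "card"; cardNumber: string }
  | { kind: "invoice"; dueDays: number };

function describe(p: Payment): string {
  // Exhaustive switch: the compiler narrows `p` per branch and
  // complains if a variant is left unhandled.
  switch (p.kind) {
    case "cash":    return "paid in cash";
    case "card":    return `card ending ${p.cardNumber.slice(-4)}`;
    case "invoice": return `due in ${p.dueDays} days`;
  }
}

console.log(describe({ kind: "invoice", dueDays: 30 })); // "due in 30 days"
```

Plain JS can mimic the runtime shape but has no compiler to catch the missing case, which is most of the value.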
This domain modelling is a big important feature in rust too (with more invariants which may be expressed using ownership constraints) but no one claims it’s a particularly functional language. Indeed I don’t think one needs FP for this kind of domain modelling that we get via ML-family languages.
It was weird to see a talk like that and realize to myself, "but this is just how my brain works!" I happen to be quite a noun-driven person, and I guess I fell into the trap that this made everything seem so easy and understandable!
Which is why I got excited :P I just bought his book, lol. That was too easy to understand and yet entertaining at the same time while feeling useful as well.
Sometimes I even coin new nouns in Dutch to make arcane but important concepts clear. My most fun noun is "search term scavenger hunt" (in Dutch it is one word) for the situation where you know what you're looking for but don't know the right term to get hits on a search engine and you're hunting for the right word.
Personally, I find languages like Scala to be right there in the middle, where neither performance nor FP expressiveness is sacrificed severely.
That makes Scala flippin' complex, like C++ was, but it is probably the best attempt to bridge this gap if we all just want to learn "one language to rule them all": it incorporates both FP and imperative/mutable techniques that milk the hardware for what it's worth, as well as giving you the power/expressiveness of FP.
Of course one can write an operating system in Haskell... just as one can imitate functional style programming in C... or we can come up w/some "middle ground" language that takes the cake from both worlds and call that "the best".
My point is, this quest or journey for the best language/paradigm never ends and always shifts across the years as hardware develops and as we discover better ways of doing more with less (e.g. the crux of FP).
IMHO, ultimately you have to pick an appropriate language (tool) for the job at hand. If we forget that languages/programming paradigms are just tools... then, you'll forgive me for saying, we become tools ourselves. Or... fools, rather.
To me, whoever has realized this trade-off is a true zen master of programming/engineering... not the one who espouses one paradigm/language over another. That's just me tho... YMMV.
Can and will are very different statements here though, particularly as the quality of a compiler makes an enormous difference in the performance of the resulting code.
The analysis of why we need modularity and that this means we need new kinds (plural) of "glue" is spot on. Note the plural. But then he goes on to present exactly two kinds of glue, so the smallest N for which the plural is justifiable.
I contend that he was right that we need a lot of different kinds of glue, which means that the glue must be user-definable. And what is "glue"? Well, architectural connectors, that's what.
Why Architecture Oriented Programming Matters https://blog.metaobject.com/2019/02/why-architecture-oriente...
The only real benefit of functional programming is that it makes code much easier to reason about. But this difference is so great, that in most cases I am ready to pay the price.
P.S. Just to clarify my position for the commenters who want to convert me: I write code in functional languages most of the time for a living, so I know well enough how the ST monad works, about the "O(log n) == O(1)" meme, and so on. But I also have some background in HPC, where I used imperative languages. So I have plenty of experience with both paradigms, and I know exactly what their weak and strong points are.
This is borne out in practice, where Haskell, for example, may not necessarily outperform highly optimized C, but it is frequently able to keep pace with Java and will almost always beat imperative Python code.
In terms of being concise it's laughable to me to even make the comparison. I've written a lot of C, Python and Ruby in my career in addition to Haskell, and Haskell wins on expressiveness to such an absurd degree that the comparison feels unfair.
The one area where I think FP, though really Haskell in particular, suffers is in being easy to reason about. Performance and memory usage are of course notoriously hard to reason about in lazy languages, but it can also be challenging to understand what's happening in large code bases that have a deep MTL stack or use multiple interpreters over some free monad, since the semantics of how computations are evaluated can often be defined very far from the computation itself.
Functional code is almost always more concise than the equivalent imperative code. It's one of the main reasons people prefer functional code; almost everyone who looks at it pretty much agrees. The only case where it won't be is when someone is directly rewriting an imperative algorithm into a functional language; then the functional version can be longer. But for the most part, that's translating < 50 lines of code or something. For anything of substance, functional is more concise.
If you haven't seen that to be the case, then you may not have looked at any substantial functional code.
Regardless of whether people prefer functional or not, that's almost a universal take-away from anyone re-writing an imperative project into a functional language. It's usually a factor of 2 or 3 shorter and simpler.
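As a tiny (and admittedly toy) illustration of the kind of compression being claimed, here is the same filter-and-sort task in both styles, in TypeScript with hypothetical data:

```typescript
type Person = { name: string; age: number };
const people: Person[] = [
  { name: "Ada", age: 36 }, { name: "Sam", age: 15 }, { name: "Liu", age: 52 },
];

// Imperative version: explicit loop, temporary array, in-place mutation.
const adultsImperative: string[] = [];
for (const p of people) {
  if (p.age >= 18) adultsImperative.push(p.name);
}
adultsImperative.sort();

// Functional version: one composed pipeline, no mutable temporaries.
const adultsFunctional = people
  .filter(p => p.age >= 18)
  .map(p => p.name)
  .sort();

console.log(adultsFunctional); // ["Ada", "Liu"]
```

The gap is small at this scale; the claim in the thread is that it compounds as programs grow.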
Second, more concise doesn't always mean better, let alone more readable or more maintainable. It's often the opposite.
Third, functional programming is usually preferred because of aspects such as composition or immutability, not conciseness.
Fourth, rewrites of an existing code base usually leads to shorter code bases, simply because of the acquired knowledge, regardless of the language or the techniques.
Okay, I'm really hoping there's a good surprise behind this, because I've been an imperative programmer for 10 years and a functional programmer for 5 and I have only rarely seen imperative code that was more concise than equivalent functional code. Faster? Sure. More efficient? I'll buy it. Perhaps even easier to read, especially when you get into point-free style on the functional side. But more concise overall? I find that hard to believe, but I'm genuinely excited now! The ultimate trump here would be APL but that's admittedly an eccentric case.
Functional persistent data structures bring many advantages, even to imperative programmers. For example, if your text editor used a persistent data structure, it probably wouldn't need to lock you out during a save and would support better undo.
Mutation is efficient, but comes with many drawbacks, especially when state is shared or provenance is required. This is why functional data structures and ideas are appearing outside of functional programming, for example ZFS, git, Blockchain, Spark, Kafka etc etc
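The undo point above can be sketched in a few lines of TypeScript (hypothetical editor model; the array spread here copies rather than shares structure, which a real persistent structure like Immutable.js avoids):

```typescript
// Hypothetical editor state: every edit produces a new value, so "undo"
// is just keeping the old versions around -- no defensive copying, no
// locking readers out while a save walks the structure.
type Doc = readonly string[]; // lines of text

const insertLine = (doc: Doc, line: string): Doc => [...doc, line];

const history: Doc[] = [[]];
let doc: Doc = history[0];

for (const line of ["hello", "world"]) {
  doc = insertLine(doc, line);
  history.push(doc); // old versions stay valid and cheap to keep
}

console.log(doc); // ["hello", "world"]
const undone = history[history.length - 2]; // one step back
console.log(undone); // ["hello"]
```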
> Imperative code is also more concise than equivalent functional code.
This couldn't be further from the truth. Just as Fortran liberated arithmetic expressions from the verbosity of imperative programming, FP allows us to move beyond non-compositional, word-at-a-time sequences of mutation commands and build much higher-level abstractions. Modern imperative languages have borrowed many ideas from FP, but are still nowhere near as expressive as e.g. Haskell. Of course, imperative programming is still very useful, and that's why Haskell has good support for it.
In imperative languages you don't need Edward Kmett to invent lenses for you.
Read this if you don't follow: "In Rust, ordinary vectors are values" http://smallcultfollowing.com/babysteps/blog/2018/02/01/in-r...
I almost think of Rust as a lower-level OCaml (faster and better support for memory management, but with somewhat weaker support for typeclasses and an inconvenient ADT syntax). An ML language you can conceivably write an OS in, but I would personally be nervous to write a complex finance application vs. OCaml.
And of course Rust also has typeclasses, with a design strongly influenced by (but somewhat weaker than) Haskell. Monads and Readers are all over the shop in modern Rust precisely because they are so useful in modern Haskell.
Basically, the reason why Rust is such a good low-level programming language is the reason that functional programming matters.
OCaml is a garbage-collected, single-threaded language.
You obviously get the same guarantees, in that you can't do anything that would be problematic.
As a side note, a lot of the people currently writing Rust would imho be better served by OCaml. You rarely need the protections Rust provides, and they have a real complexity cost.
Well, yes, but that's the point. Rust gives you the same guarantees without limiting you to using a single thread or requiring you to use garbage collection.
>You rarely need the protections Rust provides and they have a real complexity cost.
> ...higher-order functions and lazy evaluation, can contribute significantly to modularity.
You cannot have lazy evaluation without referential transparency, and you cannot get referential transparency with mutable variables.
Rust makes good use of higher-order functions, but gets nowhere near the modularity of Haskell code.
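The laziness-needs-purity claim can be sketched with thunks (in TypeScript for the sake of the JS readers in the thread; the names are hypothetical). Laziness means forcing a value at an arbitrary later point, and mutation makes the result depend on when that happens:

```typescript
// Laziness modeled as a zero-argument thunk, forced on demand.
let x = 1;                       // mutable
const lazySum = () => x + 1;     // not evaluated yet

x = 10;                          // mutation happens before forcing
console.log(lazySum());          // 11, not 2: the result depends on
                                 // *when* we force the thunk

// With referential transparency (no mutation), forcing order is
// unobservable -- which is what makes lazy-by-default evaluation sound.
const y = 1;
const lazySum2 = () => y + 1;
console.log(lazySum2());         // always 2
```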
This has not been my experience.
After years of doing pure functional programming (in Elm) professionally, I was surprised how much Rust felt like writing in an ML family language.
I still prefer referential transparency (though I don't think it would have been the right design for Rust), but "nothing near" does not fit my experience. I'd say the ergonomics of borrow checking versus a GC (which you don't have to think about) is the bigger gap.
John Hughes is a big believer in the value of laziness. There are many of us who believe laziness turned out to be a dead end; it makes some code nicer/more elegant, makes performance optimization much harder, and brings space leaks into your life. Are those costs worth the benefits? Not even close to worth it, if you ask me. I'll gladly take the referential transparency and pass on the laziness.
Almost all the researchers who started working on Haskell specifically because they wanted to explore the power of laziness...ended up pivoting to research type theory instead. I don't think that's a coincidence.
I haven't used Rust heavily enough to comment on how it compares in great detail, but comparing to Elm as a proxy for Haskell doesn't really work.
Frankly, paradigms are a really lousy way to think about languages. I wrote a series of blog posts about this, but this opening lecture from one of Brown's PL courses I think does a better job of making the point:
It's useful to talk about what say, GC, laziness, lifetimes, ownership, typeclasses/traits, higher-kinded types, higher rank types, variants, elm-style records, etc. do to a language, and how they compose, but I think you can't go very far talking about how "paradigms" compare.
Sure. My experience has been:
* GC, lifetimes, and ownership are all high-benefit and high-cost. The cost with GC is at runtime (where the cost is so high that in many domains GC is not tolerated at all; in many others, of course, we take it for granted as fine), and the high costs of lifetimes and ownership are at development time.
* Variants and records are high-benefit, low-cost.
* Higher-rank types are low-cost, low-benefit.
* Laziness and higher-kinded types are both features with costs that significantly outweigh their benefits.
It sounds like you disagree with the last bullet point. If so, then either we've had different experiences or we walked away with different conclusions from them.
> It sounds like you disagree with the last bullet point. If so, then either we've had different experiences or we walked away with different conclusions from them.
I more or less agree on laziness (at least lazy-by-default; having a lazy type as found in OCaml available is a big win for little downside).
Re: Higher-kinded types: I'm curious as to what you think the high costs are? My impression is that they've mostly been left out of Elm due to pedagogical concerns. Is it just that or are there other things?
Another cost is in standard library complexity. You can't have HKP and not have a stdlib with Functor/Monoid/Monad etc. As Scala has demonstrated, if you have HKP but don't put these in the stdlib, a large faction will emerge pushing an alternative stdlib that has them. A larger, more complex stdlib is a cost, and so is a fractured community and ecosystem; HKP means you'll have to pick one of those two.
API design is another. Without HKP you write a function that takes a List. With HKP you now need to decide: should it actually take a List, or is it better to take a Functor/Semigroup/Monoid/Applicative/Monad instead?
If you choose one of the more generic ones, now it takes more mental steps to collapse the indirection when reading it. (1. I have a Maybe Int. 2. This function expects an `s`. 3. `s` is constrained by `Semigroup s`. 4. Can I pass a Maybe Int to something expecting a Semigroup? Compare to "This function takes a `Maybe a`", and multiply that small delta of effort by a massive coefficient; this is something everyone who reads these types will do many, many times.)
This indirection also has implementation costs; in theory you could make docs and error messages about as nice if HKP is involved as if not, but there's an implementation cost there, and it seems like it must be pretty steep if you stack up languages with HKP and their quality of error messages and docs against other typed languages that don't.
So I'd say it's one huge cost (automatic induction into the highest tier of learning curve steepness), one big cost (either a larger and more complex stdlib or fractured community), and several smaller costs with high coefficients because they come up extremely often.
Yeah there are benefits too, but I don't think they get anywhere near outweighing the costs.
> HKP is practically unavoidable on the road to understanding a Haskell program that prints Hello World.
main :: IO ()
main = putStrLn "Hello World"
> With HKP you now need to decide: should it actually take a List, or is it better to take a Functor/Semigroup/Monoid/Applicative/Monad instead?
It doesn't seem like a decision that would involve a lot of cognitive overhead. In general I'd probably just go with whatever type GHC infers. Failing that, it kind of arises naturally from what the function is about. Is it about reducing the List to a single value? Use Foldable. Is it about transforming the elements of the list? Use Functor. Is it about nondeterminism, but the code isn't necessarily specific to that computational context? Use Applicative if there are no sequential dependencies (which the type checker knows anyway) and Monad otherwise. Ok, maybe it seems a little complex when you write it out, but it's really fairly instinctual.
I think that you overstate the cognitive overhead of reading polymorphic type signatures to those who are reasonably familiar with common idioms. Taking a second to remember that `Maybe a` is a semigroup if its argument is seems like a small cost to pay to me.
I think there's a significant benefit to highly polymorphic functions in the standard library which seems to rarely be brought up. Polymorphic functions are applicable more often. So then if the function is already written for you, and you use it, then anyone who reads your code has to look at one less definition to understand it (if they're familiar with the library function). This forms a larger common vocabulary and in some ways imposes a smaller cognitive overhead on the reader.
Speaking more broadly, people are often frustrated by the number of abstractions from the standard library that they have to learn to be productive. But they don't notice all of the abstractions they now don't have to learn in individual codebases - because they don't have to exist. And this effect adds up - every time there would have been a slightly less well implemented, proprietary, monomorphic version of a function in a codebase and you use the polymorphic standard library one instead, everyone who reads that code has one less thing to get their head around. It pays for itself.
I also feel like you understate the benefits - the amount of times I think for 20 seconds and realize that the complicated function I was about to write is just like, `traverse` or something is incredible to me.
I think it's possible to have legitimate disagreements here, which are often driven by differing personal experiences, by the kinds of domains people are operating in, and so on. I don't think there's a one-size-fits-all answer.
Semigroup doesn't involve higher kinds at all. What you seem to be discussing is either type classes or just a pile of junk from abstract algebra. Fwiw, Elm has semigroup too -- it's called appendable (which is a much much better name...).
> Learning curve is a very high cost by itself; HKP is the reason Haskell is notoriously difficult to learn.
I don't think any one feature of Haskell is why it's hard to learn. I think the reasons are much more mundane, the main ones being:
1. The language is just enormous. It's a lot to need to have in your head to understand some bit of code you come across. Folks end up picking a (small) subset of it just to stay sane, but this doesn't help you when you come across a new library; you basically need to have most of the language in your mind somewhere to understand $RANDOM_NEW_LIBRARY reliably. And because it's an issue of sheer size, there's no short-cutting it. It has a lot of features with heavy overlap in use cases, so you spend a lot of time thinking about silly things like "Should I use FunctionalDependencies or TypeFamilies?" "I'm writing a library that needs to generate a bunch of boilerplate code, should I use GHC.Generics, TemplateHaskell, or something else?".
2. The community is really lousy about pedagogy. They tend to lead with the abstraction, which is just not how people learn. I really wish this had been written like a year earlier; it would have saved me a lot of trouble wading through useless instructional material trying to learn this stuff. It doesn't seem like the bulk of the community took that to heart though, and while there are some good learning resources out there, there's a sea of worse-than-useless ones.
3. There's a culture of complexity/over-engineering. I don't think this is unavoidable, but it's particularly a hazard of being research language where to a large extent the whole point is to play with crazy ideas. The maintainers still see the language as primarily a platform for experimentation, so KISS can be a hard thing to push for.
> and it seems like it must be pretty steep if you stack up languages with HKP and their quality of error messages and docs against other typed languages that don't.
I'm not really sure I buy this; I think the list of languages that have these things and have seriously made good error messages a priority is pretty short (empty?). I can point to some simpler ML dialects that still have some really lousy error messages.
Sure you can: https://homepages.inf.ed.ac.uk/wadler/topics/linear-logic.ht...
Rust's ownership types and lifetimes actually allow for this just fine. The latter is the essence of how Haskell's ST Monad works; you can use lifetimes to get locally-mutable state without violating global invariants, since once they go out of scope they can't be reused.
It's interesting to observe that, without "magic" standard library functions and `unsafe`, Rust's type system actually completely constrains mutability, and if a function doesn't have `mut` somewhere in its type signature, it doesn't break referential transparency.
That said, in practice, the language does have magic functions that violate this property, and they do so in a way that means you can't use the above reasoning principle at all. Also, mutability being constrained by the types is not the same thing as typical code not using it everywhere, which is the situation with rust-as-found.
"… allowed to update the unique object destructively without any consequences for referential transparency."
Immutable data structures in functional languages are not data structures that cannot be mutated; they are data structures that efficiently produce a new, updated version on each operation (such as the cons list).
While they are slower than traditional data structures for a typical use case, they tend to be a lot easier to think about, make parallel code much easier, and let you easily go back in time to previous versions of your data structures (when discovering Rust, I was actually disappointed that they were not available in the std).
The im crate offers what seem to be good Rust implementations of that kind of data structure (with a better explanation of why you would want them).
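The cons list mentioned above is small enough to sketch directly (TypeScript here, but the structure is the same in any language; names are illustrative):

```typescript
// A minimal persistent cons list: "updating" prepends a node and shares
// the entire tail with the old version (O(1), no copying).
type List<T> = null | { head: T; tail: List<T> };

const cons = <T>(head: T, tail: List<T>): List<T> => ({ head, tail });

const toArray = <T>(xs: List<T>): T[] =>
  xs === null ? [] : [xs.head, ...toArray(xs.tail)];

const v1 = cons(2, cons(3, null));
const v2 = cons(1, v1);            // a new version; v1 is untouched

console.log(toArray(v2)); // [1, 2, 3]
console.log(toArray(v1)); // [2, 3] -- the old version is still there
console.log(v2 !== null && v2.tail === v1); // true: shared, not copied
```

The "going back in time" benefit falls out for free: every old version is just a still-reachable value.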
This claim seems dubious. But in any case, I think the OP was just talking about immutable data, i.e., in Rust terms, data to which there is no mutable reference. The data in question could be a simple integer.
As a matter of fact, I have seen data structures in FP languages called immutable while having operations to mutate them in place.
I believe the OP spoke of both Rust immutable data and the fact that most FP languages have only immutable data structures (different mechanisms to deal with similar problems).
Functional programming discussions on HN are pretty depressing. Many of the statements about FP that I see here right now are the same old shit I heard about Java in the mid '00s. You just need to mentally translate some buzzwords, but the essence is the same. Seems like the software industry is just running in circles. Something gets hyped, people jump on it, fail, then search for the next bandwagon.
1. Endless yammering about low-level correctness. As if it's the biggest problem in software engineering right now. In reality, most domains don't need perfection. They just need a reasonably low defect rate, which is not that hard to achieve if you know what you're doing.
2. Spewing of buzzwords, incomprehensible tirades about design patterns. FP people don't use the term "design pattern" often, but that's what most monadic stuff really is. Much of it is rather trivial stuff once you cut through the terminology. (Contrast this with talks by someone like Rich Hickey, who manages to communicate complex and broad concepts with no jargon.)
3. People who talk about "maintainability" of things while clearly never having to maintain a large body of someone else's code.
The #1 problem in software right now is not correctness or modularity or some other programming buzzword. It's the insane, ever-growing level of complexity and the resulting lack of human agency affecting both IT professionals and users.
The way I've dealt with complexity in large code bases is through being fearless about refactoring. Refactoring may not reduce complexity in terms of what the software does, but it reduces the complexity of understanding the code base tremendously by realigning the structure of the code with the actual problems being solved.
Refactoring gets a lot less scary when you have greater confidence in the low level correctness of the code.
On your second point, yes, I have found that FP has some, shall we say, interesting jargon. But I have trouble thinking of succinct names for a lot of FP constructs that are nonetheless useful, such as monads. A lot of more colloquial terms that come to mind in brainstorm sessions might even undermine understanding by providing a false equivalence. I think the same argument can be made for mathematical notation.
In summary I'd turn around your last sentence a bit. Yes, the #1 problem is complexity, but you can reduce complexity significantly by applying correctness and modularity and other programming 'buzzwords'.
You can rail against complexity itself, but I think we're probably on the bottom end of a very large complexity slope over the next decades. So we'll need better and better constructs to deal with it.
At that point you're talking multiple codebases and the complexities become managing transactions, data transformations, and contracts across discrete processes.
I'm not sure how that's germane to the discussion at hand. In fact, to the opposite point, I've found that in multi organization refactors and designs functional programming continues being a useful mine for concepts to simplify thinking around data transformations, immutability, and data contracts.
When I think about an algorithm like merge sort, mini-max decision trees or other low-level algorithms, the concept of modularity doesn't even enter my head. It doesn't make any sense to modularize an algorithm because it is an implementation detail; not an abstraction and not a business concern.
Modularity should be based on high level business concerns and abstractions. The idea that one should modularize low-level algorithms shows a deep misunderstanding of what it means to write modular software in a real-life context outside of academia.
It seems that FP induces confusion in the minds of its followers by blurring the boundary between implementation details and abstractions. OOP, on the other hand, makes the difference absolutely clear. In fact, the entire premise of OOP is to separate abstraction from implementation.
Referential transparency is not abstraction, in fact, it goes against abstraction. If your black box is transparent in terms of how it manages its state, then it's not really a black box.
I've developed and maintained projects that are more than a million lines of code (probably much more), and I've also written large Haskell programs (5000 lines is large, since it encompasses what would have taken me maybe 30000 lines in C++). I can say that the maintenance time and error rate of my Haskell programs are dwarfed by those of any C++ program I've written or maintained.
We've also learned in software engineering that the defect rate is mainly correlated with code size, i.e. the complexity of the code and how much of it there is, or simply the entropy of the code. With functional abstraction, the abstractions aren't "leaky" and actually allow you to reduce complexity and forget about the lower-level details entirely.
Think of it this way. In order to make something as modular as possible you must break it down into the smallest possible unit of modularity.
State and functions are separate concepts that can be modularized. OOP is an explicit wall that stops users from modularizing state and functions by forcing the user to unionize state and functions into a single entity.
Merge sort is a good example. It can't be broken down into smaller modules in either OOP or functional programming. The problem exists at a higher level.
In FP, mergeSort can be composed with any other function that has the correct types. In OOP, mergeSort lives in the context of an object and theoretically relies on the instantiated state of that object to work. So to reuse mergeSort in another context, a MergeSort object must be instantiated, and that object must be passed along to an ObjectThatNeedsMergeSort in order to be reused. ObjectThatNeedsMergeSort has an explicit dependency on another object and is useless without the MergeSort object. Remember, modules don't depend on one another; hence this isn't modularity, this is dependency injection, which is a pattern that promotes the creation of objects that are reliant on one another rather than objects that are modular.
I know there are "Design patterns" and all sorts of garbage syntax like static objects that are designed to help you get around this. However the main theoretical idea still stands: I have a function that I want to re-use, everything is harder for me in OOP because all functions in OOP are methods in an object and to use that method you have to drag along the entire parent object with it.
Modularity in functional programming languages penetrates to the lowest level. Functional programming encourages the composition of powerful, general functions to accomplish a task, as opposed to the accretion of imperative statements to do the same. With currying, a function that takes four arguments is trivially also four separate functions that can be further composed. The facilities for programming in the large are also arguably more general and expressive than in OOP languages: take a look at a Standard ML-style module system, where entire modules can be composed almost as easily as functions.
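The currying point can be made concrete in TypeScript (a sketch with hypothetical names; `sortBy` stands in for the mergeSort discussed above):

```typescript
// Currying: a two-argument sort is also a family of one-argument sorts.
// `sortBy` is a plain function -- no object needs to be instantiated or
// injected to reuse it elsewhere.
const sortBy =
  <T>(compare: (a: T, b: T) => number) =>
  (xs: readonly T[]): T[] =>
    [...xs].sort(compare); // copy first, so the input stays untouched

// Partially applying the comparator yields new, composable functions.
const ascending = sortBy<number>((a, b) => a - b);
const descending = sortBy<number>((a, b) => b - a);

console.log(ascending([3, 1, 2]));  // [1, 2, 3]
console.log(descending([3, 1, 2])); // [3, 2, 1]
```

Each partial application is itself a reusable unit, which is the "modularity down to the lowest level" being claimed.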
>It seems that FP induces confusion in the minds of its followers by blurring the boundary between implementation details and abstractions. OOP, on the other hand, makes the difference absolutely clear. In fact, the entire premise of OOP is to separate abstraction from implementation.
I'm not sure I understand you here entirely, but implementation details matter. Is this collection concurrency safe? Is this function going to give me back a null? Is it dependent on state outside its scope that I don't control? Etcetera. Furthermore, when it's necessary to hide implementation details, it's still eminently possible. Haskell and OCaml support exporting types as opaque except for the functions that operate on them in their own module, which is at least as powerful as similar functionality in OOP languages.
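A rough Python analogue of the opaque-type idea (Python enforces it only by convention, unlike Haskell or OCaml module signatures; all names here are mine): the representation stays private, and callers go through exported functions whose guarantees come from the smart constructor.

```python
class _NonEmpty:
    """Private representation: callers never construct this directly."""
    def __init__(self, items):
        self._items = items

def make_non_empty(items):
    """Smart constructor: returns a _NonEmpty or raises, never None."""
    if not items:
        raise ValueError("list must be non-empty")
    return _NonEmpty(list(items))

def head(ne):
    """Total function: safe because the constructor guarantees non-emptiness."""
    return ne._items[0]
```

In Haskell or OCaml, hiding `_NonEmpty`'s constructor in the module signature makes this airtight rather than conventional.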
>Referential transparency is not abstraction, in fact, it goes against abstraction. If your black box is transparent in terms of how it manages its state, then it's not really a black box.
Yeah, I've lost you here. Would you mind clarifying?
I've written plenty of very short OOP programs. They don't have to be huge to be effective. The reason you sometimes see very large OOP software and rarely see large FP software is not that FP makes code shorter; it's that FP logic would become impossible to follow beyond a certain size.
My point about black box and referential transparency is that a black box hides/encapsulates state changes (mutations) by containing the state. Referential transparency prevents your function from hiding/encapsulating state changes (mutations) and thus it prevents functions from containing the state which is relevant to them; instead, the relevant state needs to be passed in from some (usually) far-flung outside part of the code... A part of the code which has nothing to do with the business domain which that state is about. To make proper black boxes, state needs to be encapsulated by the logic which mutates it.
Keep in mind that OOP is essentially forced currying. A method that returns an object full of other methods is identical to currying, except that the method isn't returning a single function: it's returning a group of functions that all rely on shared state, which is way more complicated.
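A sketch of that "forced currying" claim, with illustrative names of my own: on one side a constructor returning an object of methods over shared state, on the other plain partial application of a single function.

```python
from functools import partial

def make_taxer_oop(rate):
    """OOP style: returns a group of methods sharing the captured rate."""
    class Taxer:
        def tax(self, amount):
            return amount * rate
        def with_tax(self, amount):
            return amount + self.tax(amount)
    return Taxer()

def tax(rate, amount):
    """FP style: one function; 'currying' is just fixing the first argument."""
    return amount * rate

vat = partial(tax, 0.2)  # a single function, no shared mutable state
```

Both fix `rate` up front; the curried version just hands back one function instead of an object full of them.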
I haven't found a single large OOP program in the line of business that was easy to understand. Quite the contrary of my experience with large FP code bases, of which many exist, to be clear. They are just a lot smaller than what the equivalent OOP code would look like, and I challenge you to refute that with evidence.
I completely disagree about black boxes and think they are actually a complete scourge on software engineering. I should know everything that is relevant to me from a function’s type signature. In languages with pervasive side effects, this is not possible.
For me, the most important principles of software engineering are:
1. Black boxing (in terms of exposing a simple interface for achieving some results and whose implementation is irrelevant).
2. Separation of concerns (in terms of business concerns; these are the ones that can be described in plain language to a non-technical person)
You need these two principles to design effective abstractions. You can design abstractions without following these principles, but they will not be useful abstractions.
Black boxes are a huge part of our lives.
If I want to go on a holiday to a different country, I don't need to know anything about how the internet works, how houses are built or how airplanes work in order to book a unit in a foreign country on AirBnB and fly there. The complexity and amount of detail which is abstracted is unfathomable but absolutely necessary to get the desired results. The complexity is not just abstracted from the users, but even the engineers who built all these different components knew literally nothing about each other's work.
As a user, the enormous complexity behind achieving my goal is hidden away behind very simple interfaces such as an intuitive website UI, train tickets, plane tickets, passport control, maps for location, house keys. These interfaces are highly interoperable and can be combined in many ways to achieve an almost limitless number of goals.
I couldn't explain to anyone anything about how airplanes work but I could easily explain to them how to use a plane ticket to go to a different country.
With programming, it should be the same. The interfaces should be easy to explain to any regular junior developer.
I generally think of spaghetti code as code that has unclear control flow (e.g. GOTOs everywhere, too many instance variables being used to maintain global state, etc.) Currying, plainly, does not cause this.
>1. Black boxing (in terms of exposing a simple interface for achieving some results and whose implementation is irrelevant).
Sure, completely possible in ML-family languages and Haskell. Refer to what I said about opaque types earlier.
>2. Separation of concerns (in terms of business concerns; these are the ones that can be described in plain language to a non-technical person)
Again, nothing in functional languages betrays this. You are talking about code organization at scale, and none of what you have said so far is precluded by using pure functions and modules and such.
>If I want to go on a holiday to a different country, I don't need to know anything about how the internet works, how houses are built or how airplanes work in order to book a unit in a foreign country on AirBnB and fly there. The complexity and amount of detail which is abstracted is unfathomable but absolutely necessary to get the desired results. The complexity is not just abstracted from the users, but even the engineers who built all these different components knew literally nothing about each other's work.
I do not like analogies in general, though for this one I will suggest that you should at least know what the baseline social expectations are of the place you are traveling to. That is, plainly, what I am arguing that functional programming makes clearer and easier to deal with.
>As a user, the enormous complexity behind achieving my goal is hidden away behind very simple interfaces such as an intuitive website UI, train tickets, plane tickets, passport control, maps for location, house keys. These interfaces are highly interoperable and can be combined in many ways to achieve an almost limitless number of goals.
Yes, and underneath that program in a functional programming language are lots of small, carefully composed functions that are often just as applicable to many other problems and problem domains.
>I couldn't explain to anyone anything about how airplanes work but I could easily explain to them how to use a plane ticket to go to a different country.
This is why I don't like analogies. I have no idea what you are talking about here.
>With programming, it should be the same. The interfaces should be easy to explain to any regular junior developer.
What makes functionally-styled APIs hard to explain to a junior developer?
Some time after leaving university, once your statements are no longer graded as correct or incorrect, people like the GP start strongly believing that any nonsense that enters their head is a fundamental truth. The GP's statement is an example of this.
This is a good thing, because I've found in my own experience that it's hard to say what level you're at when implementing something. Implementing a standard deviation function? Is it happening over in-memory data or persistent data? Is it going to happen in parallel when it can? Is it going to be distributed across servers? Suddenly you're back at the high level.
This is the point from your rant which bugs me the most.
Maintainability is quite literally why my team uses an FP language. It is so easy to pick up code you wrote 3 years ago, code someone else wrote 6 months ago, code the guy who left 4 weeks ago wrote, or code 10+ people are working on at the same time, and then continue to maintain that code without fear of misunderstanding the deep complexity already in it on top of the additional complexity you are about to add.
My team doesn't need to waste days doing archeological digs and comprehensions on the 18+ projects we now maintain. We simply grab the code base, modify the code which uses the idiomatic "write small components, build bigger components with those" mentality that comes with writing in a FP language, and then fix the chain of compiler errors along the way until our new feature works.
By attempting to be more precise (through either terminology or code correctness), introducing higher confidence levels (programmer confidence, code operability confidence), and wrangling complexity through well designed idioms (monads, proper effect handling), you end up delivering a large amount of value to your customers which impact their bottom line. Faster feature delivery, lower bug rates, nearly zero risk of data leaks, uncrashable software, maintainable custom software over 5+ year timelines.
I've been in this industry for 20 years now and have seen a wide spectrum of good and bad. Using FP over the last 3 years has definitely moved the bar in the "good" direction a lot further than I originally anticipated, but I suppose YMMV.
"We don't need to read and understand stuff. We just grab the code by the horns and spur it with changes until the compiler stops thrashing. Yeehaw!"
There are myriad things that distinguish long-term maintenance from greenfield development. Like transferring application ownership from one team to another, reverse engineering, adapting to the changes in external systems you integrate with, investigating bug reports and performance issues, doing monitoring, etc. etc. If your only concern in "maintenance" is making some changes while avoiding the kinds of bugs that can be prevented by a static type checker, then something seriously does not add up.
Again, the vast majority of claims made about FP on Hacker News right now (even in this thread) were made about Java in the early '00s. Almost word for word, except for some terminology. Unfortunately, back then I didn't have enough experience to spot the issues with those claims, and people who did have real experience were mostly silent.
At the end of the day, sure, we all have opinions on tools and how they make our lives better. You and I may not agree that FP is the right tool, but I'm not going to make sweeping generalizations that you know nothing about large-scale development and maintenance solely based on your language paradigm choice and a few focused HN comments.
This is exactly backwards. A monad is not a design pattern - a design pattern is an awkward manual reimplementation of a monad (or another category). In OO design patterns the structure behind what you're actually doing is buried under both the clunky type system of most OO languages and the arcane memorization of patterns and their names.
The whole reason design patterns exist is that in C++/Java/Smalltalk/etc the type system is not quite good enough to enforce consistency with certain complex designs - failure management in concurrent systems being an excellent tricky example. In imperative/OO-dominant languages there is inevitably a huge amount of boilerplate around checking nulls, lots of wrapping things in try/catches, and so on. Design patterns are a useful abstraction of this boilerplate in a way that's easy to maintain (and, more importantly, are an intuitive common language for many programmers). But they are no substitute for categories, which allow the compiler to make sure the design pattern is actually properly implemented.
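The null-check boilerplate vs. monadic bind point can be illustrated with a tiny hand-rolled Maybe-style helper (this is a sketch of mine, not a real library API, and the data is made up):

```python
def bind(value, fn):
    """Maybe-style bind: short-circuits on None instead of nested if/try."""
    return None if value is None else fn(value)

users = {"ada": {"address": {"city": "London"}}}

def find_user(name):   return users.get(name)
def get_address(user): return user.get("address")
def get_city(addr):    return addr.get("city")

# Without bind, each step needs its own explicit None check.
# With bind, the checking lives in one place and the steps just chain:
city = bind(bind(find_user("ada"), get_address), get_city)
missing = bind(bind(find_user("bob"), get_address), get_city)
```

In a language where this chaining is checked by the type system (Haskell's Maybe monad, for instance), forgetting a check is a compile error rather than a convention.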
But this stuff is quite complicated. It really does sound like a lot of navel-gazing mathy jargon. But the way you've phrased this makes me wonder - and I am aware this is condescending - if you haven't actually used categories in practice.
My thinking on this has been strongly influenced by this excellent blog series from Mark Seemann: https://blog.ploeh.dk/2017/10/04/from-design-patterns-to-cat...
This blog series is a very very good lower-level introduction from the same blog, with examples in C#/F#/Haskell: https://blog.ploeh.dk/2018/03/22/functors/
That said, you can get pretty fancy in template C++ with category-level type programming.
Tbh I think the whole “category theory” obsession one sees in some parts of online FP evangelism needs to die. Haskell has (endo)functors which are a useful concept for which one needs to know zero category theory (similarly for monads). But otherwise, FP basically only ever has one category which has all the nice properties one could want (ok if you have a weird type system based on a weird logic you might have a slightly different category), so the thing people call category theory is just the theory of the one category you live in. One doesn’t say group theory is category theory just because there is a category of groups.
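The "functor with zero category theory" point fits in a few lines: map is the same structure-preserving operation whether the container is a list or an optional value. A hand-rolled sketch (my names, not a library API):

```python
def fmap_list(f, xs):
    """Apply f inside a list, preserving the list structure."""
    return [f(x) for x in xs]

def fmap_maybe(f, x):
    """Apply f inside an optional value; None passes through untouched."""
    return None if x is None else f(x)
```

Nothing about using these requires knowing what a category is; the shared shape is the whole practical payoff.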
Actual category theory isn’t much about specific categories so much as it is about the relationships between them, and ideas common to many categories (natural transformations, limits, terminal objects, etc).
I’m not saying category theory is bad, but I think to get much out of it one needs more than one category (and hopefully one can think of a category which isn’t a topos too), and one doesn’t tend to come across categories in day to day functional programming. Some definitions and constructions from category theory may be useful in constructing a type system for a new ML-family programming language.
This isn't correct either. Design patterns are simply cocategories.
2. Yeah, monads and applicatives aren't too hard, but really understanding monad transformers well is challenging.
3. I mean, yes, there are a lot of junior-ish devs who see the potential of FP and then spout how much better it is, but isn't that just like complaining about how annoying people are on Twitter? I'm unsure it's meaningful to make this criticism, or at least I'd like an example.
It appears in thousands of talks and blog posts. It’s completely subjective and unquantifiable what you or I think is easier to reason about. It’s largely (entirely?) about aesthetics.
The entire function is way harder to reason about, because I can't tell from the outside what other parts of the world it may have accidentally modified. The lack of basic encapsulation really turns me off to FP.
Supposedly the best FP answer to the problem was lenses, which would drive the complexity of my code up into the stratosphere. Shortly after trying those out, I migrated my project to imperative OO and haven't looked back since.
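For readers wondering what the lens complaint refers to: a lens is essentially a (get, set) pair for one field, where set returns an updated copy instead of mutating, and lenses compose to reach nested fields. A minimal hand-rolled sketch (not a real lens library; all names are mine):

```python
def lens(key):
    """A lens for one dict key: (getter, copying setter)."""
    get = lambda d: d[key]
    set_ = lambda d, v: {**d, key: v}
    return get, set_

def compose(outer, inner):
    """Compose two lenses so the pair drills into nested dicts."""
    og, os = outer
    ig, is_ = inner
    return (lambda d: ig(og(d)),
            lambda d, v: os(d, is_(og(d), v)))

address_city = compose(lens("address"), lens("city"))
get_city, set_city = address_city

user = {"name": "ada", "address": {"city": "London"}}
moved = set_city(user, "Paris")  # new value; `user` is untouched
```

Whether this reads as elegant glue or stratospheric complexity compared to `user.address.city = "Paris"` is pretty much the disagreement in this subthread.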
Anyway, beyond that, the relevant part is not that one function call in isolation but wherever it's getting called from and why it's getting called there. Without more context I don't know how I would solve that, but suffice it to say that getting into lenses and "the entire world" and such sounds massively unnecessary.
"easier to reason about" is not someone's opinion, it's math. Please educate yourself on formal verification methods before drawing broad conclusions including substrings like "no idea" or "mindless repeating".
Possibly interesting: https://overreacted.io/the-bug-o-notation/
Typed functional programming is part of the solution to this. It's not just correctness and modularity (both of which reduce complexity, btw).
Also you're right about the whole monad thing, it is a pattern, and like patterns in OOP, doesn't necessarily reduce complexity.
Maybe that's what the FP people mean by DDD and bounded contexts. If you constrain the complexity into something bite-size that in turn has well-documented interface points, then it all starts getting easier to manage again.
It really seems to me that a big reason Rich Hickey seems so profound is that what he's saying means 10 different things to 10 different people.
It is now generally accepted that modular design is the key to successful programming... However, there is a very important point that is often missed. When writing a modular program to solve a problem, one first divides the problem into subproblems, then solves the subproblems, and finally combines the solutions. The ways in which one can divide up the original problem depend directly on the ways in which one can glue solutions together. Therefore, to increase one’s ability to modularize a problem conceptually, one must provide new kinds of glue in the programming language. Complicated scope rules and provision for separate compilation help only with clerical details — they can never make a great contribution to modularization
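The "new kinds of glue" in the passage above (which is from Hughes' "Why Functional Programming Matters") are mostly higher-order functions. A minimal sketch with names of my own choosing:

```python
from functools import reduce

def compose(f, g):
    """Function composition: the simplest kind of glue."""
    return lambda x: f(g(x))

total      = lambda xs: reduce(lambda a, b: a + b, xs, 0)  # fold as glue
doubled    = lambda xs: [2 * x for x in xs]
double_sum = compose(total, doubled)  # a new solution from two old ones
```

The point is that `double_sum` is built by gluing existing solutions together, not by writing new control flow.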
I now tend to see all the other design principles as ways to improve modularity. Everything else is mostly just fluff.