
Why functional programming matters (1990) [pdf] - palerdot
https://www.cs.kent.ac.uk/people/staff/dat/miranda/whyfp90.pdf
======
BoiledCabbage
If anyone is looking for a really eye-opening video, take a look at this.
"Domain Modeling Made Functional"

It's applying Functional Programming to plain old Enterprise line-of-business
apps in F#. The author shows why it's so much simpler, simply by walking
through an example case by case. It's hard not to be persuaded. It leaves
behind all of the lofty, crazy maths you see in a lot of FP presentations,
and just shows how it's a simpler technique for solving real-world problems.

Domain Modeling Made Functional -
[https://www.youtube.com/watch?v=Up7LcbGZFuo&feature=youtu.be...](https://www.youtube.com/watch?v=Up7LcbGZFuo&feature=youtu.be&t=508)
(Watch on 1.25x speed)

It's really impressive, and it's surprising that more people promoting FP
don't show its tangible benefits and simplicity instead of the
theoretical/abstract way it's usually presented.

It's also a great introduction to 'Domain Driven Design' and data modeling
regardless of the language you use.

~~~
mettamage
After watching it for 20 minutes, I want to know more.

Where can I learn more about this magic called functional programming? Is
there a good course? I want to use it for BLOBAs (as he calls them), as I'm
not too math-heavy.

Also, to what extent does JS support FP? I use anonymous functions/callbacks
(I mean red functions, pun intended) and closures (even as poor man's objects,
lol) but I don't really know why that is FP and what makes it FP.

This is a bit too big of an ask, now that I think about it. I sound like a
person who has never programmed before asking: so, what programming language
should I use?

Is there anything good around for JS to create BLOBAs?

~~~
BoiledCabbage
So I didn't include it in the original link, but he has a book he wrote under
the same title (Domain Modeling Made Functional). I've been going through it
now. It's a much more in-depth treatment of the same topics. He walks through
a small product from start to finish.

Additionally, he does a really great job of walking through two things that I
don't think are covered well enough at all for beginning and intermediate
programmers (and that even experienced ones like myself may have missed along
the way).

1. How to effectively gather customer requirements: what questions to ask,
what things to dig into, how to organize them. A simple walkthrough, better
than the hand-wavy stuff most people do when gathering requirements.

2. How to model a domain / problem space before implementing: how to
effectively reason about your entities, services and actions, and how to
iterate on these with your customer before coding (rough sketch below).
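
Roughly, the flavor of point 2 is "make illegal states unrepresentable"
using plain algebraic data types. Here's a minimal sketch of my own (the
names are invented, and it's in Haskell rather than the book's F#):

    -- Hypothetical domain sketch; these types are not from the book.
    newtype CustomerId = CustomerId Int
    
    data PaymentInfo = PaymentInfo { amount :: Int, method :: String }
    
    -- An order is either unpaid or paid, and only a paid order carries
    -- payment info, so "paid but missing payment details" can't even be
    -- constructed.
    data Order
      = UnpaidOrder CustomerId [String]
      | PaidOrder   CustomerId [String] PaymentInfo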

I seriously wish I had run across this when I first started coding. A really
great collection of tangible, actionable advice.

I've never done it in JavaScript so I won't try to guess, but the first two
parts of the three-part book are really applicable regardless of the
language. I'll have to finish the third to see how it is.

Domain Modeling Made Functional - Scott Wlaschin

    
    
      https://pragprog.com/book/swdddf/domain-modeling-made-functional
    
      https://www.amazon.com/Domain-Modeling-Made-Functional-Domain-Driven/dp/1680502549
    

(I have zero affiliation with the author and get nothing from any links)

His blog
[https://fsharpforfunandprofit.com/](https://fsharpforfunandprofit.com/)

~~~
mettamage
Thanks! I bought the book.

------
martin1975
On one end we have FP, which is great for expressiveness and its declarative
way of getting things done, side-effect free, with nearly Perl-like terseness
:)... but on the opposite end you have von Neumann CPU architectures, which
know nothing about FP. The utopian "silver bullet" is of course always
something in the middle: a language that can give you the power and
expressiveness of FP without sacrificing all the things that FP style compels
you to give up, like mutability, performance optimization/fine-tuning, etc.

Personally, I find languages like Scala to be right there in the middle, where
neither performance nor FP expressiveness is sacrificed severely.

That makes Scala flippin' complex, like C++ was, but it is probably the best
attempt to bridge this gap if we all just want to learn "one language to rule
them all": it incorporates both FP and imperative/mutable-style techniques
that milk the hardware for all it's worth, as well as giving you the
power/expressiveness of FP.

Of course one can write an operating system in Haskell... just as one can
imitate functional-style programming in C... or we can come up with some
"middle ground" language that takes the best from both worlds and call that
"the best".

My point is, this quest for the best language/paradigm never ends, and it
always shifts across the years as hardware develops and as we discover better
ways of doing more with less (which is the crux of FP).

IMHO, ultimately you have to pick an appropriate language (tool) for the job
at hand. If we forget that languages/programming paradigms are just tools...
then, you'll forgive me for saying, we become tools ourselves. Or... fools,
rather.

To me, whoever has internalized this trade-off is a true zen master of
programming/engineering... not the one who espouses one paradigm/language
over another. That's just me tho... YMMV.

~~~
lalaithion
Is Scala really that much more performant than Haskell? I haven't done much
high-performance computing in either, but the Scala compiler is significantly
slower than the Haskell compiler (both are self-hosted).

~~~
randomidiot123
Slower compilation doesn't mean slower execution. Scala has "Native" and JVM
implementations with different performance characteristics. Scala is a hybrid
that allows imperative code with a lower level of abstraction than Haskell,
closer to hardware. Thus it is potentially more performant.

~~~
jolux
Sure, but has it been benchmarked? From what I've seen on the benchmarks game
GHC is neck and neck with the JVM in most stuff, which is damn impressive.
That is admittedly a silly comparison, though.

~~~
randomidiot123
Of course they have been benchmarked. Both compile to native code, so
performance can be roughly the same if the algorithms are the same. However,
figuring out how to design a Haskell program that reaches maximum performance
is very difficult, and the code probably won't be idiomatic.

~~~
jolux
> Both compile to native code, so performance can be roughly the same if the
> algorithms are the same.

Can and will are very different statements here though, particularly as the
quality of a compiler makes an enormous difference in the performance of the
resulting code.

------
brudgers
Some previous discussions

[https://news.ycombinator.com/item?id=14138196](https://news.ycombinator.com/item?id=14138196)

[https://news.ycombinator.com/item?id=13129540](https://news.ycombinator.com/item?id=13129540)

[https://news.ycombinator.com/item?id=9502049](https://news.ycombinator.com/item?id=9502049)

------
mpweiher
This is an odd article, brilliant in parts, somewhat less so elsewhere.

The analysis of why we need modularity, and of why this means we need new
kinds (plural) of "glue", is spot on. Note the plural. But then he goes on to
present exactly two kinds of glue: the smallest N for which the plural is
justifiable.

I contend that he was right that we need lots of different kinds of glue,
which means that the glue must be user-definable. And what is "glue"? Well,
architectural connectors, that's what.

 _Why Architecture Oriented Programming Matters_
[https://blog.metaobject.com/2019/02/why-architecture-oriente...](https://blog.metaobject.com/2019/02/why-architecture-oriented-programming.html)

------
h91wka
Purely functional algorithms and data structures are typically slower.
Imperative code is also more concise than equivalent functional code.

The only real benefit of functional programming is that it makes code much
easier to reason about. But that benefit is so great that in most cases I am
ready to pay the price.

P.S. Just to clarify my position for the commenters who want to convert me: I
write code in functional languages most of the time for a living, so I know
well enough how the ST monad works, the "O(log n) == O(1)" meme, and so on.
But I also have some background in HPC, where I used imperative languages. So
I have plenty of experience in both paradigms, and I know exactly what their
weak and strong points are.

~~~
rebeccaskinner
I find this to be largely untrue. There are certainly classes of algorithms
and data structures that benefit from mutation for performance reasons, but at
that point you can simply implement mutation in your pure functional language
(e.g. with monads). Even ignoring that case, there are a great many data
structures and algorithms that don’t meaningfully suffer from a lack of
mutation at all, especially when the compiler is able to leverage assumptions
from being in a pure language.
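
A minimal sketch of that pattern (the function name is mine): mutation
happens locally inside the ST monad, but from the outside the function is
pure.

    import Control.Monad.ST (runST)
    import Data.STRef (newSTRef, modifySTRef', readSTRef)
    
    -- Imperative on the inside, pure from the caller's point of view.
    sumST :: [Int] -> Int
    sumST xs = runST $ do
      acc <- newSTRef 0
      mapM_ (\x -> modifySTRef' acc (+ x)) xs
      readSTRef acc
    
    main :: IO ()
    main = print (sumST [1 .. 10])  -- 55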

This is borne out in practice, where Haskell, for example, may not
necessarily outperform highly optimized C, but it is frequently able to keep
pace with Java and will almost always beat imperative Python code.

In terms of being concise, it's laughable to me to even make the comparison.
I've written a lot of C, Python and Ruby in my career in addition to Haskell,
and Haskell wins on expressiveness to such an absurd degree that the
comparison feels unfair.

The one area where I think FP, and Haskell in particular, suffers is being
hard to reason about operationally. Performance and memory usage are of
course notoriously hard to reason about in lazy languages, but it can also be
challenging to understand what's happening in large code bases that may have
some deep MTL stack or be using multiple interpreters over some free monad,
since the semantics of how computations are being evaluated can often be
defined very far from the computation itself.

~~~
thsealienbstrds
When it comes to Haskell, I want to add that it becomes even harder to reason
about once you start doing parallel programming. You might expect that doing
that extra data type conversion in parallel rather than in sequence is going
to make things faster, but you can find that the GC needs to do more work for
whatever reason, and it ends up actually becoming slower. It's very
counterintuitive. It's a shame, because writing parallel code in Haskell can
be very elegant; if only the behavior were more obvious, it would be so
great.
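
For instance, the elegant half looks roughly like this (a sketch assuming
the parallel package; the names are mine); the hard half is predicting what
the runtime and GC actually do with it:

    import Control.Parallel.Strategies (parMap, rdeepseq)
    
    expensive :: Int -> Int
    expensive n = sum [1 .. n]
    
    -- One combinator turns a sequential map into a parallel one; whether
    -- it's actually faster depends on spark granularity and GC behavior.
    main :: IO ()
    main = print (parMap rdeepseq expensive [100000, 200000, 300000])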

------
AlexanderDhoore
For me (and I'm going to sound like a massive fanboy here) Rust has surpassed
functional programming. It gives me mutable data when needed and immutable
data when shared, so ONLY having immutable data structures now seems stupid.
We can have the best of both worlds.

Read this if you don't follow: "In Rust, ordinary vectors are values"
[http://smallcultfollowing.com/babysteps/blog/2018/02/01/in-r...](http://smallcultfollowing.com/babysteps/blog/2018/02/01/in-rust-ordinary-vectors-are-values/)

~~~
marcosdumay
From the abstract, the main point of the article is:

> ...higher-order functions and lazy evaluation, can contribute significantly
> to modularity.

You cannot have lazy evaluation without referential transparency, and you
cannot get referential transparency with mutable variables.

Rust makes good use of higher-order functions, but gets nothing near the
modularity of Haskell code.
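
A tiny illustration of the link (my own example): laziness means it must not
matter when, or whether, an expression is evaluated, and that is only safe
when evaluating it can't change anything else.

    -- `bomb` would loop forever if it were ever forced.
    bomb :: Int
    bomb = bomb
    
    firstOf :: Int -> Int -> Int
    firstOf x _ = x
    
    main :: IO ()
    main = print (firstOf 42 bomb)  -- prints 42; bomb is never evaluated

If evaluating `bomb` could also mutate shared state, skipping it would
silently change the program's meaning.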

~~~
rtfeldman
> Rust makes good use of higher-order functions, but gets nothing near the
> modularity of Haskell code.

This has not been my experience.

After years of doing pure functional programming (in Elm) professionally, I
was surprised how much Rust felt like writing in an ML family language.

I still prefer referential transparency (though I don't think it would have
been the right design for Rust), but "nothing near" does not fit my
experience. I'd say the ergonomics of borrow checking versus a GC (which you
don't have to think about) is the bigger gap.

John Hughes is a big believer in the value of laziness. There are many of us
who believe laziness turned out to be a dead end; it makes some code
nicer/more elegant, makes performance optimization much harder, and brings
space leaks into your life. Are those costs worth the benefits? Not even close
to worth it, if you ask me. I'll gladly take the referential transparency and
pass on the laziness.

Almost all the researchers who started working on Haskell specifically because
they wanted to explore the power of laziness...ended up pivoting to research
type theory instead. I don't think that's a coincidence.

~~~
zenhack
Don't get me wrong, I like Elm a lot, but the drop in expressiveness and
ability to build abstractions relative to Haskell is substantial. Especially
when you're looking at modularity, there are a bunch of things that Elm can't
abstract out that _both_ Rust and Haskell can manage just fine.

I haven't used Rust heavily enough to comment on how it compares in great
detail, but comparing to Elm as a proxy for Haskell doesn't really work.

Frankly, paradigms are a really lousy way to think about languages. I wrote a
series of blog posts about this[1], but this opening lecture from one of
Brown's PL courses does a better job, I think, of making the point:

[https://www.youtube.com/watch?v=3N__tvmZrzc](https://www.youtube.com/watch?v=3N__tvmZrzc)

It's useful to talk about what, say, GC, laziness, lifetimes, ownership,
typeclasses/traits, higher-kinded types, higher-rank types, variants,
Elm-style records, etc. do to a language, and how they compose, but I think
you can't go very far talking about how "paradigms" compare.

[1]: [https://zenhack.net/2018/07/14/three-funerals-in-the-name-of...](https://zenhack.net/2018/07/14/three-funerals-in-the-name-of-clarity-3-systems.html)

~~~
rtfeldman
> It's useful to talk about what, say, GC, laziness, lifetimes, ownership,
> typeclasses/traits, higher-kinded types, higher-rank types, variants,
> Elm-style records, etc. do to a language, and how they compose, but I think
> you can't go very far talking about how "paradigms" compare.

Sure. My experience has been:

* GC, lifetimes, and ownership are all high-benefit and high-cost. The cost with GC is at runtime (where the cost is so high that in many domains GC is not tolerated at all; in many others, of course, we take it for granted as fine), and the high costs of lifetimes and ownership are at development time.

* Variants and records are high-benefit, low-cost.

* Higher-rank types are low-cost, low-benefit.

* Laziness and higher-kinded types are both features with costs that significantly outweigh their benefits.

It sounds like you disagree with the last bullet point. If so, then either
we've had different experiences or we walked away with different conclusions
from them.

~~~
zenhack
> * Laziness and higher-kinded types are both features with costs that
> significantly outweigh their benefits.

> It sounds like you disagree with the last bullet point. If so, then either
> we've had different experiences or we walked away with different conclusions
> from them.

I more or less agree on laziness (at least lazy-by-default; having a lazy type
as found in OCaml available is a big win for little downside).

Re: Higher-kinded types: I'm curious as to what you think the high costs are?
My impression is that they've mostly been left out of Elm due to pedagogical
concerns. Is it just that or are there other things?

~~~
rtfeldman
Learning curve is a very high cost by itself; HKP is _the_ reason Haskell is
notoriously difficult to learn. (Compare to Elm, which is also a
referentially transparent typed ML with parametric polymorphism and row
types, and with nearly identical syntax to Haskell... and which is
notoriously _easy_ to learn.) Haskell beginners don't need to get into GADTs,
type families, and higher-rank types, but HKP is practically unavoidable on
the road to understanding a Haskell program that prints Hello World.

Another cost is in standard library complexity. You can't have HKP and not
have a stdlib with Functor/Monoid/Monad etc. As Scala has demonstrated, if you
have HKP but don't put these in the stdlib, a large faction will emerge
pushing an alternative stdlib that has them. A larger, more complex stdlib is
a cost, and so is a fractured community and ecosystem; HKP means you'll have
to pick one of those two.

API design is another. Without HKP you write a function that takes a List.
With HKP you now need to decide: should it actually take a List, or is it
better to take a Functor/Semigroup/Monoid/Applicative/Monad instead?

If you choose one of the more generic ones, now it takes more mental steps to
collapse the indirection when reading it. (1. I have a Maybe Int. 2. This
function expects an `s`. 3. `s` is constrained by `Semigroup s`. 4. Can I pass
a Maybe Int to something expecting a Semigroup? Compare to "This function
takes a `Maybe a`", and multiply that small delta of effort by a massive
coefficient; this is something everyone who reads these types will do many,
many times.)
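
To make that concrete (the function names are mine):

    -- Concrete version: obvious what it takes.
    totalList :: [Int] -> Int
    totalList = sum
    
    -- Generalized version: more reusable, but every reader now resolves
    -- "which t, and what does Foldable allow?" at each call site.
    totalAny :: Foldable t => t Int -> Int
    totalAny = foldr (+) 0
    
    main :: IO ()
    main = print (totalAny [1, 2, 3], totalAny (Just 1))  -- (6,1)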

This indirection also has implementation costs; in theory you could make docs
and error messages about as nice with HKP involved as without, but there's an
implementation cost there, and it seems like it must be pretty steep if you
stack up the quality of error messages and docs in languages with HKP against
typed languages without it.

So I'd say it's one huge cost (automatic induction into the highest tier of
learning curve steepness), one big cost (either a larger and more complex
stdlib or fractured community), and several smaller costs with high
coefficients because they come up extremely often.

Yeah there are benefits too, but I don't think they get anywhere near
outweighing the costs.

~~~
sullyj3
To indulge in fisking a tiny bit -

> HKP is practically unavoidable on the road to understanding a Haskell
> program that prints Hello World.
    
    
      main :: IO ()
      main = putStrLn "Hello World" 
    

Doesn't require an understanding of HKP at all. There isn't even any LKP. I
do agree that it's necessary for a productive employed Haskeller, but not for
a beginner playing around with simple command-line apps. There's a
distinction to be made between understanding enough to make it run (just
label the IO bits with IO and think of 'do' as kind of like imperative
programming, but not really) and understanding more deeply, which only
becomes necessary later.

> With HKP you now need to decide: should it actually take a List, or is it
> better to take a Functor/Semigroup/Monoid/Applicative/Monad instead?

It doesn't seem like a decision that would involve a lot of cognitive
overhead. In general I'd probably just go with whatever type GHC infers.
Failing that, it kind of arises naturally from what the function is _about_.
Is it about reducing the List to a single value? Use Foldable. Is it about
transforming the elements of the list? Use Functor. Is it about
nondeterminism, but the code isn't necessarily specific to that computational
context? Use Applicative if there are no sequential dependencies (which the
type checker knows anyway) and Monad otherwise. OK, maybe it seems a little
complex when you write it out, but it's really fairly instinctual.
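
A quick (invented) rendering of that decision table in code:

    -- Transforming elements: Functor is enough.
    double :: Functor f => f Int -> f Int
    double = fmap (* 2)
    
    -- Independent effects, no sequential dependency: Applicative.
    pairUp :: Applicative f => f a -> f b -> f (a, b)
    pairUp fa fb = (,) <$> fa <*> fb
    
    -- A later step inspects an earlier result: that needs Monad.
    addIfPositive :: Monad m => m Int -> m Int -> m Int
    addIfPositive mx my = do
      x <- mx
      if x > 0 then fmap (+ x) my else return x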

------
gambler
Rant ahead.

Functional programming discussions on HN are pretty depressing. Many of the
statements about FP that I see here right now are the same old shit I heard
about Java in the mid-00s. You just need to mentally translate some
buzzwords, but the essence is the same. It seems like the software industry
is just running in circles: something gets hyped, people jump on it, fail,
then search for the next bandwagon.

Some examples:

1. Endless yammering about low-level correctness, as if it were the biggest
problem in software engineering right now. In reality, most domains don't
need perfection. They just need a reasonably low defect rate, which is not
that hard to achieve if you know what you're doing.

2. Spewing of buzzwords, incomprehensible tirades about design patterns. FP
people don't use the term "design pattern" often, but that's what most
monadic stuff really is, and much of it is rather trivial once you cut
through the terminology. (Contrast this with talks by someone like Rich
Hickey, who manages to communicate complex and broad concepts with no
jargon.)

3. People who talk about "maintainability" of things while clearly never
having had to maintain a large body of someone else's code.

Etc.

---

The #1 problem in software right now is not correctness or modularity or some
other programming buzzword. It's the insane, ever-growing level of complexity
and the resulting lack of human agency affecting both IT professionals and
users.

~~~
cryptica
Amen. When I read that paper, it was clear that the author's definition of
modularity was very different from my own.

When I think about an algorithm like merge sort, mini-max decision trees or
other low-level algorithms, the concept of modularity doesn't even enter my
head. It doesn't make any sense to modularize an algorithm because it is an
implementation detail; not an abstraction and not a business concern.

Modularity should be based on high level business concerns and abstractions.
The idea that one should modularize low-level algorithms shows a deep
misunderstanding of what it means to write modular software in a real-life
context outside of academia.

It seems that FP induces confusion in the minds of its followers by blurring
the boundary between implementation details and abstractions. OOP, on the
other hand, makes the difference absolutely clear. In fact, the entire premise
of OOP is to separate abstraction from implementation.

Referential transparency is not abstraction, in fact, it goes against
abstraction. If your black box is transparent in terms of how it manages its
state, then it's not really a black box.

~~~
jolux
>Modularity should be based on high level business concerns and abstractions.
The idea that one should modularize low-level algorithms shows a deep
misunderstanding of what it means to write modular software in a real-life
context outside of academia.

Modularity in functional programming languages penetrates to the lowest level.
Functional programming encourages the composition of powerful, general
functions to accomplish a task, as opposed to the accretion of imperative
statements to do the same. With currying, a function that takes four arguments
is trivially _also_ four separate functions that can be further composed. The
facilities for programming in the large are also arguably more general and
expressive than in OOP languages: take a look at a Standard ML-style module
system, where entire modules can be composed almost as easily as functions.
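
For example (invented names), a curried function is already a whole family
of more specialized functions via partial application:

    shipOrder :: String -> String -> Int -> Double -> String
    shipOrder customer address qty price =
      customer ++ ": " ++ show qty ++ " x " ++ show price ++ " -> " ++ address
    
    -- Partial application specializes without any wrapper boilerplate.
    forAlice :: String -> Int -> Double -> String
    forAlice = shipOrder "alice"
    
    forAliceHome :: Int -> Double -> String
    forAliceHome = forAlice "12 Main St"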

>It seems that FP induces confusion in the minds of its followers by blurring
the boundary between implementation details and abstractions. OOP, on the
other hand, makes the difference absolutely clear. In fact, the entire premise
of OOP is to separate abstraction from implementation.

I'm not sure I understand you here entirely, but implementation details
_matter_. Is this collection concurrency safe? Is this function going to give
me back a null? Is it dependent on state outside its scope that I don't
control? Etcetera. Furthermore, when it's necessary to hide implementation
details, it's still eminently possible. Haskell and OCaml support exporting
types as opaque except for the functions that operate on them in their own
module, which is at least as powerful as similar functionality in OOP
languages.
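
A minimal sketch of that opaque-type pattern in Haskell (the module and
names are invented):

    module NonEmptyName (NonEmptyName, mkNonEmpty, getName) where
    
    -- The constructor is not exported, so outside code can only obtain
    -- a NonEmptyName through mkNonEmpty; the representation stays hidden.
    newtype NonEmptyName = NonEmptyName String
    
    mkNonEmpty :: String -> Maybe NonEmptyName
    mkNonEmpty "" = Nothing
    mkNonEmpty s  = Just (NonEmptyName s)
    
    getName :: NonEmptyName -> String
    getName (NonEmptyName s) = s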

>Referential transparency is not abstraction, in fact, it goes against
abstraction. If your black box is transparent in terms of how it manages its
state, then it's not really a black box.

Yeah, I've lost you here. Would you mind clarifying?

~~~
cryptica
Currying is just another example of poor abstraction. You have a function
which returns another function, which may be passed around to a different
part of the code and then called, and it returns yet another function...
abstraction doesn't get any leakier than this. It literally encourages
spaghetti code. I despise this aspect of FP: code ends up passed around all
over the place, and keeping track of what came from where is a nightmare.

I've written plenty of very short OOP programs. They don't have to be huge to
be effective. The reason you sometimes see very large OOP software and rarely
see large FP software is not that FP makes code shorter; it's that FP logic
would become impossible to follow beyond a certain size.

My point about black boxes and referential transparency is that a black box
hides/encapsulates state changes (mutations) by containing the state.
Referential transparency prevents your function from hiding/encapsulating
state changes (mutations) and thus it prevents functions from containing the
state which is relevant to them; instead, the relevant state needs to be
passed in from some (usually) far-flung outside part of the code... A part of
the code which has nothing to do with the business domain which that state is
about. To make proper black boxes, state needs to be encapsulated by the logic
which mutates it.

~~~
jolux
Can you explain why you see currying as an abstraction leakage? What
implementation details does it betray?

I haven’t found a single large OOP program in the line of business that was
easy to understand. Quite contrary to my experience with large FP code bases,
of which _many_ exist, to be clear. They are just a lot smaller than what
equivalent OOP code would look like, and I challenge you to refute that with
evidence.

I completely disagree about black boxes and think they are actually a complete
scourge on software engineering. I should know everything that is relevant to
me from a function’s type signature. In languages with pervasive side effects,
this is not possible.

~~~
cryptica
I thought my example (calling a function to get another function, passing it
to some other part of the code, and calling it there) was enough to
illustrate the kind of confusion and disorganization that currying can cause.

For me, the most important principles of software engineering are:

1. Black boxing (in terms of exposing a simple interface for achieving some
result, whose implementation is irrelevant).

2. Separation of concerns (in terms of business concerns; these are the ones
that can be described in plain language to a non-technical person).

You need these two principles to design effective abstractions. You can design
abstractions without following these principles, but they will not be useful
abstractions.

Black boxes are a huge part of our lives.

If I want to go on a holiday to a different country, I don't need to know
anything about how the internet works, how houses are built or how airplanes
work in order to book a unit in a foreign country on AirBnB and fly there. The
complexity and amount of detail which is abstracted is unfathomable but
absolutely necessary to get the desired results. The complexity is not just
abstracted from the users, but even the engineers who built all these
different components knew literally nothing about each other's work.

As a user, the enormous complexity behind achieving my goal is hidden away
behind very simple interfaces such as an intuitive website UI, train tickets,
plane tickets, passport control, maps for location, house keys. These
interfaces are highly interoperable and can be combined in many ways to
achieve an almost limitless number of goals.

I couldn't explain to anyone anything about how airplanes work but I could
easily explain to them how to use a plane ticket to go to a different country.

With programming, it should be the same. The interfaces should be easy to
explain to any regular junior developer.

~~~
jolux
>I thought my example (calling a function to get another function, passing it
to some other part of the code, and calling it there) was enough to
illustrate the kind of confusion and disorganization that currying can cause.

I generally think of spaghetti code as code that has unclear control flow
(e.g. GOTOs everywhere, too many instance variables being used to maintain
global state, etc.) Currying, plainly, does not cause this.

>1. Black boxing (in terms of exposing a simple interface for achieving some
result, whose implementation is irrelevant).

Sure, completely possible in ML-family languages and Haskell. Refer to what I
said about opaque types earlier.

>2. Separation of concerns (in terms of business concerns; these are the ones
that can be described in plain language to a non-technical person)

Again, nothing in functional languages prevents this. You are talking about
code organization at scale, and none of what you have said so far is
precluded by using pure functions and modules and such.

>If I want to go on a holiday to a different country, I don't need to know
anything about how the internet works, how houses are built or how airplanes
work in order to book a unit in a foreign country on AirBnB and fly there. The
complexity and amount of detail which is abstracted is unfathomable but
absolutely necessary to get the desired results. The complexity is not just
abstracted from the users, but even the engineers who built all these
different components knew literally nothing about each other's work.

I do not like analogies in general, though for this one I will suggest that
you should at least know what the baseline social expectations are of the
place you are traveling to. That is, plainly, what I am arguing that
functional programming makes clearer and easier to deal with.

>As a user, the enormous complexity behind achieving my goal is hidden away
behind very simple interfaces such as an intuitive website UI, train tickets,
plane tickets, passport control, maps for location, house keys. These
interfaces are highly interoperable and can be combined in many ways to
achieve an almost limitless number of goals.

Yes, and underneath that program in a functional programming language are lots
of small, carefully composed functions that are often just as applicable to
many other problems and problem domains.

>I couldn't explain to anyone anything about how airplanes work but I could
easily explain to them how to use a plane ticket to go to a different country.

This is why I don't like analogies. I have no idea what you are talking about
here.

>With programming, it should be the same. The interfaces should be easy to
explain to any regular junior developer.

What makes functionally-styled APIs hard to explain to a junior developer?

------
iflywithbook
One of my favorite papers. Incidentally, although I'd spent my entire career
chasing some elusive notion of "good design", this paper was the first I'd
seen to explicitly define "good design" as "modular":

> It is now generally accepted that modular design is the key to successful
> programming... However, there is a very important point that is often
> missed. When writing a modular program to solve a problem, one first
> divides the problem into subproblems, then solves the subproblems, and
> finally combines the solutions. The ways in which one can divide up the
> original problem depend directly on the ways in which one can glue
> solutions together. Therefore, to increase one’s ability to modularize a
> problem conceptually, one must provide new kinds of glue in the programming
> language. Complicated scope rules and provision for separate compilation
> help only with clerical details — they can never make a great contribution
> to modularization.

I now tend to see all the other design principles as ways to improve
modularity. Everything else is mostly just fluff.

------
fwguru
I think functional programming is a useful (and probably fun, for some
people) exercise in seeing how far you can go when solving problems with a
limited or "pure" set of tools. We got, and are still getting, a lot of
insight from it. It's like forcing a boxer to box with only one hand while
the other is tied behind his back: I'm sure he'd learn quite a few new tricks
to avoid getting battered in the ring that way. Things do get weird when
people start forgetting that the world is a bigger place, that there are a
lot of ways to do the same thing, and that reality doesn't care about purity.
That makes some people bitter, and they dig down even harder into their
limited world and start evangelizing it even harder to signal their
smartness. In their bitterness and anger they do not see that functional
programming has already made it big time: almost every important language
today supports multiple paradigms, including functional.

------
bitL
If it mattered we wouldn't have needed articles like this. It's a useful niche
for certain tasks, that's about it. Certain types of people with certain
thinking patterns strongly prefer it, other people don't. :shrug:

~~~
willtim
People should learn a functional language, even if they do not intend to ever
use it professionally. Languages are tools for thought, and if you only know
imperative programming, it will limit your thinking (for example, one well-
respected OOP programmer on Stack Overflow seriously suggested modelling a
bank account with a mutable number). Functional programming ideas are gaining
traction and have recently made big contributions to distributed computation,
filesystems and databases. Don't get left behind by dismissing it all as
niche. You'll also get a preview of the upcoming Java and C# features.

~~~
bitL
I agree; I'd recommend everybody master imperative, OOP, functional, logic
and constraint-based declarative languages. They are all useful, and certain
things are much easier in one type or another.

