The Functional Style (2018) (codurance.com)
135 points by yogthos on March 22, 2019 | 92 comments


Given the comment section is devoted to definitions, I'll add mine: the single most important characteristic of proper FP is _strictly segregated side effects_.

At a high level, many programs in e.g. Clojure are indistinguishable from OOP programs. Functional composition and immutability are just superficial details - another 'skin' containing the same stuff.

Does it really matter if you expressed something as a `for` or as a `reduce`? Is local mutability distinguishable from immutability?

Worth noting, real OOP webapps do very little in-process mutability. They're 'functional': request in, response out. State actually lives in Redis/Postgres so the actual mutability story is the same for FP and OOP langs.

That's not to say this functional-ish style is wrong or wasteful. It's a nice step forward. But we really ought to appreciate "strictly segregated side effects", namely actually pure functions, without exceptions.

There's no such enforcement in a great deal of FP langs, and IMO there really should be (at least in the form of a lightweight linter) if we want FP to be genuinely superior.
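To make the idea concrete, here's a minimal sketch in JavaScript (the names and the `db` interface are illustrative, not from any particular framework): all effects live in one thin shell, and the business logic under it is pure.

```javascript
// Pure core: no IO, no mutation of shared state. Trivial to test.
const applyDiscount = (order, percent) => ({
  ...order,
  total: order.total * (1 - percent / 100),
});

// Impure shell: the only place that touches the outside world.
// `db` is a hypothetical persistence interface.
async function handleRequest(db, req) {
  const order = await db.load(req.orderId); // effect: read
  const updated = applyDiscount(order, 10); // pure
  await db.save(updated);                   // effect: write
  return updated;
}
```

A linter enforcing this split only needs to check that nothing below the shell imports an effectful module.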


The single most important characteristic of functional programming is that all control flow routing is done by function calls.

Functional programming can include side effects.

This is still basically functional:

  1> (let ((cntr 0))
       (list (mapcar (lambda (x) (inc cntr) (* x x)) '(1 2 3)) cntr))
  ((1 4 9) 3)
Just not "purely functional". Functional means we work mostly by applying arguments to functions, and avoiding loops, ifs and gotos in favor of canned mapping operations that work with sequences and such. If an assignment sneaks in here and there, it's still functional.

   (if antecedent consequent alternative)
is not functional because control flow is being diverted by a special operator without the use of functions. But if we have a function iff:

   (iff f-antecedent f-consequent f-alternative)
such that iff will call the f-antecedent function and then either call f-consequent or f-alternative based on what f-antecedent returns, then that is functional.
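A sketch of that `iff` in JavaScript (hypothetical, just to illustrate the point): callers route control flow entirely through function calls, and the one primitive conditional is hidden inside `iff` itself.

```javascript
// Branching via functions: both branches are thunks, so only the
// chosen one is ever evaluated, just like a special-form `if`.
const iff = (fAntecedent, fConsequent, fAlternative) =>
  fAntecedent() ? fConsequent() : fAlternative();

const describe = n =>
  iff(() => n >= 0, () => "non-negative", () => "negative");
```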

Functional just means "obsessed with functions"; without further qualification, it has nothing to do with purity. That's just certain politico-religious camps trying to hijack the word for their own agenda.


> the single most important characteristic of proper FP is _strictly segregated side effects_.

This is one of the best definitions I've come across.


If we have pure functions, that is not "strictly segregated side effects"; pure functions have no side effects, period.

We can segregate side effects very well in imperative code.

The segregation of side effects is clearer in a strictly evaluated imperative language than in a lazily evaluated functional language.


The reason purity ([0]) falls flat is that the pure approach only handles 90% of problems efficiently. The remaining 10% take more effort, so you don't actually see a 90% reduction in cognitive burden but perhaps a 70% reduction. If there were an escape hatch for the remaining 10% of problems, the cognitive burden would be reduced by the full 90%. But purists don't like escape hatches, because they think all the benefits of purity are then gone, as if it were an all-or-nothing thing (it isn't).

[0] It doesn't really matter what type of purity we are talking about here. Whether it's about pure functions as in FP or Rust code that never depends on unsafe code.


I find the author's definition of functional programming really insufficient. FP should not be defined in terms of immutable state, but in terms of mathematical functions as the basic form of abstraction, with programs composed as the evaluation of a single expression, in the spirit of the lambda calculus. Immutable state is just a corollary.

With that in mind, you really can't do FP without automatic currying and the function composition and application operators (`.` and `$` in Haskell). Without these, you lack the key components needed to work with your only means of abstraction. Language authors seem to think lambdas are enough, but if you can't compose functions to build new functions, or partially apply them to get specific versions of behavior, then you aren't going to get very far.


This is a generic "no true Scotsman" argument. Not everybody shares the same definition of FP and while "My definition of FP is what Haskell gives me, and the author's definition is more akin to what Clojure and Elixir give them" is a correct statement, it's also not a very insightful point.

You can't deny that there are lots of languages and techniques that people call "functional" where the key ingredient isn't "programs composed as an evaluation of a single expression" but something else (such as "immutable state").


OOP's got a similar problem: When the idea was originally dreamed up, message passing was an essential part of the definition. This wasn't an arbitrary thing, message passing was a precondition of the characteristics that people were trying to achieve in the first place.

Nowadays, the definition's been deeply watered down, and there are vanishingly few languages that both call themselves object oriented and stick to the original idea. So maybe it's not worth fighting over the terminology anymore. But still, I can't help but feel like we've lost something in the process.

FP didn't become the darling until more recently, and I think that it's just possible that people could hold the line and prevent the term from being watered down. And the original idea was very much one of programs that were composed of smaller parts, such that the whole thing was ultimately just the evaluation of a single expression.

Note that this definition predates Haskell by a decade or so. It was trying to capture what the designers of APL were going for.


Yes, it's a game of Chinese whispers, and the issue is usually people from another paradigm (OO) who aren't immersing themselves in FP enough to understand what it is and isn't. Then you have a few people who take bits and pieces and stick them in another language, and call it multi-paradigm. These communities normally have very few people who actually know enough about FP to teach it to the OOers. It's kind of a bubble problem. You need experienced FPers to teach the paradigm and dispel the rumors. All of this really boils down to what a paradigm is: it's not a set of features, it's a mindset and an approach to programming that a language facilitates.


Elixir's got |> for composition and & for currying (as well as for lambdas); I'm not sure that's the best language example for differences (given the specific things grandparent listed of things languages are missing).


|> isn't proper partial function application. It can only apply a single parameter, and you can't store a piped-into function in a variable the way you can a partially applied function in e.g. Haskell. E.g. the following won't work:

    splitname = name |> String.split
    splitname.(" ")
Also, it's a stretch to call & currying. It's shorthand syntax for making an anonymous function. If & is currying, then the following JS code is also currying:

    const cube = a => Math.pow(a, 3)
Elixir doesn't have currying or partial function application any more than JS or C# do. If your definition of "functional" requires these features then Elixir isn't functional (and that's fine).
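For contrast, hand-rolled currying in JavaScript (names illustrative) shows the property the parent comment is pointing at: every partial application is itself a storable, reusable value.

```javascript
// A manually curried function: each argument can be supplied
// separately, and every intermediate stage is a first-class value.
const split = sep => str => str.split(sep);

const splitOnSpace = split(" "); // partially applied and stored
// splitOnSpace("Jane Doe") yields ["Jane", "Doe"]
```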


Clojure's got `->` and many other useful macros that are the equivalent of `|>` (well, more flexible, in fact), and it has support for currying, so I think it's probably relevant.


Please see my explanatory reply where I described the key components of FP. I don't deny that there are lots of languages people call functional or multi-paradigm that are in fact not functional or are only borrowing some ingredients without providing enough to follow the paradigm. When we define a paradigm, we are defining the core abstraction of how programs are constructed. In FP, that would be composing functions, hence the name.

We FPers borrow or share concepts with OOers. The State monad (our basic building block of holding state) is very much like OO method chaining for example. But those similarities do not make either equatable with the other.


> you really can’t do FP without automatic currying and the function composition and application operators (. And $) in Haskell

But lambdas are enough to define these operators e.g.

    (defn comp [f g] (fn [x] (f (g x))))
    (defn ap [f x] (f x))
Operators and currying by default can be useful, but that's a syntactic issue rather than a fundamental one.


Programmers work with syntax. If syntax is cumbersome, programmers are inefficient. Syntax should be optimized for what the language wants you to do. Languages that just bolt on lambdas seldom are optimized to use them effectively. They're optimized for some other abstractions.


We are talking about a Lisp here...


The OP was making a semantic point. Take their example and write it in Python and imagine trying to actually work with it. I partially applied a function in Python once. Sucked.


Sure, in a language where you can't easily extend semantics they have to be baked into the language to be practical. However, with Lisps you can trivially create new semantics as you need them, and whenever something becomes cumbersome it can be addressed by the user as the need arises.


We can also create objects from closures and classes from functions, so why have syntax for it?

Some languages have some of the features I mentioned; JavaScript has bind, for example, but it's cumbersome, and using it as a means of partial application for FP would be an abuse.

These operators need to be syntactically easy to use, ergonomic, and recommended. Notice that in Haskell these are infix for a reason.

Also, in typed languages with generics building those operators is more tricky.


But the reason Javascript isn't a functional language is because its built-in data structures are all mutable which encourages an imperative style, not because its functions are tupled or it lacks a function composition operator.

I agree syntax is important, but your definition rules out languages like Clojure which most people would consider functional. Haskell programmers might disagree, but I would assume on the grounds of purity rather than the syntactic overhead of partial application.


Syntax is not the core of functional programming, semantics are. Lambdas are all that is necessary to express functional programming. Notice how both Clojure and Haskell feature a lambda in their logos.


There is currently a proposal to bring syntax support for partial application to JS: https://github.com/tc39/proposal-partial-application/blob/ma...


`.` and `$` are syntactic sugar that many will argue makes the code easier to write but harder to read. One can compose and build new functions just fine with plain lambdas. Partial application is a mixed bag, as one has to memorize arguments and argument order by heart, plus deal with flip soup. Named arguments are usually more readable.


This is a genuine question - does having a statically typed language make flip soup and argument ordering slightly less of an issue?

If your types are well labelled, can you push the complexity of your control flow and functional-glue (as I like to call it) into the compiler?


In the limit, if all of your types are well enough labelled, you wouldn't even need positional arguments - it could be inferred / would only have one answer. For example if (using C++ pseudo code) "double slope_of(double x, double y)" were instead "Slope slope_of(XCoord x, YCoord y)", the same function called with "slope_of(y, x)" could un-scramble those arguments. Just consider all permutations of the argument list as static overloads, and disallow having two parameters of the same type. If the language supports Currying, you could then Curry any argument of any position at any time, because the unique types guarantee the compiler knows which parameter is being bound.


Try this with (a - b).


John Backus's seminal paper, "Can Programming be Liberated from the von Neumann Style? A Functional Style and its Algebra of Programs" argues for this really well.

http://worrydream.com/refs/Backus-CanProgrammingBeLiberated....

Declarativity, compositionality, and referential transparency are all part of how he defines it. But he also takes a moment to contrast his idea of functional programming with a "lambda-calculus based system" (read: lisp), and one of the things he specifically criticizes is that, in those systems, functions aren't automatically curried. (His actual wording was that they are allowed to take multiple arguments, but the meaning is roughly the same.) The gist of the argument is that requiring functions to only be unary limits the number of operations you need to compose functions, which ultimately keeps the system simpler and easier to understand. He compares limiting function arity in functional programming to limiting the number of available control structures in structured programming: Strictly speaking, you lose some expressivity, but it's justified by all the other things you get in return.


For me, FP is mainly the support of closures in order to build higher-order functions. I also think that without a GC and proper tail calls, an FP language is crippled. Beyond that, you have many families of FP: FP with/without strong typing, FP with lazy/strict evaluation (often mixed), FP that favours immutable state, FP with OO, FP with syntactic sugar (and/or macros) or homoiconicity, FP with continuations (Scheme).

It is a question of choice.


You either are or you aren't using mathematical functions as the basic building block of your program. What you're listing is either FP with additional features, which is still FP, or features of FP used in OO. FP is a paradigm, which is more about how you structure your program and how you think about solving problems.


I would guess most developers get "very far" without any of those. If you would please, perhaps give some examples to demonstrate what you mean?


There are some very good examples of functional programming in these videos: https://www.youtube.com/playlist?list=PLguYJK7ydFE4aS8fq4D6D...

They are of a person doing the HackerRank questions, but in Haskell. The last video (about the magic squares) is probably the most informative for your query.


Sure, so to go back to my original point, functional programming is about building a single expression to perform your computation. Obviously this would be unwieldy if you had to write your expression on a single line, so we build abstractions out of functions (hence the name FP). To combine these functions in arbitrary ways and stick with the single-expression model, we need some way to create a pipeline of functions. So we have the function composition operator .

program = phase3 . phase2 . phase1

The input of program is fed to the first phase, then the second, then the third, which produces the output of the program. Each phase can be replaced, or the pipeline extended, as long as the inputs and outputs match. This is infinitely composable and is the basis of how FPers architect applications: by splitting each phase into its own pipeline, smaller and smaller, until you get to very basic functions. We like it compared to OO because it's straightforward to understand the flow of data. There is no need for a complicated object graph, and any piece can be replaced if the types are correct.

As in OO with inheritance (maybe not the best comparison), you also need a way to specialize functions in order to get their signature to match the input and output requirements of a pipeline. For that, we can partially apply a function.

For example, if I need to connect the function:

Int String -> Int (that is, a function with two parameters, an int and a string, that produces an int) to the output of the function * -> String (a function that takes some arbitrary value and produces a string), I need to be able to partially apply the Int and get a function that is just String -> Int. In Haskell, all functions are single input, single output. Multiple inputs are just shorthand for a pipeline of functions. The function

Int String -> Int is really just the pipeline Int -> String -> Int. (You can see how this all starts to fit together).

The function application operator is the most basic, it's just a way of forcing the order of evaluation. When you're building pipelines, you want to be able to construct a pipeline before you apply an input to the pipeline. To get the correct order of precedence between . and application, you use the explicit $ operator.

What I'm trying to convey here is that these operators are akin to inheritance, composition, overloading, etc. in OO. They are ways of working with the basic building blocks of the paradigm to make larger and larger programs. As you can also see, being able to create pipelines is the basic piece of the puzzle, and building pipelines requires automatic currying to facilitate making the pieces fit, while the application operator allows us to control the order of evaluation.
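The pipeline idea above translates outside Haskell too; here is a sketch in JavaScript with made-up phase names, where `compose` plays the role of `.` and manual currying stands in for automatic currying.

```javascript
// compose plays the role of Haskell's (.): the right function runs first.
const compose = (f, g) => x => f(g(x));

// Phase 1: String -> [String]
const toWords = s => s.trim().split(/\s+/);

// Phase 2, curried: partially applying `min` specializes it to
// [String] -> Int, so its signature fits the pipeline.
const countLongerThan = min => words =>
  words.filter(w => w.length > min).length;

const program = compose(countLongerThan(3), toWords); // phase2 . phase1
// program("the quick brown fox") evaluates to 2 ("quick", "brown")
```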


Terms (including "functional programming") are defined by their use, not by their etymology or the original intention behind them.

We know that since Wittgenstein at least [1]. Those who forget it, are doomed to lament about "no true Scotsman" distinctions nobody cares about.

[1] http://existentialcomics.com/comic/268


Functional programming is simply the obsession with functions: the use of the function call, and indirection on functions, as the only control mechanism.

Functional programming emphasizes recursion, deferral of control decisions into functions (e.g. pass functional arguments to some function, and let it decide which are called and which are not), and the re-ification of all control mechanisms as functions (e.g. continuations).

If it has loops, ifs and gotos, it is not functional. If it just has function calls, then it's functional, even if there is mutation going on of variables or array elements and such.

Purity plays well into functional programming, because if we have structured the control flows in the program based on function calls, we can determine which functions are pure and which are not, and get a good grip on reasoning about and managing the side effects to avoid surprises.


I think the author was fairly careful in drawing a distinction between functional programming and programming in the functional style. You can get a lot of mileage from writing pure functions and using immutable state regardless of what language you're in. It doesn't make your language a functional programming language, nor does it make you a functional programmer, but does that matter?

If every pure function we write prevents another headache, why not seek out opportunities to write in that style, regardless of language?


> you really can’t do FP without automatic currying

I find automatic currying by default quite weird. If you don't provide the correct number of arguments to your function, the errors generated won't be nice.

Would a 'curry' operator to provide currying be good enough?


[flagged]


What are you, 15 year old, in some imagined turf war?

If, Yogthos, or anybody else, posts about clojure, that doesn't make them a "shill" (sic).

Shill of what? In your world, somebody pays them to promote Clojure? You think Clojure has a "shill" budget?

How about, they are someone who works with and likes the language, and posts whatever they consider an interesting article about it?

Not to mention, where the fuck did you get with this whole reasoning? I've checked Yogthos's profile, and he has posted like 3 submissions in the last 2 days, one 20 days ago, 10-12 more 4-5 months ago, and then nothing since 2017 or so. That's "too much" for you? To the point that you have a whole BS theory about it?

Oh, and if you don't like someone's posts, that doesn't make them a "shitposter" (sic).

On HN, what someone posts is irrelevant. Posts are voted up to get to the front page. If you see this post here, it's because many people on HN liked it.


Also lacking (from skimming the article) is any mention of functions being first-class in functional programming.


I definitely agree with the premise of this post. For me the sweet spot of functional programming is learning the most important principles—quarantining mutable state and side effects, declarative data transformations, higher order functions, etc.—and then bringing these patterns back into more ‘practical’ languages.

A good example is the immer.js library, which takes the pain out of immutability while retaining the most important benefits. Instead of bending over backwards to transform hierarchical data with lenses and the like, you just mutate a ‘draft’ copy imperatively within a bounded context. While it has its limits, it’s often much quicker to write and easier to understand than the equivalent ‘purely immutable’ approach without any real downside.
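The idea can be illustrated with a toy version (this is a sketch of the concept only, not immer's actual implementation, which uses proxies and structural sharing):

```javascript
// Toy "produce": clone the base state, hand the clone to the caller
// as a mutable draft, and return it as the next immutable value.
// (structuredClone requires Node 17+ or a modern browser.)
const produce = (base, recipe) => {
  const draft = structuredClone(base);
  recipe(draft);
  return draft;
};

const state = { todos: [{ text: "ship", done: false }] };
const next = produce(state, draft => {
  draft.todos[0].done = true; // imperative, but only touches the draft
});
// state.todos[0].done is still false; next.todos[0].done is true
```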


My natural inclination when I was younger was the functional style, but over time I've come to see it as sub-optimal for the problems of day-to-day business coding. Almost always there is state somewhere and I find the restructuring of the program flow unnatural at times where an imperative, OOP, or FP-inspired-but-not-FP-actually style would meet performance requirements while also maximizing legibility to a future programmer.

But I'm not a zealot. It's fun in the right domain.


I agree very much. Working professionally in Haskell across 3 jobs caused me to pretty much lose interest in it except for personal projects with very small scope where I can personally control all the reasons for refactoring or behavior changes.

When those things come from a stream of business problems, Haskell, I hate to say, is a poor tool, for just the reason you stated. You’ll always come across situations where mutating state would make an incremental change very easy (despite being “bad practice” for long-term complexity) but where having rigorously designed a functional system, that mutation or change is now painfully hard and requires significant refactoring.

Because business pressures utterly do not care about engineering properties of the backend system, except for high level summaries of basic properties like overall cost or reliability (metrics that functional programming vs other paradigms doesn’t have much impact on), you end up never being able to justify the upkeep and refactor-as-you-go workloads necessary for the functional implementation to not get bastardized into the exact same sort of spaghetti code mess you get with other languages.

In my experience, the types of errors, bugs and extensibility headaches in a business setting are not addressable with static typing, designs leveraging a type system, formal verification, or immutable design patterns. Those things are fine, not knocking them. It’s just that they cannot help you for dealing with reactive business problems. They don’t live up to the hype about it.

The problems are all about a sudden context change in which a previously valid set of behaviors, performance characteristics, whatever, is instantly rendered invalid because circumstances have suddenly changed, and you need to simply react to it and pray you’re successful enough that the system even lives to some magical future time when you can refactor.


Right, I can think of many situations where a new business requirement is like "oh, and for this client, when we do X, we also need to send them an email." OOP, for all its faults, makes this kind of extensibility trivial, where I can't imagine easily making such a change in the context of a statically typed effects system. "This function was pure and now it's not" almost certainly leads to a refactor.

That's not to say that statically typed FP can't be written in such a way to allow this kind of arbitrary modification, but then you get into the realm of finding the perfect abstraction in the type system, which is exactly the same kind of maintenance problem that OO architecture astronauts bring to a code base.

This is all to say -- we should be writing Clojure, which allows a tight focus on actual business requirements while still being flexible enough to allow you to bail out of FP patterns when necessary. :)


I’m super confused by this.

In the Haskell systems I write, this change would be trivial. It’s a single line change. Maybe it’s just a benefit of the particular pattern I’m using (standard Yesod handler, which if I remember correctly is just the ReaderT IO pattern), but still, it seems completely wild in my view to trash an entire language and indeed an entire paradigm because you can’t imagine making the change in your example.


Business logic is pure and stateless. Lifting to IO is a breaking change that may or may not require non-trivial refactoring. You shouldn't be putting business logic in your HTTP handlers.

Regardless, the point is not that it's impossible to make such a change in a Haskell code base, just that it may or may not require non-trivial breaking changes, while such a change in an OO enterprise code base will always be trivial.

There's lots and lots wrong with OO patterns, but being able to respond to changing and arbitrary whacky business requirements is a clear strength.


> Business logic is pure and stateless.

Why? From where did you invent this completely arbitrary constraint?

> You shouldn't be putting business logic in your HTTP handlers.

Why not? How else could this work? The user has to interface with the business logic somehow.

There’s nothing at all wrong with a HTTP handler that takes in a request, delegates to a few different business logic things, and then sends a response.

Not only are you moving the goalposts with your argument, you’re also coming up with things that are completely false.

There’s nothing about OOP that inherently makes extending some code to send an email easy. Likewise, there’s nothing about FP that inherently makes the same task hard.


Yeah, I don't know. I've built three software products with Haskell, and yet, they all have state.

It just happens to be the cheapest and most reliable tech I've come across to run my businesses on.


Sort of surprised how restrictive some of the ideas in this article and comment section are about whether you are writing functional code or not.

If you are writing pure functions and total functions, it's functional code. The rest is syntax.

And pure functions are probably best tagged with an asterisk, because most of the time you have state to manage somewhere; you're just going to jump through some syntactic hoop (a monad) to mask it.


So, I've been working with Clojure professionally, and this is simply not the case. The vast majority of the code is pure, with side effects and state pushed to the edges as a thin imperative shell around the functional core.

There's a great presentation discussing Pedestal HTTP library for Clojure, and one of the slides notes that Pedestal has around 18,000 lines of code, and 96% of it is pure functions. All the IO and side effects are encapsulated in the remaining 4% of the code: https://www.youtube.com/watch?v=0if71HOyVjY

This is a completely typical situation when you're working with a functional language.


This is typical in any paradigm. Even with something like Active Record, it's possible to isolate all the actual DB interaction to just one file.


It's not just about isolating DB interactions. Any code that deals with IO or creates side effects lives at the edges of the application.

The vast majority of the code is written as pure functions that can be reasoned about independently of the rest of the application. This is not practical in languages that rely on mutable data, because things get passed around by reference. As soon as you pass a reference to an object to a function that's used elsewhere, you end up with implicit coupling that's difficult to reason about.

Of course, you could use immutable data structure libraries in imperative languages, but then it's completely on you to ensure that you never put any mutable data, such as an object reference, in those. At that point you might as well use a functional language.


While I agree with your overall argument, I think this is wrong:

> And pure functions are probably best tagged with an asterisk because most of the time you have state to manage somewhere

Purity does not mean there is no state! This is exactly what monads (and similar structures) in Haskell are for, which you correctly mentioned. But they don't violate purity.


This is kinda debatable. There is still state in Haskell, regardless of how it's compartmentalized. The implementation of the IO monad for example must rely on state to function, because IO is inherently stateful. You may not have to deal with it as much, but under the hood there is still state.

And this matters in practice, because sometimes the desired effects of your program are inherently stateful. For example, if you want to have lines of text output to the terminal in a certain order, you have to at least think about state at some level. If Haskell was truly free of any state, this would be impossible to guarantee.

What I will agree with though is that it's mostly possible to ignore state in Haskell, and that individual pure functions are truly pure. My point is that those pure functions can't do anything without invoking state at some point.


But I don't see how you are debating me!

I mostly agree, but that's my argument :)

> If Haskell was truly free of any state, this would be impossible to guarantee.

I was never claiming that Haskell was free of state; I was claiming the opposite! That state and purity are not mutually exclusive, they just result in constructions like monads.


I think it devolves into semantics at that point. When each line is doing some DB query or remote HTTP call, it rarely matters if it's in a monad or not.


Exactly, whether you use a monad or not is an implementation detail. The high level idea is that you decouple the code that does IO from the code that's responsible for the business logic.


Well, I think it does matter! Especially if you're pure (or trying to be), it really matters, because you need constructs like monads to remain pure.


Functional programming in its purest form would be, as some others here have defined it, akin to "mathematical functions as the form of abstraction". In the context of computer programming, Haskell would be the epitome of this. That said, elements of functional programming are already available and making more headway into imperative languages, creating a hybrid imperative/functional style, which I think is great. Some elements of functional programming provide benefits that are hard to argue against:

1. Higher-order functions (as inputs or outputs), e.g. map, reduce, filter, etc. The direct benefit is a significant reduction of code and the ability to chain operations together and compose them.

2. Immutability. The direct benefit here is obviously minimizing shared state, thus reducing bugs.

3. Side effects: minimizing them and pushing them to the "edges" of an application, and/or designing programs such that side effects can be mocked out, e.g. building up an HTTP request that can later be executed, thus facilitating unit testing.

4. Type checking: converting runtime errors into compile-time errors (to some degree). Again, by offloading work to the compiler, you can catch errors much earlier.

The biggest problem with FP (as I have experienced) is that while these things are great, when taken too far for the sake of pure FP itself they can create complex code, so there has to be a balancing act involved.
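Point 3 above can be sketched in JavaScript (names are illustrative): pure code builds a description of the request, and only a thin interpreter at the edge performs the IO, so tests never need a network.

```javascript
// Pure: returns a plain description of the effect, performs nothing.
const welcomeEmailRequest = user => ({
  method: "POST",
  url: "/api/emails",
  body: { to: user.email, template: "welcome" },
});

// Impure edge: the only code that actually performs IO.
const execute = req =>
  fetch(req.url, { method: req.method, body: JSON.stringify(req.body) });
```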


It seems that FP zealots have a hard time appreciating that there’s both a value and a cost to FP adherence. I find that FP forces too many contortions in the name of some ethereal value without a consideration of cost.


I think I know what you mean. I wouldn't use the word zealot as much as astronaut.

FP Astronaut -- Super focused on hypothetical, academic, mathematical concerns and random performance optimizations (e.g. tail recursion) over clarity. I think scala's Slick library is an example of this.

FP Pragmatist -- Concerned about clarity, debugging, logging, correctness, readability by all skill levels, modifiability, documentation, the common cases first and the extreme cases last

I think you can be a big FP advocate without being an off-putting, incomprehensible blowhard.


I've never heard of tail recursion being used as a performance optimization. Generally it's required to prevent a recursive function from blowing the stack.

Iteration is usually more performant than recursion, but doesn't mix well with immutability.


Tail calls _are_ iteration, just expressed as a function.
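
A side-by-side sketch in Python (illustrative; note that CPython performs no tail-call optimization, so the recursive version still grows the stack):

```python
# Tail-recursive sum: the recursive call is the very last thing the function
# does, so a language with tail-call elimination can reuse the stack frame.
def sum_rec(xs, acc=0):
    if not xs:
        return acc
    return sum_rec(xs[1:], acc + xs[0])  # call in tail position

# The same accumulator loop written as iteration: no stack growth anywhere.
def sum_iter(xs):
    acc = 0
    for x in xs:
        acc += x
    return acc

print(sum_rec([1, 2, 3, 4]), sum_iter([1, 2, 3, 4]))  # 10 10

# CPython does not eliminate tail calls, so e.g. sum_rec(list(range(100_000)))
# would raise RecursionError even though the call is in tail position.
```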


I have seen this happen too: what would have been easier to understand is converted into a complex spaghetti of code, split across so many files that it becomes difficult to understand and debug the flow. One example: Scala's implicit defs can be misused in so many ways. They are described as "magic" in some tutorials, and that is a bad thing in my opinion.


It gets more cost-effective if combined with [semi-]automated tooling for verification, testing, and refactoring. Functional style makes these things easier. That's the draw for me. So, I collect info on FP and verification, followed by code generators and equivalence provers/testing for imperative and low-level forms of it. Examples include seL4 using Haskell->C, Lammich's functional->imperative converter for data structures, or maybe even compilers like CakeML.


Same applies to OOP or imperative programming in general.


You can have convoluted or simple FP programs, just as you can have the same for OOP. Nonetheless, is one paradigm better at yielding robust software you can reason about well, more of the time? I would say yes, and FP is that paradigm.


It depends on what the problem is. If the hardest part of the problem is reasoning about the algorithm, then FP may be great. But then you get into things like hard real time, where the hardest part is reasoning about time. Or high-performance computing, where an essential part is controlling memory layout. Or...

But you did say "more of the time" rather than "always", so I'm not actually disagreeing with you.


Oh, there's cost. But in my experience, that cost is still cheaper than any of the alternatives.


Only in as far as elegance and predictability can be considered ethereal values in programming.

> I find that FP forces too many contortions

In light of their underlying computational models, functional programming really isn't the paradigm that is rife with contortion here.


Discussions about functional programming really need to include R as an example. Highly popular with data scientists, it regularly makes top 10 lists and is much more mainstream than, say, Haskell or F#.


Great work, thanks for sharing!

I was looking for that comment to upvote it but it didn’t exist. Everyone is explaining their pet feature of FP.

Maybe we should put aside our differences and lean towards cheering the people who put lots of effort into opening this world up for as many people as possible. Those who are not FP experts need exactly this kind of explanation, not mathematical mumbo jumbo.

I’ll save the URL for sharing with anyone interested.

So here it is again: Great work, thanks for sharing!


Personally, I like the idea of using immutability. In Swift, I declare as much as possible with let instead of var

    let pi = 3.141592
    let label = UILabel()
    let names = ["Java", "Perl", "Swift"]
These won’t accidentally be changed, and they’ll never be nil.

The editor/tools can now make additional checks before I try to run the code.

A small step towards correctness and readability.


My main problem with functional programming (mostly based on some experience with Haskell) is that I find it really hard to estimate the performance (in terms of runtime and memory consumption) of programs.

It's really nice to structure your program around types and build your program via function composition - but getting it performant (let's say similar to C, Rust or Java) is far from simple. In particular, performant Haskell does not look like idiomatic Haskell, whereas performant C, Rust or Java does.

Now don't get me wrong - I absolutely love the idea of functional programming and λ-calculus is a beautiful system to express computation, but in practice it's still hard to make performant while keeping a readable style. Maybe Idris 2 can close the gap?


I think this is more of a problem with Haskell than functional programming in general.


Lazy evaluation is what's most commonly blamed for that, no?


Everyone talks about functional programming as if it's this great new thing. I'm coming from a strong C# background, and to me this reads exactly like OOP.

> the output value of a function depends only on the arguments that are passed to the function, so calling a function f twice with the same value for an argument x produces the same result f(x) each time.

OK, that's basically a static method in C#. Is that wrong?

The piece goes on to say

> Functional programming, therefore, is programming so as to avoid these side effects wherever possible.

So functional programming should never handle state? Do you just move that logic into another framework (Angular, React, etc)?

Maybe there's some obvious stuff I am missing. Why should I use a functional language (like F#) over an existing OOP language?


"OK, that's basically a static method in C#. Is that wrong?"

A static method can have side effects and/or return different results even when given the same arguments, e.g. a method returning DateTime.Now will never yield the same result twice.

"So functional programming should never handle state?"

Any useful application will have state and side effects; the idea is to push as much of it as possible to the outer edges of your application and keep the core as pure as possible/practical. You can do that with any language, but a functional language will be designed to make that easy and idiomatic.
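
A minimal sketch of that idea in Python (names invented for the example): the pure core depends only on its arguments, while a thin impure shell at the edge reads the clock and prints.

```python
from datetime import datetime, timezone

# Pure core: output depends only on the arguments, so it's trivially testable.
def greeting_for(name: str, hour: int) -> str:
    period = "morning" if hour < 12 else "afternoon"
    return f"Good {period}, {name}!"

# Impure shell at the edge: reads the clock and prints, delegating all the
# actual logic to the pure core.
def greet(name: str) -> None:
    now = datetime.now(timezone.utc)     # side effect: reading the clock
    print(greeting_for(name, now.hour))  # side effect: IO
```

Tests only ever need to exercise `greeting_for`; the shell stays too thin to hide bugs.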


> OK, that's basically a static method in C#. Is that wrong?

It can be true of a static method, but a static method can also make an HTTP request, read the current time, etc., so the answer doesn't just depend on the arguments.


> OK, that's basically a static method in C#. Is that wrong?

Yes, that is wrong. Static methods don't have access to instance state, but they can still access mutable state (e.g., static data members) and invoke other side effects, so their results can depend on things other than the arguments.

> > Functional programming, therefore, is programming so as to avoid these side effects wherever possible.

> So functional programming should never handle state?

“Wherever possible” is not the same as “everywhere”.

> Do you just move that logic into another framework (Angular, React, etc)?

No, though pure languages generally do something loosely similar; Haskell programs can be viewed (in one interpretation of values in the IO monad) as having a main function that produces an imperative program as output, which by convention the runtime executes.


> that's basically a static method in C#

No, `Console.WriteLine` definitely has side-effects

> functional programming should never handle state

No, quoting the same quote as you did: "avoid these side effects wherever possible", emphasis on "wherever possible". Of course, a program which never has any side effects or state would be useless, as it would not accomplish anything.


OOP and FP are not orthogonal per se.


You can still have side effects in a static method.

A more functional style, say like that of the one in F#, strongly encourages you to avoid many occasions where side effects may otherwise appear. Look at this fiddle for instance:

https://dotnetfiddle.net/68jAIw


Non-functional languages seem to pay lip service to FP, confusing people into thinking they're already using it.

Example: It is well known that "Java Strings are immutable", which is a good idea. Yet you can absolutely write:

  String str = "hello,";
  str += "world!";
So it's "immutable" for some lawyerish definition which does me no good. If I write a method that depends on str, it will produce different results each time.

> So functional programming should never handle state

Not at all. You should know (and be able to enforce) when your code does mutations.


> Yet you can absolutely write: String str = "hello,"; str += "world!"; So it's "immutable" for some lawyerish definition which does me no good.

`str` is a mutable reference to an immutable object. You can make the reference point to something else, but the object being pointed to is immutable, so e.g. if you pass it to a function, that function cannot change it under you. If you want to make the reference immutable, declare it as final. I don't think understanding the difference between a pointer and the value it points to is lawyerish.
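
The same distinction exists in Python, where strings are likewise immutable but names can be rebound; a quick runnable sketch:

```python
# Python strings behave like Java's: the object is immutable, but the
# *name* can be rebound to a different object.
s = "hello,"
alias = s        # both names refer to the same immutable string object
s += "world!"    # rebinds `s` to a brand-new string; nothing is mutated
print(s, alias)  # hello,world! hello,
```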


Thank you. ^ that's the lawyerish definition I was referring to ^


Strings are immutable. Variables can be reassigned. Those are two very different things.

Strings being immutable means that you can pass around references to them, safe in the knowledge that they can't be modified.

For example:

  void example() {
      String s = "hello";
      someOtherMethod(s);
      // `s` is guaranteed to still be "hello" here.
  }
Or, a live example using repl.it: https://repl.it/repls/BothTrivialQuadrant


It's more about controlling state. I like to think of ring[0] as a really nice example of this that's pretty easy to reason about.

It abstracts http handling into a series of key/value data structures. The request that's pulled in looks like this:

   {:remote-addr "localhost"
    :headers {"host" "localhost"
              "content-type" "application/json"
              "accept" "application/json"}
    :server-port 80
    :content-type "application/json"
    :uri "/"
    :server-name "localhost"
    :query-string nil
    :body ""
    :scheme :http
    :request-method :get}
The response like this:

  {:status 200
   :headers {"Content-Type" "text/plain"}
   :body "Hello World"}
Functions manipulate these structures, but their input and outputs are primarily those data structures.

  (defn handler [request]
   {:status 200
    :headers {"Content-Type" "text/plain"}
    :body "Hello World"})
Your app is now just a function stack that gets called in this order:

  (def app
    (-> handler
      (wrap-content-type "text/html")))

  Request => wrap-content-type => handler (generate response) => wrap-content-type => Returned
Want to add middleware? Define a function that handles it, here's an example[1]:

  (defn middleware [handler]
    (fn [request]
      ;; Do something to the request before sending it down the chain.
      (let [response (handler request)]
        ;; Do something to the response that's coming back up the chain.
        response)))
Then add the function to your app:

  (def app
    (-> handler
      (wrap-content-type "text/html")
      (wrap-keyword-params)
      (wrap-params)))
You want to handle session data? Ring provides wrap session[2], add it and then use your own middleware or handler to manipulate the session by just changing the value at :session.

Example [3]:

  (defn handler [{session :session}]
    (let [count   (:count session 0)
          session (assoc session :count (inc count))]
      (-> (response (str "You accessed this page " count " times."))
          (assoc :session session))))
Much of your world is now reduced down to:

* Does this data structure look correct?

* How can I fix it?

* Do I need some resource like a db connection? Ok how do I add it to my data structure?

It all composes, and you end up worrying a lot less.

Finally, I find that doing things this way ends up creating programs with much less explicit state; what remains is mostly temporary state derived from other values, which only changes when those values change.

You can program in other languages like this, it's just a bit harder, and you have to hope that no one else you're working with breaks one of the assumptions that a system like this is built on, as opposed to having the language enforce those for you.

Sorry if I've been unclear =)...

[0] - https://github.com/ring-clojure/ring/wiki (Examples taken from ring). [1] - https://stackoverflow.com/a/19459508 [2] - https://github.com/ring-clojure/ring/blob/95e4ca25d5b98c45f9... [3] - https://github.com/ring-clojure/ring/wiki/Sessions


This is the thing that is drawing me in. The fact that others really need babysitting.

I've never had huge problems with writing programs in PHP or JS. Now everyone is freaking out about TypeScript and PHP type hints, and all I'm doing is sitting here scratching my head asking: but why? What does it do for you?

It looks like people were doing a lot of dirty state mutation and a lot of optional parameters that sometimes had mixed types because "then you can use it with an ID, an email, or a user object you silly billy".

Both of those things create a god-awful hell. The second example may sound easy to use, but then you have no way to reason about the code, you can no longer clearly compose pieces, and variable names become less descriptive. Types provide you with a simple pair of handcuffs that prevent this issue. But they come at a cost.

My issue is that people with shitty habits should probably just change their shitty habits. Changing from a dynamic language that lets you take a lot of safe shortcuts to a static language because you abused those shortcuts is really the wrong way to go.

Moving to something like clojure looks like a much cleaner solution to the problem.

Still handcuffs, but at least you're forcing the right thing (pushing the state mutation to the edge, leaving the business logic clear and rational) without all the type safety bloat.


This is exactly my feeling about TypeScript. As a Go developer, I'm fully for strong types, even to the point of restriction, but only if they are enforced.

I'm currently maintaining a massive spaghetti code TypeScript nightmare that escape-hatches with @ts-ignore and `any` about every other line. I'm not sure why they even chose to give up all of the imperative, dynamic power of JS and then not even adhere to the pre-processor's own rules.

I think there's an argument for both imperative and functional approaches, but I also think trying to change an imperative language into a functional one is a bad idea, and trying to shoehorn type safety into a language that doesn't have it is just foolish.

I feel like languages come with trade-offs inherent in their design (Go trades DRY for type safety, JS trades data validation/type safety for convenience), and if you don't like the trade-offs of the language for your project, then you are probably using the wrong tool for the job. However, when it comes to JS, you usually don't have a choice (unless you go with Dart or WebAssembly, which I admittedly know very little about).

I suppose TS could have a place (I particularly understand libraries that attempt to improve their data contract using it), but then I wish it didn't provide such a wide variety of ways to escape or ignore its restrictions. I feel like they are fake guard rails and provide a false sense of security.


Agree wholeheartedly, I have no issue with typescript or typesafe languages.

I actually enjoy them a lot for other reasons, being able to guarantee type is a nice feeling. But it doesn't guarantee correctness. And I think that is the problem.

Developers see lack of type safety as a lack of correctness. Hell, I don't even blame them; a lot of the code I deal with is impossible to reason about because of the many bad habits code review is supposed to catch.

My issue with TS is that people are using it to solve a problem that it doesn't actually solve: the developers' reluctance to write clean code.


Is Haskell the only FP language where chaining instructions one after the other is not available by default?


Any purely functional language will be like that:

https://en.wikipedia.org/wiki/List_of_programming_languages_...

If a language is pure then there is no point running an instruction other than to return a result, since the only reason to ignore a result would be if the instruction had a side effect, which a purely functional language disallows.

Haskell does simulate this chaining of instructions via the do notation:

   do 
     a <- getLine
     b <- getLine
     putStrLn ( a ++ b )
It looks like it is chaining instructions, but really this gets rewritten by the compiler into something like this (in pseudo-JS):

    bind(getLine, function (a) {
        return bind(getLine, function (b) {
            return putStrLn(a + b);
        });
    });
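The same rewriting can be modelled in runnable Python, using closures as toy "IO actions" (a simplified model for intuition, not Haskell's actual IO machinery; the stand-in `get_line`/`put_str_ln` avoid real IO):

```python
# A toy "IO action" is just a zero-argument function (a thunk);
# bind runs one action, feeds its result to f, and runs the action f returns.
def bind(action, f):
    return lambda: f(action())()

# Stand-ins for getLine/putStrLn so the sketch runs without real IO:
lines = iter(["hello, ", "world"])
get_line = lambda: next(lines)
put_str_ln = lambda s: (lambda: s)  # "prints" by returning the string

# Equivalent of: do { a <- getLine; b <- getLine; putStrLn (a ++ b) }
program = bind(get_line, lambda a:
          bind(get_line, lambda b:
          put_str_ln(a + b)))

result = program()  # runs the whole chain
```

Nothing executes until `program()` is called; until then the whole chain is just a value, which is the purity trick.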


I have spent ages understanding this, and I am super happy that what you summarize is the mental model I have built up.



