A practical introduction to functional programming (2013) (maryrosecook.com)
319 points by tosh on Dec 25, 2016 | 151 comments



This is a slight Rust tangent, but I've been thinking a lot about the properties that make functional programming interesting and are called out in the article. Specifically, immutability providing an easy way to reason about functions via only their inputs and outputs.

I feel like the memory model of Rust (single mutable ref or unlimited non-mutable refs), combined with the fact that there are no mutable globals (inside safe code), gives you a much easier system to reason about. I know based on inputs and outputs what could be mutated, and since I know there's only one mutable reference I can trace that single ref easily.

It really feels like getting a large majority of the benefits of functional programming without sacrificing the performance and predictability we've all become accustomed to in non-functional languages.


Immutability is one of many advantages of functional programming. Other advantages include higher-order functions (which Rust cannot support very well with its memory model), stronger type systems (only possible because the lambda calculus is on much better mathematical footing than any production imperative language's semantics), and better/safer state management via ADTs (which Rust has decent support for).

That's the risk of reading articles like the OP, where they show you a couple functional tricks in an imperative language that inherently can't give you the whole picture. You get too limited an idea of what it means to use modern functional techniques. You simply cannot do anything approaching cutting-edge functional programming in a language like Java or JavaScript or Rust. They all lack some important aspects of what makes functional programming useful.


> That's the risk of reading articles like the OP, where they show you a couple functional tricks in an imperative language that inherently can't give you the whole picture.

The other risk is that people (like me, a few years ago) are sufficiently convinced by these simple loop examples that they decide to apply them. This might (or might not) lead to greater functional aspirations which make the "whole picture" much more interesting. You might get to free monads, but not before you've used a fold on a list.
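
(For anyone curious what that first step looks like, here's a minimal Haskell sketch, with a hypothetical totalLength standing in for the kind of accumulating loop you'd otherwise write by hand:)

    import Data.List (foldl')

    -- the accumulating loop, written as a left fold over the list
    totalLength :: [String] -> Int
    totalLength = foldl' (\acc line -> acc + length line) 0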


Can you explain why Rust "cannot support higher order functions very well?"

I'd also like to hear about "what makes functional programming useful" if you have a minute.


Sure. It's just because when you're passing around functions of type (a -> b), the only fixed-size representation is every possible input/output pair, which is obviously not a good representation. Instead we use a variable-size representation in the form of code (passed around as a code pointer) and captured variables (the format of which depends on the function being passed around). Together these are called a closure. Using variable-sized closures created at runtime requires dynamic allocation, which Rust allows for but does not make convenient (intentionally). Sometimes you can get around this but not always. So the kind of stuff you can do with higher-order functions in Rust is more restricted and usually has extra rigmarole required for memory management. It's not that Rust is doing something wrong; it's just a necessary consequence of having such excellent guarantees about allocation (or lack thereof).

> I'd also like to hear about "what makes functional programming useful" if you have a minute.

This is sort of a book-length topic and I'm on my phone, but a few points worth looking up are (Generalized) Algebraic Data Types, Typeclasses (in particular Functor, Foldable, Applicative, Traversable, Monad (in particular Maybe, Either, and State)), Higher Kinded Types, corecursion, the Y combinator. The gist of it is that you can do a bunch of cool stuff you wouldn't be able to do (or even think about doing) in an imperative language. For some reason we're not entirely sure of, it seems to be way easier to isolate the essence of what we're trying to accomplish when using functional programming than when using imperative programming, and therefore to automate the boring work normally associated with doing things to data, like writing loop definitions.
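
To make just one item on that list concrete, here's a tiny, hedged sketch of Maybe used as a monad (parseAndDivide is a made-up name): each step can fail, and the plumbing of the failure case is handled by the abstraction rather than by hand-written checks.

    import Text.Read (readMaybe)

    -- every step that can fail returns a Maybe; the do-block threads
    -- the failure case automatically
    parseAndDivide :: String -> String -> Maybe Int
    parseAndDivide a b = do
      x <- readMaybe a
      y <- readMaybe b
      if y == 0 then Nothing else Just (x `div` y)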

It's kind of like how natural functional type systems have the same derivation semantics as various logical systems, even though it's not entirely clear why that ought to be the case. We seem to have just stumbled upon an abstraction that meshes nicely with the platonic universe of useful computer programs, as opposed to an abstraction that only exists because of the particulars of how our computers work.


> The gist of it is that you can do a bunch of cool stuff you wouldn't be able to do (or even think about doing) in an imperative language

That's an odd claim and one I've never heard. Surely you can do everything with imperative programming that you can with FP; it's just that you will do it differently and, arguably, in a way that will present some downsides (e.g. lack of safety).

Also, for what it's worth, you are enumerating a list of characteristics of functional programming but you're not answering OP's question, which was: what makes FP useful?


You can also do anything in assembly. You just won't, practically speaking. "Can do" in that case is shorthand for "is not inordinately expensive and difficult to do".

What I meant was not "you can write programs with FP that you can't in an imperative language" (which is obviously not true); what I meant was "you can syntactically and semantically do things in functional languages that you can't express in imperative languages within a reasonable amount of effort".

> what makes FP useful?

As I said, this is a book-length topic that I can't hope to spell out. The reason I listed those concepts is that learning about them will help to impart understanding of why FP is useful.


Obviously, functional and imperative languages are equivalent in terms of theoretical expressive power. That being said, functional languages often make it easier to build useful abstractions that are more correct. Okasaki's implementation of red-black trees is an excellent example of this...
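
For readers who haven't seen it: the heart of that example is just an ADT plus pattern matching. A rough sketch of the types and lookup (omitting Okasaki's famous four-case balance function):

    data Color = R | B

    data Tree a = E | T Color (Tree a) a (Tree a)

    member :: Ord a => a -> Tree a -> Bool
    member _ E = False
    member x (T _ left y right)
      | x < y     = member x left
      | x > y     = member x right
      | otherwise = True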


How do you define "more correct"?

Sure, the CH isomorphism can help you prove some properties of your code based on its types, but that doesn't mean you can't create an equally correct program imperatively.


> that doesn't mean you can't create an equally correct program imperatively.

You're getting hung up on the fact that the languages are all Turing complete. That's true, but not really relevant. The important thing is that it's much easier to do things correctly with functional constructs like (G)ADTs and strong type systems.
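
To ground the GADT reference, the textbook example in Haskell is a small typed expression language; this is a minimal sketch, not anyone's production code:

    {-# LANGUAGE GADTs #-}

    -- the index type rules out ill-typed expressions like (Add (LitB True) ...)
    -- at compile time, and eval needs no runtime tag checks
    data Expr a where
      LitI :: Int  -> Expr Int
      LitB :: Bool -> Expr Bool
      Add  :: Expr Int  -> Expr Int -> Expr Int
      If   :: Expr Bool -> Expr a   -> Expr a -> Expr a

    eval :: Expr a -> a
    eval (LitI n)   = n
    eval (LitB b)   = b
    eval (Add x y)  = eval x + eval y
    eval (If c t e) = if eval c then eval t else eval e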


You're preaching to the choir about the superiority of static type systems, but your argument was about imperative vs. functional, not dynamic vs. static.

GADTs and composition are trivial to achieve in imperative languages.


There's a reason not a single production imperative language supports GADTs. Imperative sequencing and mutability semantics, as done in practice, are inimical to rigorous formalization. When imperative language designers try to make a nice consistent type system, they hit road blocks early on and give up. Rust doesn't even support HKTs. GADTs are just an example of something that you rarely see in imperative languages because they almost never even make it that far.

One of the other nice things you get with languages with good functional/pure/immutable(/lazy) semantics is free theorems, where you can actually make strong, useful statements about the nature of your program like "this will not crash", "this is memory safe", "this will terminate", "this follows the functor laws", etc., whereas it's effectively impossible to make such claims in a language with messier abstractions like loops and mutable variables.

Basically, the reason functional languages have better type systems is that their underlying untyped semantics are more sensible and simpler than the untyped semantics of mutable imperative (strict) languages, and this is reflected in the safety records of languages through mechanisms other than type safety.
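
As one tiny concrete instance of a free theorem: any total function with the type below, whatever its definition, can only drop, duplicate, or rearrange elements of its input. Parametricity alone guarantees that `map f . g == g . map f` for every f.

    -- the theorem holds for this g and for any other inhabitant of the type
    g :: [a] -> [a]
    g = reverse . drop 1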


> There's a reason not a single production imperative language supports GADTs.

Scala?

OCaml would also qualify as a counter example, although you cheated a bit by adding all these adjectives and you could argue that OCaml is not "production".

Haskell can easily do imperative as well.

I agree with you overall, but I think you're drawing too strict a line between these concepts. Most of the guarantees you obtain from these languages do not even come close to "this will not crash" (this is very hard to guarantee in languages with no totality guarantee, i.e., most of them), let alone "this will terminate" (since most of these are Turing complete).


> Scala?

Well, I'd say Scala is definitely a functional language, but even then its GADT (and even plain old ADT) support is pretty limited and uncomfortable. "Case classes" are one of those things in Scala where it's obvious they're running into the limits of the JVM.

> OCaml would also qualify as a counter example

Again, I'd say OCaml is absolutely a functional language. It's also definitely a production language; I know of a number of firms that use it.

I should have said "Imperative and not Functional languages", a la C, C++, Java, Javascript, etc.

> Most of the guarantees you obtain from these languages

You're right, most of the guarantees you get from e.g. Haskell rely on people following the typeclass laws for whatever you're doing, which isn't necessarily the case. It's not a guarantee in the sense that e.g. Coq or Agda give you a guarantee; it's just a guarantee in the sense that if you follow some simple rules, you get good behavior.

As for totality and termination, it's true that you can't guarantee either in plain old Haskell, but Haskell (and some others) do support checking for pattern match completeness, and then it's not very hard to (informally) guarantee totality by sticking to certain pre-defined operations that operate on data (as opposed to codata) and have good decreasing/tightening rules. For example, if I saw some code composed entirely of functor/foldable/traversable operations over well-behaved data structures, I could be quite confident in correctness and termination.
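
For instance, something like this hypothetical snippet is built only from structural Functor/Foldable operations over a finite list, so (given total inputs) it is total and terminating by construction:

    -- length, fmap, and sum only walk the (finite) structure once each
    averageLength :: [String] -> Double
    averageLength xs =
      fromIntegral (sum (fmap length xs)) / fromIntegral (max 1 (length xs))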


> Using variable-sized closures created at runtime requires dynamic allocation,

While this itself may be strictly true, most closures in Rust do not require any allocation at all, and will even possibly get inlined like any other function.


I found this series convincing; it made me wish I could use FP:

http://fsharpforfunandprofit.com/

Specifically the "Why F#" part and within that this article

http://fsharpforfunandprofit.com/posts/correctness-type-chec...

It actually kind of blew my mind (hyperbole, maybe). Like wow. Not only do I write less code, but by language design way more issues are solved.

Kind of like static typing stops certain kinds of issues. FP solves the next level above that.
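
The core trick in that article is "making illegal states unrepresentable". The article does it in F#, but the same idea sketched in Haskell (hypothetical names) looks like this: a contact must have at least one way to reach them, and the "neither email nor phone" state simply cannot be constructed.

    data ContactInfo
      = EmailOnly String
      | PhoneOnly String
      | Both String String          -- email and phone

    data Contact = Contact
      { name        :: String
      , contactInfo :: ContactInfo  -- no pair of Maybes, so no invalid combination
      }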


The interesting thing to me about the FP vs OO discussion that your link highlights very well is that one of the major benefits of FP languages is the expressive type systems, which at least the OO languages I've seen completely lack.

I'm not sure why no mainstream OO language addresses this.


It's because OO does not have firm mathematical foundations, so it's much harder/impossible to give OO languages a consistent and powerful type system. Class hierarchies in particular are inimical to good type systems.


A notable exception is Scala's type system, which is pretty expressive. It has been formalized and proven to be sound[0].

[0]: http://scala-lang.org/blog/2016/02/03/essence-of-scala.html


I wish people would stop making this distinction between OO and FP like you can only use one or the other

In imperative languages you typically have to use objects to create functors and other FP primitives

If you're reading this as an "OO programmer", you can improve your "OO" style code with some "FP", and vice versa.

The question I would really ask is have you stopped learning?


I'm not the OP, but higher-order functions in Rust are a leaky abstraction. Internally, functions are implemented as closures which might capture ownership and potentially duplicate/destroy variables in their scope. This means that there is no single function type in Rust; there are several, for "use once closures" (which capture ownership of a variable and potentially deallocate it), "linear closures" (which you can't duplicate), "non-linear closures" (normal functions), toplevel functions without environments, closures which allow borrows from their environment to escape, etc.

Which one of these categories your function falls into is a decision of the compiler and might change if the borrow checker improves/regresses.

---

These things are pretty much non-issues unless you make heavy use of higher-order programming. For simple second-order functions such as map/filter/fold it's still easy to wrap your head around all of this. However, experience in Haskell has shown that higher-order functions are a very useful abstraction and if their usage is lightweight and intuitive they pop up all over the place. For instance, parser combinators frequently involve fourth-order functions. At this point you do not want to think about the implementation details that your compiler has to fill in.

At this point, I'm afraid that we will not see a lot of elegant higher-order programming in Rust, because it is potentially so difficult to keep track of the ownership story. I'd be happy to be proven wrong, though. :)


> Internally, functions are implemented as closures which might capture ownership

This seems backwards. Closures are implemented as a struct for the environment, plus a method on that struct that represents the function call.

This then ends up the exact same way as any other function or method call in Rust: with taking self by value, by reference, or by mutable reference.


The environment contains references/copies/borrows of local variables, depending on a number of conditions on the code. Since the environment struct is generated by the compiler, this is different from a method call where you specify the struct yourself.


It's still fundamentally sugar; on nightly, you could write it all yourself.


What exactly are you arguing for? Closures can be implemented using closure conversion as a derived form in most languages. That's how you compile the code in the first place...

The point is that you reason about closures as ordinary functions with local assumptions. You don't need to know how they are implemented at all. On the other hand, in Rust you need to be aware of the implementation in order to use higher-order functions effectively.


I'm arguing that the statement I quoted in my above comment is incorrect; closures are implemented with functions, not the other way around.


Care to give an example to illustrate what JavaScript is lacking?


Which languages do you recommend then?


Just jump straight to Haskell. The learning curve is substantial, but it will guarantee that you learn functional concepts correctly and not in the hackneyed way most "functional" JS or Java code is written. If you end up not liking Haskell, every other popular functional language (e.g. Scala) has some subset of Haskell's features, so you'll already be equipped to use them.


Can the same concepts be learned from Scheme? Or is there something special about Haskell regarding functional programming?


Shared mutable state is the problem.

Functional programming addresses this by eliminating mutability, but at the cost of losing many valuable algorithmic approaches. (The "quicksort" example usually used to illustrate Haskell doesn't perform a true quicksort and is horrendously slow).
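
(For reference, the version usually shown is the well-known list two-liner, which copies and filters at every level rather than partitioning in place:)

    qsort :: Ord a => [a] -> [a]
    qsort []     = []
    qsort (p:xs) = qsort [x | x <- xs, x < p] ++ [p] ++ qsort [x | x <- xs, x >= p]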

Rust deals with it by eliminating sharing via its affine type system. I have only just started exploring Rust but I'm very impressed, it seems the perfect blend of theory and practicality so far.


To be clear, Haskell absolutely supports shared mutable state, even in sophisticated ways like transactional memory.

It's also mostly straightforward to implement a fast Quicksort using mutable arrays in Haskell.

The difference is that all that stuff is formulated within a theory based on pure evaluation of functional expressions.

Loosely speaking it's like how the math that physics uses doesn't have mutable variables, but it still manages to express time varying phenomena...


> It's also mostly straightforward to implement a fast Quicksort using mutable arrays in Haskell.

Do you know of one? All the fast array-based quicksorts I have seen in Haskell have been very awkward compared to implementations in imperative languages with good array support (and they have been quite slow as well).

I don't feel like this is much of an indictment against Haskell. Sure, a functional language isn't a good fit for a very imperative algorithm, but that's not really its target domain.


I wrote a pretty concise version of it a while ago to convince myself it was possible [1]. It isn't as short as the naive (incorrect) quicksort using lists, but every line has a very clear purpose.

Most implementations on Rosetta code in other languages [2] seem to be about as long.

[1] https://gist.github.com/harpocrates/bbed7b6837d524aafa02

[2] https://rosettacode.org/wiki/Sorting_algorithms/Quicksort


Thank you, that is a very nice implementation! I think the key is that the vector library is very well designed - it has certainly been the cause of the majority of my Haskell performance successes. I'll say that your implementation benefits from being able to access partitioning as a library function - the actual implementation of unstablePartition is not particularly pretty (but it is conceptually simple).



I can't find a quicksort implementation on the other side of that link.


Sorry, I meant to say you could pretty easily make quicksort using the techniques in that link.


> Loosely speaking it's like how the math that physics uses doesn't have mutable variables

I'm curious, can you elaborate on this? If you view time as a discrete dimension, mutable state is clearly not necessary. But since we cannot travel through time (yet), nature seems quite mutable in practice. For example, when microwaving food, it is extremely hard to put it back to the same temperature gradients.

Similarly with functional programming: if you adhere to it purely, provenance of every state can be traced back. If not, state changes are non-trivial to reverse. So it seems like modelling physical phenomena with immutable states is a choice of perspective rather than a law of physics that can be empirically tested.


Nature is mutable, but we can describe it with mathematical languages that aren't based on mutation.

In some sense I'm saying that Haskell's immutability is superficial, just a semantic choice of perspective...


I think that's a basic misunderstanding of mathematics and physics that is often used as a justification for functional programming. Formulas in physics often reference implicit state such as the time - which is exactly what imperative programming languages do.

I'm agreeing with you about the difference of perspective.


But they don't typically assign a new value to the time variable. It's not part of the normal language of math to have a formula that changes a variable...

Time is more like a function variable or implicit parameter, not a piece of mutable state in the imperative programming sense. Such implicit parameters are also part of Haskell, by the way...


I randomly open Feynman's Lectures on Physics and see

E (between the sheets) = sigma/epsilon0

E (outside) = 0

assigning two different values to the variable E.

In applied math, there are many expressions in which a variable y is treated as a value (depending on implicit parameters) or as a function y(t) depending on explicit independent variables. This is a matter of convenience, not principle.


I don't understand the context of that E example so I'm not sure what it means... Is the value of E being mutated as in an algorithm, or does it just have two different values for different inputs like a function?


I can't find that at feynmanlectures.info, but I think it's describing the electric field in an idealized capacitor -- so yes, a function, as you'd expect. I don't know where the idea comes from that you write basic physics as 'twere Fortran programs.

If Feynman had used something like imperative programming in the Lectures, I wouldn't have been as baffled as (most of?) the rest of a class of physics undergraduates confronted with "LET I=I+1" in the introductory programming class. So I have some empirical evidence from the time when your first computer might be a PDP-10 down the road from Tony Hoare, that imperative programming is really unnatural for physicists.


You missed my point. Probably I expressed it poorly.


In a physics lecture, the audience is reasonably smart but slow, so you can use compact natural-language-flavored instructions, e.g. "as we take the limit of x to zero, expression(x) goes to c", which change values of variables. In programming, the agent being addressed is analogous to Turing's dumb schoolboy: it's very literal and needs easy-to-interpret instructions for each step, so "c = trytocomputelimit(f, x)". In both cases, the state of the calculating agent changes as it follows the steps of the calculation.


In defence of physicists, a few references are needed for these assertions, like them associating mathematical functions with a stateful "calculating agent".


I gave a reference to Feynman's note. The "stateful calculating agent" is the physicist or student.


The universe can be modeled at least as naturally as a composition of functions of type (Time -> a) as a mutation upon a global mutable state.


As a Haskell programmer who's done a little bit of Rust, I agree.

In Haskell, we do have an escape hatch in the form of the ST monad, which allows you to write an imperative algorithm with mutable state and call it directly from pure code. This is safe (i.e. it doesn't break referential transparency) because there isn't any way to introduce nondeterminism.

ST is more restrictive than the IO monad. You can't do file I/O or use concurrency. (You can use the same parallel evaluation tools that are available in pure code.) So, pure Haskell code plus ST is definitely still more restrictive than Rust safe code, but the difference isn't quite as large as it might appear at first. And sometimes Haskell's added restrictions are a good thing, if that's what you want.
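
A minimal sketch of what that escape hatch looks like (sumST is a made-up name): the body reads like an imperative loop over a mutable reference, but runST hands the result back to pure code with a pure type.

    import Control.Monad.ST
    import Data.STRef

    sumST :: [Int] -> Int
    sumST xs = runST $ do
      ref <- newSTRef 0                          -- local mutable cell
      mapM_ (\x -> modifySTRef' ref (+ x)) xs    -- imperative-style accumulation
      readSTRef ref                              -- only the final value escapes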

It is pretty interesting that if you start with the premise that sharing + mutability is bad, you can go either in the direction of Haskell and disable mutability (and then re-enable it in some contexts that are known to be safe) or the direction of Rust and allow sharing and mutability, just not at the same time. Both seem to be good strategies that will appeal to somewhat different audiences.


I think it is important to understand the practical restrictions the languages are created for. You don't use Haskell for space/latency-restricted systems, but (as much as I admire Rust) Haskell is easier to program with in (most of) the areas that don't have these hard restrictions. Haskell is concise and short but still performant. Haskell, after all those years, still has potential. Haskell is simple, but not always easy, to reason about, especially as a beginner ;). Every tool has its up- and downsides. While the ST monad has its restrictions, these restrictions contain much power in the hands of compiler engineers because of the guarantees you have.

I am a strong proponent of a "no free lunch theorem" (control vs. optimisability) for programming languages and don't like the abundance of multi-paradigm languages, because they end up being no good at anything. Just look at Java 8's horrible lambdas and pseudo-FP: it's not low-level, but also not a great high-level.


Yes, I've noticed that as well. That's why there was even a suggestion to remove `mut` - because the compiler can figure out which things should be `mut` and which shouldn't.

Functional programmers had a meltdown, but I wonder if that wasn't actually a better choice.


I see it as part of the documentation - sure the compiler doesn't need it, but it's useful to me.

(Although I say that as someone who is interested in Rust and learning Rust, but hasn't yet written any substantial code in it)


Documenting and enforcing important intentions is super useful, and is all over the place in statically checked languages.


I don't understand why, in this situation, they don't just make it a compiler option. What are the downsides to options like this? Confusion among Rust users when perusing another code base?


The downsides of compiler options that modify the language is that code can no longer be understood on its own. Most languages stay away from these for that reason.

One notable exception is Haskell, which uses pragmas to modify the language. At least these pragmas are part of the source code, but they absolutely need to be included if you want people to understand a snippet of your code. Take a look at Haskell questions on StackOverflow to see what I mean.
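
(For anyone who hasn't seen them, the pragmas are just a header block at the top of each file, so the set of extensions in play is at least locally explicit, e.g.:)

    {-# LANGUAGE OverloadedStrings #-}
    {-# LANGUAGE LambdaCase        #-}
    module Example where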

It's not something you want to see in an industrial-strength language because it basically turns your language into factorial(n) languages, where n is the number of pragmas.


If a language will have N number of features I'd rather have some behind pragmas than not. Let's say by some measure GHC Haskell and C++ have the same number of features. C++ will always be N factorial, unless I read the entire source code to see that certain features are not being used. In Haskell I can look at the top line and easily see that many features are not being used.

An interesting alternative (one which a C++ compiler could employ) is to allow pragmas that disable features. E.g. I could quickly see that a C++ file is not using macros. This is the wrong default though. Features should be opt-in rather than opt-out, to encourage a minimal use of features.


> It's not something you want to see in an industrial strength language because it basically turns your language in factorial(n) languages, where n is the number of pragmas.

I don't see this as an issue, because here it's simply an intrinsic part of the "language". Nobody writes the core language consisting of just some ~6 primitives; everything on top is syntactic sugar, and language-extension pragmas are just the same principle applied to the more obscure/experimental (but in no way unstable/less robust) features.


> Nobody writes the core language consisting of just some ~6 primitives

https://en.wikipedia.org/wiki/Lisp_(programming_language)


A discussion on minimal forms is here:

http://www.faqs.org/faqs/lisp-faq/part1/section-6.html

"Nobody" implements practical Lips with just six special forms. To get a ridiculously low count, you have to leave a lot to uncounted library function. Such as, oh, any numeric support. Programmers want everyday things like (+ 2 3.0) to be implemented as primitives.


Note that to _use_ an already written Haskell module, I don't need to know what extensions it uses. I don't even need to use the same ones. Haskell's LANGUAGE pragma is analogous to Rust having a bunch of experimental features you can turn on - they extend the language in a particular direction in a natural way.


Haskell LANGUAGE pragmas are like "from __future__" in Python and various other equivalents in many languages.


Obviously it fragments the language in a way that's harmful for human readers as well as software like compilers, editors/IDE plugins, and so on. How to know which variant of Rust you're reading? As you add more options that change the syntax and semantics of the core language, these difficulties will be compounded.

Maybe a bit of fragmentation isn't the end of the world, but I think you'd need a really good argument in its favor and in this case frankly I don't see one. Additive extensions/features (like GADTs in Haskell) are an easier sell for me because a) they actually add something useful and b) they generally don't change the core language, but rather sit on top of it.


There's probably less opportunity to make an error when most things are immutable by default. I'm not sure if that's the case or not.


The thing with functional languages is that most imperative languages can "do functional things", but not vice versa.

For example, many algorithms are beautiful when expressed in a recursive manner (Fibonacci, for example), and pretty much all languages permit one to do recursion (even old-fashioned C).

But many algorithms are much more elegant when expressed through for loops. For example, if one needs to iterate through lines while taking state into account, a

for (int i =0;i<len;i++){ if(lines[i] == "a"){ i+=2; } }

seems much cleaner than an equivalently functional algorithm.

-----

EDIT.

I (purposefully) left out the "main" logic (as it's not really relevant to the post).

Of course in a "real" example, the code would look like: for (int i =0;i<len;i++){ if(lines[i] == "a"){ i+=2; } else{ parseLine(lines[i]); } }

The goal is to skip the line after a line starting with the letter "a".


> for (int i =0;i<len;i++){ if(lines[i] == "a"){ i+=2; } }

I think you made a typo. This doesn't do anything. If that's the case, a functional formulation would have actually made it quite obvious.

But here's a way to do it in Haskell assuming you didn't make a typo.

    f i xs
      | i >= length xs = i
      | xs !! i == 'a' = f (i + 2) xs
      | otherwise      = f (i + 1) xs
There is almost certainly a more elegant way to write your algorithm if I understood what you were trying to do, but here's sort of a "worst case" direct translation.

EDIT:

In response to your edit, a substantially more elegant functional approach looks like

    map parse . filter (not . isPrefixOf "a")
I think you ironically chose a task that functional programming is particularly better for.


    > map parse . filter (not . isPrefixOf "a")
Almost! You have to skip two lines.


True. OK, let's start with the OP's goal, not his code (which appears to have been consistently incorrect, conveniently and humorously contradicting his intended point).

> The goal is to skip the line after a line starting with the letter "a".

OK, so we want the following:

    ["x", "y", "a", "skip", "z"] -> ["x", "y", "a", "z"]
Here's a non-clever but still more elegant (and correct) way:

    f (x:y:zs)
      | ('a':_) <- x = x : f zs
      | otherwise    = x : f (y : zs)
    f zs = zs
I guarantee you that there are multiple zippy one-liners that do the same thing, but I'll leave it at that for now. It's already a huge improvement, and does what it purports to!


('a':_) <- x = x : f zs

What's the `<- x`for? I know some Haskell and I've never seen that syntax. And `('a':_) = x : f zs` should be enough I think. Also I'm pretty sure each line is supposed to be a string, so it should be `("a":_) = x : f zs` :)


That's pattern-guard syntax. The bits between | and = ("the guard") are tried in turn, and the stuff to the right of the = is done for the first that succeeds. This can also introduce new bindings, although it doesn't here.

To do this directly without guards, it'd be:

    f (x@('a':_):_:zs) = x : f zs
    f (x:ys) = x : f ys
    f [] = []
Not obviously better, not obviously worse.


Thanks, I get it now! I know how guards work, I've just never seen a construct like `('a':_) <- x` (and misunderstood the task a bit :). My understanding is that it's basically inline do-notation, is that correct?

On a sidenote. I've seen code like `f (x:ys) = x : f ys` many times, for example in definitions of map or filter. It always seemed to me like we're wasting a lot of breath typing that - deconstructing the list with pattern matching, then reassembling it again, slightly changed. I feel like there's a general pattern here that could be factored out, do you know of anything like that?


> My understanding is, that's basically inline do-notation, is that correct?.

Not really. They're superficially similar, but translate differently.

Do-notation desugars `x <- y` to `y >>= \ x -> ...`, whereas the pattern guard is more like 'case y of x -> ...; _ -> ...'. Most importantly, there is not necessarily any monad involved for pattern guards - `5 <- 5` is legitimate. `5 <- 5` is nonsensical in do notation (without some awfully weird instance of `Num`).

> On a sidenote. I've seen code like `f (x:ys) = x : f ys` many times, for example in definitions of map or filter. It always seemed to me like we're wasting a lot of breath typing that - deconstructing the list with pattern matching, then reassembling it again, slightly changed. I feel like there's a general pattern here that could be factored out, do you know of anything like that?

The general pattern, for map and filter, is a fold.

    Prelude Test.QuickCheck> let map' f = foldr (\ x -> (f x:)) []
    Prelude Test.QuickCheck> let filter' f = foldr (\ x -> if f x then (x:) else id) []
    Prelude Test.QuickCheck> quickCheck $ \ (Blind f, xs) -> map f xs == map' f xs
    +++ OK, passed 100 tests.
    Prelude Test.QuickCheck> 
    Prelude Test.QuickCheck> quickCheck $ \ (Blind f, xs) -> filter f xs == filter' f xs
    +++ OK, passed 100 tests.


>for (int i =0;i<len;i++){ if(lines[i] == "a"){ i+=2; } }

This algorithm can, without modification, be done in a functional language. For example, consider the following Haskell:

  import Control.Monad.Loops (whileM_)
  import Control.Monad (when)
  import Control.Monad.State

  for_ :: (Monad m) => m () -> m Bool -> m () -> m () -> m ()
  for_ init guard step body =
      init >>
      whileM_ guard (body >> step)

  loop :: [String] -> State Int ()
  loop lines = for_ (put 0)
                    (gets (< length lines))
                    (modify (+1))
                    (get >>= \i -> when (lines !! i == "a") (modify (+2)))

I did not use do notation to avoid accusations of cheating, and I am defining for_ here because Haskell's standard library for loop only iterates through data structures (like the for loop of many imperative languages). I will also admit that the Haskell version is ugly due to its verbosity (because, even though it is possible, it is not how you are supposed to do things in Haskell).

EDIT: Made for_ generic and added type signature for loop.

If you want multiple "variables" in this approach, you can define a datatype to store them:

    data S = S { i :: Int ... }
In this case, we can access i by replacing "get" with "gets i" in the above example ( "i" being an accessor function that Haskell automatically defines when we create a datatype with a field called "i"). You can imagine the generated value "i" being a more complicated datatype that can be both a getter and setter, in which case we can define "puts i" to be the analog of "gets i" and "put".

I think I have seen this approach done in Haskell but cannot remember what it is called.


> This algorithm can, without modification, be done in a functional language. For example, consider the following Haskell:

The OP clearly stated the iterative version is cleaner, and you seem to confirm his point.


Except it is not cleaner. It is just more familiar to you because most languages in use in the past 40 years have been influenced by Algol family of languages for their syntax.

Now the author is being modest here:

First, `for` is defined out of the simpler function `while`. You can do the same for the for loop in C of course, but only as a function call, and to pass an arbitrary body you would need to use a function pointer, which I argue is overwhelmingly more confusing. You may have different opinions on the merits of baking non-orthogonal features into the compiler, but surely you'll appreciate the clarity here.

Second, `do` notation is not used, which would have made it look more like an Algol-family language, with semicolons to delimit "statements" (really, monadic expressions) when they are on the same line (otherwise a line feed is enough), and fewer `>>=`s would appear.

Third, type annotations are given, which can always be inferred unless you are using advanced extensions such as Generalised Algebraic Data Types (GADTs), which are not used here. Also, fewer characters do not mean cleaner. For example, `loop`'s type annotation says it produces a `State` monad with `Int` as the state. This means that `loop` will not be connecting to the Internet, it will not be throwing an exception [0], etc. Even within the narrow effect of manipulating state, it guarantees it will only simulate mutation of a single variable of type `Int`. All of that guaranteed from a single line is a remarkable manifestation of clarity.

[0] I don't account for undefined and the problem of strictness, that's a discussion for another day.


Do notation isn't cheating! For instance, see these tips on writing readable Haskell:

http://www.haskellforall.com/2015/09/how-to-make-your-haskel...

One of the big ones is always using do notation.


Do notation is not cheating; I would not avoid it. IMO it's important to share awareness that Haskell can do procedural, imperative code just fine and even has nice sugar for it.


I could imagine the above being expressed very concisely and elegantly in a language like Haskell:

    f = sum $ map (\x -> if x == 'a' then 2 else 0)
I tend to think that as the accumulation condition gets more complicated, the for loop begins to be the clearer choice, but of course there's always a way to simplify into a number of map steps. This can lead to a bunch of allocation overhead for the intermediate lists unless iterators are used in languages where those are supported.

EDIT: Whoops, read and replied on mobile, missed the parseLine() call :/

EDIT EDIT: Looks like my code matched the parent as of when I read it, before the edit. wyager is correct that I messed up my point-free syntax


I think you want to replace the "$" with a ".".

But either way, you haven't done the same thing as the GP. It's not actually clear what the GP's algorithm is supposed to do, and I made the same reading mistake at first as well.

> I tend to think that as the accumulation condition gets more complicated, the for loop begins to be the clearer choice

You can just use recursion. It's very easy to translate a loop into a tight tail-recursive function in Haskell. The generated assembly code usually looks about the same.

In the most extreme case, you can just do a loop inside the State monad, which allows you to get (better defined) imperative semantics without mutability (although the compiler will often generate mutable-equivalent assembly when it sees that doing so is safe).

> This can lead to a bunch of allocation overhead for the intermediate lists

Haskell uses list fusion so this usually isn't a concern.

I transliterated the OP's example in another comment. It's a pretty straightforward recursive function of type e.g. "Int -> ByteString -> Int". Substitute ByteString with whatever constant-lookup data structure you want.


With Haskell this isn't an issue because lists are evaluated lazily, but what I was trying to get at and failed to make clear was that other languages which allow functional-like behavior can result in unnecessary allocation. For example in Python doing multiple maps/filters will allocate multiple lists unless you use the itertools versions in Python 2.7.


> With Haskell this isn't an issue because lists are evaluated lazily

Even more than that, the compiler will actually do things like replace

    map f . map g
with

    map (f . g)
To eliminate unnecessary allocations.


That's great. My greatest pet peeve is having to keep in mind the language gotchas when all I want to do is get it done.


Haskell has far fewer correctness gotchas than other languages. They stuck to consistent, predictable behavior - even when it made things tedious for years until new techniques were discovered.

Haskell does have performance gotchas, though.


I think the proper answer here is that Python 2.x should be treated as legacy. Python 3.x returns generators for map/filter by default.


Your function doesn't do the same thing his does -- his skips comparing some of the lines because he's selectively incrementing the loop counter.


I think that's a perfect example of why a functional implementation is superior here. The index is being mutated in two different spots (i++ and i += 2) and it's easy to miss the fact that the i += 2 isn't just a running sum but actually impacting the future traversal.

In Haskell you'd do something like:

    skip [] = []
    skip ("a":xs) = skip (drop 2 xs)
    skip (x:xs) = x : (skip xs)
Which makes it super clear that you're skipping over items in the list. The parsing then simply becomes:

    map parseLine (skip lines)
Which is at least an order of magnitude clearer and doesn't mix a bunch of unrelated logic together.

Even worse, the original author says the goal of his code is to:

> The goal is to skip the line after a line starting with the letter "a".

But he actually skips two lines after an "a". The above Haskell code makes it clear that we're skipping two lines with "drop 2".


Note that you are using lists, not arrays. Lists admit a simple recursive definition at the type level, which helps writing recursive code. But if you had to use arrays, how would you do?


The main takeaway from my example, I'd hope, wouldn't be that recursion is awesome (though it is). It's really trying to highlight that separation of concerns, purity, and building functionality through composition of smaller pieces can lead to beautiful and simple code. And while this is possible in an imperative style, the functional style tends to encourage this behavior more consistently.

Using lists and recursion just further lends itself to this style.

In the OP's example, he's simultaneously tracking loop state, figuring out which lines to parse, and parsing those lines. It's a very imperative way to do it. I'd prefer to look at the problem through another lens:

    Q1: What is he doing?
    A1: Parsing a subset of lines.

    Q2: How do you determine the subset of lines?
    A2: Skip lines that are equal to "a" and two lines after that.
That's what led to the two sections of my code. Concerns have been separated.

If I was forced to use arrays and, furthermore, work in an imperative language, I'd still have one function to parse the lines and one function to figure out which lines to parse.

In Python, first compute which indices are of interest:

    def indices(xs):
        i = 0
        while i < len(xs):
            if xs[i] == "a":
                i += 3
            else:
                yield i
                i += 1
And have a separate place for actually parsing each line:

    [parseLine(lines[i]) for i in indices(lines)]
This translates relatively nicely to Haskell as well, though definitely feels a little forced / unnatural:

    indices lines = indices' 0
      where indices' i
              | i >= length lines = []
              | shouldSkip        = indices' (i + 3)
              | otherwise         = i : indices' (i + 1)
              where shouldSkip = lines !! i == "a"

    [parseLine (lines !! i) | i <- indices lines]


> This translates relatively nicely to Haskell as well, though definitely feels a little forced / unnatural:

Thanks for the answer. I must say that I find the original problem quite artificial.


  import Control.Lens.Cons
  case uncons v of
    Nothing -> .. empty vector ..
    Just (head, tail) -> ..


Interesting, thanks.

Am I right that this will extract consecutive views of the original array as pairs of (item, slice) elements? I guess array elements need not be copied, but there will still be a slight memory overhead, right? Or is it some "zero-cost" abstraction?

I am not trying to attack the code for efficiency, just curious about what current implementations can make from this.


It shouldn't copy anything - I'd expect it to be optimized to not allocate any memory at all.


I wrote the comment before the parent made their edit so the logic did match what it was at the time.

Having said that it would've been a very simple extension to make it a recursive algorithm which includes the else case, and arguably would be clearer than the for loop solution.


His? Could be hers..


aduffy is Andrew Duffy, who at least on twitter, presents as male. Sometimes people do know other people's gender.


I wonder why I would be downvoted for pointing this out? The gender is assumed here, when we don't know. Why is this wrong to point out? I don't see anywhere in a Hacker News profile where one can indicate one's gender, so we can't assume what it would be.


Well, because of recent cultural changes you have several options here, but none are elegant/natural. You can put "he or she" everywhere, or use "they" instead, even though you mean a single person. Many modern writers explicitly use "her" consistently whenever a gender-neutral singular pronoun is needed. I upvoted you, but I perfectly understand why others would react differently - in today's world you can hardly do anything without touching politics, and we're just fed up with that.


> in today's world you can hardly do anything without touching politics, and we're just fed up with that.

This makes sense, and I appreciate your reply. I don't think however that it's anything new (or specific to this time period), and even being born means you will constantly be challenged to learn new things, or stay stuck. Also I am not sure about the 'political' sense of this.. I mean, people of different gender identities and beliefs have formed political groups, but I don't think it's a direct connexion.

(I do prefer 'they')


> or use "they" instead, even though you mean a single person

Using 'they' to mean a single person is valid though.

https://en.oxforddictionaries.com/definition/they


While I'm not a huge fan of functional programming, this can be done pretty easily with a fold. For example in Scala:

  xs.foldLeft(""){ (prev, cur) => if (prev != "a" && cur != "a") parseLine(cur); cur}
Although the result is pretty and shorter than your solution, it took me quite a while to figure out how to do it. Also, your code has a bug. You are skipping two lines after every "a". The code should be i += 1 instead of i += 2. <Insert lame joke about mutation causing bugs here.>


Here's your version translated to C#:

    xs.Aggregate("", (prev, cur) => { if (prev != "a" && cur != "a") parseLine(cur); return cur; });
Working demo: https://repl.it/EwaE


>Also, your code has a bug

Oops. You're right. Got spoiled by foreach loops lately :)


Most functional languages cannot do imperative things as smoothly as imperative languages, but there are very few functional languages that can't do them at all. OCaml, Haskell, and their derivatives can all do precisely what you listed above just fine. It's just a tradeoff to make one kind of programming easier at the expense of another (though perhaps not a necessary tradeoff).


> Most functional languages cannot do imperative things as smoothly as imperative languages

I don't even buy this for most definitions of "imperative things". The only algorithms that I've ever found unwieldy in functional programming are heavily array-oriented algorithms like random shuffles. Almost everything else comes out more elegant.


And even array-oriented algorithms can be quite nice, syntactically, if nice operators are defined for them[1].

But the FP community does not care that much about that kind of code, so nice operators are just not defined for them.

[1]: http://augustss.blogspot.co.il/2007/08/programming-in-c-ummm...


When I say "imperative things", I mean insisting on doing things in an imperative manner (for loops, mutation, etc.), not the problems themselves.


When transliterating code from distributed systems papers (uncomfortably imperative) to Haskell, I had an excellent experience with Lenses and the State monad. It allows you to reproduce imperative, mutable code in functional Haskell and without mutability. The semantics are also much more restricted (in a good way) than imperative code in a real imperative language. For example, you have to specify the entire state beforehand, which prevents you from falling for ultimately unproductive temptations like global variables. You can even do for loops if you want. http://hackage.haskell.org/package/base-4.9.0.0/docs/Data-Tr...

    for [1,2,3] $ \i -> do
      counter += i 
      print i

I wrote a bit about the experience here if you're interested. It was on HN a while back. http://yager.io/Distributed/Distributed.html


Not all functional languages are purely functional. (Common) Lisp can be described as a functional language, but it can do loops like those just fine.

A lot of imperative languages don't even have map or reduce; some don't even have proper first-class functions. The same argument goes the other way too.


> iterate through lines keeping state into account

You mean like a fold, the bread and butter of FP data processing?


I find it more interesting that this program has a bug even though there are just two pieces of code, not more than 10 characters apart that modify only one shared mutable integer.


I'm a Haskell beginner. But this is my attempt at solving your algo functionally in Haskell. Note there's no shared state.

fn ("a":lines') = fn (skip2 lines')

fn (l:lines') = (parseLine l) : fn lines'

skip2 = tail . tail

Gave spaces above for readability. The function "fn" pattern matches "a", then skips 2 elements of the array and recursively calls itself.

Haskell programmers can correct if the program is correct/sound. I just wrote this from my mind. Not near any computer now.


Oops forgot to add the base condition to the recursion:

    fn [] = []


    def parsedLines = lines.withIndex().filter{l, i -> i < 1 || lines[i - 1] != 'a'}.collect{parseLine(it)}
Do you really think that the imperative version is simpler than this small Groovy one-liner? I strongly disagree. And probably in a more functional language than Groovy you can do even better. And btw, you are not doing anything with the result of parseLine, while I am saving it in a new collection of parsedLines.


  > def parsedLines = lines.withIndex().filter{l, i -> i < 1 || lines[i - 1] != 'a'}.collect{parseLine(it)}
In that Apache Groovy sample, you put an `l` as the first parameter to the `filter` but don't use it anywhere, which is not simple for people reading it. The logic instead requires you to refer to the global input `lines`, which is better expressed imperatively.

And btw in Groovy that code would probably only work on ASCII data.


Really, is it so difficult to ignore one parameter when reading that one-liner? I could have used the underscore, but honestly, when I'm writing from the phone I don't usually care about small things that change absolutely nothing. At least my code doesn't have the bug that skips TWO lines after the "a", like the original code, and it actually saves the parsed lines in a new collection rather than throwing them away and doing nothing at all apart from burning CPU. But for some strange reason you still prefer that obscure, bug-ridden and useless imperative code rather than the self-explaining and declarative functional version. Enjoy your imperative bugs :)


Any code that's not tested has bugs. As for reading code, when I read yours I can't work out why you put

  .filter{l, i -> i < 1 || lines[i - 1] != 'a'}
instead of

  .filter{l, i -> i < 1 || lines[i - 1][0] != 'a'}
to skip the line after one that begins with an 'a', or

  .filter{l, i -> i < 1 || lines[i - 1] != "a"}
to skip the line after one that is an "a". I'd need to go run it to work out what it does.


What does this algorithm do? As far as I can tell there's only an effect if len is wrong, or the list has length 2^31-1 and ends with a.

Even if you move i out of the for loop, you would just get whether the list ends in "a".

Edit: Whoops, 2^31-2 is sufficient for the overflow and i's value at the end of the loop is more subtle than I gave it credit for.


This is a good example of the result of being inculcated into the von Neumann machine method of computing (small chunks at a time vs operating on a whole).

It's very familiar to most of us and it is not easy to find new points of view.


Cleaner, but there's mutation involved, which could bring side effects.


Mutation can cause bugs, but so can logic mistakes.


You can shoot yourself in the foot with any language, granted.

But I'd rather use languages that have fewer ways to shoot myself in the foot.


Logical errors can introduce bugs in any language. That doesn't mean eliminating a different class of errors isn't worthwhile.


   Or 

   sum(filter (lambda line: line == "a", lines)) * 2


I misread the comment. I'm incorrect here.


deleted, misread code


> for (int i =0;i<len;i++){ if(lines[i] == "a"){ i+=2; } else{ parseLine(lines[i]); } }

You must be one of those managers who like to dabble in coding! I don't even know what you wanted to do. Is that C++? That part `lines[i] == "a"` is string comparison, right? And `len` is the amount of lines, right? I'll use a list instead.

I'd do it in C# with LINQ but I'm at my folks, so I only have an iPad. Here's the biggest Haskell program I ever wrote in my entire career:

    dropTwoIf :: (Eq a) => a -> [a] -> [a]
    dropTwoIf _ [] = []
    dropTwoIf s (x:xs)
      | s == x    = dropTwoIf s (drop 1 xs)
      | otherwise = x : dropTwoIf s xs

    myList = [ "will pass", "will pass", "a", "will skip", "will pass", "a", "will skip", "a" ]
    main = mapM_ print (dropTwoIf "a" myList)
Result:

    "will pass"
    "will pass"
    "will pass"
Thanks for forcing me to actually do something bigger than Hello World or fib in this Haskell thing.

EDIT: Sure, it looks bigger than the for loop, but at least it has a name (but I'm not that good at naming) and it can be put in a library and reused. I'm pretty sure I could do MUCH better if I actually knew the standard library.

EDIT 2: Huh, figured out the type.


>You must be one of those managers who like to dabble in coding! I don't even know what you wanted to do. Is that C++? That part `lines[i] == "a"` is string comparison, right? And `len` is the amount of lines, right? I'll use a list instead.

It was more along the lines of pseudo-code (I couldn't be bothered to type the whole strcmp business or to look up the C++ array length method).


If your function, for the same input, gives you a different output, then you are in a dysfunctional world. Hence, don't put random() in the middle of your supposedly pure function.

  Remove state

  This is a functional version of the car race code:

  from random import random

  def move_cars(car_positions):
    return map(lambda x: x + 1 if random() > 0.3 else x,
               car_positions)


How, then, is randomness handled?


For example, a random number is passed as a parameter, so that the function can be kept pure.
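
A sketch of that idea applied to the article's car example (moveCars and the 0.3 threshold are just carried over for illustration): the caller supplies the random draws, so the function itself stays pure and is trivially testable.

    -- one pre-drawn roll per car; no hidden randomness inside the function
    moveCars :: [Double] -> [Int] -> [Int]
    moveCars rolls positions =
      zipWith (\r x -> if r > 0.3 then x + 1 else x) rolls positions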


Then how do you obtain such a random number that you use as a parameter? After all, it cannot be the result of a function, because then we'd be in a 'dysfunctional world'. Yet, it cannot be a constant either. I guess you'd have to handle it like file IO, that is, you are forced to introduce that impurity into your code.


"Functional" doesn't necessarily imply "pure", but some functional languages do offer great tools for reasoning about purity and state. Here is a simple example in Haskell:

  import Data.List (unfoldr)
  import System.Random (getStdGen, next)

  showTenRandoms :: IO ()
  showTenRandoms = do
    gen <- getStdGen
    let randoms = unfoldr (Just . next) gen
    putStrLn $ show $ take 10 randoms
We create the random number generator "in the `IO` monad", where non-pure actions are allowed. Then we pass it to `unfoldr`, which is a pure function (you can tell because it's in a `let` binding), to generate an infinite list of random integers. This works because when we use the generator to get a random number, it isn't mutated, but instead returns a random number and a new generator. The `unfoldr` call basically just iterates on that operation.

If you don't know Haskell, this code will probably look like nonsense, so you'll just have to take my word for it. I'd be happy to explain or clarify further.


The same way that purely functional languages do input/output. (think, how do purely functional languages even write to the console, if they are in fact pure?). Creating the initial random generator is an effectful operation much like writing to the console; it get's handled in the Main/root part of the program.

Understanding exactly how input/output in pure languages works involves understanding the distinction between the RTS (Run Time System, the thing that runs the effects) and your program (the recipe for what effects to run).


You use a generator that receives a state and returns a random number and the modified state. You'd want some additional abstractions to make that work well.
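
For instance, here is a hedged sketch using `next` from the classic System.Random API (the function and names are just illustrative): each call hands back a value together with the updated generator, and you thread that state along by hand.

  import System.Random (StdGen, next)

  -- Thread the generator explicitly: each call returns a value plus the
  -- updated generator, so nothing is ever mutated.
  twoRolls :: StdGen -> ((Int, Int), StdGen)
  twoRolls g0 =
    let (a, g1) = next g0
        (b, g2) = next g1
    in ((a, b), g2)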


The abstraction then becomes indistinguishable from a language feature, and you have come back full-circle to where you started. You call "random" without managing the state explicitly, except when you need to.

http://clhs.lisp.se/Body/f_random.htm


I might just not understand your meaning but I don't think what you're saying about abstractions becoming indistinguishable from language features is true. Here's a (pure) Haskell function that is passed a random number generator and "returns" an infinite list of random numbers:

  infiniteRandoms :: (RandomGen g, Random r) => g -> [r]
  infiniteRandoms = unfoldr (Just . random)
This just uses a standard function from the ubiquitous `Data.List` module (`unfoldr`) and the most basic functionality from the `System.Random` module. Nothing about that is "indistinguishable from a language feature", IMO.
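
For example, with the definition above in scope (plus `import System.Random` for `mkStdGen`), a fixed seed always yields the same numbers, which is exactly what purity buys you here:

  -- A fixed seed makes the "random" list completely reproducible.
  demo :: [Int]
  demo = take 5 (infiniteRandoms (mkStdGen 42))

Evaluating `demo` twice, or in two separate runs, gives identical output, because no hidden state is involved.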

In other languages, of course, this operation would look different and potentially have side effects depending on the libraries used.


> Nothing about that is "indistinguishable from a language feature", IMO.

You are right in this case.

The parent comment said "You'd want some additional abstractions to make that work well.". Maybe passing around a random generator explicitly is not what we want. We abstract things away because we want to focus on "what" some code does, not necessarily "how". Most of the time, I want my random numbers to be random; the case where I need to replicate a specific sequence of random events is actually rare. That's why "random" is available as a system facility and does not need to take the random generator's state explicitly ("There is a single, implicit, global random number generator", https://hackage.haskell.org/package/random-1.1/docs/System-R...)

My point is that hidden state is not bad when the explicit simulation of state is cumbersome to use. As an aside, that's also the reason I dislike Go's approach to error handling. There is so much praise about having everything visible, based on the claim that "there is no happy path":

https://www.reddit.com/r/programming/comments/30ad8b/why_gos...

> There is no happy path. This is a very common misconception which causes so many programs to be unreliable POSes... handling errors is very important, just as important as handling success. Hell, 99% of the time, it's not an error, it's just an alternate option. Hey, the file isn't there... that's not an error, it's just a different possible state of the universe. Error code is application code.

And yet, error handling code follows certain patterns that can be abstracted away (99% of the time!).
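
As a hedged illustration in Haskell (the names here are made up), the `Either` monad is one such abstraction: each step can fail, but the short-circuiting is written once, in the monad, instead of at every call site.

  -- Hypothetical example: parsing and validating a port number.
  parsePort :: String -> Either String Int
  parsePort s = case reads s of
    [(n, "")] | n > 0 && n < 65536 -> Right n
    _                              -> Left ("bad port: " ++ s)

  connectString :: String -> String -> Either String String
  connectString host portStr = do
    port <- parsePort portStr   -- a Left here short-circuits the rest
    pure (host ++ ":" ++ show port)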

Besides, in Go forums or blogs, you can spot code samples where "error checking is omitted for clarity", which I find quite representative of the kind of noise explicit error handling code introduces:

https://medium.com/@nate510/don-t-use-go-s-default-http-clie...

https://devhub.io/repos/kshedden-gonpy

https://github.com/alecthomas/mph

https://groups.google.com/forum/#!topic/golang-nuts/f-OH4nas...

http://stackoverflow.com/questions/37723883/fill-an-object-i...


> Maybe passing around a random generator explicitly is not what we want.

That's fair. What I often want is to pull a function out of my program and test it without a lot of work. This is easy if the function is pure.

> My point is that hidden state is not bad when the explicit simulation of state is cumbersome to use.

That depends on your goal. If your intention is to make state explicit, then by definition there has to be more code to make it so. Again, a preferable abstraction for this makes the boilerplate go away, while keeping everything pure. That added effort gives you testability but I know there are instances where that effort is not worth it.
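
As a hedged sketch of such an abstraction (using the State monad from the mtl package; the dice functions are purely illustrative), the generator is threaded implicitly while everything stays pure and seedable for tests:

  import Control.Monad (replicateM)
  import Control.Monad.State
  import System.Random

  -- The State monad threads the StdGen for us: no hand-written plumbing,
  -- and a fixed seed keeps the whole thing deterministic in tests.
  rollDie :: State StdGen Int
  rollDie = state (randomR (1, 6))

  rollDice :: Int -> State StdGen [Int]
  rollDice n = replicateM n rollDie

  main :: IO ()
  main = print (evalState (rollDice 3) (mkStdGen 2016))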


Discussion related to the same post from 2 years back:

https://news.ycombinator.com/item?id=8943356


I don't understand the couple of other posters who are dismissing this article. It is exactly what it claims to be: a practical introduction to functional programming. I'd also like to echo poster vvanders' sentiments about Rust and how it complements the functional style. I've experienced the same feelings, although I did it while working in Swift. And for those out there asking "why is FP better", I submit that it allows for fewer ways for me to shoot myself in the foot.


Here is the best written piece I know about the FP benefits (Hint: it is not only vs imperative):

https://drboolean.gitbooks.io/mostly-adequate-guide/content/...


>> runs the function on each item in the original collection and inserts each return value into the new collection.

Some poor schmuck new to FP will come around, read that, and scratch their head saying "isn't that iterating?"


I wasn't impressed with this the first time I read it, and my opinions haven't changed since then. A lot of the comparisons here are straw men, and a lot of the proposed solutions are unnecessarily hard to understand. I'm not going to repeat all my points, so here's a link to the previous criticism.

https://www.reddit.com/r/Python/comments/30w1bn/an_introduct...


Functional looks good for applying x^2 to a list of numbers, but when you need to do something a little more complex, it becomes hard to put it in a single line/lambda.


You decompose the complex task into smaller tasks and compose them together, the same way that you would in any other paradigm. You wouldn't expect to express a complex task in a single line in an OO program, for example.

    documents = filter(pdf, folder)
    summaries = map(summarise_document, documents)

This is pretty clear, despite summarising a document being a complex task.
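
A hedged Haskell rendering of the same idea, with stand-in definitions so it compiles (isPdf and summariseDocument are placeholders for the real work):

  import Data.List (isSuffixOf)

  -- Stand-ins for the real predicate and summariser.
  isPdf :: FilePath -> Bool
  isPdf = (".pdf" `isSuffixOf`)

  summariseDocument :: FilePath -> String
  summariseDocument path = "summary of " ++ path

  -- The complex task is just a composition of the simple pieces.
  summaries :: [FilePath] -> [String]
  summaries = map summariseDocument . filter isPdf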


I appreciate how nice the functional style is, but sometimes a for loop acting over a few local variables is simpler to read than a bunch of map's and lambdas. Does this look readable and nicer to you than a for loop?

    print pipeline_each(bands, [call(lambda x: 'Canada', 'country'),
                                call(lambda x: x.replace('.', ''), 'name'),
                                call(str.title, 'name'),
                                extract_name_and_country])
Sometimes it's easier to implement directly what you want done than to remember the name of the function that plucks a field from a list of records or something like that.


You seem to be obsessed with this idea that "functional programming" is a fancy term for squishing your entire program onto one line via gratuitous use of anonymous functions, which simply isn't true. I don't know what this snippet of code is supposed to do, and it's not very pretty, but guess what: it is just as easy to produce unreadable garbage in an "imperative style". I've seen a lot of both, but more of the latter.


Oh, it's good for so much more. Just take this refactoring of a functional test library of mine:

1. https://github.com/haf/expecto/commit/e4f2ce3cc2b2cf3c848c5e...

2. https://github.com/haf/expecto/commit/9f8380cdb17002e09ae169...

3. https://github.com/haf/expecto/commit/3fafd9ada4e932dcadc24a...

4. https://github.com/haf/expecto/commit/0c243c9eb781e7527b0bd2...

5. https://github.com/haf/expecto/issues/36

6. https://github.com/haf/expecto/commit/9284173599145a1bb08ee8...

Functional programming also makes for really smooth control flow, and that's a huge part of programming. In short, what it does is give you the ability to compose and reason about your code.


That's why one should use more than a single one-line function per program.


That's fine; that's not really one of the points of functional programming.



