Well, if your goal is to play Devil's Advocate, then here's one: FP makes simple things hard. Instead of simple for-loops you have to use recursion, if you want to do something as primitive as assigning to a variable you have to learn about monads, instead of control flow operators you have to use functions-as-values, and the type system is based on a foundational mathematical topic called lambda calculus.
Just to be clear: I'm a fan of FP, and I don't think it makes simple things hard. I just think that debunking the above would be illuminating.
That's a good devil's advocate position, but it's easy to counter. For-loops are replaced with maps, folds, and filters more often than recursion, and assigning to a variable is only a primitive because we were brought up that way.
Rather than saying "simple to hard", I'd say "familiar to unfamiliar."
Do maps, folds and filters become as natural as for-loops in time (if you have already been exposed to more traditional techniques)? They strike me as similar to regular expressions, in the sense that however long I spend using them, I will always have to think it through carefully when I encounter a new use.
By contrast, a for-loop is spelled out for me. They are certainly longer, but I guess they match my thought process more closely. I suppose the question is, do I think this way because of my prior exposure to for-loops, or would it be the same if I was exposed to functional programming first? I'm really not sure.
Yes, map, reduce and filter feel much more natural to me than explicit loops for their respective use cases. Let's try writing an example in English:
Give me the items from the collection foo for which the function bar returns true.
Create an empty collection baz and a counter i with a value of 0. Until i reaches the length of the collection foo, do the following, then increment i and repeat: call the function bar with the element of foo at position i. If bar returns true, append the element to baz. After the loop is complete, return baz.
The for loop is pretty awkward just to express in English, while the filter is straightforward enough that most non-programmers would have a pretty solid understanding of what they'll get.
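The contrast shows up the same way in code. A minimal Python sketch, with foo and bar as placeholder names carried over from the English versions above (the even-number predicate is just a stand-in):

```python
foo = [1, 2, 3, 4, 5]
bar = lambda x: x % 2 == 0  # stand-in predicate, purely for illustration

# Declarative: "give me the items from foo for which bar returns true."
filtered = [x for x in foo if bar(x)]

# Imperative: spell out the empty collection, the counter, and the loop.
baz = []
i = 0
while i < len(foo):
    if bar(foo[i]):
        baz.append(foo[i])
    i += 1

assert filtered == baz == [2, 4]
```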
Fascinating, thanks. I would have said that the for-loop is at least somewhat intuitive -- the step-by-step (lather, rinse, repeat) nature resembles the way most instruction sets are written. Not to mention flow charts, which most people have been exposed to at some stage.
Do you have any advice for languages and techniques for getting the necessary exposure to these concepts? I'm not in a position to use FP through work, so I'd need some easy intro and way to practice.
Ruby, especially the standard Array library. Look at methods that take a block, like select, each, and whatnot. Try writing (or rewriting) a toy program you have lying around and make sure you use those.
Don't thank us for exposing you to this just yet, though. :) You can use this pattern in some mainstream programming languages, but not all of them; typically it's not idiomatic in Java or C++.
I think there's still a gap between how you're thinking about it and how I am.
Yes, a for loop translates pretty cleanly to a flowchart. It's not hard to understand how it works. The problem is that I'm usually thinking about the result I want in the case of sequence transformations like map and filter, not the process of generating it.
Do you have any advice for languages and techniques for getting the necessary exposure to these concepts?
Functional programming tends to emphasize data flow and leave control flow as an (often hidden) implementation detail. So, I find it’s helpful to think of algorithms in terms of transformations of data rather than series of instructions.
For example, what values do I start with, and are they in (unordered) sets, (ordered) sequences, or something more intricate? Then, what do I want to end up with?
Now, how can I get from one to the other via a series of transformations or combinations of the data? Am I turning one set/sequence/whatever into another, operating on the individual elements? If so, that’s probably some sort of map operation. Am I combining the elements of a sequence somehow? If I’ve already got a sequence, that’s probably a reduce or something like it. But what if I’ve only got an unordered set? Then maybe I need to impose some ordering first. Can that order be arbitrary, or is there really some implicit order that gets me to working with a well-defined sequence? Maybe I need to combine multiple sequences somehow. If each element in one sequence corresponds to a single element in the other, that’s probably some sort of zip operation. If it’s a many-to-many matching, that might be some sort of nested recursive algorithm that walks each sequence one inside the other, or maybe I can generate an outer product so I’ve got a one-dimensional list of all possible pairs and then do something simpler with that.
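That decision process can be sketched in Python; the prices/quantities data here is made up purely for illustration:

```python
from functools import reduce
from itertools import product

prices = [10.0, 20.0, 5.0]
quantities = [3, 1, 2]

# One sequence into another, element by element: a map.
with_tax = [p * 1.2 for p in prices]

# Pairing corresponding elements of two sequences: a zip.
line_items = list(zip(prices, quantities))

# Combining the elements of a sequence into one value: a reduce (fold).
total = reduce(lambda acc, pq: acc + pq[0] * pq[1], line_items, 0.0)

# A many-to-many matching: generate the outer product to get a flat
# list of all possible pairs, then do something simpler with that.
all_pairs = list(product(prices, quantities))

# An unordered set becomes a sequence by imposing an order.
ordered = sorted({5.0, 10.0, 20.0})
```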
After a while, you get to recognise basic underlying data structures like sets, sequences, and trees, and you get to recognise which common operations make sense for each type. You can map just about anything, but to reduce you need a sequence. You can make a sequence from a set by imposing an arbitrary order if necessary. You can make a sequence from a tree in several ways, for example by traversing it in depth-first or breadth-first. And so on.
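For instance, turning a tree into a sequence in those two common orders might look like this in Python (the nested-tuple tree representation is just a minimal stand-in):

```python
from collections import deque

# A tiny binary tree as nested tuples: (value, left, right); None is empty.
tree = (1, (2, (4, None, None), (5, None, None)), (3, None, None))

def depth_first(node):
    """Yield values in depth-first (preorder) traversal."""
    if node is None:
        return
    value, left, right = node
    yield value
    yield from depth_first(left)
    yield from depth_first(right)

def breadth_first(root):
    """Yield values level by level, using an explicit queue."""
    queue = deque([root])
    while queue:
        node = queue.popleft()
        if node is None:
            continue
        value, left, right = node
        yield value
        queue.append(left)
        queue.append(right)

# Once it's a sequence, the usual map/filter/reduce tools apply.
assert list(depth_first(tree)) == [1, 2, 4, 5, 3]
assert list(breadth_first(tree)) == [1, 2, 3, 4, 5]
```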
Sometimes you’ll need to do something that isn’t a convenient, ready-made transformation like a filter or zip, and at that point you start thinking about using recursion to traverse the relevant data structure(s) “manually”. Once you’ve done that a few times, you might start to see new kinds of recurring logic within your code, which you can abstract away into their own higher order functions to join the standard toolbox next to fold and friends.
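A sketch of that last step in Python: after hand-writing a sum, a max, and a flatten over the same tuple-tree shape, you might notice they share a skeleton and factor it out into your own fold (the tree representation here is hypothetical):

```python
def fold_tree(f, acc, node):
    """Reduce a (value, left, right) tuple tree; None is an empty tree.
    This is the recurring traversal logic, abstracted once."""
    if node is None:
        return acc
    value, left, right = node
    acc = f(acc, value)
    acc = fold_tree(f, acc, left)
    return fold_tree(f, acc, right)

tree = (1, (2, None, None), (3, (4, None, None), None))

# The formerly hand-written traversals become one-liners.
total = fold_tree(lambda acc, v: acc + v, 0, tree)       # 10
biggest = fold_tree(max, float("-inf"), tree)            # 4
as_list = fold_tree(lambda acc, v: acc + [v], [], tree)  # [1, 2, 3, 4]
```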
You can get into all kinds of fun and games with representing effects and controlling the interactions between them as well. However, I’d suggest that if you can start to think of simple problems in terms of things like transforming and combining structured data and then move on to implementing more advanced algorithms by combining the simpler building blocks, that’s probably the best way to get your feet wet without getting out your depth in the early days.
They do, and I started with QBasic. Actually I find them more natural, and find for-loops cumbersome. Fold took the longest.
I think how well things stick is a function of how much you think the effort of learning it is worth and how much time you spend on it. Regular expressions never stuck for me either; I keep having to relearn them. If you are not intent on picking up a new paradigm, or are not addicted to being constantly confused (me), then you rarely escape that valley.
map and filter feel quite natural to me, and superior. I like FP but I don't consider myself an "FP person," if only because I don't spend most of my time in it, nor am I particularly well versed in any FP language.
Python occupies an interesting middle ground which might help you see what I mean. I vastly prefer
filtered_xs = [x for x in xs if x.foo(bar)]
to
filtered_xs = []
for x in xs:
    if x.foo(bar):
        filtered_xs.append(x)
If you think of filtering and mapping as trivial operations, it seems like a waste to spend so many words on it as in the latter case. It feels tedious, like filler, since I have to explain how to iterate over a list every time I write a loop. When you can express it crisply and just as precisely in a single line as in the former, the reader is free to occupy their mind with the real work. When you can deal with a higher level of abstraction without introducing a lot of complexity, verbosity, or confusion, I think that's a win.
FTR, two mainstream languages that frequently support this style are Ruby (see Array) and, depending on what you're using, JavaScript. If you've played with jQuery, f'rex, you can see this pattern there. (And even monad-like behavior!)
I found maps and filters massively more intuitive than for loops. To be honest, I'm still more likely to screw up a for/while/until loop than I am to mess up using map/filter. Then again, I have a Math background, and Functional Programming is much closer to how Mathematicians think than Imperative Programming IMO.
To be fair, those become simpler once you approach things from a constructivist-theoretical point of view. So it's a matter of which you're familiar with. (Dammit, beaten!)
But I have a big problem with any statement of the form "reasons not to learn X". (Or, for that matter, the idea that imperative and functional styles are two different beasts, and never the twain shall meet.)
I do think which you've spent more time with has a big influence, but I'm not sure it's purely an issue of where you started in programming. Procedural programming, especially for beginners, benefits from fairly widespread familiarity with procedures in a more general sense: recipes, flow charts, mechanical devices, job protocols, assembly lines. So procedural programming can seem natural as just the computer version of these existing ideas, whereas functional programming tends to attach itself more to a mathematical rather than mechanical description of the world.
You're right - we start with that analogy, presumably because it's more concrete. But I'd argue that basis is only popular by convention (and in any case, it breaks down soon enough). SICP-like intro courses start with expressions being combined, abstracted, simplified and evaluated, and steal intuition from (and here you're very right) math. They do just fine - I think that's just as accessible; it's just that the former is preferred because it's less abstract. (And, to be fair, I'd rather spend my time learning how programming a computer is different from what I already know - the point, in the end, is to learn what programming is all about, and not just what it's like.)
But what was the original point? Programming can be as abstract or as nitty-gritty as you like, and I'd rather have a feel for the breadth of programming, and more bits and pieces in my toolkit of expression, than limit myself just because I won't be earning big bucks directly from it.
An interesting pattern in all those objections is that they're focused on methods, not goals. It's a little like saying that getting receipts emailed to you makes organizing them harder than getting paper copies, because you have to print them out and stick them in a filing folder.
My biggest reason is that I have to weigh learning an entirely new paradigm versus bettering myself in a paradigm I already know that's currently making me money. In short, I need it to be worth it.
You probably think you know how to write code in which you Don't Repeat Yourself. Haskell will show you that you are quite wrong, that there are any number of repeating patterns you currently don't even see, and how you can factor them out too. It will also give you a fantastic workout in learning how to separate logic from code that does stuff. This definitely can have a very positive impact on the rest of your code.
Even if you only treat it as a workout like lifting weights, it can still be worth it.
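The "separate logic from code that does stuff" lesson carries over even to a language like Python. A minimal sketch (the file name and the summary format are made up for illustration):

```python
# Pure logic: no I/O, no hidden state. Trivially testable.
def summarize(measurements):
    count = len(measurements)
    mean = sum(measurements) / count if count else 0.0
    return f"{count} samples, mean {mean:.2f}"

# "Code that does stuff": a thin imperative shell around the pure core.
# (data.txt is a hypothetical input file, one number per line.)
def main():
    measurements = [float(line) for line in open("data.txt")]
    print(summarize(measurements))
```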
The downside is, to really see this you need to try to do a couple of real projects in Haskell. Maps & folds & monads are the words of the language, not fluency. I think a great deal of the "this is useless!" has come from people taking the first mile of the thousand mile journey and wondering why nothing's different yet, and why did I bother taking this mile anyhow?
Getting a good handle on FP has been one of the best things I've done to better myself in the paradigm I get paid to write code in by day.
I haven't personally found a whole lot to warrant the "functional programming as universal panacea" meme that's been so popular lately. But most decent imperative languages have pretty good support for many features of functional programming nowadays, and spending some time working in a functional language will teach you to make better use of those features of your native one. To that extent, polishing my FP skills has led me to like my by-day language (C#) even better.
Even failing that, learning new ways to think about and code solutions for problems can only improve your ability to think about and code solutions for problems.
I've learned a bit about FP (mainly by reading books on several FP languages, and attending the SF Bay Area Functional Programmers group back when it was still meeting).
However, I have never seen a reasonable way to use a true FP language in my work. Most of what I do is inherently stateful: at the core are iterative algorithms that update vectors of state variables with small corrections until a problem converges. This is a really hard use case for a pure FP language (i.e., one that "takes away assignment") to shine in.
That said, learning FP has influenced the way I write the code that pumps numbers into the core update algorithms. Think using PURE and ELEMENTAL functions in Fortran, for example. And of course scientific programming has a rich history of passing functions in to general numerical algorithms. I just don't use closures, or create functions on the fly, or any of that other FP goodness.
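That influence can be sketched in Python: the driver stays iterative and stateful, but the update step is a pure function passed in as an argument, in the spirit of PURE functions in Fortran. The Newton-iteration example is made up for illustration:

```python
def iterate_until_converged(step, state, tol=1e-10, max_iter=1000):
    """Core iterative driver: repeatedly apply a pure update function
    to the state vector until the correction falls below tol."""
    for _ in range(max_iter):
        new_state = step(state)
        if all(abs(a - b) < tol for a, b in zip(new_state, state)):
            return new_state
        state = new_state
    raise RuntimeError("did not converge")

# A pure update step: Newton's method for sqrt(2), applied elementwise.
def newton_sqrt2(xs):
    return [0.5 * (x + 2.0 / x) for x in xs]

roots = iterate_until_converged(newton_sqrt2, [1.0, 3.0])
# Each entry converges toward sqrt(2).
```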
To me at least, the endgame for any complex system is that you end up with a large number of modular, decoupled components. This endgame looks more or less the same today in both the functional and object-oriented paradigms.
The difference is how you get there. In functional programming, you start with modular, decoupled components. With OOP, you make a mess and you 'refactor' into somewhat modular, somewhat decoupled components later.
In OOP I start with modular, decoupled components. And to answer the other popular complaint about OOP, the vast majority of my classes are not stateful. Just like the preferred practice in functional programming, I try to introduce state only when practicality demands it.
Making a mess and then (maybe) refactoring later is a regrettably common practice among many object-oriented programmers. But it's not a fundamental or necessary approach. It's just a bad habit that refuses to die. It might die faster if people who've been taught better practice within the functional programming sphere would quit suggesting that everything outside the (supposed) ivory tower is irretrievably corrupt and barren.
And while we're at it, 'functional' and 'object-oriented' are orthogonal characteristics. There are a whole lot of object-oriented functional languages out there.
And if there were less effort going into suggesting that functional programming is somehow antagonistic to the rest of the programming industry, or that there's this insurmountable valley between functional or procedural programming, I think that could happen.
Originally several of the courses at my alma mater were being taught in Scheme. (I'd personally have picked ML, but that's a different argument.) Several years back they dropped it, leaving Java as the department's one official language.
The rationale was that Java's the marketable language, and that Scheme didn't have anything to offer in that department. Now the first is self-evidently true. The second is patently false. . . but it's a falsehood that I think is perpetuated most vigorously (if unwittingly) by functional programming advocates. Because they perpetuate the myth that everything one might learn from working in a functional language is the exclusive territory of functional programming. Speaking as someone who used to be in the habit of chalkboarding procedures in LISP-like pseudocode before implementing them in C, I can say that isn't even true when the imperative language has virtually no language-level support for functional constructs.
And I think taking Scheme out of the intro classes in particular also harmed the students' object-oriented programming skills. Because it meant they had to be introduced to objects almost immediately. Object-oriented programming isn't really a fundamental programming technique; it's a more advanced, architectural software engineering technique. Introducing it to students before they've even got the basics down leaves too much opportunity for them to be confused about what object-oriented features are actually for, and how they should be used. Better to learn how to write organized code first before learning new and exciting ways to disorganize it with contrived inheritance hierarchies and the same routines copy-pasted into a gajillion different classes and suchlike.
So we've got an entire generation of programmers being taught in a cocked-up way. And I think if something like SML were to get traction as a good introductory language then FP could help fix that. Procedural programming certainly can't, because the only good non-OO procedural language nowadays is C, and pointers and manual memory management are also complications that are best saved for later. (And it - not C++, which is too high-level - does need to come later. Don't get me started on kids these days not understanding how memory management really works.) But FP isn't well-positioned to step in, because a bunch of people who don't really know or appreciate what's good in FP think that knowing about FP isn't useful if you can't find a day job working in Haskell, and rather than trying to disabuse them of that notion, the FP community's party-line response is to say, "Yes, that's right!"
You've probably heard it before but I'll say it again: learning functional programming will make you a better programmer, regardless of what you use in your day job.
If you measure only dollars in your bank account and time spent, it probably will not be worth it. If you add in the enjoyment and enlightenment it brings, and the new career options it might indirectly provide you with, it will be worth it. At least to me, learning Haskell and Lisp has been one of the best things I've invested time in.
I write my day job code almost exclusively in C. But for some in-house tools we have used Haskell and gotten better results faster than any other tool I can think of could provide. Certain problems are a very good fit for the functional paradigm. In contrast, many problems are not a good fit for the imperative paradigm but we use it anyway, because our computers work "imperatively".
A few people have told me learning functional programming will make me a better programmer, so that makes me more confident when I take the plunge. Thanks!
Well, not understanding functional programming means missing a big part of CS. I would think that, in all likelihood, if you understood FP you would be worth more financially ... unless you are already at maximal worth.
I think the divide between imperative and functional programming is largely an arbitrary one. If you write a for-loop where each iteration doesn't refer to another iteration, then the compiler should be able to swap between either notation on the fly.
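A small Python illustration of that equivalence (the squaring operation is arbitrary): when no iteration reads another iteration's result, the loop and the map express the same computation, which is exactly what would let a compiler swap between the notations:

```python
xs = [1, 2, 3, 4]

# Imperative spelling: each iteration is independent of the others.
out = []
for x in xs:
    out.append(x * x)

# Functional spelling of the same computation.
out2 = list(map(lambda x: x * x, xs))

assert out == out2 == [1, 4, 9, 16]
```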
So I'm frustrated that we haven't adopted a kind of "bridge" programming with immutable data, dealing explicitly with the transformations being done. Basically everything would be a black box, like a series of small simple programs running in the same heap - like a unix VM running inside the program, just piping data around.
Yes, functional programming makes easy things hard, and imperative programming makes strong things weak, but these days I think of that as a limitation of compilers and mindset more than anything else.
A case can be made that that happened in the 80s (Lisp & AI Winter being the headline acts), and what we're actually seeing today is the climb up the (ahem, Gartner's term, not mine) "Slope of Enlightenment".
I'm trying to write a blog post for O'Reilly publicising the tutorials at CUFP, and was going to structure it around "10 reasons for not learning functional programming", rebutting each of them. Here's my list of 11 (so far) … any more ideas?
It says at the top that he's trying to compile a list of common "why not FP" arguments so he can rebut them, as part of publicity for an O'Reilly series on functional programming.