Can someone please explain, in basic terms, what exactly the advantages of functional programming are over other mainstream approaches like OOP? AFAIK, some FP support was added to the newest version of C#, but I couldn't immediately tell how exactly I can make use of it.
Just look at some simple uses of closures and you'll see they open a world of possibilities. (and there's so much more to it than this!)
They also let you make compilers easily but I won't get into that here.
CL-USER> (sort (list 1 8 3 2 5) #'<) ; sort just needs a function of 2 arguments that tells if the first parameter is strictly less than the second
=> (1 2 3 5 8)
CL-USER> (sort (list 1 8 3 2 5) #'>)
=> (8 5 3 2 1)
CL-USER> (sort (list "hello" "HELLO" "hey" "bye") #'string<)
=> ("HELLO" "bye" "hello" "hey")
CL-USER> (mapcar #'1+ '(1 7 3 9))
;mapcar applies a function to each element of the lists passed as arguments and returns a new list with the results
=> (2 8 4 10)
CL-USER> (cons 1 2)
=> (1 . 2)
CL-USER> (mapcar #'cons '(A B C) '(1 2 3))
=> ((A . 1) (B . 2) (C . 3))
CL-USER> (defun make-adder (how-much-to-add)
           (lambda (initial-value)
             (+ initial-value how-much-to-add)))
=> MAKE-ADDER
CL-USER> (mapcar (make-adder 10) '(1 7 3 9))
=> (11 17 13 19)
CL-USER> (defvar *my-favorite-adder* (make-adder 42))
=> *MY-FAVORITE-ADDER*
CL-USER> (mapcar *my-favorite-adder* '(1 7 3 9))
=> (43 49 45 51)
CL-USER> (defun make-modifier (operator operand) ; a bit more general than make-adder
           (lambda (initial-value)
             (funcall operator initial-value operand)))
=> MAKE-MODIFIER
CL-USER> (mapcar (make-modifier #'expt 3) '(1 7 3 9))
=> (1 343 27 729)
CL-USER> (find-if #'numberp '(a "hi" nil 8 "test" 100))
=> 8
CL-USER> (find-if #'numberp '(a "hi" nil 8 "test" 100) :from-end t)
=> 100
CL-USER> (apply #'+ (remove-if-not #'numberp '(a "hi" nil 8 "test" 100)))
=> 108
That's not even (by a long shot) the tip of the iceberg.
note: The Lisp syntax looks weird to me, but in converting this, I really noticed the convenience of Lisp's (1) symbols (no quotes needed) and (2) lists (no commas needed). Interesting.
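For comparison, rough Python equivalents of a few of the calls above (Python's `sorted` and `map` play the roles of Lisp's `sort` and `mapcar`):

```python
# Rough Python equivalents of a few of the Lisp REPL calls above.
print(sorted([1, 8, 3, 2, 5]))                   # [1, 2, 3, 5, 8]
print(sorted([1, 8, 3, 2, 5], reverse=True))     # [8, 5, 3, 2, 1]
print(list(map(lambda x: x + 1, [1, 7, 3, 9])))  # [2, 8, 4, 10]
print(list(zip('ABC', [1, 2, 3])))               # [('A', 1), ('B', 2), ('C', 3)]

def make_adder(how_much_to_add):
    # Returns a closure that remembers how_much_to_add.
    return lambda initial_value: initial_value + how_much_to_add

print(list(map(make_adder(10), [1, 7, 3, 9])))   # [11, 17, 13, 19]
```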
I was referring to the ability to traverse a tree structure (made of conses or objects) and generate a tree of closures from that.
Instead of interpreting the tree structure, you make a first pass where you do all the dispatching in advance, generating a tree of closures: at run time you just do what you must do, instead of first figuring out what to do and then doing it. Then you call your "top-level" closure for a nice, optimized run. You can also optimize the source tree before compiling it this way, for some crazy speed.
All this without having to read advanced books about low-level compilers, parsing, machine architecture and optimization!
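A minimal sketch of that idea in Python (the node shapes and the operator set here are invented for illustration): one pass dispatches on node types up front and returns nested closures, so evaluation is then just a chain of calls with no per-node dispatching.

```python
import operator

OPS = {'+': operator.add, '*': operator.mul}

def compile_tree(node):
    """One pass: dispatch on the node type now, return a closure to run later."""
    if isinstance(node, (int, float)):      # leaf: a constant
        return lambda env: node
    if isinstance(node, str):               # leaf: a variable name
        return lambda env: env[node]
    op, left, right = node                  # interior node: (op, left, right)
    f = OPS[op]
    lc = compile_tree(left)
    rc = compile_tree(right)
    return lambda env: f(lc(env), rc(env))  # no dispatching left at run time

# (x * x) + 3, compiled once, run many times with different environments
run = compile_tree(('+', ('*', 'x', 'x'), 3))
print(run({'x': 5}))    # 28
print(run({'x': 10}))   # 103
```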
You've got a lot of detailed answers, but I find the following quote conveys the feeling:
"Functional programming is like describing your problem to a mathematician. Imperative programming is like giving instructions to an idiot." --- arcus, #scheme on Freenode (via http://community.schemewiki.org/?scheme-fortune-cookies)
You know how annoying it can be when you're trying to debug a function and it turns out it was acting weird because of a global variable? Functional programming is sort of like avoiding that kind of thing*, and then discovering that once you guarantee you won't mess with global state, it suddenly becomes possible to, e.g., easily parallelize code, since you already know the processes won't interfere with each other's data. At the very least, it makes code much easier to test and debug, since there are no "hidden variables".
* Maybe entirely, maybe just as a general rule -- long story.
When you have code that is "referentially transparent", that is, code without side effects, there's also really no difference between doing something a thousand times and doing it once and caching the value, except that the first wastes time. The language could potentially just cache it for you, or apply any of several other optimizations. (Or not waste time calculating results that it knows won't be used.)
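In Python you can opt into exactly that caching by hand with `functools.lru_cache`, which is only safe because the function is pure (the function name here is made up for illustration):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def slow_square(x):
    # Pure: same input always gives the same output, no side effects,
    # so caching the result cannot change the program's behavior.
    return x * x

print(slow_square(12))                  # computed: 144
print(slow_square(12))                  # served from the cache: still 144
print(slow_square.cache_info().hits)    # 1 cache hit so far
```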
Also, I'm of the opinion that when most people are talking about OOP, they're thinking OOP as implemented in C++ and/or Java. That's a very specific style of OOP, grafted somewhat uncomfortably onto C's type system. You may find OO makes more sense when you see how it works in Smalltalk, for example.
Alternatively, look at how it works in something like Lua, which is not in itself OO, but can easily have its core extended with any style of OO system by defining the language's hooks (called "metamethods") for handling table lookup and a few other things.
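Python exposes a similar hook: `__getattr__` is consulted only when normal attribute lookup fails, roughly like Lua's `__index` metamethod on tables, and that alone is enough to build a prototype-style delegation scheme. (The class and attribute names below are invented for illustration.)

```python
class Proto:
    """Prototype-style delegation: failed attribute lookups fall through
    to a parent object, much as Lua tables delegate via __index."""
    def __init__(self, parent=None, **slots):
        self.__dict__.update(slots)
        self._parent = parent

    def __getattr__(self, name):
        # Called only when normal lookup fails -- delegate to the prototype.
        if self._parent is not None:
            return getattr(self._parent, name)
        raise AttributeError(name)

animal = Proto(legs=4, sound='...')
dog = Proto(parent=animal, sound='woof')
print(dog.sound)  # 'woof'  (own slot)
print(dog.legs)   # 4       (delegated to the prototype)
```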
I agree that functional programming is very closely related to mathematics.
I think that for programmers who think mathematically, FP is a breath of fresh air, and to them it is obviously, intuitively, right. By "thinking mathematically", I mean people who think of all possible cases, and not just specific instances. For example, when you say "f(x) = x * x", you picture a curve that illustrates all cases. I think the mathematical mindset is to see all code in that way.
- Reliability: I personally don't have that mindset naturally - but what I find very valuable about it is the possibility of proving code. That is, not only does the code work, but you can be sure it will always work, for all valid inputs (i.e. that are in its domain, and its pre-conditions/assumptions are met). Reliability is one of the open problems of programming, and functional programming makes a contribution. I've also found that studying some FP languages has helped me understand some mathematics better, through familiarity (e.g. list comprehensions).
- Concurrency: FP is supposed to help with concurrency, because of immutability. There's a role here, but I have doubts: Erlang, the most promising candidate for concurrency, doesn't actually use this feature (its concurrency is by pure message passing, and it doesn't share any memory (even when it could) - this is for reliability: so that process death has no effect whatsoever on other processes). In other words, pure message passing (also used in webapps, and in SOA) seems to be the answer to concurrency, not FP (though FP can help).
- Hackability: There's also the great hackability of everything being an expression - you can put everything on one line, or divide it up however you like. I personally find this fun, but not (yet) useful. However, I imagine that in some cases, being able to divide up the problem exactly where you want to could be very useful - I just haven't yet encountered that myself (I'm very inexperienced in FP).
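The share-nothing, pure-message-passing style mentioned in the concurrency point above can be sketched in Python with a worker thread that communicates only through queues (worker logic and the `None` shutdown sentinel are illustrative choices, not a fixed protocol):

```python
import threading
import queue

def worker(inbox, outbox):
    # Share-nothing: the only way data moves in or out is by message.
    while True:
        msg = inbox.get()
        if msg is None:           # sentinel: shut down
            break
        outbox.put(msg * msg)     # reply with a new value; touch no shared state

inbox, outbox = queue.Queue(), queue.Queue()
t = threading.Thread(target=worker, args=(inbox, outbox))
t.start()

for n in [1, 2, 3]:
    inbox.put(n)
inbox.put(None)
t.join()

results = [outbox.get() for _ in range(3)]
print(results)  # [1, 4, 9]
```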
There's some functional programming going on in the Python community these days, and I'd suggest that as an accessible route. It's probably the least advanced FP, but that's what makes it accessible. I find the Lisp community too advanced to bridge the gap to a newbie who lacks the mathematical mindset. Haskell is more accessible, but there's still a big gap. I found Erlang more accessible still, but Python is the most accessible of all, partly because of its community.
It's difficult to do real FP in Python, though. Its lambda is crippled (the body is limited to a single expression), and it doesn't have tail-call elimination*. Still, you can try out some functional ideas in it pretty easily, such as the map and filter functions.
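Even with those limits, map and filter are easy to try (in Python 3 they return lazy iterators, hence the `list()` calls):

```python
nums = [1, 2, 3, 4, 5, 6]

squares = list(map(lambda x: x * x, nums))          # [1, 4, 9, 16, 25, 36]
evens   = list(filter(lambda x: x % 2 == 0, nums))  # [2, 4, 6]

# The same ideas as comprehensions, which many Python programmers prefer:
assert squares == [x * x for x in nums]
assert evens == [x for x in nums if x % 2 == 0]
```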
* What tail-call elimination basically means is this: when you call a function but aren't waiting on its result as part of an expression at the current level (the difference between "1 + f(x, totalCount)" and "f(x, 1 + totalCount)"), the language realizes it doesn't need to keep a stack frame around for doing more work when that call returns. Calling a function in that manner can therefore be used for (potentially infinite) looping without piling more and more onto the stack and eventually filling it up. If you try doing that in Python, it will eventually blow the stack:
def houseOfCards(x):
    if x == 0:
        return "done"
    else:
        return houseOfCards(x - 1)
>>> houseOfCards(10)
'done'
>>> houseOfCards(100)
'done'
>>> houseOfCards(1000)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/tmp/py21945wPg", line 5, in houseOfCards
RuntimeError: maximum recursion depth exceeded
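Rewriting the tail call as a loop by hand (which is essentially what tail-call elimination would do for you) keeps the stack depth constant, so no input can blow it:

```python
def house_of_cards(x):
    # The tail-recursive call becomes a loop: constant stack depth.
    while x != 0:
        x = x - 1
    return "done"

print(house_of_cards(1000))     # 'done'
print(house_of_cards(1000000))  # 'done' -- no stack to blow
```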
Pure (side-effect-free) functional programming means a lot fewer bugs, and it's easier to find the ones that you do create.
It also means more modular code, because of a natural separation of concerns.
It is much more expressive than imperative/OO programming.
I especially like Haskell: in my opinion the most expressive language there is, and at the forefront of exciting new programming features.
Haskell has an awesome type system. It is statically typed, but with optional declarations: it will infer the types by itself and verify consistency.
You can ask the interpreter for the type of a specific function (and search Hoogle to find a function with the type you desire).
But it is not only interpreted; on the contrary, it is a compiled language, and a very fast one.
It is much faster than Python, Ruby, etc., rivals C at some things, and it scales without problems to multiple cores, so that advantage will only grow.
Pattern matching: not only is the type system very good for avoiding bugs, but pattern matching really helps make code more readable and makes it easier to verify that you have covered every case.
I also find functional code very beautiful and easy to read. Here is a naive mergesort in Haskell:
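A typical naive version looks like this (this is the standard textbook definition, not necessarily the exact code originally posted):

```haskell
mergeSort :: Ord a => [a] -> [a]
mergeSort []  = []
mergeSort [x] = [x]
mergeSort xs  = merge (mergeSort front) (mergeSort back)
  where
    (front, back) = splitAt (length xs `div` 2) xs
    merge [] ys = ys
    merge xs [] = xs
    merge (x:xs) (y:ys)
      | x <= y    = x : merge xs (y : ys)
      | otherwise = y : merge (x : xs) ys

-- mergeSort [1, 8, 3, 2, 5]  ==>  [1, 2, 3, 5, 8]
```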
Easy to read even if you've never programmed in Haskell before, right?
Scala is also cool, since it takes the best from both worlds and really delivers: it builds on an already existing platform while also providing nice abstractions for concurrency and parallelism.
In your experience, how easy or difficult is it to work with Haskell, given the lazy evaluation? In my limited Haskell experience, I've found this to lead to unpredictable performance results.
You can write FP in a natural way (using pure functions and immutable data) where the code is automatically scalable over hundreds or thousands of processors. It has built-in super-scalability.
The trade-off is that the code is butt-ugly, at least in this C# programmer's view.
It's not the butt-ugly code, but the copying. Yes, there are tricks to avoid copying an entire array just to update one element (if you don't have side effects...), but at some point you just want to modify something.
I don't know much about C#, but functional code in functional languages is beautiful. The attractiveness of functional vs. imperative code seems to have more to do with the intentions of the language designers than with anything inherent to the language. Take OCaml, for instance: the syntax for imperative structures and object-oriented programming is very ugly, but the syntax for functional programming is beautiful.
The really beautiful thing about OCaml, I think, is that it lets you do either when you need to and just encourages you to give the imperative code a clean functional interface.
for me the #1 benefit is first-class functions. once you have that, you have the best that functional programming has to offer
more generally, the composability of functions allows for more tacit programming. for example:
array = [1, 2, 3]
array.sort()    # sorts in place; returns nothing
print(array)
here sort sorts the array in place, and the call itself doesn't return anything. but if we instead use a version of sort that returns the sorted array (python's sorted), we can write the above this way:
print(sorted([1, 2, 3]))
not only does it read easier, you also no longer have to think about the state of the array
from there you can enforce immutability, and then you have to worry even less about the state... because there isn't any
but to me that's extreme. the ideal of FP is nice, but taking something fundamentally dynamic like an algorithm or a working software system, and then trying to process it with something that can, in its nicest terms, be called anally rigid, is IMO fundamentally misguided. the only language in which i've seen this work is XSLT (a strict functional language), because in its case you're dealing with rigid things to begin with (declarative XML documents), and on top of that it maps perfectly to its domain (XML documents are structured/"composed")
of course, strict FP has lots of benefits for the computer regarding optimization. but from a language design POV i'd rather give full power to the programmer and do my best to optimize later (e.g. via JIT)
so, IMO, use first-class functions all over the place and take advantage of the conceptual alleviation that composability offers, but consider the type-anal, FP-anal, OO-anal, and pretty much anything-anal languages to have been designed by crazy kooks who can't see the very wide line between theory and practice
Smalltalk - "pure" OO. I guess you would call it "OO-anal". Alan Kay - ACM Turing Award awardee (among other things). Fairly smart guy. Genius, some would call him. You would, I imagine, call him a "crazy kook".
This is one of those rare instances where I wish I had more karma here.
I'm intrigued by the idea of Lisps built on top of other run-times.
Clojure is the example here that all the cool kids are talking about. But I just noticed Nu in the list of functional languages the article mentions, which is a Lisp built on top of Objective-C. The creator of Nu points out that he wanted a Lisp to extend C, and that Objective-C provided a dynamic runtime to build on while still allowing the incorporation of straight C code. I suspect there is a good .NET Lisp out there somewhere, I just don't know what it is.
Building a Lisp on top of these languages gives you macros, code as data, culture of functional programming, etc. while still making all of the underlying libraries available.
That is probably the future of Lisp - I mean, we can see it already. Clojure is one example (in use at some startups) and Arc is another (being built on MzScheme - although that's probably more for practical reasons).
It was the most thorny and uninspiring community I've ever participated in, despite my extreme interest in the language. It's jaw-dropping that a language with such promise has sat out the resurgence, and it speaks to what an unfriendly and uninviting community can do to a technology platform.
As a Lisp newb who frequents #lisp every now and then, I’ve noticed quite the opposite. People in the channel are very open to newcomers and as friendly as hackers can get.
Uninspiring? Why? Lack of smugness, as Cal Henderson says?