FP is the future of programming, and always will be.
The argument that many-core will lead to FP adoption used to appeal to me, but then I studied Erlang and saw that its concurrency power comes from shared-nothing, pure message passing, not from its being functional. FP is one way to avoid needing shared memory, but Erlang doesn't actually rely on that for concurrency. The thinking seems to be that inter-core communication should be coarser-grained (e.g. at the module level, not at the level of recursion over a list), because it will always be slower than communication within a core.
Also, surprisingly to me, the over-hyped web services, SOA, ESB, etc. arguably also aim at pure message-passing concurrency.
If you actually want to do something you still have to have side effects, and that's when all that Haskell-ish beauty goes out the window.
It is still absolutely worth it to learn functional programming: it will change the way you think and make you a better programmer in non-functional languages, but it won't solve the problems of side effects.
Purely functional languages are a red herring. So-called impure languages however like OCaml can be enormously productive for certain classes of problem. As always, it's a matter of the right tool for the job. Too many people these days only have one tool, then every problem looks like your face.
Most programmers, realistically, don't use that many languages at any given time, and if they could use fewer, they probably would. The more territory a language covers, the better off it is... well, at least as long as it can do so relatively well.
Agreed. Languages like OCaml are actually higher-order imperative languages. They have a very rich, expressive system of "values". A value can be a function: possibly partially applied, and closed over arbitrary data. A value can also be a piece of data, and arbitrarily complex. How many languages let you say:
let tree = Node (Node (Leaf 3) (Leaf 4)) (Node (Leaf 7) (Leaf 8))
This value is constant, just like an integer or string constant in many languages. In other words, the right-hand side can be arbitrarily complex. And this data structure gets passed around in a very lightweight manner (by reference, not by copying).
The Java notion that values are either primitive, or objects... is limiting.
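For contrast, here is roughly the same tree as a sketch in Java (the class names Tree, Leaf, and Node are my own illustration, not from any library): the one-line OCaml value becomes a class hierarchy plus constructor noise.

```java
// Minimal Java encoding of the tree from the example above.
abstract class Tree {}

final class Leaf extends Tree {
    final int value;
    Leaf(int value) { this.value = value; }
}

final class Node extends Tree {
    final Tree left, right;
    Node(Tree left, Tree right) { this.left = left; this.right = right; }
}

class TreeDemo {
    public static void main(String[] args) {
        // the OCaml one-liner becomes nested constructor calls
        Tree tree = new Node(new Node(new Leaf(3), new Leaf(4)),
                             new Node(new Leaf(7), new Leaf(8)));
        System.out.println(tree instanceof Node); // prints "true"
    }
}
```

The fields are final, so the value is just as constant as the OCaml one, but the declaration overhead is what makes the "everything is a primitive or an object" model feel limiting.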
1. Most lines of code don't require side effects; the Haskell way just asks you to structure your program into pure parts and side-effecting parts (usually 90/10).
2. It's also easy to define your own forms of side effects and to parameterize your code over them (so I write my code to depend on any monad m that offers references, and the same code runs in IO, ST, or STM), rather than being constrained by whatever choices the language designers made initially.
3. Monads (and arrows, and other abstractions that can model side effects) are beautiful in themselves. They are one of the most elegant ways to express, compose, and restrict side effects. In most languages there's no easy way to create different combinators for dealing with side effects, but in Haskell 'do', ';' and '<-' are only pieces of notation; the real meat is '>>=' and 'return' and whatever combinators you build on top of them.
4. The Haskell way to deal with side effects is to use a pure language to build a side-effecting program. Instead of writing the program down directly, you abstract its parts and use combinators to mix them. 'IO' is a value that can be manipulated, not an action that has already been executed.
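Point 4 can be sketched even in a mainstream language: represent an effect as a plain value and combine such values before anything runs. Here is a hedged Java sketch (Runnable is my own minimal stand-in for IO; the class and method names are illustrative only).

```java
import java.util.ArrayList;
import java.util.List;

public class EffectsAsValues {
    // Compose two deferred actions into one bigger deferred action.
    // Nothing executes at composition time.
    static Runnable andThen(Runnable a, Runnable b) {
        return () -> { a.run(); b.run(); };
    }

    public static void main(String[] args) {
        List<String> log = new ArrayList<>();
        Runnable hello = () -> log.add("hello");
        Runnable world = () -> log.add("world");
        Runnable program = andThen(hello, world); // still just a value
        program.run();                            // effects happen only here
        System.out.println(log);                  // prints "[hello, world]"
    }
}
```

Haskell's '>>=' is richer than this (it threads a result from one action into the next), but the core idea is the same: the program is a first-class value that you manipulate, and "running it" is a separate, final step.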
He never said that. He just said the "Haskell-ish beauty goes out the window," which is arguably true.
In my (limited) experience, Haskell code that does I/O (i.e., monadic code) isn't as pretty or elegant as purely functional Haskell code. A lot of Haskell's useful tricks (e.g., its ridiculously strong type system or lazy evaluation) don't work as well on code with side effects.
"Most modern programming languages, including object-oriented languages, are imperative: A program is essentially a list of instructions intended to be executed in order."
That's silly. Anyone who has worked with a pure OO language knows that there's nothing inherently imperative about the approach. Just because the most popular OO languages (C++, Java) ultimately force you to have a main() doesn't mean that a procedure is required to write OO code. That's more a reflection of the OS execution model than the language used to write the app.
I've personally written CORBA apps that were asynchronous, distributed and multi-threaded -- in C++. Object-oriented development is more than capable of handling the complexity of multi-core processing.
Procedural does not mean "has a main()". It means that the code within a procedure (or method, or function) is always a sequence of steps, i.e., a procedure. OO is a procedural paradigm. doThis(that) is replaced by that.doThis().
A Prolog program is not specified in terms of a "sequence of steps", for example. Of course, the implementation involves a sequence of operations (because it is typically going to be evaluated on a Von Neumann-style machine), but that has nothing to do with the semantics of the program.
OO does not require that your code be executed any more sequentially than does a functional language.
If by "OO", you mean mainstream OO languages like Smalltalk or Java, then they absolutely do have a more operational definition than a pure functional language does. Objects send messages to other objects; as a result of receiving a message, the internal state of an object changes.
So, you're suggesting that functional programs are somehow totally stateless? Otherwise, I don't really see your point. All useful programs store state somewhere. The fact that OO programs couple state and code doesn't make them procedural. Functional programs just tend to gloss over this detail by using closures and anonymous functions (aka "objects") to keep track of state.
An object-oriented program is not required to be expressed in terms of a "sequence of steps" any more than is a functional program. You can instantiate objects that talk to one another, asynchronously or otherwise, to collaboratively do work.
You originally said that the idea that OO languages are imperative is "silly", and that:
Anyone who has worked with a pure OO language knows that there's nothing inherently imperative about the approach.
Can you provide some examples of how you might write "non-imperative" programs in a mainstream object-oriented language?
An object-oriented program is not required to be expressed in terms of a "sequence of steps" any more than is a functional program
A functional program is not a "sequence of steps"; in principle, it is just a mathematical expression that can be evaluated (via some means) to yield a value (think of a SQL query or a Prolog program).
"Can you provide some examples of how you might write "non-imperative" programs in a mainstream object-oriented language?"
Go back and read my post -- I drew a distinction between "mainstream" OO languages, and pure OO languages. If you want to genuinely understand object-oriented programming, don't look at C++ or Java. Look at Smalltalk or Simula.
"A functional program is not a "sequence of steps"; in principle, it is just a mathematical expression that can be evaluated (via some means) to yield a value (think of a SQL query or a Prolog program)."
I know what you're arguing -- the platonic ideal of the functional program is stateless, and methods within that program have no side effects. In reality, all programs store state somewhere, and even "pure" functional languages store state (albeit at a higher level).
What I'm trying to explain is that object-oriented programming doesn't require side effects. Where functional programs use closures and monads to pass state around, you could just as easily use a functor to do the same thing. Thus, you aren't compelled to write object-oriented code in an imperative style. The difference is that OO code tends to be more explicit about the location of state variables, and calls them what they are. Functional programs, in contrast, just tuck the stateful bits into closures and pretend that they aren't really state.
It ought to be possible to structure an OO program so side effects can't escape their logical boundaries. But the only people who can do that are a) functional programmers and b) given the choice aren't using Java...
It's most likely possible. However, I am not aware of any OO compiler/runtime that can enforce it. So, while it's possible, it's not easy to implement. With functional programming it's there by design.
It's actually pretty easy in Java. Create objects that initialize all their internal state in the constructor, and never mutate it in public methods. If a method must mutate state, have it return a new object.
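A minimal sketch of that style (the Point class is my own example, not from any particular codebase): all state is set in the constructor, and "mutators" return a fresh object.

```java
// Immutable value class: fields are final, set once in the
// constructor, and never mutated by any public method.
public final class Point {
    private final int x;
    private final int y;

    public Point(int x, int y) { this.x = x; this.y = y; }

    public int x() { return x; }
    public int y() { return y; }

    // Instead of mutating this object, return a new one.
    public Point translate(int dx, int dy) {
        return new Point(x + dx, y + dy);
    }
}
```

Usage: `Point q = p.translate(3, 0);` leaves `p` untouched, so a Point can be freely shared across threads without locking.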
Several well-known Java gurus have advocated this, e.g. Joshua Bloch. I worked with Ken Arnold for a bit, and a lot of the code he wrote was in this style.
The problem is that you're usually using Java because you want access to all those Java libraries, and the vast majority of Java libraries do not use this style. So you get state leaking into your program even if your own code doesn't do it.
(This is also why I'm less thrilled by Clojure than many other people are, even though I think it's a very well-designed language. The reason people are into Clojure is because it can use Java libraries and yet provides a mostly-functional language on top. But the problem isn't in Java-the-language, it's in the Java libraries themselves. Unless you go rewrite the offending libraries - and this includes most of Swing, JSF, JFreeChart, the JavaBeans spec, and Calendar - you'll still run into problems. The only major libraries I've seen that use a relatively stateless style are the Date and Java Collections frameworks (both done by Josh Bloch, not surprisingly), JavaSpaces (done by Ken), and the basic String and Number classes.)
Defensive programming? You still don't have any guarantees that your code is in fact free of side effects. There isn't a way to easily test for it either. So, even if you don't use any libraries and write all your code in this style, you could have bugs that will be very hard to track down. Functional languages ensure, by design, that this doesn't happen.
It's not that hard to check for. Simply do a regexp search of your code for "variable =", where "variable" is one of the instance variables in the class, and ensure that all occurrences are in the constructor. You also need to avoid calling mutating methods of built-in classes - this is easy with Strings and Dates, but you need to be very careful to make copies of all your collections. I could write a simple script or Eclipse plugin that does it all for me.
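A rough sketch of that script idea in Java (the class and method names are my own; a real tool would parse the source rather than pattern-match it, and would also need to know which matches fall inside the constructor):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class MutationCheck {
    // Count textual assignments to the named field, the crude
    // "variable =" search described above. [^=] excludes "==".
    static int countAssignments(String source, String field) {
        Pattern p = Pattern.compile("\\b" + Pattern.quote(field) + "\\s*=[^=]");
        Matcher m = p.matcher(source);
        int count = 0;
        while (m.find()) count++;
        return count;
    }

    public static void main(String[] args) {
        String src = "class C { int x; C(int x) { this.x = x; } "
                   + "void bump() { x = x + 1; } }";
        // one hit in the constructor, one in bump(): the second
        // is the kind of mutation the convention forbids
        System.out.println(countAssignments(src, "x")); // prints "2"
    }
}
```

Even this toy version shows why the next commenter's objection bites: the check only covers source you control, not the libraries you call.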
This, again, assumes you can trust your libraries. A single method that doesn't follow this convention will pollute anything that calls it.
I've done things like this, in Python and to a lesser extent in Java. It works. It is pretty easy to slip up - I've had some bugs introduced because I forgot to copy a list - but it's at least a tractable problem. Gets easier if you use things like list comprehensions and slicing, which copy by default. Though Java's lack of support for closures can make this difficult.
Sure, it's possible. But this now sounds similar to arguing that you can do OO programming in C. Doing inheritance with function pointers and defining all "object" methods to take the data structure as the first parameter. I just think if you're going to code in a specific style, it's better to choose a language that better supports the style.