
Functional thinking: Why functional programming is on the rise - Adrock
http://www.ibm.com/developerworks/library/j-ft20/
======
Chris_Newton
Some ideas that are ubiquitous within functional programming are certainly on
the rise, for example:

\- functions as first-class entities in programming languages, and
consequences like higher-order functions and partial evaluation;

\- a common set of basic data structures (set, sequence, dictionary, tree,
etc.) and generalised operations for manipulating and combining them (map,
filter, reduce, intersection, union, zip, convert a tree to a sequence
breadth-first or depth-first, etc.);

\- a more declarative programming style, writing specifications rather than
instructions;

\- a programming style that emphasizes the data flow more than the control
flow.
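
These ideas can be sketched in a few lines of Python (just an illustration; the list above is language-agnostic):

```python
from functools import reduce, partial

# Functions as first-class entities: pass them around, partially apply them.
def add(x, y):
    return x + y

add_ten = partial(add, 10)          # partial application

# Generalised operations over basic data structures.
numbers = [1, 2, 3, 4, 5]
evens   = list(filter(lambda n: n % 2 == 0, numbers))   # filter
doubled = list(map(add_ten, numbers))                   # map (higher-order)
total   = reduce(add, numbers, 0)                       # reduce / fold

# A declarative style: say *what* you want, not *how* to loop for it.
pairs = list(zip(numbers, doubled))                     # zip two sequences
```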

I see these as distinct, though certainly not independent, concepts.

I’m not sure whether functional programming itself is really on the rise, not
to the extent of becoming a common approach in the mainstream programming
world any time soon. I don’t think we’ve figured out how to cope with
effectful systems and the real world having a time dimension very well yet. (I
don’t think we’ve figured it out very well in imperative programming yet
either, but the weakness is less damaging there because there is an implicit
time dimension whether you want it or not.)

~~~
jarrett
To me, the future most likely belongs to languages that allow both functional
and OO styles to interoperate. Programmers will pick the style or mix of
styles most appropriate to the particular sub-problem they're solving.

We already do this with some of our high-level languages like Ruby and
JavaScript. With these, we have higher-order functions, map and friends,
closures, etc. But we also have our familiar OO constructs. Almost every
program I write in these languages uses all of the above, not just the
functional or OO subset.
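
That mixed style can be sketched in Python (the `Order` class and its fields are invented for illustration): an ordinary OO class whose methods are written functionally, returning new values instead of mutating in place.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Order:
    line_items: tuple  # (name, price) pairs; hypothetical example data

    def total(self):
        # FP idiom inside an OO construct: a pure expression, no mutation.
        return sum(price for _, price in self.line_items)

    def discounted(self, pct):
        # Returns a *new* Order rather than mutating this one.
        return Order(tuple((n, p * (1 - pct)) for n, p in self.line_items))

order = Order((("book", 10.0), ("pen", 2.0)))
cheaper = order.discounted(0.5)
```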

I so far have not seen any practical advantage in going purely functional.
I've tried it a number of times, but I always find that the real-world
programs I write need to be stateful. Yes, functional languages do of course
have facilities for handling state, but they always seem super awkward,
especially compared to the elegance of state handling in OOP.

For example, consider a simple, 1980s-style arcade game. There are a bunch of
entities on screen, each with attributes like velocity, health, etc. How do
you maintain this state in a purely functional language? I've seen various
suggestions, but they all seem to boil down to setting up a game loop with
tail recursion, and then passing some game state object(s) to this function on
each recursion. Doesn't sound so bad, but what happens when you have a bunch
of different types of entities? E.g. player characters, monsters, projectiles,
pickups, etc.

Well, every time you add a new type of entity (or a new data structure to
index existing entities), you could add another parameter to the main game
loop. But that gets crazy pretty fast. So then you have the clever idea to
maintain one massive game state hash, and just pass that around. But wait, now
you've lost something key to functional programming: You can no longer tell
exactly what aspects of the game state a function is updating, because it just
receives and returns the big game state hash. You don't really know what data
your functions depend on or modify. Effectively, it's almost like you're
using global variables.
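
A minimal Python sketch of that "thread one big state value through the loop" approach (entity names invented for illustration); note that every step function receives and returns the whole state, which is exactly the opacity problem being described:

```python
def step_player(state, dt):
    p = state["player"]
    moved = {**p, "x": p["x"] + p["vx"] * dt}
    return {**state, "player": moved}       # new state; old one untouched

def step_monsters(state, dt):
    monsters = tuple({**m, "x": m["x"] + m["vx"] * dt}
                     for m in state["monsters"])
    return {**state, "monsters": monsters}

def game_step(state, dt):
    # Every step sees the entire state hash, so nothing in the signature
    # says which parts it actually reads or replaces.
    for f in (step_player, step_monsters):
        state = f(state, dt)
    return state

start = {"player": {"x": 0.0, "vx": 2.0},
         "monsters": ({"x": 10.0, "vx": -1.0},)}
after = game_step(start, dt=1.0)
```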

I'm using games as an example here, but the same sorts of problems come up
with almost any stateful application.

This is why I prefer languages that allow you to seamlessly mix functional and
OO styles. They give you many of the benefits of FP without forcing you to
deal with the difficulties described above.

~~~
martinced
If you mix FP and OO you'll often end up with terrible FP which simply
reproduces the "old" approach: lots of mutable stuff everywhere. Many Clojure
toy games are like that: they start with a lot of "variables" in mutable refs.

But it doesn't need to be that bad: you can create a game that is fully
deterministic. A game which is purely a function of its inputs. And it can of
course be applied to more than games.

The problem is that it feels "weird" to non-FP people.

The state monad is definitely what you're looking for. If you use a state
monad, approx. 95% of what you just wrote is utter rubbish.

But the state monad (and the maybe monad) sadly aren't _that_ trivial to
understand.

~~~
jarrett
> If you mix FP and OO you'll often end up with terrible FP which simply
> reproduces the "old" approach: lots of mutable stuff everywhere.

What I'm suggesting is that mutable state isn't inherently bad. It's entirely
possible that we will someday enter a new age of mainstream, pure FP where we
look back at mutable state and cringe. But that's pure speculation. Our
current world is full of successful applications written in languages built on
mutable state.

So I'm not saying that mutable state is inherently better than the
alternatives. Just that the case against it--and for the alternatives--has to
be pretty darn compelling to outweigh the tremendous real-world success of
languages like Ruby, Python, JavaScript, and so on.

------
discreteevent
The thing is that any imperative programmers who have composed SQL subqueries
have been doing this kind of thinking for years, whether they realise it
or not. The only substantial difference is that the data is in the process's
memory as maps and lists as opposed to relational tables in the db. You end up
with exactly the same kind of patterns of composition in the code.
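
The parallel can be sketched in Python (table and field names invented): a nested SQL query maps directly onto an in-memory filter pipeline.

```python
# Roughly the same composition as:
#   SELECT name FROM (SELECT * FROM employees WHERE dept = 'eng') AS eng
#   WHERE eng.salary > 100;
employees = [
    {"name": "ada",   "dept": "eng",   "salary": 120},
    {"name": "bob",   "dept": "sales", "salary": 130},
    {"name": "carol", "dept": "eng",   "salary": 90},
]

eng = [e for e in employees if e["dept"] == "eng"]        # inner subquery
names = [e["name"] for e in eng if e["salary"] > 100]     # outer query
```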

~~~
pestaa
Completely disagree. Composing SQL queries even on multiple levels (as in
subqueries) might involve a way of thinking that remotely resembles
functional programming, but it is way too simplistic for comparison with
real-world functional programs, at least in my experience.

I know I grasped SQL really quickly, and still get puzzled by Haskell after 6
months of trying.

~~~
jeffdavis
"[SQL] is way too simplistic for comparison with real world functional
programs"

Another way of interpreting that is that the ideas behind SQL are so powerful
that it's able to be one of the most successful programming languages of all
time without all the bells and whistles of a language like Haskell.

It's so successful that it can get by with horrid syntax, a ton of special
cases built in, and the mess of SQL NULL semantics.

Sometimes it boggles my mind that more functional programmers don't get
involved in databases, where all of their ideas are already proven to be
useful.

I don't necessarily disagree with you, depending on what you mean by "real
world" programs. I just think that if there is already an area where the
concepts are useful (whether you call that a "real world program" or not), it
makes a lot of sense to keep trying to build from that. Trying to convince
someone to learn Haskell to build a website when they already know and like
Ruby is an uphill battle.

~~~
martinced
_"Sometimes it boggles my mind that more functional programmers don't get
involved in databases, where all of their ideas are already proven to be
useful."_

But some do. For example, Rich Hickey used Clojure (which both uses and
encourages FP but still allows you to bend the rules when needed) to create
Datomic.

It's a DB but it's CRA instead of CRUD. Making it just so much easier to
reason about. The DB is ever-growing and any query becomes "query at time t".
And later on, when you query again "query at time t", you CANNOT get a
different answer.
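
A toy sketch of the append-only idea in Python (this is not Datomic's actual API, just the concept): facts are only ever added with a timestamp, so an "as of time t" query is repeatable no matter what is written later.

```python
facts = []  # (t, key, value) triples; only ever appended to

def assert_fact(t, key, value):
    facts.append((t, key, value))

def query_as_of(t, key):
    # latest value for `key` among facts recorded at or before time t
    matching = [(ft, v) for ft, k, v in facts if k == key and ft <= t]
    return max(matching)[1] if matching else None

assert_fact(1, "balance", 100)
assert_fact(5, "balance", 80)

answer_at_3 = query_as_of(3, "balance")   # sees only the t=1 fact
assert_fact(9, "balance", 60)             # later writes cannot change it
```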

Typical CRUD DBs are a place-oriented mutability mess and, honestly, reading
about MySQL fanbois vs PostgreSQL fanbois arguing about which one is the best
looks, from a CRA point of view, like a blind man arguing with a one-eyed man
about who has the best eyesight.

Datomic is ACID, has infinite read scalability and can use SQL DBs (and many
others) as its persistence stores.

It is as real-world as one can get, since Rich Hickey has seen exactly what
we're all witnessing daily in the real IT world: a gigantic ball of mutability
mess.

To me the revolution is on and there's no going back for those who switched.

From there it's just a matter of time.

~~~
jeffdavis
Thank you. I knew that Datomic existed, and some of the basic things you
pointed out here, but I'll take a closer look. It seems like you really think
it has a lot of promise.

That being said, I don't think CRA is a panacea. Databases will always have a
"read, decide what to do, write" pattern (because that is what happens in the
real world, and databases model the real world). If the read happens at time
T, and the write happens at time T+100; then you could have a race condition
if something happens at time T+50.

Postgres can detect such race conditions by using true serializability (based
on a technique called Serializable Snapshot Isolation[1]). It gets tricky to
apply the same technique for temporal databases, though I believe it's
possible. Do you happen to know what techniques Datomic offers users to help
avoid/detect/resolve race conditions?
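
For concreteness, a minimal Python sketch of the read-decide-write race, with a simple optimistic version check standing in for a real serializability mechanism (this is an illustration of the pattern only, not how Postgres or Datomic implement it):

```python
store = {"value": 10, "version": 1}

def read():
    return store["value"], store["version"]

def write(new_value, expected_version):
    # Refuse the write if someone else committed in between.
    if store["version"] != expected_version:
        return False
    store["value"] = new_value
    store["version"] += 1
    return True

a_val, a_ver = read()              # txn A reads at time T
b_val, b_ver = read()              # txn B reads at time T+50
ok_b = write(b_val + 1, b_ver)     # B commits first
ok_a = write(a_val + 5, a_ver)     # A's write is now stale and is rejected
```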

Don't get me wrong. I think it's a very useful direction to go in. I've done a
lot of work on temporal features in postgres. I'm also aware of some of the
limitations, however. You still need some kind of coordination and something
resembling locks or a conflict detector.

(By the way, please avoid derogatory comments about people who make different
technology choices than you do. An argument could be made that postgres is
CRA. And even if not, CRA versus CRUD is a small part of what is actually
important to users; many of whom face a different set of trade-offs than you
do.)

[1] <http://drkp.net/papers/ssi-vldb12.pdf>

------
rdtsc
I think a big one is concurrency.

Highly concurrent applications will become more popular. Functional
programming via immutable data structures is one sane way to manage that.
Clojure is doing it. Erlang has been doing it for years. Haskell has that.

Fear of large, mutable state is well founded.

What I also think is healthy is the adoption of these patterns. It is
possible to be diligent and try to apply some of the ideas in other languages
(C++, Python etc.). It is just a different way of thinking and structuring
code.
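
A small Python illustration of why immutability helps with concurrency: an immutable value can be handed to several threads without a lock, because no thread can change it under another's feet.

```python
import threading

shared = tuple(range(1000))   # immutable, so safe to read from any thread

results = [None, None]

def worker(i, f):
    # Each worker derives new data from `shared` instead of mutating it.
    results[i] = sum(f(x) for x in shared)

t1 = threading.Thread(target=worker, args=(0, lambda x: x * 2))
t2 = threading.Thread(target=worker, args=(1, lambda x: x + 1))
t1.start(); t2.start()
t1.join(); t2.join()
```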

------
justin_vanw
They are assuming that functional programming is on the rise. With the
exception of Clojure (which is more popular by virtue of being new), I don't
know of any functional language that is in any measurable way more popular (I
assume that is what they mean by "on the rise") than it has been in the last
10 years.

In fact, the only application I use that is built using functional programming
is Xmonad.

Also, none of the top languages in the TIOBE survey are purely functional,
although one in the top 20, Lisp, with 0.9% and falling, does promote a
functional style.

People expound the virtues of functional programming year after year, and then
go off and get a bunch done with Python. Steve Yegge said it best:
[http://steve-yegge.blogspot.com/2010/12/haskell-researchers-...](http://steve-yegge.blogspot.com/2010/12/haskell-researchers-announce-discovery.html)

[http://www.tiobe.com/index.php/content/paperinfo/tpci/index....](http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html)

~~~
batgaijin
Hypothetically you are a big company able to deliver increasingly complicated
software that far outperforms the competition because of functional
programming. Would you want to correct a remark like this by IBM?

Why on earth would you want to educate pointy-haired individuals when you can
be another vapid supporter of the six-sigma model?

------
spikels
Why is functional programming on the rise?

I have heard several times that the big payoff with functional programming
comes from parallel processing. Because functions typically have no side
effects, they can operate on a set of inputs in parallel without modification.
This is important because future increases in processing power are expected to
come primarily from more cores rather than higher clock speeds as in the past.

Is this correct? The author seems to focus on other real but lesser benefits.
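
The claim can be sketched in Python: because a pure function has no side effects, mapping it over a collection gives the same answer whether the calls run sequentially or on a worker pool. (A thread-backed pool is used here so the snippet runs anywhere; the process-backed pool has the same interface.)

```python
from multiprocessing.dummy import Pool  # thread-backed Pool, same API as the process one

def square(n):
    # pure: the result depends only on the input
    return n * n

inputs = list(range(10))

sequential = list(map(square, inputs))
with Pool(4) as pool:
    parallel = pool.map(square, inputs)   # same answer, any number of workers
```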

~~~
PeterisP
I've done a bunch of both 'classic' and functional programming, and for me the
biggest difference is a switch in code-writing mindset.

In imperative languages, generally, I tell the computer what it should do and
how - no matter if it's assembly, C, Java or [most of] Ruby.

In FP languages (Haskell or Scala, haven't worked with Lisps), I generally
tell the computer what needs to be computed and expect it to figure out how to
do it - what order of execution, what grouping of data, what to cache.

If you compare Java code and Scala code for the same algorithm, using the same
JVM API, then the main non-syntactic difference between them will be a pile of
missing Java lines detailing the order of processing steps and loops, which
probably weren't essential to the algorithm - when writing the Java code I
could've written it the opposite way with the same result.

What I lose in FP is the ability to easily explicitly do time/space tradeoffs.
Sometimes I need to control that, and then it's a bit trickier. However, it
can be fixed with minor refinement of language and libraries - the original
article cites a great example on how memoization should be implemented in all
great languages.

Parallelism currently is just a nice bonus - say, I lose 3x performance by not
detailing a great execution path manually in C++; but I gain 3x performance
since most of the code is trivially parallelizable to run on 4 cores. For
example, in Haskell it's often literally a one-line change, while in Java it'd
be a pain in the butt to make everything safely threaded.

~~~
goggles99
_Parallelism currently is just a nice bonus - say, I lose 3x performance by
not detailing a great execution path manually in C++; but I gain 3x
performance since most of the code is trivially parallelizable to run on 4
cores._

Parallel extensions have been (and are being) added to OOP languages to make
their use not only simpler, but even a preferred general design.

~~~
PeterisP
In order for such extensions to work well on most of my code, my code needs to
be written in an FP-inspired style, as many new additions to OOP languages
are - like C++11, Python, etc.

As a crude example - there is a big conceptual difference between a for-loop
that processes all the elements in a list or array, and a map operation
applied to all the elements in it.

The for-loop specifies that the execution will be sequential and in a specific
order, so it can't really be parallelized automagically.

The map operation says that it might not be, so you cannot (a) use mutable
state such as incrementing a counter inside it, or (b) use the 'previous item'
in the calculations.

But in practice, you can write most computations in both of these ways - and
if you switch from 'idiomatic C' for-loops to map operations (and the
equivalents for the many other common data processing tasks), then the code is
much more parallelizable in both FP and classic OOP languages.
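
The contrast can be sketched in Python: the loop below bakes in sequential order and mutable state, while the map version does not.

```python
items = [3, 1, 4, 1, 5]

# Order-dependent: uses a running counter and the previous item, so the
# iterations cannot be reordered or run in parallel.
labelled = []
prev = 0
for i, x in enumerate(items):
    labelled.append((i, x + prev))   # depends on iteration order
    prev = x

# Order-independent: each element is transformed in isolation, so a
# runtime would be free to evaluate the calls in any order or in parallel.
doubled = list(map(lambda x: x * 2, items))
```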

BUT - it means that much of your code needs to be written in FP style even if
the language is not FP, so you need to think in FP style. If you use these
"parallel extensions" and the new "preferred general design of OOP languages",
then much of your code will be stateless and without side effects; much of
your code will look quite different from the classic/idiomatic code that the
OOP language had earlier.

~~~
spikels
Thanks for the explanation. Popular "modern" languages like JavaScript, Ruby
or Python already have many of the "FP" features mentioned in the OP and
comments: first-class functions (though not so simple in Ruby) and
high-level data processing functions (e.g. map, filter, select).

As best I can tell the big new benefit of FP is the potential, generally not
currently realized, for automatic parallelization. The cost is having to think
in a new way that is, at least initially, somewhat confusing.

Alternatively, non-FP languages could simply import ideas from FP and gain the
benefits in an evolutionary way. For example, if Matz wanted to, he could
rewrite the Ruby interpreter to have map, filter and many other methods
automatically use multiple cores. Eventually programmers using the new
multi-core version would learn which structures, such as map, were going to be
parallelized and which, such as for loops, weren't.

There would be many language specific details to work out. For example
determining if there are truly no side effects in a particular situation. This
is not always so simple in non-FP languages but still not as complex as
analyzing every possible for..end loop.

As 4, 8, 16, 32 and more core processors become more and more common, the
pressure to either change to FP languages, incorporate FP features in non-FP
languages or find some other parallelization methodology (?) will become
irresistible.

------
taeric
Doesn't this just bend back to Moore's law? Sure, there are more "efficient"
data structures for use in functional styles now than there were in the past,
but they all seem to treat memory as if it is free. Basically, now that we
have such impressive computing resources, we can start thinking of the
programs we are writing in terms of the abstractions themselves, and less in
terms of the abstraction that is the computer. (That is, many of us are lucky
enough to not worry about caching strategies, threading concerns, etc.)

The entire debate about immutable structures is amusing when you consider old
resource constraints, where there was no spare room for a new copy of a string
just from a different starting point. (Referring to the new Java behavior of
.substring.)
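
For contrast, persistent structures avoid some of that copying via structural sharing; a toy "cons" list in Python (illustration only, not a production structure):

```python
def cons(head, tail):
    # a linked-list cell: (head, rest-of-list)
    return (head, tail)

base = cons(2, cons(3, None))    # the list [2, 3]
a = cons(1, base)                # [1, 2, 3] -- reuses base, no copy
b = cons(0, base)                # [0, 2, 3] -- shares the same tail

def to_list(cell):
    out = []
    while cell is not None:
        head, cell = cell
        out.append(head)
    return out
```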

------
slyv
I have been meaning to dig deeper into functional programming, specifically
Clojure, but as a newbie to functional programming I am finding it hard to
find a detailed tutorial I can use to get started with the concepts and style
of programming used in Clojure. Would anyone have any recommendations for a
guide?

~~~
pearle
This site is a bit old now (not sure if it's kept up to date), but I found it
helpful when I first started playing with Clojure several years ago
(wow... that long already?).

<http://java.ociweb.com/mark/clojure/article.html>

------
ilaksh
Highly recommend LiveScript if you like CoffeeScript or JavaScript or
functional programming.

<http://livescript.net>

~~~
fourstar
Any reason why they ripped off the original name for JavaScript? I totally
thought you were joking until I went to the site and noticed it's a current
project.

~~~
GeZe
The name is a joke, the project is not. "LiveScript was one of the original
names for JavaScript, so it seemed fitting. It's an inside joke for those who
know JavaScript well." (from the site)

LiveScript hasn't been used as a name for JavaScript in almost two decades.

~~~
fourstar
Thanks for explaining.

------
goggles99
The majority of people are lazy - this is why Java and other languages
abstracted further than C/C++ became so popular. Functional programming is
more natural and comes with a smaller learning curve. It does not require the
same amount of discipline to become proficient in, nor the conceptual,
analytical problem-solving skills to master. But that is just the language -
what about the design of a large-scale application suite?

So people with less analytic ability and problem-solving skill (I think of
those who HAD TO choose humanities and arts majors here?) can actually pick it
up. This can be looked at as good or bad; in the long run it is probably good
overall, because the understanding of hardware and memory becomes less
important as processors, memory, storage, and bandwidth become cheaper. There
will always be the geeks who still understand everything and can design and
architect the software for the developers to code.

One downside may be that those with analytic/problem-solving minds will lose
their art in a sense, and lose the very thing that keeps their minds sharp.
Another thing worth mentioning is that there are going to be a lot of changes
for developers. Wages will go down as it gets easier to learn how to program -
thus more programmers flood the market (supply and demand) - and there will be
a separation of architects and code monkeys, the latter being the humanities
graduate type.

To sum up - This will "Dumb Down" your average developer and create a clearly
defined separation between developers and designers/architects.

------
neur0
Quite off-topic, but the underscores for the variable names in the first code
snippet are totally irritatingly useless and anachronistic in the most ugly
looking way.

~~~
efnx
They mark the private variables so you don't have to remember which ones are
private when you read further down the page. That's pretty commonplace and
good style, IMO. What makes you think they're anachronistic?

