
Does Functional Programming Replace GoF Design Patterns? - apgwoz
http://stackoverflow.com/questions/327955/does-functional-programming-replace-gof-design-patterns
======
gruseom
Looking back, I'm amazed at how long it took me to awaken from the dogmatic OO
slumber. One of the things that snapped me out of it was realizing how
mechanical and low-level the GoF patterns are. Combine that with the tight
coupling you get in object models (a foo has a bar and a collection of bazzes,
each of which has a bizzat, and they each tell two friends, and so on and so
on) and you end up with very rigid code. That medium is concrete and concrete
hardens awfully quickly.

Nowadays, my programs are orthogonal sets of functions that I can combine any
way I want to, with no object model to wrestle into submission, and my world
is blissfully design-pattern-free.

Freud said sometimes a cigar is only a cigar. I say sometimes a function is
only a function.

Edit: I exempt Smalltalk from this generalization about OO. I've met too many
smart, well-informed people who prefer Smalltalk (at least half a dozen!) to
believe that what Smalltalk programmers do is OO of the kind that I
experienced. Alan Kay's writings on the subject confirm that.

~~~
silentbicycle
If you're exempting Smalltalk from your generalizations about OO, then your
observations probably aren't even about OO so much as the fallout from C++ and
Java's attempted graft of it onto C. That's like exempting Haskell from
generalizations about functional programming.

OO systems can be simple and incredibly useful when solving some problems
(without needing tremendous amounts of boilerplate code), but many things
claiming to use OOP are doing so for buzzword-compliance. You _can_, of
course, write good OO code in C++ and Java (or even C), but many people
working in them do not understand OO design very well (overusing inheritance
is especially common), and you will have to write complex and verbose
workarounds for fundamental differences in language semantics (e.g. the lack
of Smalltalk-style blocks, C-style static typing's requirement for explicit
casts).

If you want to understand OO, look at Smalltalk (_Smalltalk-80_ is a good
book). Post-Smalltalk prototype-oriented languages such as Io
(<http://www.iolanguage.com/>) and Self
(<http://research.sun.com/self/language.html>) also have numerous interesting
ideas in them*, but writings about them generally assume familiarity with
Smalltalk. Also, read Alan Kay. Whether or not you agree with his conclusions,
he writes lucidly about many language design issues.

My favorite OO example from Smalltalk is that the language doesn't need a
primitive for "if": you just send a message to a Boolean object, which
evaluates one of the two blocks it is passed, depending on whether it is true
or false. A Smalltalk block, by the way, is an anonymous function that is not
evaluated until it is explicitly invoked. Sound familiar? :)
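
The idea can be sketched in Python; the class and method names below are
illustrative stand-ins, not Smalltalk's real protocol (Smalltalk sends
ifTrue:ifFalse: to a Boolean, with blocks as arguments):

    
    # Sketch of Smalltalk-style booleans: "if" as a message send.
    # True and False are objects; each answers the message differently.
    class SmalltalkTrue:
        def if_true_if_false(self, true_block, false_block):
            # A true object evaluates only the true block.
            return true_block()
    
    class SmalltalkFalse:
        def if_true_if_false(self, true_block, false_block):
            return false_block()
    
    # Python lambdas stand in for blocks: neither runs until called.
    result = SmalltalkTrue().if_true_if_false(lambda: "yes", lambda: "no")
    print(result)  # yes
    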

* Prototype-based programming is also easily done in Javascript and Lua, FWIW. Steve Yegge is big on the former (<http://steve-yegge.blogspot.com/2008/10/universal-design-pattern.html>), I prefer the latter.

~~~
gruseom
Note: I mostly wrote the following before I had my coffee. Consider yourself
warned :)

 _your observations probably aren't even about OO_

I programmed professionally in OO for years, have trained many people in OO
design and domain modeling, and know what the fuck I am talking about. I know
what Smalltalk blocks are, I know what Smalltalk "if" is, and I happen to have
read Alan Kay, which you might have seen had you read my post properly instead
of just dumping out a pile of predigested tutorial boilerplate.

My critique of OO comes from struggling to build serious production systems
with it, not from programming language parlor games.

 _exempting Smalltalk from your generalizations about OO ... [is] like
exempting Haskell from generalizations about functional programming_

No, it's not. Kay coined the term OO, but it hasn't meant what he meant by it
for a long time. I'm talking about OO in the sense that the overwhelming
majority of people now understand it: roughly, the organization of programs
into classes that bundle state and behavior in ways that attempt to model the
problem domain. Whatever makes Smalltalk great, it isn't this, as the
subsequent history of OO makes clear. People thought that this was the
contribution of Smalltalk and they tried to abstract the paradigm out and
improve on it - but what really happened is they threw the baby out with the
bathwater, and now we have static bathwater. I have a hunch this is because
they paid too much attention to the Simula strain in Smalltalk and not enough
to the Lisp one (<http://news.ycombinator.com/item?id=288786>). Had they
listened to Alan Kay they wouldn't have made this mistake; there's no question
which he emphasizes more.

 _OO systems can be simple and incredibly useful when solving some problems
(without needing tremendous amounts of boilerplate code)_

Anything is simple and useful when you're working on a simple problem. What
matters is how it works on complex problems. The claim made about OO for a
generation now is that it helps to build complex systems, and this mantra has
been repeated and taught so often that most people just believe it. It's not
true. In my experience, you _do_ typically end up needing tremendous amounts
of boilerplate code when working that way (whether or not you see it as such
is a different matter). Worse, your system starts to fight itself because the
way you need to organize it over here doesn't work over there, and so on; time
to factor out the code you need into a static "utility" class - poof, there
just went your OO paradigm.

Ultimately, I'm exempting Smalltalk from my critique because of the respect I
have for the Smalltalk programmers I know, who can't possibly only be doing OO
in the majoritarian sense. I haven't used Smalltalk enough myself to have a
clear idea of what they're really doing, and Bob Dylan said not to criticize
what you don't understand.

I'm not, however, exempting CLOS, which I have used enough to experience
several of the limitations I'm talking about, enough so that I don't bother
with it anymore. Perhaps that will persuade you that I'm talking about more
than C++ and Java?

~~~
silentbicycle
First, I saw that you said that you have read Alan Kay, but I'm speaking to
the general discussion audience. And, ok, you know what you're talking about;
I'm used to reading people who think that _all_ OO systems necessarily suffer
from the same problems as e.g. C++'s, and, not knowing your programming
history, had few impressions to the contrary based strictly on what you said.
Nothing personal. Be civil, have your coffee.

> What matters is how it works on complex problems.

Agreed. I don't believe pure-OO (or pure-FP, pure-declarative, etc.) is an
ideal approach for _anything_. OOP is a technique that works well for _some
problems_, and is best understood so you can use it when it is a good fit.

I think that the best way to _learn_ OO is via studying Smalltalk. I think the
best way to learn FP is via either Haskell or ML, etc. I do real work in
OCaml, Lisp, and Python (though I am coming to prefer Lua over Python), but
use a multi-paradigm approach based on what I've learned in other languages.

~~~
gruseom
_Be civil, have your coffee._

That made me laugh! Actually the coffee kicked in as I was writing, which you
can probably tell. I don't know why I didn't go back and edit the first
part... I compulsively edit everything else.

------
daniel_yokomizo
Most OO languages are based on no theory, while every FP language can trace
its roots to the lambda calculus. Functions compose and abstract very well,
leading to simpler and more expressive designs.

In OOPLs the unit of abstraction is the class; usually classes aren't even
first-class citizens, and the syntax is usually too heavyweight for casual
use (imagine an FPL where declaring an anonymous function took as many lines
of code as declaring an anonymous class does in Java). Even Smalltalk and
Self use blocks (i.e. closures or lambdas) instead of providing some
lightweight class/object literal notation. GoF patterns were created to
solve these problems of OOPLs. If you're trying to do FP in a language
without higher-order functions and algebraic data types, you'll have to come
up with dozens of design patterns to express things as simple as folds or
pattern-matching.

OTOH a hypothetical OOPL with higher-order classes and a lightweight syntax
for anonymous object/class literals would provide combinators for almost all
(if not all) GoF patterns, and this wouldn't be an issue. It's not FP that
replaces GoF patterns; good languages with a solid theoretical basis replace
GoF patterns, independent of the paradigm.
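
The point about folds can be sketched in Python (a hypothetical example;
"reduce" here plays the role of a fold combinator):

    
    # A higher-order fold makes "walk this structure and accumulate"
    # a one-liner, where an OO design would reach for a Visitor class.
    from functools import reduce
    
    numbers = [1, 2, 3, 4]
    
    # Sum and product are just different arguments to the same combinator;
    # no per-use boilerplate class is needed.
    total = reduce(lambda acc, x: acc + x, numbers, 0)
    product = reduce(lambda acc, x: acc * x, numbers, 1)
    print(total, product)  # 10 24
    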

~~~
silentbicycle
> Most OO languages are based on no theory

Smalltalk is based on the same computational theory as Lisp. See: The Early
History of Smalltalk
(<http://gagne.homedns.org/%7etgagne/contrib/EarlyHistoryST.html>), and
search for "_to really understand LISP_". (It's a scanned and OCR'd article,
so there are strange typos.)

Unfortunately, you're ultimately correct: "most [popularly used] OO
languages" these days seem to be based on tearing off a handful of
Smalltalk's object-based theory of computation, duct-taping it onto C, and
seeing what happens. You can have OO languages with very elegant theoretical
foundations (Smalltalk, Io, Self), including systems that treat classes as
first-class objects (sometimes via a "meta-object protocol"), but most
people associate OO with C++ and Java.

> Even Smalltalk and Self use blocks (i.e. closures or lambdas) instead of
> providing some lightweight class/object literal notation.

This really is key: the language doesn't require you to create a full
anonymous class when sending a message or passing an anonymous function
would do. Also, blocks aren't evaluated unless they are explicitly invoked,
which means you get a good syntax for lambdas and basic lazy evaluation for
free. Those, along with dynamic typing, make most of C++'s OO-style
awkwardness vanish.

------
jherber
Amusingly horrible question. Functional programming languages don't typically
have class-based or prototype mechanisms, so why on earth would a named set of
object interaction mechanisms be relevant - let alone displaceable?

I also find it funny that the status quo in language design is effectively
to combine objects and functions, yet programmers and writers are not
acknowledging this fusion. OCaml, Javascript, F#, Scala, Ruby, Nemerle ...
with varying ease, these languages all let you create lambdas, compose
functions, and treat functions as values. Even mainstream languages like C#
and Java are awkwardly moving in this direction.

Scala does some crazy stuff to make this "fusion" interesting. It allows
objects in pattern matching by providing a specific mechanism (extractors)
for an object to expose encapsulated behavior. Functions all belong to a
parametric set of classes and have an "apply" method, e.g. the function
value (x: Int, y: Int) => x + y is really new Function2[Int, Int, Int] {
def apply(x: Int, y: Int): Int = x + y }. Likewise, any class or singleton
object can be invoked as a function by providing an "apply" method
implementation.
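
Python draws the same fusion with its __call__ protocol: any instance that
defines __call__ can be invoked with call syntax, much like a Scala object
with an apply method. A minimal sketch (the Adder class is made up for
illustration):

    
    # An object used as a function via __call__, Python's analogue of
    # Scala's apply method.
    class Adder:
        def __init__(self, n):
            self.n = n
    
        def __call__(self, x):
            return self.n + x
    
    add5 = Adder(5)    # an object...
    print(add5(3))     # ...invoked as a function: prints 8
    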

~~~
dangrover
I think it's because you can easily make your own object system in a
functional language.

Make yourself a constructor function that returns a dispatch function (the
"object"): the dispatch function takes a message name, looks up the
corresponding inner function, and calls it with the remaining args. And you
still maintain encapsulation, too, because the returned function can access
state that is only visible inside the containing closure.

Put all of this together in a nice "class" macro, and bam, you've got objects.

Though, yeah, I guess this would be really tedious if your functional language
didn't have good macros.
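
The sketch above, minus the macro layer, in Python rather than Scheme (names
here are invented for illustration):

    
    # Closure-based "object": a constructor function returns a dispatcher.
    # State lives only in the enclosing scope, so it is fully encapsulated.
    def make_counter(start=0):
        state = {"count": start}   # mutable slot captured by the closures
    
        def increment():
            state["count"] += 1
            return state["count"]
    
        def value():
            return state["count"]
    
        def dispatch(message, *args):
            # Message name selects which inner function runs.
            methods = {"increment": increment, "value": value}
            return methods[message](*args)
    
        return dispatch
    
    counter = make_counter()
    counter("increment")
    counter("increment")
    print(counter("value"))  # 2
    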

~~~
omouse
"And you still maintain encapsulation, too, because the returned function may
access stuff that was only available inside the containing closure (where it
was dispatched)."

That only works if your functions can maintain state, and if they can, well,
they're basically _objects_, aren't they...

~~~
dangrover
That's my point. You don't need to have a separate special construct for
objects, or any kind of support for them, when you have first-class functions.

As for state information, you could just put a 'let' inside there with your
variables, and call mutators on them from inside the "object"'s functions.

Hmm, I think all these Scheme-centric classes I've taken have warped my mind.
I should seek therapy.

~~~
apgwoz
> when you have first-class functions.

This isn't enough without the ability to close over the lexical environment
and rebind the variables in it. See Python pre-3.0 (before "nonlocal"),
where you needed the dictionary/list "hacks" to mutate enclosing state.
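
The hack and its Python 3 replacement, side by side (counter functions here
are a made-up example):

    
    # Pre-3.0 Python closures could read enclosing variables but not
    # rebind them, hence the mutable-container "hack".
    def make_counter_hack():
        count = [0]              # list hack: mutate the slot, never rebind
        def increment():
            count[0] += 1
            return count[0]
        return increment
    
    def make_counter_nonlocal():
        count = 0
        def increment():
            nonlocal count       # Python 3 removes the need for the hack
            count += 1
            return count
        return increment
    
    print(make_counter_hack()())      # 1
    print(make_counter_nonlocal()())  # 1
    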

------
silentbicycle
What is the difference between a design pattern and a language idiom?

From common usage, a "design pattern" seems to be a way of organizing part of
a program that cannot be directly expressed _once_ in the language and then
pushed into a library, but instead needs to be mostly rewritten whenever
used. (Polyglot programmers usually recognize this as a kind of language
weakness; see
<http://weblog.raganwald.com/2006/01/finding-signal-to-noise-ratio-in-never.html>.)
If one could just say,

    
    
    require "visitor.lib";

and move on, then would people make such a big deal about them?

Along similar lines, FP has currying, pattern matching / destructuring-bind,
higher-order functions, and monads, among other things. The only one of these
I've heard referred to as a design pattern is the monad. Most FP languages
just let you do all of the others, but monads require rewriting some
(relatively small) infrastructure code. In Haskell, the compiler (via
typeclasses) _recognizes_ that you are writing an instance of a _known
general concept_, though, so you don't need to duplicate most of the
infrastructure.
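
A rough sketch of the Maybe monad in Python (names here are hypothetical;
Haskell writes this infrastructure once, via the Monad typeclass and
do-notation):

    
    # The "infrastructure" a monad pattern rewrites each time:
    # short-circuit on failure, otherwise feed the value onward.
    NOTHING = object()  # sentinel for "no value"
    
    def bind(value, fn):
        return NOTHING if value is NOTHING else fn(value)
    
    def safe_div(x, y):
        return NOTHING if y == 0 else x / y
    
    # Chain computations without checking for failure at each step.
    result = bind(bind(safe_div(10, 2), lambda v: safe_div(v, 5)),
                  lambda v: v + 1)
    print(result)  # 2.0
    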

If this isn't the case, then why aren't general OO design concepts such as
e.g. delegation or inheritance considered design patterns?

~~~
jd
> What is the difference between a design pattern and a language idiom?

A design pattern is simply a name given to a common solution to a recurring
problem. By giving it a name, it becomes much easier to talk about and
understand the code. Once you know that a class is a proxy class, you don't
really care what the methods do anymore; you just dive in and see to which
classes the real work is delegated. So the one word "proxy" tells you
exactly what 400 lines of boilerplate do.

A language idiom is different. The purpose of language idioms is to clearly
and succinctly express common concepts. The language designers think of a
"best" solution, and give it a name.

So, really, when you have a bunch of people writing the same code to solve the
same class of problem, you give that code a name and call it a design pattern.
Patterns emerge AFTER the language is used. Language idioms try to capture the
essence of the problem, such that no patterns are necessary.

At least, that's how I look at it.

------
newt0311
It's not functional programming per se but higher-level languages in
general. Most Scheme and Lisp environments already come with high-level
abstractions that completely obviate the need for many of the obvious design
patterns (like visitor, etc.), and Lisp is by no means a purely functional
language (though it does incorporate many aspects of functional languages).

