
Joe Armstrong: Why OO Sucks
http://harmful.cat-v.org/software/OO_programming/why_oo_sucks
======
dgreensp
OO vs. FP is just a matter of whether you focus on the nouns or the verbs. The
counterargument to the OP is that surely a function that manipulates "data" is
less powerful and abstract than one that manipulates objects.

For example, take an "interface" or abstract data type like Array, consisting
of a length() and a get(i) method. (This is really called List in Java and Seq
in Scala.) There may even be an associated type, A, such that all items are of
type A. This is very powerful because functions written against the Array
interface don't depend on the implementation; we can store the data in
different ways, calculate it on demand, etc.
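The interface being described is easy to sketch in Python (names here are illustrative, with a `typing.Protocol` standing in for the Java/Scala interface):

```python
from typing import Protocol

class Array(Protocol):
    """The hypothetical Array interface: just length() and get(i)."""
    def length(self) -> int: ...
    def get(self, i: int) -> int: ...

class ListArray:
    """Implementation one: backed by an in-memory list."""
    def __init__(self, items):
        self.items = list(items)
    def length(self):
        return len(self.items)
    def get(self, i):
        return self.items[i]

class RangeArray:
    """Implementation two: the values 0..n-1, computed on demand."""
    def __init__(self, n):
        self.n = n
    def length(self):
        return self.n
    def get(self, i):
        return i

def total(xs: Array) -> int:
    """Written against the interface; never sees the implementation."""
    return sum(xs.get(i) for i in range(xs.length()))
```

A caller of `total` cannot tell whether the data is stored or computed on demand, which is the decoupling being described.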

The "binding together" Joe is complaining about is binding the implementation
of length() and get(i) to the implementation of the data structure, which is
surely understandable. The alternative, seen in Lisps and other "verb-
oriented" languages, is that there is a global function called "length" which
takes an object... er, a value... and desperately tries to figure out how to
measure its length properly, perhaps with a giant conditional.
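For contrast, a rough sketch of the "giant conditional" approach being criticized (purely illustrative; real Lisps use generic dispatch rather than a literal conditional):

```python
def length(value):
    """A single global 'length' that inspects whatever it is handed.
    (Illustrative only; real Lisps dispatch generically, not like this.)"""
    if isinstance(value, str):
        return len(value)
    elif isinstance(value, (list, tuple)):
        return len(value)
    elif isinstance(value, dict):
        return len(value)
    else:
        raise TypeError(f"don't know how to measure a {type(value).__name__}")
```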

The original OO (Smalltalk) was about message passing rather than abstract
data types; just the idea that an object was responsible for responding to
certain messages, and that these communication patterns completely
characterized the object. This is how we think about modern cloud services,
too; it's kind of inevitable. Who would complain that S3's "functions" and
"data" are too coupled? Who would ask for a description of S3 in terms of what
sequence of calls to car and cdr it makes internally? OO concepts allow a
functional description of a system that starts at the top and can stop at any
point.

The "everything is an object" philosophy gets a bad rap. It's a big pain in
Java, especially, because of how the type system works. Ideally I'd be able to
define a type of ints between 1 and 3, an obvious subclass of ints in general,
whereas in Java I find myself declaring "class [or enum]
IntBetweenOneAndThree" or some nonsense.
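The workaround being complained about looks roughly like this (a hypothetical sketch; the class name comes from the comment above):

```python
class IntBetweenOneAndThree:
    """Hypothetical sketch of the wrapper-class workaround: a 'subtype'
    of int expressed as a validity check at construction time."""
    def __init__(self, value: int):
        if not 1 <= value <= 3:
            raise ValueError(f"{value} is not between 1 and 3")
        self.value = value
```

The type system never learns the bound; it lives only in the runtime check, which is exactly the complaint.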

~~~
meric
In Haskell I guess it would look like this (I'm still new to Haskell, so
hopefully someone can improve it):

    
    
        data M = M1 | M2 | M3 deriving (Show, Eq, Ord)

        fromM :: M -> Integer
        fromM M1 = 1
        fromM M2 = 2
        fromM M3 = 3

        instance Num M where
          abs = id
          signum _ = 1
          negate a = fromInteger (negate (fromM a))
          a + b = fromInteger (fromM a + fromM b)
          a * b = fromInteger (fromM a * fromM b)
          a - b = fromInteger (fromM a - fromM b)
          fromInteger 0 = M3
          fromInteger 1 = M1
          fromInteger 2 = M2
          fromInteger n = fromInteger (mod n 3)
    

Is this what you're looking for? :)

~~~
klodolph
A more generic version allows you to parameterize based on the bound:

    
    
        newtype BoundedInt b = BoundedInt Int
    
        class Bound b where
          boundRange :: t b -> (Int, Int)
    
        fromBounded :: BoundedInt b -> Int
        fromBounded (BoundedInt x) = x
    
        toBounded :: Bound b => Int -> BoundedInt b
        toBounded x =
          let result = BoundedInt x
              (minb, maxb) = boundRange result
          in if minb <= x && x <= maxb
               then result
               else error $ "Out of range: " ++ show x
    
        instance Bound b => Num (BoundedInt b) where
          abs = toBounded . abs . fromBounded
          negate = toBounded . negate . fromBounded
          signum = toBounded . signum . fromBounded
          x + y = toBounded (fromBounded x + fromBounded y)
          x - y = toBounded (fromBounded x - fromBounded y)
          x * y = toBounded (fromBounded x * fromBounded y)
          fromInteger = toBounded . fromInteger
    

This assumes you want exceptions for overflow. Nowadays, you can put numbers
in the type system, obviating the need for a "Bound" class, but I'm not
familiar with it yet. The implementation above will also silently overflow
given large enough bounds.

------
bithive123
A series of assertions and "I just don't see it"-s presented as self-evident
when they are anything but. No examples of real cases where OO does in fact
"suck", ending with the claim that in order to understand the popularity of OO
one should "follow the money".

What? I mean, I don't even...

~~~
comex
harmful.cat-v.org has a lot of odd opinions:

<http://harmful.cat-v.org/software/dynamic-linking/>

<http://harmful.cat-v.org/political-correctness/girls-in-CS>

~~~
discussit
The second link is to gibberish from reddit. Not sure if it's a joke or if it
represents the opinions of cat-v.

But the first link is on the money. I would not call it "odd" as that seems to
carry a negative connotation. Rather, it's sensible, albeit irreverent,
thinking to question dynamic linking (as the Plan 9 people have); alas, the
mainstream of software development has a difficult time thinking sensibly and
prefers to "follow the herd".

Dynamic linking is anything but a clear "win", using the silly web lingo of
today. It's one of those many engineering trade-offs we continue to live with
even though the original rationale for its adoption no longer exists.

Here's the problem as I see it.

You link to a library of n functions where, e.g., n > 10. But your program
only uses 1 or 2 of those functions.

There is no accounting method for keeping track of which functions each of
those "4000" different binaries uses. How easy is it to tell which functions
each of those "4000" binaries uses, and which libraries they reside in? I have
to resort to binutils hackery to extract this info, but it seems like basic
information that should be easily available... because it should be used in
making decisions.

Why link to a library with, e.g., > 10 functions when your program only uses,
e.g., 2 of them?

How many functions in those libraries that your program links to are not used
by your program?

No big deal you say. And that's true. Because of dynamic linking.

Dynamic linking to some degree makes us disregard this wasteful "black box"
approach to use of library functions.

But what if we started static linking? Then maybe we start to think more about
those functions that are in the linked library but are not used. We might even
question the whole idea of libraries.

What if we took an accounting of all the unused functions? How much space
would they account for?

Why package functions together in libraries? Whatever the original reasons
were for adopting this practice, do they still exist today?

How many libraries do you have installed on your system that are only used by
one or a few programs? Is there a threshold for how many programs need to use
a group of functions before it justifies creating a library to be shared?

If we really want to have functions that are to be shared among many disparate
programs, and we have "4000" programs (cf. the number of programs on a system in
the 1980's), then it makes less sense to place them in arbitrary libraries (how
many people know the location of every C function purely from intuition about
library names?) that have to be linked to as a unit. We perpetuate a black
box. It makes more sense to have each function available on its own, and each
program can select only those functions that it needs.

This is if we were static linking.

I do a lot of static linking because I move programs from system to system. It
works very well.

Another thing that I sometimes think about is the sticky bit. We try to
achieve the same effect with caching. You call a program for the first time,
it takes some time to load. You call it a second time and hopefully it's all in
the cache, and it is much more "responsive". But what if it cannot fit in the
cache? What if another program displaces it? Why can't we consciously keep a
program in a "cache"? We give control to the OS, and we hope everything works
as intended.

Dynamic linking reminds me very much of package management systems,
particularly those that build from source. It's extremely difficult to tell
what a particular install procedure consists of. You type "make" and what
happens next is in many respects a black box. Only so much information can be
reliably extracted from the system. For example, if you wanted to know all the
possible Makefile variables in the package management system, it's virtually
impossible to get a list. You can get most, but not all.

The way these systems work is a lot like shared libraries. The maintainers err
on the side of overinclusion of dependencies, some of which may not actually
be needed, in order to keep the black box working reliably.

~~~
comex
Well, I'd distinguish your "maybe there's a better way" from the article's

> All the purported benefits of dynamic linking [..] are myths while it
> creates great (and often ignored) problems.

which is tripe.

In my view, the fundamental problem with static linking as we currently know
it is that when libraries release bug fixes-- especially security fixes (which
I think are the real clincher), but really any of the minor bug fixes that
happen all the time in more complex libraries-- you almost certainly want all
the programs on the system that use the library to switch to the new code, and
by the nature of libraries, there are probably a lot of them. In some cases
you could rely on a distro / traditional package manager to provide new
binaries for everything, but that would be a lot of wasted bandwidth and even
if you don't use proprietary software, you probably want to be able to use
some software from outside the distro. So you really need an automatic re-
linker of some sort and probably a new binary format that can be re-linked,
and some infrastructure to keep track of what binaries exist on the system and
what they depend on. Plan 9 never had that, and at that point I think you're
solving more fundamental problems than static vs. dynamic linking (which is a
good thing) and should take a step back and see what you can do with it, but
who knows-- it might be interesting to talk about, but you have to come up
with it first. :)

General comments on your post: I think the boundary of a library is not quite
arbitrary, because

- libraries tend to be developed independently by different people! It might
be nice to develop things in a more unified fashion, but in general I don't
think you can avoid people having specific (functional) interests and areas of
expertise, and it's nice to have a unit of code that someone can "own".

- random interdependencies tend to be a bad thing; ask Google. Organization
is good.

- many libraries have the job of parsing file formats or doing other things
where the selection of which functions to invoke generally comes from user-
supplied data-- you can't ask for half of FreeType or, dare I say it, WebKit;
you need to be able to parse whatever the user throws at you, so it's all or
nothing.

Which is not to say there couldn't be improvements.

~~~
p9idf
"So you really need an automatic re-linker of some sort and probably a new
binary format that can be re-linked, and some infrastructure to keep track of
what binaries exist on the system and what they depend on. Plan 9 never had
that."

Plan 9 has all of those things. Namely 7l† and mk††.

† <http://plan9.bell-labs.com/magic/man2html/1/2l>

†† <http://plan9.bell-labs.com/magic/man2html/1/mk>

------
zacharyvoase
“Data structure and functions should not be bound together” — I can't agree
with you more.

However, in Smalltalk (and even Ruby, to some degree) objects are _not_ data
structures, they are collections of functions invokable on a 'thing' with an
unknown structure. They have an internal structure—potentially immutable—but
you never see this, because you only interact with methods on the object.

And in many cases, there is syntactic sugar to make invocation of these
methods look like slot access: think of Objective-C’s `@property`, or Python's
descriptors, or Ruby's `def method=(value)`.

When people talk about 'object-oriented languages' in such general terms I get
frustrated, because there's a lot more nuance to this than simply 'bundles of
functions and data structures'. That's a very implementation-led way of
looking at it. The reality is that these objects are supposed to represent
real-life situations where what something is, how it behaves, or how it
fulfills its contracts is unknown. If NASA's Remote Agent[1] had
been implemented in Haskell, OCaml or ML, do you think debugging DS1 would
have been as simple as connecting to a REPL and changing some definitions in a
living core? I don't think the image-based persistence of Smalltalk and many
Lisps would be possible in a purely functional or traditional procedural
language.

And what is a data type anyway? It's supposed to represent a mathematical set
of possible values. Sure, you can use a simple array to build a b-tree, but
don't you want to explicitly state that _variable x is a b-tree_ if that's the
case? I was always taught that explicit is better than implicit.

I should probably stop ranting now, it's just that if you’re going to start
hating on programming paradigms, at least _sound_ like you've thought your
argument through a bit more.

[1]: <http://www.flownet.com/gat/jpl-lisp.html>

~~~
joe_the_user
I'm not exactly sure what you mean here. Last I programmed Ruby, objects had
member variables.

~~~
JonnieCache
The point is, they're all private, all the time.

Let's overlook the fact that ruby allows you to ignore this using stuff like
Object#instance_variable_get ;)

Ruby is a strange one because it has a fundamentalist approach to OO, yet it
is shot through with FP ideas. Most ruby programmers use the latter heavily.
Also, ruby's love of syntactic sugar makes generalisations hard to apply to
it, for example the aforementioned all-private member variables combined with
the single-line getter/setter macros, and the method invocation sugar. The
ruby programmer is often able to have their cake and eat it.

------
notJim
99% of the time I read these articles that say $commonly_used_thing [1] sucks,
the arguments are always "it is fundamentally incorrect" or some variant
thereof [2], and strawmen [3] abound.

Where these arguments fall short are in addressing the simple fact that
highly-skilled people produce very neat, well-designed systems that they are
pretty happy with from a technical standpoint, and that make money every
single day using $commonly_used_thing. If you can't acknowledge that
$commonly_used_thing has some good attributes, and that it actually works well
for many cases, I don't understand why I should take you seriously.

[1]: Examples of commonly_used_thing: ORMs, OOP, SQL databases, NoSQL
databases, operating systems, platforms.

[2]: There are a handful of variants. I think my favorite is the magical
phrase "impedance mismatch", which I think in non-buzzword-speak translates to
"fuck you, I'm right"

[3]: Most-frequent strawman: the most essentialist, rigidly-formal version of
$commonly_used_thing, when in reality, nearly every version of
$commonly_used_thing compromises to cope with reality.

~~~
LnxPrgr3
Every time someone says $commonly_used_thing sucks, people come out and point
out that $commonly_used_thing is being used for $productive_activity.

It _is_ possible to do amazing work with broken tools, or with the wrong
tools. That doesn't mean these tools aren't broken or that they couldn't be
better matched to the job.

Not that I think OOP is inherently evil. Reading the article, I don't think
the author quite understands OOP. One example:

"In an OOPL I have to choose some base object in which I will define the
ubiquitous data structure, all other objects that want to use this data
structure must inherit this object."

If he really believes this, no wonder he's railing against OOP. That _would_
be horribly broken.

~~~
espeed
_Reading the article, I don't think the author quite understands OOP._

The original author is Joe Armstrong (<http://www.sics.se/~joe/>), the creator
of Erlang.

~~~
barrkel
I don't think Joe ever claimed to be a great programmer. Erlang looks as odd
as it does because it was hacked up in Prolog, which few compiler engineers
would choose as a starting point.

I think Erlang is interesting precisely because it was created by someone with
a little distance.

~~~
nosequel
I take it you have never written anything significant in Erlang?

Try writing something in erlang and I bet you'll see the real power of pattern
matching, supervision trees, and the whole message passing infrastructure. If
the people who wrote Erlang weren't good programmers they certainly got lucky.

~~~
barrkel
You seem to have mistaken my comment for a criticism of Erlang.

------
MarkMc
Wow, I am genuinely shocked by the comments in this thread. I didn't realise
that so many people held the polar opposite view to me. It's a bit like
suddenly finding out that all your friends are racist.

I _love_ object oriented programming. For me it aligns perfectly with the way
I think - it allows me to produce a system of interrelated 'things' where each
thing (or group of things) has a well-defined role and can hide its internal
state and behaviour from other things.

When I see how some code tackles a problem I get an emotional response from
how 'clean' it is. Does it smell bad or is it a work of beauty and elegance?
If the code feels wrong I get an urge to make it better and for me that
process of improvement relies heavily on object-oriented concepts. I get a
real buzz from creating a clean, elegant solution to a problem: Trying to do
that without object-oriented features would be like trying to write a letter
by holding the pen with my teeth. Ugh.

~~~
fauigerzigerk
So what is the natural and clean design for an operation that represents the
sale of a property? Say we have these objects: the buyer, the seller, the
agent, the property and the contract. Which one of these would you prefer?

property.sell(buyer, seller, agent, contract)

seller.sell(property, buyer, agent, contract)

buyer.buy(property, seller, agent, contract)

agent.sell(property, buyer, seller, contract)

contract.sign(property, buyer, seller, agent)

The state of all the objects may be modified, and there are different types of
properties, buyers, sellers and agents. So you might want polymorphism along
any of those hierarchies.

~~~
jpatte

      contract = agent.makeContract(property, seller, buyer)
      seller.signContract(contract)
      buyer.signContract(contract)
      agent.registerSale(contract)
    

Nothing complicated here, really. Just follow the "natural" flow and respect
each actor's role. Don't try to group all operations into a single one when
there are independent actors involved.

Note: I see the "contract.sign()" solution coming back a lot. I will admit
this is "pure OO", but to me it doesn't make any sense. A contract here is a
_data_ object, not an _actor_. It shouldn't process anything. How many times
do you see contracts sign themselves (or even _do anything_) in reality?
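The four-step flow in the pseudocode above can be fleshed out as a minimal sketch (class bodies are assumptions; only the method names come from the pseudocode):

```python
class Contract:
    """A data object in the sense above: it holds information, does nothing."""
    def __init__(self, prop, seller, buyer):
        self.prop, self.seller, self.buyer = prop, seller, buyer
        self.signatures = []

class Party:
    """An actor (buyer or seller) who can sign a contract."""
    def __init__(self, name):
        self.name = name
    def sign_contract(self, contract):
        contract.signatures.append(self.name)

class Agent:
    """An actor who drafts contracts and registers completed sales."""
    def __init__(self):
        self.sales = []
    def make_contract(self, prop, seller, buyer):
        return Contract(prop, seller, buyer)
    def register_sale(self, contract):
        self.sales.append(contract)

# The "natural" flow: each actor performs its own role.
agent = Agent()
seller, buyer = Party("alice"), Party("bob")
contract = agent.make_contract("the house", seller, buyer)
seller.sign_contract(contract)
buyer.sign_contract(contract)
agent.register_sale(contract)
```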

~~~
fauigerzigerk
Your "data object" versus "actor" argument is interesting. Does that mean an
object model should mimic certain properties of the real world that aren't
even part of the system, like the knowledge that agents can act whereas
contracts can not?

There are clearly conflicting goals here. It could be that you need
polymorphism along one hierarchy but that would make the design look very
unnatural in the eyes of someone who knows the problem domain.

In my experience, OO models tend to diverge greatly from the real world over
time, because after all we're not modelling the world, we're modelling a
solution to a problem and we shouldn't fight the tendency for the language of
the solution domain to dominate the language of the problem domain.

Sometimes this can be mitigated by having interfaces on different levels of
abstraction, but that often leads to bloated and slow systems.

~~~
jpatte
Clearly the goal here is not to simulate an ecosystem where agents, buyers and
sellers happily live together.

However, the idea here is to organize your object model exactly as you would
organize a group of employees. Each of these employees has a specific job and
a specific set of responsibilities - ideally only one. You can describe the
solution to your problem as the result of their interaction. These employees
are _actor_ objects.

To interact efficiently, they need to exchange information, in the form of
_data_ objects. These data objects don't do anything except hold a piece of
information - exactly as you would have two employees exchanging notes or
emails.

Most of the time these data objects correspond to the language of your problem
domain. The actor objects however may have nothing to do with any "real world"
activity, depending on the job you want them to do in the process. So yes, the
object model diverges from reality over time, because we introduce new actors
with specific roles that have no "real world" equivalent. But that's just
fine.

The important thing is to keep seeing the distribution of responsibilities
among your objects and their interaction as the work of a team of independent
experts, not as "things doing stuff".

------
LnxPrgr3
I'm all for proper rants against popular tools to keep people on their toes.

This isn't one of those.

"Objects bind functions and data structures together in indivisible units. I
think this is a fundamental error since functions and data structures belong
in totally different worlds."

Sure—a class defines a type and operations on that type. What's fundamentally
wrong about date.addDays(1) vs. date_add_days(date, 1)? (Let's skip the
mutable state argument and assume both versions return a new date.)
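The two spellings side by side, as a minimal sketch (both return a new value, as the parenthetical stipulates; the `Date` wrapper is hypothetical):

```python
import datetime

def date_add_days(date, n):
    """Procedural spelling: a free function over a date value."""
    return date + datetime.timedelta(days=n)

class Date:
    """OO spelling: the same operation as a method returning a new Date."""
    def __init__(self, d):
        self.d = d
    def add_days(self, n):
        return Date(self.d + datetime.timedelta(days=n))
```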

There is the problem that sufficiently opaque classes are hard or impossible
to extend. That's the class author's fault: this is an avoidable problem in
every object-oriented language I've used.

"Functions are understood as black boxes that transform inputs to outputs. If
I understand the input and the output then I have understood the function. …
Functions are usually 'understood' by observing that they are the things in a
computational system whose job is to transfer data structures of type T1 into
data structure of type T2."

A constructor is a black box that converts a data structure of type T1 into a
data structure of type T2. Objects just also have other black box functions
defined on them.

Sure, some objects are stateful, but they don't have to be.

"In an OOPL I have to choose some base object in which I will define the
ubiquitous data structure, all other objects that want to use this data
structure must inherit this object."

Um, no. This is a job for composition, not inheritance.

"Instead of revealing the state and trying to find ways to minimise (sic) the
nuisance of state, they hide it away."

They hide state's implementation, for mutable objects.

    
    
      std::vector<std::string> some_list;
      std::cout << "Items: " << some_list.size() << std::endl;
      some_list.push_back("Hello, world!");
      std::cout << "Items: " << some_list.size() << std::endl;
      // Oh no! State, EXPOSED!
    

Sure, an allocated piece of memory might have grown, or even moved. Why should
I care? I still see the state I care about, presented through a hopefully
useful abstraction.

This rant seems to somehow miss the points of both object-oriented and
functional programming, instead harping on mostly meaningless (or outright
wrong) details. Or am I missing something here?

~~~
LnxPrgr3
I owe the author an apology on one point. "Minimise" is a correct spelling,
though neither my system's dictionary nor I realized this.

~~~
tikhonj
I think minimise vs minimize is just British vs American spelling
respectively. So neither is strictly wrong.

------
NinetyNine
There's a certain allure to saying code should be a certain way because of
natural properties of computing, or our own feelings about which things are
similar to and different from which other things.

The reason OO shines is that it allows you to make that distinction at the
domain level rather than the code level. You organize your software into
business objects, or components, and have these interact with each other. They
allow you to separate it out in a way that a new developer can come to the
project, understand what the code _should_ be doing, and look for the classes
which seem like the objects involved, including the types of data they hold and
the things they can do.

There are all sorts of nasty things we've invented in OO over the past few
years (mixing up inheritance and composition, using way too much state), but
it gives us a lot of advantages from an engineering point of view.

~~~
stock_toaster
Isn't that a bit of a false dilemma though? Can you not have clean interfaces
and separation of concerns without OO -- even with something as 'simple' as
python namespaces and dicts?

~~~
el_presidente
Dicts are objects.

~~~
stock_toaster
That is an implementation detail due to python's object support. Dicts are a
native type, implemented as hashmaps.

Unless you mean "object" as in "a thing"....

~~~
Maro
I think he means dicts combine data, state and functionality, expose a clean
API and hide the details.

------
programminggeek
I think OOP took off because it _seems_ like a great way to model things. The
idea that you can simulate a car by saying you have a base kind of car with
properties and actions and then a Ferrari is a kind of car, so you can just
kind of take that car object and make it have more horsepower and a different
body type and you have a Ferrari, is very exciting.

Businesses like to model things and simulate things. So, in that respect, OOP
was probably an easy sell because it's selling an idea of what businesses
want, even if it hasn't worked out exactly as they hoped in all cases.

------
einhverfr
I see a bunch of caveats here.

The basic issue is that OOP is easily hyped and far too easily taken too far.
My favorite OOP environments are decidedly un-OOP in specific ways (Moose, for
example, has very transparent data structures, which is really useful).

The first big criticism I have is with the idea that "state is the root of all
evil." I think the truth is more nuanced than that. State is, in many cases,
extremely necessary to track. The problem is that state errors create bugs
that are very difficult to track down (you can eventually figure the state
out, but how did it get corrupted?). A better approach, I think, is for state to
be approached declaratively and with constraints. This is why things like
foreign keys, check constraints, etc. in the RDBMS world are so nice. In fact
good db design usually has a lot to do with eliminating possible state errors.
Wouldn't it be great if OOP environments gave that possibility! Well, Moose
does to a large extent (another thing I really like about it).
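A small Python analogue of that declarative, constrained approach to state (a sketch, not Moose; the `Account` class and its invariant are invented for illustration):

```python
class Account:
    """State with a declared constraint, checked on every write -- a tiny
    analogue of an RDBMS CHECK constraint."""
    def __init__(self, balance=0):
        self.balance = balance  # routed through the setter below

    @property
    def balance(self):
        return self._balance

    @balance.setter
    def balance(self, value):
        # The constraint lives in one declared place, so this state
        # error class simply cannot occur anywhere in the program.
        if value < 0:
            raise ValueError("balance may not go negative")
        self._balance = value
```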

A lot of it comes down to the really hard question of "what should be
abstracted?" The _correct_ answer is a question, "what makes your API most
usable?"

This is why I think it's important to be able to move in and out of the OOP
worlds, and why OOP taken too far runs into the problems the author mentions,
but that it also doesn't have to.

------
6ren
- though he has no qualms about misleading and deceptive answers.

- _"If I understand the input and the output then I have understood the
function."_ A good point, I tend to think of "information hiding"
<http://en.wikipedia.org/wiki/Information_hiding> as applying to state, to
enable SOTSOG, but it also applies to pure functions.

- _"define all my data types in a single include file"_ That quote sounds
silly, but in practice, I find it much clearer if all the parts of a data type
are next to each other, uncluttered by methods. It also supports Brooks'
observation: "Show me your tables, and I won't usually need your flowcharts;
they'll be obvious" ("tables" being data structures). In Java, I tried this by
defining fields in superclasses, methods in subclasses. But having two classes
per class was awkward. (I ended up keeping code entirely separate except for
very core methods - still not happy with it). But I don't think this is
entirely a language problem; it's partly just that complexity management is
hard.

- this article makes me feel antagonistic, but in fact I never liked OO when
taught it; it seemed dogmatic, not actually useful in practice. But I did like
the idea of an ADT, where you can package something up (esp. a list, hashtable
etc), and work at a higher level of abstraction. Subdividing tasks and SOTSOG.

------
Locke1689
Fundamentally I think OOP is either state or syntactic sugar. If your methods
don't modify internal state then they're basically doing _ad hoc_ type
polymorphism on their first argument (this is the way that virtual methods are
implemented, by the way), which just makes the '.' syntactic sugar that at the
same time limits composability because it demands an inheritance hierarchy.
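The "'.' is syntactic sugar" point can be seen directly in Python, where a method call is literally the class's function applied to its first argument:

```python
class Greeter:
    def greet(self, name):
        return f"hello, {name}"

g = Greeter()
# The '.' call is sugar for the class's function applied to its first
# argument; dispatch happens on the type of g.
assert g.greet("world") == Greeter.greet(g, "world")
```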

Then it just comes down to whether or not you believe that mutable state is a
good design choice. I don't. I think state is the root of all evil. For one,
it makes my job as a PL researcher of 1) writing a formal analysis and 2)
using the formal analysis to write a compiler, less attractive than blowing my
brains out.

Note that you can have objects without OOP. Python has objects. Python is not
_object-oriented._ Same with O'Caml or Racket. I'm not arguing against using
state. If you're programming a state machine, you may want to model it with
state. That would be a pretty good choice. The problem is that OOP says
_everything is a state machine._ Do you believe that or not?

Either way, I'm putting my best efforts towards state-corralled languages.

------
msluyter
I expect this to be a busy thread.

I think the point about state has some traction. For example, Chapter 15 of
Effective Java (awesome book on Java, btw) is entitled "Minimize Mutability,"
so I think this idea is one that has caught on even in fairly traditional OO
languages.

As for the other points... I do think sometimes that using OO to model real
world objects may not always be wise, esp. if the result is a deep hierarchy,
as in the OO 101 example of, say, Boston Terrier < Dog < Mammal < Animal <
Thing... And then someone changes Dog and gives your Boston a tail... I dunno.
I have only vague intuitions here, but perhaps the "objects as models of
reality" might be perfectly suited for reality simulators of some sort that
require stateful elements a la Sim City but not generally. Or, perhaps a
better example: You could model a chess game as classes of Pieces on a Board,
with methods like King.isInCheck(), or Queen.canMoveTo(Square) but this to me
seems clumsier than simply having an 8x8 array of enums with the logic living
in functions and not inside individual pieces.

------
pacala
The historical win of OO was polymorphism. The competition to OO was
procedural code that consist(s|ed) of hardwired procedure calls. Enter
polymorphism, which provides a way to abstract over functions, not only over
values. Of course, this is nothing new to functional programming where
functions are first class citizens, but it's new for procedural programming.
Modern OO is about stateless objects, dependency injection and unit testing,
aka functional programming.

------
nessus42
I've been watching some talks online recently by Rich Hickey of Clojure fame,
and he's a very interesting and convincing speaker. He basically makes the
same argument that Armstrong makes here.

I'm not clear, however, how the pro-FP, anti-OO crowd address the Law of
Demeter, which is often summarized as "One dot: good. Two dots: bad." The
canonical example where the Law of Demeter serves us well comes from some of
the original Demeter papers, which I actually read a long time ago when they
were current. This canonical example is that of an object to represent a book.

One of the initial selling points of OO was that if you encapsulate the
representation of an object from its interface, this ends up giving you a lot
more flexibility. For the case of representing a book, pre-Demeter, a typical
OO organization would have been to provide a method to give you chapters of
the book as Chapter objects, and from there you could get Section objects,
from which you could get Paragraph objects, from which you could get Sentence
objects, from which you could extract the words as strings.

The Demeter proponents correctly argued that this OO organization of the Book
rather defeats the goal of encapsulation, since with this organization you
cannot restructure the internals of the Book object without breaking the API.
E.g., if you decide to insert Subsections between Sections and Paragraphs,
your API for extracting all the sentences of a book will change, and
consequently, much of the client code will have to change.

The Demeter folks argued that instead of having to explicitly navigate to
sentences, you should just be able to call a method on the Book object
directly to get all the sentences. Without special tools, however, this is
hard for the implementers of Book, since now they have to write tons of little
delegation methods. I take it that people who are serious about following the
Law of Demeter _do_ do this, however. In the original Demeter system, Demeter
would do this automatically for you. The problem with the original Demeter
system is that few people actually ever used it, and it was rather complicated
for Demeter to provide this automatic navigation.

So, back to FP: Rich Hickey argues to forgo damned objects and to just let the
data be _data_. So if I follow Hickey's advice, how am I supposed to represent
a book? As a vector of vectors of vectors of vectors of strings? If so, then
how do I prevent a change in the representation of the Book from breaking
client code? If I had followed the Law of Demeter with OO, then everything
would be golden.

Sure, with this naive FP approach, I could also provide a zillion functions to
fetch different sub-structures out of the book. E.g., I could have a function
to return all the sections in a specified chapter, and another to return all
of the sentences in the book. This, however, would end up being little
different from the OO approach following the Law of Demeter, with the further
downside that if you change the representation of the book, you don't know
that you haven't broken the client code, because you have no guarantee that
the client code isn't accessing the representation directly.
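Concretely, that naive FP approach might look like this Python sketch (the nested key names are made up for illustration):

```python
# A book as plain nested data; clients are expected to go through
# the accessor functions rather than index into this directly.
book = {
    "chapters": [
        {"sections": [
            {"paragraphs": [
                {"sentences": ["Call me Ishmael.", "Some years ago..."]},
            ]},
        ]},
    ],
}

def sentences(book):
    """The supported way to get all sentences. If subsections are ever
    inserted between sections and paragraphs, only functions like this
    one change; callers are untouched."""
    return [s
            for ch in book["chapters"]
            for sec in ch["sections"]
            for par in sec["paragraphs"]
            for s in par["sentences"]]
```

Nothing in the language stops a client from reaching into the dicts directly; the guarantee is only by convention, which is exactly the gap described above.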

Please advise.

~~~
tikhonj
Instead of giving you an example that anybody actually uses, I'm going to tell
you about a cool idea I've been reading about that hasn't gotten much actual
use.

The basic idea is to use a generalization of pattern matching. Languages like
ML and Haskell support pattern matching, but in rather limited ways.
Crucially, patterns _are not_ first-class citizens of the language. (For
Haskell, at least, there are some libraries to remedy this, but I don't know
how effective they are.)

So how can we generalize pattern matching to help you solve your book problem?
Normal patterns allow you to match data types in the form C x₁ x₂... where C
is some constructor and x₁ x₂... are either matchable symbols or arbitrary
patterns. An example of a pattern would be Chapter (Cons (Section content)
rest). We differentiate between the matchable symbols and the constructors on
case: lowercase means matchable, uppercase means constructor. This is somewhat
limited: you cannot easily write code that is generic over the constructor at
the head of the pattern. You could write a function that counts the sections
in a chapter, but you could not write a function that counts the sections in
anything.

So let's relax the restriction that patterns have to be headed by a
constructor. We can now have patterns in the form x y. These are static
patterns: you can match data against them without evaluating the pattern. With
this, we can imagine writing a function to count sections generically:

    
    
        count_sections x = 0 -- If this is some terminal, it cannot be a section
        count_sections (Section content) = 1 + count_sections content
        count_sections (x rest) = count_sections rest
    

This goes through the entire data type you passed in and counts all the
sections it sees. It assumes sections can be nested. This will let you count
the sections in a Book or a Chapter or a Series or whatever you want.

So, this is generic over the data you pass in. However, if you wanted a
function to count Chapters or Sentences or what have you, you would be forced
to write it. This calls for another extension to pattern matching: dynamic
patterns. Patterns are now in the form x ŷ where x is a _variable_ and ŷ is a
_matchable symbol_. Constructors are still uppercase, so Section is a
constructor and not a variable.

A variable in a pattern can be instantiated with another pattern. So now we
can write a truly generic count function:

    
    
        count constructor (constructor) = 1
        count constructor (constructor x̂) = 1 + count x̂
        count _ (x̂ ŷ) = count ŷ
    

So now if you want to count chapters in your book, you would just invoke count
Chapter book. If you want to count sections in your chapter? Easy: count
Section chapter.

You can also use patterns and constructors for polymorphism by overloading
functions on different constructors. One interesting idea is to allow adding
cases to functions after their definition. This way you could have an existing
toString function and then, when you've defined a book, add a new case to it:

    
    
        toString += | Book title content -> "Book: " ++ title 
    

This way you can have a toString function that works on any type of data.

All my examples are obviously in pseudocode. (And hey, it looks nothing like
Python! The whole "runnable pseudocode" mantra annoys me.) I haven't covered
all the edge-cases, and I haven't even begun talking about a type system for
this mess. Happily, there's somebody who has, and wrote a book about it
(that's where I got all these ideas): _Pattern Calculus_ by Barry Jay[1].

[1]: [http://www.amazon.com/Pattern-Calculus-Computing-
Functions-S...](http://www.amazon.com/Pattern-Calculus-Computing-Functions-
Structures/dp/3540891846)

I'm also not sure whether this is the best possible approach. However, I think
it's certainly very neat. If you like this idea, the book is definitely worth
a look.

~~~
polymatter
Thanks for an excellent write up to the idea. That was very clear.

I am very intrigued and was looking at purchasing that book to learn more -
but then I saw the price. I'll have to scrounge a copy from a library
somewhere. I thought on-demand printing was going to reduce the cost of books
on the long tail!

~~~
tikhonj
Oh yeah, it's more expensive than I thought :/. Happily, I was lent it by a
friend.

You might have some luck getting it as an ebook from the library. My
university's library seems to only have it as an "electronic resource", and
it's a pretty big library.

I like the book, but a good part of it is just background. It goes through a
bunch of variations on the lambda calculus in building up to the "pattern
calculus". If you're already familiar with the basics, you might be better off
just reading some papers on it. (Hopefully there are some you can read for
free, but I'm not sure.)

------
phleet
So I have very limited experience with FP, and a reasonable amount with OO
(mostly dynamically typed).

I can really see the benefits of FP, but there are some problems I have
trouble modelling with FP.

For instance, if I have a simple 2D rendering engine, I just want to say "add
this object to the screen". The object might be a geometric primitive (square,
circle, etc.), it might be generated particles, or it might be an image or
video or something. The way I deal with this at the moment is to have
everything added to the screen implement a Drawable interface with a "draw"
method, something like:

    
    
        screen.addToScene(new Circle(...))
        screen.addToScene(new Square(...))
        screen.addToScene(new ParticleGenerator(...))
        screen.addToScene(new ImageSprite(...))
    

Then the game loops over each of the Drawables and calls .draw() on them,
which is implemented differently for everything that implements Drawable.

How would I model this in FP?

The only solution I can think of at the moment is to have a draw function that
does pattern matching on the type of thing and do it that way. How do people
do stuff like this in scheme or other languages with limited support for
pattern matching?

The problem I have with this is that it means every time I want to add a new
kind of thing, if it implements many methods in an interface, I have to go to
many different files to implement how this new kind of thing works.

Among other things, that's a huge pain for revision control, since if I have 3
coworkers adding new kinds of things that can be drawn, we're all going to
have to modify the draw function. In OO, we'd each just be creating a new
subclass in its own new isolated file.

As a second question - what should I read to get a good idea of how to sanely
model things in OO and FP? I've read a lot of debate about the right way of
doing things, but I don't really know where to learn this stuff. The OO class
in university was completely useless, since the examples were outrageously
contrived and too small to see any real benefits. I'd ideally be looking for 1
book that explains how to model real problems in OO very clearly, and one book
for how to model real problems in FP.

~~~
aaronharnly
I encourage you to read Philip Wadler's essay The Expression Problem, which
addressed precisely the dilemma you point out:

<http://www.daimi.au.dk/~madst/tool/papers/expression.txt>

In brief, if you think of data types as rows and behaviors as columns, the
question is how to extend either the rows or the columns naturally.

In your example, it is easy to add a new row (datatype) – create a Triangle
class which implements the Drawable interface. It is difficult to add a new
column (behavior) – if you realize all of these datatypes should also have an
"extrude to additional dimension" behavior, you're going to have to
individually implement that in all of your different classes, across many
different files, etc. All of the problems that you note arise when adding a
new datatype in the FP strawman.

It's important to recognize that this is indeed a difficult problem, and that
addressing it well takes real care.

The c2 wiki has a good distillation:

<http://c2.com/cgi/wiki?ExpressionProblem>

and this paper (by the Scala people) is very nice:

[http://www.scala-
lang.org/docu/files/IC_TECH_REPORT_200433.p...](http://www.scala-
lang.org/docu/files/IC_TECH_REPORT_200433.pdf)

Some approaches that languages take, to varying degrees of success, include
typeclasses in Haskell, multimethods in Clojure and elsewhere, the Visitor or
Extended Visitor patterns in OO languages, controllable extensions in C#,
Ruby, and Scala, etc.
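As a small taste of the multimethod-style option, Python's `functools.singledispatch` lets a behavior live outside the datatypes it works on, so either axis can be extended in one place (the Circle/Square types here are illustrative):

```python
from dataclasses import dataclass
from functools import singledispatch

@dataclass
class Circle:
    radius: float

@dataclass
class Square:
    side: float

# A new behavior (column) is one generic function plus per-type cases,
# defined outside the classes themselves.
@singledispatch
def area(shape):
    raise TypeError(f"no area case for {type(shape).__name__}")

@area.register
def _(shape: Circle):
    return 3.14159 * shape.radius ** 2

@area.register
def _(shape: Square):
    return shape.side ** 2

# Adding a Triangle (row) later means one new class and one new
# @area.register case; no existing file has to change.
```

This doesn't fully solve the expression problem (there is still no static check that every type has an `area` case), but it shows the shape of the trade-off.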

~~~
phleet
Thanks for the sources - will sit down and read them later.

The row, column analogy is pretty apt - it's an interesting way of looking at
things.

While it is true that adding a new behaviour in OOP is a pain in the ass, I
definitely find myself adding new datatypes far more frequently than adding
new behaviours for an interface.

Isn't adding a lot of new behaviours to data typically a sign that the
interface is bloated and is now dealing with too many things?

------
damian2000
In the 1990s I saw OO as just another tool which was infinitely better than
what I had at the time. I hazard a guess that most devs at the time were still
working with procedural languages like C, Cobol, Fortran, Pascal or Basic. OO
gave you abstraction and encapsulation, making it a little easier to write
better code, that's all.

For me it was never OO vs. FP, it was just OO vs. the status quo. If OO was
hyped up so some guys could make money from it (as the article suggests), then
who was behind it? -- Bjarne Stroustrup, Anders Hejlsberg or James Gosling? I
think not.

------
gabordemooij
To me object oriented programming makes a program 'come to life'. In our daily
lives we are surrounded by objects: trees, houses, books... to name just a
few...

I love the fact that I can reason about these 'natural' concepts in my code.
Thinking 'in objects' sparks my creativity and boosts my imagination. It helps
me to visualize otherwise very abstract notions.

I love to talk about a 'Book' instead of an Array. With good Object Oriented
code, technical concepts and natural ideas seem to come together. To me, the
benefits of writing object oriented code have more to do with human-computer
symbiosis ( <http://en.wikipedia.org/wiki/Smalltalk> ) than with pure
technical correctness, it just fits my mind.

If you want to appreciate the real beauty of objects I recommend skipping Java
and C++ for a minute and looking at Smalltalk. I just read the Blue Book
(Smalltalk-80) and I had tears in my eyes. The elegance and beauty of this
language is just stunning.

------
carsongross
I find grouping functional code along with the data the code is supposed to
work on reasonably intuitive. The OO religionists definitely sold the world a
bill of goods on the reuse arguments, and the religious fervor was silly (just
like it is with today's functional zealots) but still.

Having an 'x' and hitting '.' and seeing what 'x' knows to do with itself
doesn't suck.

------
borplk
From a purely scientific view, OO is a terrible idea because it moves the
program further away from the mathematical form and makes it harder (if not
impossible) to, say, logically prove the correctness of the program. But from a
practical perspective OO is a great idea because it makes many things so much
easier.

~~~
yen223
"But from a practical perspective OO is a great idea because it makes many
things so much easier."

This seems true at first, but after having worked with C# a while I'm not so
sure about that. OO introduces some weird issues that aren't immediately
apparent:

1\. Verbosity. When the shortest method call looks like
"abcObject.functionXYZ()", code gets huge really fast character-wise. This
actually does make it harder to read and debug existing code.

2\. Multithreading. Multithreading in any programming paradigm is a pain, I'll
give you that. But OO exacerbates the problem because each property of an
object is essentially global state within its local scope. It makes it quite
tricky to enforce thread safety.
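A minimal Python illustration of that point, using only the standard library: an object's fields are shared mutable state, and thread safety has to be bolted on per object.

```python
import threading

class Counter:
    """Object state shared across threads: increment() is a
    read-modify-write, so it must be locked explicitly."""
    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()

    def increment(self):
        with self._lock:  # drop this lock and updates can be lost
            self.value += 1

def hammer(counter, n):
    for _ in range(n):
        counter.increment()

counter = Counter()
threads = [threading.Thread(target=hammer, args=(counter, 10_000))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter.value is reliably 40_000 only because of the lock
```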

Having said that, I'm not sure that rewriting it in, say, a functional style
would make things simpler. Sure, it is easier to prove correctness, but as
soon as you introduce I/O, everything goes to hell. I guess OO seems to be the
worst pattern, except for all the others that have been tried.

~~~
icebraining
I don't think verbosity is an issue with OO any more than it is with any other
paradigm. 'abcObject.functionXYZ()' is not any more verbose than
'functionXYZ(abcDataStructure)'. I think it's mostly a question of the
community's coding style, and C♯ and Java are notoriously verbose even in
their standard library.

------
alttab
What about the benefits of abstraction? I'm not going to introduce hyperbole
on how I think this article is overstated, and instead I'm going to ask HN
members who have more experience with functional programming how they leverage
concepts similar to abstraction in Haskell, Clojure, Erlang, etc.

~~~
zaphar
Abstraction:

    
    
       * Haskell typeclasses
       * Clojure Protocols, Multimethods
       * Erlang Processes
    

Abstraction is not limited to Objects. Objects are just one way of expressing
abstractions.

~~~
alttab
Thank you. This gave me enough keywords to do some good research. Now clearly
these methods could even be applied to Ruby and other OO-enabled languages as
well.

------
ColinWright
There was a substantial discussion when this was submitted 3 1/2 years ago:
<http://news.ycombinator.com/item?id=474919>

I do wonder if this discussion repeats all the same points, or if it raises
new ones.

------
hurshp
What I find most obtrusive about OOP, and what I feel is a massive issue
(maybe it has to do with the last sentence in Joe's post), is that OOP is
pushed into places it does not belong and causes a lot of impedance issues.

If something doesn't talk OOP, OOP developers want to make it talk OOP; ORMs
and SQL databases, for example.

SQL is about tables and sets, and most of computing uses sets and tuples, yet
objects need to be serialized and abstracted away and pushed in, almost
becoming a data type in themselves.

And I think there are other issues and previous failures like this.

------
nivertech
While I dislike OOP, I think that CLU-style ADTs have some merits, especially
when the ADT is implemented in a purely functional way.

Just because you don't have explicit schema, doesn't mean that you have no
implicit schema.

Likewise just because you don't have explicit objects and classes, doesn't
mean that you have no implicit objects and classes.

I code in Erlang, and I treat every gen_server either as a singleton object or
as a class (in case I spawn many instances of it).

~~~
nessus42
Wow, CLU! Those were the days!

------
Tloewald
The whole basis of this argument is that functions and data structures should
not be "locked in a cage together". Replace object with "file" in the entire
article and you'd have an equally but more obviously ridiculous argument.

Does OOP have problems? Sure. Is Erlang great in some ways? Sure. But this
argument is silly.

------
timruffles
I think a project along the lines of Todo MVC for general programming
languages - <https://github.com/addyosmani/todomvc/> \- would work really well
for illustrating these kinds of debates.

Ideally it'd be a reasonably involved problem domain (rather than a todo list)
with persistence, networking, and something that requires
parallelism/concurrency (I'm sure there are other categories too). This'd
expose each language to the kinds of complexity that bring out the really
interesting differences - does language X allow a clean API even when we
require immutability for parallelism, does language Y impose boilerplate on
simple problems, does language Z require unreadable line noise?

I find these debates nearly useless without evidence and code to read.

------
lukifer
OO is just one design pattern of many; it should be used when the mental model
is a good fit for the problem domain. It does annoy me, though, when languages
or frameworks force the use of OO when it's unneeded.

Use a function when you need a function, and a class when you need a class.

------
yason
My sentiments exactly; they couldn't have been put better than the author's
opening sentence: _When I was first introduced to the idea of OOP I was
skeptical but didn't know why - it just felt "wrong"._

------
lightblade
What I find funny is that all these OO design patterns and best practices are
aimed at solving problems that don't exist in FP. Of course I may be over-
generalizing, but you get my point.

------
bborud
It would be useful if Armstrong stated which languages and which programming
styles he disagreed with.

"OO" is too vague. Not only does it include languages as different as
JavaScript, Perl, C++ and Java, but there are wildly different ways of using
each of these languages.

For instance it is entirely possible to write purely functional code in
JavaScript if you want to.

------
erlkonig
Heh. The "-deftype second() = 1..60." is a problem, since some minutes have 61
seconds in them.

------
jongraehl
Author does not understand OO, or argues against strawman OO:

> In an OOPL I have to choose some base object in which I will define the
> ubiquitous data structure, all other objects that want to use this data
> structure must inherit this object.

------
CurtMonash
His history is wrong. OO won in large part for a good reason -- it was a way
of implementing, if not enforcing, modularity. One that people accepted,
unlike LISP.

~~~
artsrc
I like modules that are more like objects. I would like to supply parameters
to their constructors and be able to refer to them via names.

Languages without modules, like JavaScript, support that kind of thing.

[http://wadler.blogspot.com.au/2009/08/objects-as-modules-
in-...](http://wadler.blogspot.com.au/2009/08/objects-as-modules-in-
newspeak.html)

------
kyledrake
"Reason 2 - It was thought to make code reuse easier. I see no evidence of 1
and 2."

<https://rubygems.org/stats>

~~~
astrange
Code reuse works at the library level, not the individual object level. You're
not exactly fishing an object out of a worldwide sea of classes there.

Actually, immutability can make code reuse pretty easy, but many OO systems
make no effort to encourage it (one reason Python is not my favorite
language).

------
vph
data structures and functions are very much related and connected. In fact,
all data structures are invented to perform certain specific functions. OO is
not the answer to all things, but it provides a natural way to fuse data
structures and their associated functions together.

~~~
tikhonj
In this case, "function" means mathematical function rather than purpose.

Of course, if you want to take a somewhat reductionist approach, you could
argue that data structures _are_ functions. After all, in the lambda calculus,
_all_ you have are functions, and you can use them to represent numbers, pairs,
lists, booleans, conditionals--basically whatever you want.
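For instance, the classic Church encodings can be written directly in Python; this is a toy illustration of that reductionist point, not practical code:

```python
# A pair is just a function that remembers two values and hands
# them to whichever selector it is later applied to.
def pair(a, b):
    return lambda select: select(a, b)

def first(p):
    return p(lambda a, b: a)

def second(p):
    return p(lambda a, b: b)

# A Church numeral n is the function that applies f to x n times.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))

def to_int(n):
    """Decode a numeral by counting how many times f is applied."""
    return n(lambda k: k + 1)(0)
```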

------
jroseattle
Conversations around programmatic semantics and languages and the like
always come and go over time, and feelings about them ebb and flow. Over
umpteen years as a developer, I can say that I've found there's something
about _every_ language that will cause one to say "why do I have to think
about this like that?" Nothing is perfect, but certainly a lot of languages do
a few things quite well.

As such, I would be grateful to hear an argument of why object-oriented
programming structures are incorrect. I disagree with the reasons provided by
the OP because of the slant toward personal preference. The arguments posted
here are specious; I can find holes in each of the points made.

1\. Data structures and functions should not be bound together - very true,
they should be independent. However, the statement "Objects bind functions and
data structures together in indivisible units" presumes how something is
implemented (or rather, that it is ALWAYS implemented that way); while a tight
binding _is_ possible in most OO-supporting languages, it's not requisite.
Just because the ability to violate this exists doesn't make it awful; it just
makes it incumbent on the programmer to use the right approach in a given
situation.

2\. Everything has to be an object - in some languages, this is true. However,
this causes what problems? For the OP, this is nothing more than semantical
ickiness. I won't defend any implementation of things like time and date and
other primitives, but the chief complaint here seems to be how that
information is accessed and the form it takes. I simply find the
"this-is-an-object-so-it-feels-wrong" argument quite lacking.

3\. In an OOPL data type definitions are spread out all over the place - this
is organizational, but I'm not sure what "find" means in context. I guess it
depends on the language being used, but I question why this is an issue for
the OP. "In Erlang or C I can define all my data types in a single include
file or data dictionary." I can do the same thing in Java or C# or other
languages, if I want. For most developers, "finding" data type definitions has
more to do with documentation than the actual language.

4\. Objects have private state - of course they do, it's the nature of OOP.
This statement: "State is the root of all evil. In particular functions with
side effects should be avoided." This is unfounded (not the side effects part,
which has nothing to do with state). State, as the OP points out, exists in
reality but should be eliminated from programming. Just as the bank example
points out, one will want state to be accounted for in cases of deposits and
withdrawals from an account. Thinking that state can only be handled in a
certain way (which is what this argument suggests) is limiting in evaluation
and unimaginative in assessment.

Most of the arguments show a personal preference in application development,
and that I totally understand. But these arguments are intended to show why
the languages which support OO are conceptually wrong, as if the concepts of
the alternative are an accepted truism.

~~~
Locke1689
_not the side effects part, which has nothing to do with state_

State is a side effect. In fact, that's basically the definition of side
effect -- I think you need to revisit the basic principles of formal
semantics.

 _State, as the OP points out, exists in reality but should be eliminated from
programming_

State is the entire point of programming. State shouldn't be eliminated, it
should be corralled so effects are traceable and formalizable.

Objects rely on being mutable. An object-oriented language pushes objects
everywhere. This pushes mutation everywhere. This is the opposite of
corralled.

~~~
jroseattle
> State is a side effect. In fact, that's basically the definition of side
> effect -- I think you need to revisit the basic principles of formal
> semantics.

You're right, thanks for the correction. My line of thinking was that
addressing the issue of side effects in programmatic functions was not the
same thing as addressing state -- while not mutually exclusive, they're not
the same thing.

> State is the entire point of programming. State shouldn't be eliminated, it
> should be corralled so effects are traceable and formalizable.

I won't go so far as state being the entire point, but it's simply another way
to programmatically solve a problem. It absolutely should not be eliminated,
but rather used responsibly -- just as with anything else in a programmatic
environment.

~~~
Locke1689
_I won't go so far as state being the entire point, but it's simply another
way to programmatically solve a problem._

Writing to memory is stateful. Writing to a file is stateful. Printing
something to the screen is stateful. Seeing the _effect_ of your program is
stateful. Would you say that the point of programming is to produce an effect?

~~~
jroseattle
That's fine. All your points are perfectly valid, but they have nothing to do
with OOP, which is what I was referencing.

~~~
Locke1689
_Objects rely on being mutable. An object-oriented language pushes objects
everywhere. This pushes mutation everywhere. This is the opposite of
corralled._

As I said in another comment:

 _Note that you can have objects without OOP. Python has objects. Python is
not object-oriented. Same with OCaml or Racket. I'm not arguing against using
state. If you're programming a state machine, you may want to model it with
state. That would be a pretty good choice. The problem is that OOP says
everything is a state machine._

------
ww520
I don't get it. If people don't like OO, why don't they just not use it? Just
use your favorite methodology to get the job done. Why do they have to
badmouth it?

~~~
i_cannot_hack
Because they want to promote discussion and exchange opinions, and maybe
advise others against something that they think is a bad idea. Just silently
avoiding stuff is a very stifling and unproductive way to handle things.

They do not badmouth OOP. It is not a human being with reputation and
feelings. They criticize its usage.

~~~
mattacular
But he is criticizing the design and fundamentals of OOP. Something doesn't
have to be human or have a reputation to be badmouthed...

------
nirvana
I can express my OO ideas in Erlang with no problem (objects become
processes).

I cannot express my concurrency ideas--which Erlang makes super easy--in the
OO languages.

Maybe Go changes this but I haven't used Go yet.

~~~
glassx
Agreed. And Joe Armstrong himself used this argument here:
<http://www.infoq.com/interviews/johnson-armstrong-oop>

------
michaelochurch
The people who originally came up with OOP knew what they were doing. The
inspiration was the cell, which hides immense mechanical complexity behind a
simpler interface of electrical and chemical signals. When interfaces are
simple, it limits the unexpected dependencies that can exist between software
modules. Alan Kay wasn't saying, "Go off and write bloated objects" but, "When
software must be complex, strive to provide simple APIs."

For example, when you write a SQL query, do you have to micromanage the
database in how it's performed? No. That's an example of encapsulation. The
implementations are different, but the interface is fairly stable. That's
something OOP-like that works very well-- an interface that hides
(encapsulates) complexity.

Objects are general and powerful, and that's part of the problem. "Power"
isn't always good; GOTO is also extremely powerful, but should be used
sparingly. Objects are not specific. Is it a function, or a tangle of methods,
or a data object, or a "singleton" module? This isn't clear, and it becomes
even less clear in the industrial world where tens of hands pass over code and
it turns into mashed potatoes.

There are some good ideas in "object-oriented programming", but it's also an
extremely complicated programming model and it's hard to develop OOP code
_correctly_. If you don't know what "open recursion" is and why it's
dangerous, you shouldn't be doing OOP.

What happened in the 1990s is that there was an effort (now becoming
acknowledged as a failure) to commoditize programming talent-- to make 5
mediocre programmers able to replace a great one, and thereby prevent what we
see now (the long-term "threat" of top software engineers outclassing
professional managers in social status and compensation). Thus was born a
bunch of design-pattern cargo-cult stuff designed to make programming slow,
tedious, and limited _but_ easy enough that mediocre people could do it, if
they were stacked on top of each other in large enough numbers. Thus were born
bloated, horrible codebases that bastardized "object-oriented programming"
beyond imagination-- 21st-century spaghetti code.

People should be required to learn the basics of programming first. They
should start with immutable data objects and referentially transparent
functions. Mutable reference cells can come next as an optimization. Then,
it's a good
idea to learn type systems through OCaml or Haskell. After that, they can
tackle the hybrid OO/FP of Scala. (Once you've learned Scala, there's no
reason to use Java unless you need performance.) There are times to use OO and
times to use FP, but if you aren't smart or curious or dedicated enough to
grok FP, then you'll never actually understand OO either and you have no
business trying to use it.

People learn best when they're presented with one new concept at a time, and
the problem with OOP is that it presents tens of new concepts at the same
time, with no separation. It leads to cargo-cult programming because people
start coupling concepts that don't necessarily belong together.

~~~
guard-of-terra
"People should be required to learn the basics of programming first. They
should start with immutable data objects"

Immutable data objects and all that stuff are not the basics of programming.
They never were. Unless you single-handedly redefine the meaning of
"programming".

The whole text you just wrote is full of dumb self-praise and deprecation of
others, based on worship of principles that were never proven to produce good
software, or any software at all. That this is receiving upvotes makes me sad.

~~~
tikhonj
There's no need to redefine programming. There are essentially two fundamental
ways to approach a program: what it _does_ (coming from EE) and what it
_means_ (coming from math).

If you're coming from the first perspective, then mutation is fundamental.
Registers are always changing in value and everybody likes self-modifying
assembly.

However, if you're coming from the second direction--which is at least as
fundamental as the first--the basis of computation _does not_ involve
mutation. This is where the lambda calculus comes in. It's the abstraction
that underlies most of the mathematically well-behaved languages. And in the
lambda calculus, as well as a bunch of its popular variations, there is no
mutation.

From this second perspective, immutable data _is_ a basic building block of
programming. Essentially, all you have are functions that can bind names to
arguments and be applied to each other. It's a surprisingly simple and elegant
model.
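
That "surprisingly simple" model can even be replayed in Scala: the classic Church encoding (a standard textbook construction, here specialized to `Int` to keep the types readable) represents numbers as nothing but functions being bound and applied.

```scala
// A Church numeral applies a function f to a starting value x, n times.
type Church = (Int => Int) => Int => Int

val zero: Church = f => x => x
val succ: Church => Church = n => f => x => f(n(f)(x))

// Addition: m applications of f stacked on top of n's. No mutation anywhere.
val plus: Church => Church => Church = m => n => f => x => m(f)(n(f)(x))

// Decode a numeral by counting applications of (+ 1) starting from 0.
def toInt(n: Church): Int = n(_ + 1)(0)
```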

There are some other popular models of a computer program like the SKI
calculus. However, it's easy to translate between the lambda calculus and SKI
calculus, so for the purpose of this discussion they are effectively the same.
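
The closeness of the two systems is easy to see with the combinators written out as ordinary Scala functions (my own rendering; the reduction is the standard one).

```scala
// The three SKI combinators as curried functions.
def I[A](x: A): A = x
def K[A, B](x: A)(y: B): A = x
def S[A, B, C](f: A => B => C)(g: A => B)(x: A): C = f(x)(g(x))

// The classic identity: S K K reduces to I, because
// S(K)(K)(x) = K(x)(K(x)) = x.
def skk[A](x: A): A = S[A, A => A, A](K[A, A => A])(K[A, A])(x)
```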

So from a more mathematical standpoint, functional programming _is_ more
fundamental than imperative programming. Redefining the meaning of
"programming" is not required!

~~~
guard-of-terra
Programming is about writing some code that does useful work. Okay?

When you argue that programming is something else, you're doing just that:
redefining.

Because from the first days, programming meant building computational devices
or programming programmable devices. And those have changing state, and then
they have machine code, which mutates state. That's how it always was, and now
some people want to redefine it, but I say no to them. Programming is not CS.
Programming is about producing programs. Historically and statistically
programs are written to work by mutating their state.

~~~
prodigal_erik
Turing machines have mutable state. The lambda calculus does not. Yet any
algorithm which can be expressed by one can also be expressed by the other. We
have _proof_ that state is merely an implementation detail, which happens to
be widespread in today's hardware because we figured out how to implement it
at scale.
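
The equivalence is easy to see in miniature. Here's the same sum written both ways in Scala (a toy example of mine): one mutates register-like variables, the other threads the state through arguments and mutates nothing, yet they express the same algorithm.

```scala
// Turing-machine flavored: a couple of "registers" that change each step.
def sumImperative(n: Int): Int = {
  var acc = 0
  var i = 1
  while (i <= n) { acc += i; i += 1 }
  acc
}

// Lambda-calculus flavored: no mutation; the evolving state is just
// another argument to the next call.
def sumFunctional(n: Int, acc: Int = 0): Int =
  if (n == 0) acc else sumFunctional(n - 1, acc + n)
```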

Programmers need to understand the difference between the abstract algorithm
they're trying to express and the representation they've chosen (and the ones
they didn't) inside the computers they're using, just as astronomers need to
distinguish between the stars they study and the measurements coming out of
the various telescopes they can choose.

~~~
guard-of-terra
State is an implementation detail of today's hardware.

And yesterday's hardware.

And the day before that.

And for a pretty long time, frankly - do you see a pattern here? And it does
have its deep reasons.

Programming is not about algorithms. It's about making inanimate matter
behave. CS might be about algorithms, but CS is not programming.

~~~
spacemanaki
Your hard-line stance goes a little too far, IMHO. That inanimate matter that
programming is all about manipulating also has no notion of objects or object
orientation... or (for the most part) methods, fields, functions, procedures,
arrays, modules, encapsulation, interfaces, garbage collection, databases ...
and so on and so on. All that abstraction exists to make programming easier
for humans to grok, and for teams to deliver some "useful work".

To borrow from Robin Milner, who worked on ML: "Types are the leaven of
computer programming; they make it digestible." All that abstraction makes
programming more digestible, or more palatable. Functional programming is just
some more abstraction to make it more digestible. To say that it's a
redefinition of programming is just a distraction, since programming has
already been redefined.

Also, you're right that CS isn't programming, and viewed in a certain light
one could argue CS is basically just a branch of math. But on the other hand,
programming _without_ CS really wouldn't be that useful. It's just not a
helpful distinction to make in this way.

~~~
guard-of-terra
I would argue that objects, methods, fields, modules, encapsulation,
interfaces, garbage collection, databases are not the basics of programming
either.

Good things to have, but are not basics of programming. So, teach/learn them
later in the cycle.

Basics of programming are variables, branches, arrays, loops and simple I/O.
You can go a long way with these. And you can explain those to a six-year-old
child.

~~~
prodigal_erik
Recursion is fundamental. Reassigning to a bunch of variables in a loop is a
poor imitation of recursion that caught on because fifty years ago we could
barely afford call stacks and hadn't invented tail call optimization. The
better our hardware and our theory gets, the more of these shortcuts we can
and should leave behind, so that we no longer need to try to preserve the
illusion that programs run sequentially and state is globally consistent.
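
In Scala, for instance, the compiler will even verify that a tail call compiles down to the same kind of jump a loop would use, so nothing is lost by dropping the reassignment (a small sketch of my own):

```scala
import scala.annotation.tailrec

// The loop version: two variables reassigned on every iteration.
def factLoop(n: Int): BigInt = {
  var acc = BigInt(1)
  var i = n
  while (i > 1) { acc *= i; i -= 1 }
  acc
}

// The recursive version. @tailrec makes compilation fail unless the
// call really is in tail position and can be turned into a jump,
// i.e. no call-stack growth at all.
def fact(n: Int): BigInt = {
  @tailrec
  def go(i: Int, acc: BigInt): BigInt =
    if (i <= 1) acc else go(i - 1, acc * i)
  go(n, BigInt(1))
}
```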

~~~
guard-of-terra
It's not an illusion, it's in the CPU.

It hasn't happened yet; call me again when it does.

~~~
nessus42
Everything we do in any high-level programming language is an illusion. In
fact, even assembly language is an illusion. The only thing that isn't an
illusion is building a computer out of nand gates, and the like.

In my education, we STARTED with functional programming, and eventually we
built real microcoded computers that we designed ourselves out of nand gates
and adder chips. And, yes, I mean REAL chips with lots and lots of wires on a
real physical breadboard.

Is a nand gate, the fundamental building block of all computers, functional?
You bet your ass it is! State only comes from using this functional component
plus feedback to make a flip-flop.

Your premises are all wrong: My education started with FUNCTIONAL programming.
We then learned how to build OO on top of functional programming, using only
closures and a single mutation operation: variable rebinding. We then went on
to
design and build our own physical computers out of chips and wires, starting
with the fundamental functional component: a nand gate.
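
For what it's worth, the nand-first story is easy to replay in software (a toy model of mine that ignores real gate timing): the gate itself is a pure function, every other gate is a composition of it, and state appears only once feedback is modeled explicitly.

```scala
// nand as a pure boolean function: output depends only on the inputs.
def nand(a: Boolean, b: Boolean): Boolean = !(a && b)

// Every other gate is just nand composed with itself.
def not(a: Boolean): Boolean = nand(a, a)
def and(a: Boolean, b: Boolean): Boolean = not(nand(a, b))
def or(a: Boolean, b: Boolean): Boolean = nand(not(a), not(b))

// State enters only with feedback: one step of a set/reset latch,
// where the previous output q is fed back in as an input.
def latchStep(set: Boolean, reset: Boolean, q: Boolean): Boolean =
  or(set, and(q, not(reset)))
```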

~~~
guard-of-terra
You do not program nand gates. You program your CPU. Last time I checked,
it's not a Reduceron II but an AMD64 or ARM CPU, one which is inherently
imperative - it's a von Neumann machine. It has registers, shared writable
memory and stuff.

~~~
nessus42
First of all, as I have mentioned, I _have_ implemented programs using real
nand gates, with my very own hands, using copper wires and microcode.

Furthermore, my CPU is made out of nand gates, so yes I _do_ program nand
gates every day.

If you deny this reductionism, then I deny your reductionism to the CPU. I
don't program the CPU--I program the Scala compiler. My Scala code doesn't run
on an AMD64, or an ARM CPU, it runs on ANYTHING, past, present, and future for
which the Scala compiler has been implemented. In the future, my program runs
on DNA or on neutronium Tinker Toys.

Traveling into the past, rather than the future, I have written Lisp code and
then compiled it to microcode.

P.S. I don't trust the opinion of anyone who hasn't built a computer out of
nand gates, as clearly they are out of touch with the hardware. Have you built
your own CPU from scratch?

~~~
guard-of-terra
"I" is the last letter of the alphabet.

(lol, it isn't, you got me)

What I see in your comments is the constant stream of I's. I, me, I did, I
studied, I think this, I think that. Guess what? Nobody cares that much except
for you.

When talking about basics, we should consider the lowest common denominator of
all programmers' knowledge. You can reason that the basics should be
different, but what if you are wrong? You are a smart guy who loves Scala, but
there are other smart guys who consider Scala a failure, for example. And
there are guys smarter than you and me who just do it all in C because they
can't be bothered with abstractions, and FP is an abstraction in the end.

People are different, and if you think that you're such a unique yet supremely
important snowflake - why?

Nobody questions your ability to write successful FP code. So what? What does
it prove? Nobody even questions that. But there are people who write FORTH.
And there are people who write Prolog. Both groups claim they get much bigger
bang for the buck than anyone else. Why do we listen to you and not them?

~~~
nessus42
You keep expressing your opinion. Please tell me how there is no "I" in your
opinion. I see two of them: o-p-I-n-I-o-n.

Also, I never said not to listen to any of these other folks. It was you who
told me that I didn't learn to program in my first programming class because
it started with functional programming as the most fundamental concept. That
statement was offensive and false, and I am now demonstrating to you why you
are so wrong.

This is not to say that there are not many possible reasonable approaches. I
think the one I learned was great. Are there better approaches? Who knows?
Certainly not you.

~~~
guard-of-terra
I didn't really mean to disparage SICP. I suspect it's a bit overrated, but
since I've never actually tried it, I'm not qualified to say.

What I was arguing for, is that you should not call "immutable data
structures" and "referential transparency" basics of programming.

~~~
nessus42
SICP is overrated to _whom_? For me, it was the most profound and wonderful
educational experience of my life. I found it to be profoundly _inspirational_
and educational. I found it to be beautiful, uplifting, moving, rewarding, and
just incredible in all sorts of ways that I cannot even begin to express. And
it pumped up my joy of learning in a way that survives to this day, decades
later. There are not enough words in the English language for me to express
how much this course meant to me and how much intellectual joy it provided me.

Of course, YMMV.

As to what I should and should not do, who are you to tell me that?

Mathematically, immutability is more fundamental than mutability, as the
mathematical models for computer programs all model mutability in terms of
immutability.

Pedagogically, either approach seems to work. Which one is better? For me, the
functional approach was FAR superior. The best pedagogical approach is almost
always the one that is the most inspirational, and SICP truly inspired me. I
found it to be _beautiful_ , whereas I found the more traditional approach to
be merely workmanlike.

Having received both kinds of education, I'm qualified to say which worked
better for _me_. Is my experience representative? Well, lots of people who
have learned via this approach seem to agree with me. Zillions of people
_LOVE_ SICP. On the other hand, zillions of people swear by imperative
approach. The best approach then might vary from individual to individual.
There may be no one-size-fits-all education. But I suppose that we cannot rule
out that future education researchers might show that on average one approach
is better than the other. The Logo people claim to have done such research for
children, and they settled on the functional approach. But children are not
adults, and I'm sure their research did not reach the level of proven fact.

As to which approach results in engineers who produce the highest quality
software, I believe that having a deep understanding of the functional
approach results in higher quality software, and the best way to acquire such
a deep understanding is to start with it from the very beginning. I am not
prepared to prove this assertion, but neither are you prepared to prove its
contradiction.

As to which approach is closest to the hardware du jour, who cares? That has
nothing to do with anything, unless you are writing in assembly language. I
will point out, however, that today's hardware comes with multiple cores, and
this trend is only likely to increase. Functional approaches to programming
extend very naturally to multiple cores; with imperative styles, not so much.
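
The multicore point fits in a few lines of Scala (my own illustration): because a pure fold over immutable data shares no mutable state, the halves can be evaluated on separate cores with no locks, and only the final combine imposes any ordering.

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// Sum the two halves of the data independently -- potentially on two
// different cores -- then combine. No locks, because nothing is mutated.
def parSum(xs: Vector[Int]): Int = {
  val (left, right) = xs.splitAt(xs.length / 2)
  val fa = Future(left.sum)
  val fb = Future(right.sum)
  Await.result(fa, 5.seconds) + Await.result(fb, 5.seconds)
}
```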

------
sodelate
OO is not always the better choice

------
ioquatix
Languages that are "OO" and languages that are not (in this case functions and
data structures) are semantically equivalent.

Q.E.D.

~~~
it
It is possible in principle to implement one in terms of the other, but in
practice this is not what typically happens.

~~~
ioquatix
Sure, it might not be typical, but it does happen in quite a few languages.
CLOS is one system that comes to mind.

FP vs OO is an interesting discussion but this article hardly does it justice.

