
OOP Isn't a Fundamental Particle of Computing - joeyespo
http://prog21.dadgum.com/156.html
======
programminggeek
Well, most OOP isn't object oriented at all; it's class oriented. Or, another
way to look at it: most of the time OOP classes are used as containers for
data and functions, not as things.

For example, say you have an object that holds some strings and numbers and it
has getters and setters on it. How is that different from a Hash? Is that
object oriented? If you add a couple of helper methods to that object, is it
any more OOP while you're still just using getters and setters?
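The contrast above can be sketched in Python (all names here are invented for illustration, not taken from the comment): a class that only wraps fields behind getters and setters carries the same information as a plain dict, while an object that owns behavior is something else entirely.

```python
# A class used purely as a container: structurally just a dict with extra steps.
class PersonRecord:
    def __init__(self, name, age):
        self._name = name
        self._age = age

    def get_name(self):
        return self._name

    def set_name(self, name):
        self._name = name

    def get_age(self):
        return self._age

# The equivalent "Hash" carries the same information with no ceremony:
person = {"name": "Ada", "age": 36}

# An actual *object*, in the "things that do things" sense, owns behavior:
class Person:
    def __init__(self, name, age):
        self.name = name
        self.age = age

    def greeting(self):
        return f"Hello, {self.name}!"
```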

There is a fundamental difference between treating OOP as a bunch of classes
that contain basically procedural code vs treating them as objects that are
things that do things.

Treated the way it often is, OOP is not that useful for anything other than
namespacing and creating ridiculous hierarchies just because "it's OOP" and
inheritance lets humans do what they love to do: name and categorize
things.

If you treat objects in OOP as "things", I think it is more useful and
interesting than treating objects as containers. Either way, not all code
needs to be OOP at all.

~~~
guelo
> OOP is not that useful for anything other than namespacing

Namespacing, and modularization in general, are cornerstones of well
structured code. OOP gives you nice tools to help with this.

~~~
ynniv
OOP gives you tools. Having spent the last 15 years using them, I'm not
convinced they are nice.

~~~
lerouxb
Maybe you use C++, Java or one of the languages in that family?

------
DanielBMarkham
Yeah not so much.

I've become a huge fan of FP over OOP over the last couple of years, but I
think he missed it.

At some point early in any program, you have to concatenate existing types
into something else purposed for the solution you are writing. In the OOAD
world, we call this moving from the problem domain to the solution domain.

Is an RGB value just a tuple? Depends on what the problem is. In the context
of just one pixel, yes. In the context of an image processing application, you
may need an alpha channel, a grid of pixels, an undo, and so forth. By
identifying just one type, RGB, instead of talking about a problem, the author
has created a circular reference. Why do you need more than a tuple for RGB
values if you only have RGB values? Well you don't. But programs are not about
tuples, they are about solving problems. Yes, a problem can only require one
simple datatype to solve, but in my experience that's only for a very small
subset of problems.

OOP concentrates just on assembling various pre-existing datatypes into
something new in order to solve a problem. It concentrates on the type. FP
concentrates on what you are doing with the type. Either way you still need to
assemble various datatypes. The construction of a new type doesn't go away.
The emphasis is just different. You don't need to worry about a copy
constructor or any of that anymore because the goal isn't to create reusable
types that can last forever. Multi-core processing and our experience with
large systems is killing that concept -- most types don't scale as advertised.

So OOP skills aren't going anywhere. It's just instead of creating a complex
type, we're using those skills to create the minimally-complex type sufficient
for our immediate needs. Big change in focus.

~~~
bad_user
Most people who have a problem with OOP are conflating it with how OOP gets
used in Java.

Like if you wanted to represent an RGB pixel on screen, in Java that ends up
being a class having methods like draw() and such. And this is the problem
with OOP in Java, because of the limited toolset, you always end up having
heavy Objects that have their own behavior instead of functions that operate
on a whole composite of such values. Having draw() defined for a pixel is
pretty bad design and yet people do it anyway.

Also, some of the common complaints about OOP classes (e.g. you end up having
to write boilerplate, like when defining getters and setters and equals and
hashCode) are again related to Java and Java-like bureaucratic OOP languages.
In Scala all you have to do is this:

    
    
        case class RGB(r: Int, g: Int, b: Int)
    

Is it statically type-safe with a good self-documenting structure? Yes. Is it
just a simple tuple? Yes. And under the hood Scala also generates a proper
equals() with structural equality and the right getters/setters for the
values, because in Scala (like in Smalltalk) member access is always done
through a method call. And because it is also a class, you can also attach
things to the constructor, like runtime checks to make it even more type-safe:

    
    
        case class RGB(r: Int, g: Int, b: Int) {
          require(r >= 0 && r < 256, "r must be a value between 0 and 255")
          require(g >= 0 && g < 256, "g must be a value between 0 and 255")
          require(b >= 0 && b < 256, "b must be a value between 0 and 255")
        }
    

Is this OOP? That's indeed a class, but saying that programming with classes
as types is OOP is like saying that programming with functions is functional
programming.

In my opinion Java should never be taught as an introductory language and in
universities people should really get exposure to pure forms of OOP, like
Smalltalk and CLOS.

~~~
DanielBMarkham
"Is this OOP?"

Okay, that's beyond my pay grade. I could argue that either way. OOP is the
grouping of data and functionality into units called types or classes. So yes,
technically, but I know what you mean.

I agree that Java is a bad way to start, but I wonder if I wouldn't come at it
from just the opposite angle. Start with C, move to structs, then move to
something like the class you show, then to more complex type creation. After
all, C++, probably the most widely-used OOPL on the planet, started with the
problems of scaling C.

The problem here is the confusion between "the way things work in language X"
and "the way things generally are". I think you need a few OOP languages, and
perhaps some functional ones too, under your belt to be able to understand
that.

What I think you're getting at is _problem-solving strategies_. There is a
certain problem-solving strategy available in OOP that doesn't make much sense
in FP. Many functional guys look at OOP and say something like "It all looks
like just a bunch of wiring" -- which is true but misses the point. It works
the same in FP, they have ways of solving problems differently than the OOP
guys, but that's for another day.

In either case, starting with Java is not so good, if for no other reason than
you end up in namespace-land with even the simplest code. Start simple, either
with Smalltalk or with C, and work in the more complex pieces.

Note: I _do_ think that any OOP learning experience should end with creating
your own robust type system, just like I think any FP learning experience
should end with creating your own parser and compiler. But this also is a
topic for another day.

~~~
SiVal
Instead of C, I would suggest something like Python, where you don't have to
learn to make your own shoes before you can start experimenting with fashion.
With Python, you can forget about the computer and just think about
programming. (I'm talking about pedagogy here, not production, where C is
often a better choice.)

The fact that OOP was really bolted onto Python as an afterthought means you
can learn simple straightline code, then start clustering it into functions as
it starts to get more complex; then, eventually, start clustering functions and
data into simple classes where it (occasionally) helps to manage the growing
complexity, followed by your own modules. You don't have to start with
packages and class hierarchies just to print "hello, world" as in Java, but
you also don't have to reinvent the string, as in C.

~~~
Zak
_you don't have to learn to make your own shoes before you can start
experimenting with fashion_

I think what you and DanielBMarkham are approaching from opposite sides is
that it's useful for a student to learn both how computers work and how
computation works.

C is pretty close to the computer. Learning C will teach a student how
computers work. That's important for the same reason that a fashion designer
should have some understanding of what shoes are made out of and how they're
assembled. Making shoes that are nice to look at, but that fall apart when
worn is not especially useful.

Python (or Smalltalk, or Scheme, etc...) is closer to how computation works,
and closer to how many of the problems people want to solve using computers
work. It lets students explore what they can do with computers without getting
mired in the details of how the computer will accomplish the task.

Java is somewhere between, and from an educational perspective, has the
disadvantages of both sides without gaining many of the advantages.

------
wvenable
I agree with the author's conclusion "When blindly applied to problems below
an arbitrary complexity threshold, OOP can be verbose and contrived" but I
think he fails to recognize that the opposite is also true.

Yes, it's much simpler to store RGB color in a three-element tuple when you're
working on simple code that you've written all yourself. But what happens when
you get a tuple from somewhere containing {289, 345, -1}? Or god forbid {200,
"Monkeyspit", {}}!
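The point can be sketched in Python (a rough illustration, not from the thread): a bare tuple accepts any garbage, while even a minimal class can reject bad values at construction time.

```python
# A bare tuple happily holds invalid "colors" -- nothing stops this:
bad = (289, 345, -1)

# A minimal class rejects bad values up front (field names are illustrative):
class RGB:
    def __init__(self, r, g, b):
        for name, v in (("r", r), ("g", g), ("b", b)):
            if not (0 <= v <= 255):
                raise ValueError(f"{name} must be between 0 and 255, got {v}")
        self.r, self.g, self.b = r, g, b
```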

The measure of simplicity and complexity is a double-edged sword. Making
everything a class for a small project, as the author concludes, is a recipe
for confusion and pain. However, not using strictly defined, self-contained
data types for a large project is also a recipe for confusion and pain.

"there's often an aesthetic insistence on objects for everything all the way
down"

I don't think there's anything wrong with that -- in many languages arrays are
objects, dictionaries are objects, and tuples are objects and they let you be
as unstructured as you want to be -- but it's still objects all the way down.

~~~
jerf
Your argument, unfortunately, doesn't really work, because for instance, in
Haskell,

    
    
        data RGB = RGB Word8 Word8 Word8
    

The point made here has nothing to do with weak typing. It just so happens his
example sort of implies that he might be using it, since the author rather
likes Erlang and that appears to be the syntax he reaches for, but the entire
article applies without loss to strongly-typed languages as well. (Of which
Haskell is merely a convenient example. Even more conventional manifest
typing, as in C, works well enough here.) If you want to declare that you have
a map of Booglies to Wangdoodles, it's trivial, but you're still using off-
the-shelf maps and not bashing together trees yourself.

~~~
danieldk
In this case, Word8 applies nicely, but I'd like to point out that the
argument doesn't fly either when something isn't nicely mapped to a type. E.g.
(not checked):

    
    
        module Temperature (
          TempCelsius,
          tempCelsius
        ) where
    
        newtype TempCelsius = TempCelsius Double
    
        absoluteMinimum :: Double
        absoluteMinimum = -273.15
    
        tempCelsius :: Double -> Maybe TempCelsius
        tempCelsius t
          | t < absoluteMinimum = Nothing
          | otherwise           = Just $ TempCelsius t
    

In other words, by hiding the constructor, you can control all access to a
datatype. You could do the same with opaque pointers in C, etc.

------
mercurial
Given the number of comments on this article, I feel like I'm the only one who
missed the point the author is trying to make. TFA goes from praising the
existence and ease of use of data types, particularly collections, in high-
level languages such as Python (an object-oriented language) and contrasts it
to C and Pascal (neither of which are object-oriented), and then seems to
extol the virtues of relying on basic data types (eg, tuples) as opposed to
custom data types, and concludes that OOP is more complicated.

I could try and summarize the article differently:

    
    
      The exports of Libya are numerous in amount.
      One thing they export is corn, or as the Indians call it, "maize".
      Another famous Indian was "Crazy Horse".
      In conclusion, Libya is a land of contrast. Thank you.
    

Point 1 is pretty uncontroversial: well-written data structures in high-level
languages are easy to use and save you a lot of typing. Point 2, not so much.
When you start to argue that tuples are a good way to represent a data
structure you will presumably use in several places in your program, this is a
lot more controversial (but has nothing to do with OOP, you can write Haskell
using tuples instead of records too...). Regarding the conclusion, OOP is not
a "fundamental particle" of programming: you had non-OO languages before, and
you have non-OO languages now. For me, you have two major things which
distinguish functional languages from object-oriented ones: immutable state,
and a less strict coupling between data structures and behaviours operating on
these structures. Neither point is addressed in TFA.

~~~
Jare
OOP philosophy is to turn concepts from your problem space into custom data
types. The article suggests that this is not always the simplest and most
practical way to architect every piece of your solution. That's all.

The underlying reason is that data types must address (in some way) a number
of requirements: copyability, management of owned resources, conversions,
limits, etc. You need to take care of all that for every single custom data
type you create, or your code will be a minefield of half-baked data types.
This large base cost for data types in turn makes the classic 'divide and
conquer' strategy for software development more costly, which triggers further
practical problems with overgrown and over-engineered types.

~~~
mercurial
> OOP philosophy is to turn concepts from your problem space into custom data
> types.

I don't agree with this statement. Surely the use of structs in C, or records
in Haskell, are not enough to turn them into object-oriented languages. And
whatever language you end up using, you are going to end up with custom
datatypes when attempting to solve non-trivial problems. Sure, simple data
types are useful enough on their own. In languages supporting even basic
pattern matching, I use tuples whenever I need to return more than one value
from a function and I'm not interested in reusing them elsewhere.

But this does not scale to complex programs and complex data types. And your
custom data structures will need to support a number of operations (eg,
comparison, etc...). I don't really see what this has to do with OOP per se.

~~~
ajuc
Do we really need to reinvent these complex datatypes each time, though? How
many classes exist in Java or C++ just to hold the x, y and z coordinates of
a point in 3d space? With custom types we gain the ability to distinguish
color (r, g, b) from position (x, y, z), but is it worth the cost? How often
do you catch errors like assigning point coordinates to a color?

If you want to use library A that has a Vector3d type defined, with library B
that uses library C that has a Point3f class, and with a graphics library that
uses its own Vertex class - isn't it stupid that we need to convert the data
all the time?

I've seen an application in C++ that used 4 string types: QString,
std::string, char* and a custom type to pass data to the database. 3 of them
existed just because of the libraries used.

I think the OOP mindset makes people whip up their own datatypes too easily,
and it hurts when you need to integrate libraries developed independently.
Most of the time tuples, lists and hashmaps suffice, and in better languages
you get many operations on your composite datatypes for free - for example
serialization, deep copying and deep equality checking.
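The "for free" point is easy to see in Python, where built-in composites get structural equality, deep copying and serialization without any per-type code:

```python
import copy
import json

# A position represented with a plain dict, no custom class needed:
pos = {"x": 1.0, "y": 2.0, "z": 3.0}

# Deep equality, deep copying and serialization all come for free:
assert pos == {"x": 1.0, "y": 2.0, "z": 3.0}   # structural equality
clone = copy.deepcopy(pos)                      # deep copy
assert clone == pos and clone is not pos
assert json.loads(json.dumps(pos)) == pos       # serialization round-trip
```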

~~~
mercurial
Haskell has String, ByteString and Text :) Data type proliferation is not
confined to object-oriented languages (actually, since datatypes are much
easier and cheaper to build in functional languages, they're even more likely
to occur within FP).

~~~
chousuke
The nice thing about Haskell is that you can have a million data types with
different names, and as long as they're capable of implementing the type
classes you are using, you can pretty much freely switch between the different
underlying types. And if a library author doesn't provide an instance of a
type class for the data type, _you_ are still able to write one.

In most OOP languages, you'd be forced to either monkey-patch the classes or
write cumbersome wrappers.

------
jeremyjh
I happened to read this interview (<http://www.codequarterly.com/2011/rich-
hickey>) with Rich Hickey today which I'm sure has been posted here before. I
mention this because of this statement:

> When we drop down to the algorithm level, I think OO can seriously thwart
> reuse. In particular, the use of objects to represent simple informational
> data is almost criminal in its generation of per-piece-of-information micro-
> languages, i.e. the class methods, versus far more powerful, declarative,
> and generic methods like relational algebra. Inventing a class with its own
> interface to hold a piece of information is like inventing a new language to
> write every short story. This is anti-reuse, and, I think, results in an
> explosion of code in typical OO applications. Clojure eschews this and
> instead advocates a simple associative model for information. With it, one
> can write algorithms that can be reused across information types.

~~~
Avshalom
Which is interesting considering how often "write a DSL for every problem" is
used as a selling point for lisps.

~~~
nickik
DSLs still use normal data down below. That's part of why it is so easy to
write DSLs.

You can write a Datalog DSL and it can work with almost everything. You can
write a Prolog DSL and it can work with everything.

You can still expose and send around all the data the DSL uses. The trick is
not to mix up what is functionality and what is data, which is what standard
OOP does. Many good OO languages like Dylan or Common Lisp do not do that.

------
colanderman
I thought he was going to talk about how OOP can be broken down into several
simpler orthogonal concepts, namely:

1) Code reuse (e.g. inheritance)

2) Implementation hiding (e.g. methods)

3) Subtyping (e.g. interfaces)

4) Code composition / programming in the large (e.g. classes)

5) Run-time dynamic dispatch (e.g. instances)

Functional languages such as Haskell, Mercury, OCaml, and Scheme do a good job
of teasing these apart:

1) Code reuse doesn't need anything fancy. You can do this with function calls
even in C.

2) You can hide implementations using opaque types + accessor functions. OCaml
has some pretty neat typing constructs that allow partial type hiding as well.
You can even do this in C with incomplete types.

3) Subtyping is provided at the module level by OCaml's module interfaces, at
the opaque type level by Haskell and Mercury's type classes, and at the
transparent type level by OCaml's polymorphic variants and functional objects.

4) Code composition / programming in the large is provided by OCaml's module
functor system or Scheme's unit system. You can even do this in C at the
linker level.

5) Run-time dynamic dispatch is a feature that's rarely actually needed in
practice. (Compile-time dynamic dispatch is usually sufficient; that's
provided by Haskell's type classes or OCaml's module functors.) Nonetheless
OCaml provides RTTD -- _with_ multiple dispatch -- through functional objects,
Mercury provides the same through a combination of type classes + existential
types. (I think you can even use existential module types in the latest
OCaml.)
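As a rough illustration of point 2 (implementation hiding without classes), here is a Python sketch in the spirit of C's opaque/incomplete types: callers go through accessor functions only, so the representation stays an internal detail that can change freely. All names here are invented.

```python
# Opaque handle + accessor functions: the representation (a list) is an
# internal detail that callers never touch directly.

def make_counter():
    # Internal representation; could be swapped for a dict or class later
    # without changing any caller.
    return ["counter", 0]

def increment(counter):
    counter[1] += 1

def value(counter):
    return counter[1]

c = make_counter()
increment(c)
increment(c)
print(value(c))  # 2
```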

~~~
tikhonj
Just to be pedantic, what Haskell type-classes provide is not usually known as
sub-typing. Sub-typing usually implies some relation between types such that
every element of a sub-type is _also_ an element of the super-type.

Haskell does not have any sort of sub-typing in this sense. Every single value
only belongs to a single type. What you get with type-classes is polymorphism:
your function works for all types in the type-class, but each type is
disjoint.

Now, I should add that your observations about how type-classes are used are
correct--they really do serve many of the same roles as sub-typing does in
other languages. However, the distinction between polymorphism and type-
classes is still important both in practical and in theoretical terms. Not
having sub-typing significantly simplifies the Haskell type system--you never
have to worry about covariance or contravariance, for example--but also makes
some things, like heterogenous lists, slightly more tricky.

I think you could safely replace your "sub-typing" header with "polymorphism";
sub-typing with interfaces is just one kind of polymorphism, and other kinds
also serve a similar rule of making your code more generic by letting it work
on multiple types.

Anyhow, I think that covers my daily dose of pedantry :).

Edit: also, to clarify: I like and agree with your post, I just think the
distinction between sub-typing and polymorphism more generally is important.

~~~
colanderman
The pedant in me agrees (class-constrained types are not types and therefore
not subtypes), but the pragmatist in me still wants to think of class-
constrained types as forming a subtyping hierarchy :) even if doing so
requires explicit typecasting. Some food for thought:

In Mercury, you can combine existential types with typeclasses like so:

:- type foo ---> some [T] (foo(T) => printable(T)).

Pedantically, foo is a distinct type from any other. However, any T which is
printable is _effectively_ a subtype of foo, since you can use it wherever a
foo is expected, modulo some ugly syntax:

['new foo'(5), 'new foo'("apples"), 'new foo'([1, 2, 3])]: list(foo)

I believe you can do something similar in OCaml using module types. (Of course
OCaml has true subtyping via polymorphic variants and functional objects.) I
am not as familiar with Haskell, so I'm not sure if it supports existential
types or not.

Dependent languages like Coq go even further -- you can augment types with
arbitrary predicates to form "sigma types" which can act as subtypes of one
another. You can even write entire programs using sigma types _without_
explicit typecasting. However they remain distinct from true types, and
therefore do not provide true subtyping (regardless how well-executed the
illusion is).

~~~
tikhonj
You can do similar existential types in Haskell:

    
    
        {-# LANGUAGE ExistentialQuantification #-}
        data Foo = forall a. Show a => Foo a
    
        [Foo 10, Foo "blarg", Foo (Foo (Foo "str"))]
    

However, I think having the extra constructor there is very important. If you
were willing to overlook extra syntax and the behavior of the type, then even
a normal tagged union starts to look like sub-typing!

In fact, in practice, that's exactly what you use where you would use sub-
typing in a different language. There are some significant differences, but
I'm sure you can see a parallel.

There is also an intuitive parallel between existential types and normal
tagged unions. The existential type is somewhat like a way to create a tagged
union for an _unbounded_ number of types. This naturally means it can't
actually be _tagged_ -- the tag loses its meaning and you now can't recover
the type of the contents -- but in practical terms they're similar.

As I said, thinking of it like sub-typing is a pretty good intuitive guide
(although it might lead you astray sometimes). I was just being pedantic;
there's something about formal semantics and type theory that just raises the
pedant in me :P.

~~~
colanderman
Thanks for the insights!

------
SethMurphy
OOP has never been just about computing. Its strength is that it allows
business rules to be mapped most directly to the code that needs to be
written. When both parts of this process are performed by the same person it
loses much of its value (until the problem gets larger and you need more than
one person). As we have better computing tools in modern languages it is more
often the same person. It has never been considered more efficient or easier
for the programmer alone, just for the whole problem-solving process in
general, especially in teams.

~~~
hackinthebochs
> Its strength is that it allows business rules to be mapped most directly to
> the code that needs to be written.

Absolutely this. Discussions around OOP vs functional always seem to ignore
this massive boon to productivity that OOP brings. Being able to create
something of a DSL and map your business rules onto basic operations on this
DSL is a huge win. The trick is to design your objects and operations such
that the salient rules "rise to the top" of the stack, and thus can be easily
programmed and verified at the highest level of abstraction.
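As a toy sketch of that idea (all names here are invented), small named operations let the salient rule "rise to the top" so the highest level reads like the domain:

```python
# Small named operations over plain records; the top-level rule reads
# like the business language it came from.

def is_adult(customer):
    return customer["age"] >= 18

def has_valid_payment(customer):
    return customer.get("card_on_file", False)

def may_place_order(customer):
    # The salient rule, readable and verifiable at the highest level:
    return is_adult(customer) and has_valid_payment(customer)

alice = {"age": 30, "card_on_file": True}
bob = {"age": 15, "card_on_file": True}
print(may_place_order(alice))  # True
print(may_place_order(bob))    # False
```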

Of course a DSL comes with its own drawbacks. Having to learn a new
"language", with operations that are sometimes just renaming more basic
operations has an extra cognitive load that can't be ignored. This is why
there is a threshold of complexity below which you're better off just writing
it straight imperative/functional style.

~~~
crntaylor
> _Being able to create something of a DSL and map your business rules onto
> basic operations on this DSL is a huge win._

Hold on - how is this a concept specific to object-oriented programming?
Creating a DSL for your problem domain and solving the problem in the new
language is _exactly_ what functional programmers have been doing for decades!
E.g. here's a mini-DSL in Haskell for parsing CSVs, built on top of the Parsec
parsing DSL:

    
    
        cellContents = many (noneOf ",\n")    -- match up to first comma/newline
        
        remainingCells = (char ',' >> cells)  -- comma => parse more cells
                     <|> (return [])          -- else done
    
        cells = do first <- cellContents
                   rest <- remainingCells
                   return (first : rest)
    
        eol = char '\n'                       -- match newline character
    
        line = do result <- cells
                  eol
                  return result
    

Building DSLs is most emphatically _not_ something specific to OO programming.

~~~
hackinthebochs
Of course, with anything that lets you create abstractions you can create a
DSL of sorts. But I think most would agree that OOP is the more natural
paradigm for this.

~~~
papsosouid
Most would agree because most have never used anything other than OOP. That
isn't saying anything interesting. I find OOP to be most often awkward and
difficult to express the rules of my application in. Functional programming
does it quite naturally, as it is all about creating small simple components
and combining them to produce larger components. My rules are simply the
combination of smaller rules applied in order.
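That "combine small components into larger ones" style can be sketched in Python (illustrative only): each rule is a small function, and bigger rules are just their composition applied in order.

```python
from functools import reduce

def compose(*funcs):
    # Apply funcs left to right: compose(f, g)(x) == g(f(x))
    return lambda x: reduce(lambda acc, f: f(acc), funcs, x)

# Two small components...
strip = str.strip
lower = str.lower

# ...combined into a larger one:
normalize = compose(strip, lower)

print(normalize("  Hello World  "))  # "hello world"
```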

~~~
se85
This is exactly how I feel also.

This simple logic of fitting smaller things together to make bigger things has
always delivered superior results for me regardless of the language in use.

OOP seems ass backwards to me but everyone I talk to generally doesn't have a
clue how FP is different from OOP!

------
hurshp
There is nothing theoretical behind OOP; it's just how we currently abstract
and modularize code for human consumption.

I know it's flame war stuff, but I really think marketing from certain
languages got everyone into the OOP paradigm as the best way. It's funny:
sometimes I talk to people who programmed in the '80s and they say OOP didn't
really solve anything for them. It's always an interesting conversation.

~~~
acuozzo
OOP was quite popular in the 80s.

------
carstimon
I program almost completely for myself. I recently moved up a scale from only
being able to write small programs (solving simple mathematical problems) to
larger ones.

I think my block from being able to write larger programs was that I would get
too caught up in creating classes. I would start a large project by
abstracting and abstracting and thinking about the most general class. I found
that I just thought about what data fields I needed, not what I actually
wanted to do.

More recently I've tried to limit how much time I think about how to store
data. Do the quickest thing first, and only think about your data storage when
you find yourself writing the same thing over and over.

Maybe you call this "premature abstraction", in comparison to premature
optimization?

~~~
jonathanwallace
It sounds to me like you're finally learning what design means and how to
design well. :)

------
delinka
Well, if you want "fundamental particles of computing," have some machine
code. Assembly is a bit more tolerable, but directly correlates to machine
code, so sure, use that. The thing is, the CPU is "imperative" - it's a
machine whose state changes according to instructions provided to it.
_Everything_ else is abstraction. Abstraction attempts to reduce the amount of
stuff the programmer needs to think about all at once (i.e. 'complexity.') A
nice procedural language lets you more easily reuse code by referring to
procedures which, in turn, are executed sequentially. OOP lets you group
functionality with conceptual objects. Every bit of code you write is executed
sequentially by a CPU core. So what abstraction works best for you?

In some cases, it's not about what works well for the programmer. I'm
currently an OS X and iOS developer and I see problems in the Cocoa APIs with
an overabundance of OOP. The simplest of iOS apps gets a view, a view
controller, an app delegate (which often gets used as an app controller), etc
ad nauseam. Now, I come along and look at the code and just want to know how
the thing does what it does, but the functionality is spread across dozens of
classes and to follow along I actually have to run the app to see that the
entry point to the functionality is really -touchesBegan on some deep class...
it's obnoxious. What's worse is the design of Cocoa and CoreFoundation lead to
many app designs that have magical entry points unless you have more deep
knowledge than you ever thought you might. I get that this is an attempt by
Apple at keeping the developer more productive by writing less code, but I
don't think that idea is being implemented as well as it could be.

------
btipling
> I use lists and strings and arrays with no concern about how many elements
> they contain or where the memory comes from.

You should worry. If you don't want your server or your app to run slow, you
should worry about these things. Can you imagine going into a programming
interview and saying something like this?

It's pretty difficult to implement reliable, readable and proven design
patterns if you're just passing around dictionaries of dictionaries and lists.
It's more difficult to test. I'm not saying everything needs to be a class,
they're a nice tool to have in your toolbox. Use when needed.

~~~
btilly
_You should worry. If you don't want your server or your app to run slow, you
should worry about these things. Can you imagine going into a programming
interview and saying something like this?_

You sound like someone who doesn't really understand performance very well.
Lists and strings and arrays as implemented in any scripting language can
implement virtually any algorithm with only a constant factor performance and
memory penalty. (Try to come up with a counter-example to that statement, I
dare you.) Switching from language to language comes with similar factors. If
you discover that you've got a problem that requires you to worry about data
structures at that level, worry about it then. Possibly you should use a more
efficient language.

But for most apps and websites, those constant factors are really not a big
deal. Basic algorithm mistakes are a different story.

For the exceptional cases, you should build working code first, then use a
profiler to figure out where your problems are. You should only worry about
performance after identifying actual bottlenecks.

If I interview with a company that uses a scripting language and does not
understand all of that, that's a pretty good sign that I don't want to work
there. Because their opinions on performance are clearly misguided, and I
don't want to have to work with what they thought would be "optimized code".

~~~
chubot
Yeah I always hear people saying how Python is slow.

But in 10 years of Python programming, I see 2 repeated performance anti-
patterns:

1) Writing quadratic loops and not realizing it, then it blows up on a biggish
data set. This is quite easy to do in Python because people don't realize that
"in" on a list and .remove() and so forth do a linear search.

2) Writing Python like Java, where you have tons of indirection, which is
slow. People somehow feel naked when they only have to type 10 characters
rather than banging out a bunch of boilerplate for a mundane task.

When you're just using basic data structures in an idiomatic way, Python is
fast. I think there were some dictionary-heavy benchmarks I saw awhile ago
where Python is faster than Go, because its dictionaries are so highly tuned.
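A minimal sketch of anti-pattern 1, using a hypothetical deduplication task:

```python
# "in" on a list and list.remove() do a linear scan, so this loop is O(n^2):
def dedup_quadratic(items):
    seen = []
    for x in items:
        if x not in seen:        # linear search through `seen` every time
            seen.append(x)
    return seen

# The same loop with a set does hash lookups and stays O(n):
def dedup_linear(items):
    seen, out = set(), []
    for x in items:
        if x not in seen:        # average O(1) membership test
            seen.add(x)
            out.append(x)
    return out
```

Both return the same result; only the second survives a biggish data set.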

~~~
paganel
> Writing Python like Java, where you have tons of indirection, which is slow

I've actually just introduced one of my younger colleagues (and pretty new to
Python, but knowledgeable in C# and Java) to this 8-year old article:
<http://dirtsimple.org/2004/12/python-is-not-java.html>

I remember reading that article back in the day, when I had only been doing
stuff in Python for a year or so, and everything seemed so well explained; yet
here I am now, still spreading the word :) At least part of it, I hope, has
stuck for good, such as "XML is not the answer", which actually brings back
"horrific" memories of Zope3.

------
thomasfl
The late professors Kristen Nygaard and Ole Johan Dahl, at the University of
Oslo in Norway, only intended their invention, the class-based OOP language
SIMULA, to be used to simulate real-world objects. It has since proved
valuable as a tool for making abstractions in software. The concept was
invented by LISP programmers.

------
ryandvm
So true. I'm glad that non-OOP is finally getting mainstream traction (again).

~~~
pjmlp
The problem is that most languages touted as non-OOP actually have OO in their
kernel data types, as is the case with Python, Ruby and JavaScript.

~~~
lmm
Why's that a problem? I like python because it removes a lot of the syntactic
ceremony of a "pure-OOP" language; how that's actually implemented is
irrelevant.

~~~
pjmlp
Because the people who rant against OO make the faux pas of using OO
languages as their argument.

~~~
klibertp
Python: "Paradigm(s): multi-paradigm: object-oriented, imperative, functional,
procedural, reflective"

JavaScript: "Paradigm(s): Multi-paradigm: scripting, object-oriented
(prototype-based), imperative, functional"

Ruby: "Paradigm(s): multi-paradigm: object-oriented, imperative, reflective,
functional"

From Wikipedia. So, where the faux pas is?

~~~
pjmlp
All data types in those languages are objects.

~~~
klibertp
First you said that these languages are 'OO'. Confronted with evidence that,
in fact, they are multi-paradigm languages and that includes functional style
you changed your argument.

Now you're saying that those languages are not functional, because all the
main datastructures in them are implemented as objects. Well, the object
models of JavaScript, Python and Ruby are vastly different, so I have to infer
a very wide definition of an object - that of data+methods - because there is
nothing more that all three languages agree on when it comes to objects.

I have to tell you this - this kind of object is not unique to OOP. The only
difference between C, for example, and Python in this respect is that you call
"methods" like this:

        list_append(my_list, my_element);

instead of

        my_list.append(my_element) # but note that you can write like this too:
        list.append(my_list, my_element)

My point is that every datastructure is an object. Every datastructure has
some data (internal representation) and methods operating on them. LinkedList
or HashTable is going to be an object no matter which notation you use to
access them.

There is more to OOP than just objects, of course, but that does not matter.
You can program in a functional style using your language's built-in objects
without any problem at all. Take a look at PowerShell for an example of
something even more exotic: it turns out you can program in a purely
procedural language while using objects from the .NET framework. On the other
hand, there are bindings to wxWidgets for Erlang; it turns out you can program
in a purely (as in 'not supporting other paradigms'!) functional language
using objects too.

I don't really know what you're trying to say, but JavaScript, Python and Ruby
are perfectly capable of functional programming and can be used as languages
supporting a functional style. There is really no counterargument to this - or
at least the fact that operations on the main datastructures of these
languages are implemented with syntactic sugar is not one.

~~~
pjmlp
My argument is still the same.

By having the data types exposed as objects, it doesn't matter what you do,
because from the CS point of view you are still manipulating objects.

Even lambdas and functions are objects with some kind of invoke method.

So it is not possible to use those languages without OO, because OO is part of
the language's type system.

Using CS language speak, you cannot do language semantic analysis without
making use of the object semantics.
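A quick Python illustration of the point that even lambdas and functions are objects with an invoke method:

```python
# Even a lambda is an object; calling it goes through a method.
f = lambda x: x + 1

print(f(3))                    # ordinary call
print(f.__call__(3))           # the same call, spelled as a method invocation
print(isinstance(f, object))   # True: functions are objects like everything else
```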

~~~
klibertp
I think I don't understand what you mean. Why does it matter how a lambda is
implemented at the language level if it retains lambda semantics?

Also, it seems that the only languages you think worthy of being called
"functional" are various (typed or not) lambda calculus implementations?

What is Scala, then? Not functional at all, too?

Could you maybe provide a few examples of languages you'd call "functional" or
"supporting functional programming paradigm"?

~~~
pjmlp
Standard ML, Haskell, Miranda, Lisp, Scheme, OCaml, F#

~~~
klibertp
Ok, thanks, now I understand what you mean.

Just note that F# is not functional by your definition, because all its core
datastructures (tuples, arrays, lists) are actually objects, with methods on
them, etc.

~~~
pjmlp
Regarding F#, I was not sure if I should have mentioned it.

I still don't know much about it or how its implementation generates code, as
MSIL supports much more than just OO constructs.

~~~
klibertp
Yeah, but it's not possible (I think, but am not 100% sure) to reason about F#
code without object semantics because of how it's implemented: it probably
could be implemented on top of MSIL without objects, but (again, not 100%, but
95% sure :)) it isn't.

I thought about another language I think you'd call functional: Lua. It
supports first- and higher-order functions and has no objects at all in the
core language. What do you think?

Also, now that I understand what you mean I can agree with you, but you do
realize that this is not a very common definition of "functional", right? :)

------
Uchikoma
This thread symbolizes everything that is wrong with our industry.

70% pop culture, 30% strong opinions, 0% facts. There is no science in
computer science outside of algorithms, there is especially no science or
engineering or anything substantial in the love child of this industry:
programming languages.

I have still some hope this changes someday, with

<https://leanpub.com/leprechauns> and the older

[http://www.codinghorror.com/blog/2008/03/revisiting-the-
fact...](http://www.codinghorror.com/blog/2008/03/revisiting-the-facts-and-
fallacies-of-software-engineering.html)

e.g. when to use FP (the tool to solve a problem, not the religion), or OOP
(the tool to solve a problem, not the religion) and what is the probability
that this works in that environment etc. Like when to use steel or wood. But
man, we're decades away from that.

------
ww520
> in the next assignment the simple three-element tuple representing an RGB
> color is replaced by a class with getters and setters and multiple
> constructors and--most critically--a lot more code.

This is in the classroom setting, during the learning process. It's good to
reuse the same example to explore different concepts, from map to class, so
that the students don't have to build up a different mental model with another
example. It's like writing Hello World in different languages to illustrate
the language mechanics. It has nothing to do with whether the requirement
(RGB) is too simple for an OOP implementation.

You are using trivial examples to prove "OOP Isn't a Fundamental Particle of
Computing." This is the classic straw man fallacy.

~~~
ridiculous_fish
Also, "rewrite this RGB color to be a class" is a poor choice of assignment
because it doesn't exercise any of the strengths of OOP.

Instead, try "rewrite this as a color class that can handle RGB, CMYK, or HSV
representations." Now the value (and challenges) of a good abstraction is made
apparent!

~~~
fholm
You mean like this? :)

        type Color 
            = RGB of int * int * int
            | HSV of int * int * int
            | CMYK of int * int * int * int
    
        let toHsv = 
            function
            | HSV(h, s, v) -> HSV(h, s, v)
            | RGB(r, g, b) -> ...
            | CMYK(c, m, y, k) -> ...

~~~
waxjar
That's very elegant. I don't recognise the language from the syntax. From what
I understand it makes a new type called Color and defines a function that can
do something with that type.

Is that much different from making a class called Color and defining some
methods for that class?

I don't see how one is better than the other. An OO example would perhaps be
slightly more verbose, but accomplishes the same result.

~~~
fholm
The language is F#, basically the code does this:

1) Creates a type which is a discriminated union called Color, that can have
three different values (RGB, HSV or CMYK).

2) Defines a function which converts a color of any value (RGB, HSV or CMYK)
to HSV.

My argument, although implicit, was that even though this can easily be
represented using OO, this approach is far cleaner and easier to read (once
you know the syntax and concepts, obviously - but that can be said of any
language), and that the poster above me, in saying that the benefits of a
"good abstraction" become apparent, implied that this good abstraction can
only be supplied by OO, which is not true.

The OO representation would be either one class called Color which always
stores its value in one format (say RGB) and has getters/setters for
manipulating it as CMYK, RGB or HSV, or an abstract base class called Color
with subclasses ColorRGB, ColorHSV, ColorCMYK, etc.
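A minimal Python sketch of the first OO option (one class storing RGB internally); the `Color` class is illustrative, and the conversion leans on the standard `colorsys` module:

```python
import colorsys

# One class that stores its value as RGB internally and exposes
# other color models as derived, read-only views.
class Color:
    def __init__(self, r, g, b):
        self._rgb = (r, g, b)          # canonical representation: 0-255 ints

    @property
    def rgb(self):
        return self._rgb

    @property
    def hsv(self):
        r, g, b = self._rgb
        # colorsys works on floats in [0, 1]
        return colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
```

Usage: `Color(255, 0, 0).hsv` yields pure-red's HSV triple without the caller knowing the internal format.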

------
protez
The essence of OOP is the Interface, the human interface, which conforms to
the human brain rather than to machines. It's all about the conceptual
framework for abstracting complexity into names corresponding to what we're
trying to program in the first place. That's it. All the verbosity and
complexity of OOP is a secondary effect of actualizing a particular
implementation of the Interface.

On the other hand, the fundamental particle of computing is indeed neither
object nor class but a sequence of bits. However, the fundamental particle of
programming - or at least of collaborative programming so far - has been the
idea of objects. If one has a better idea, it would be about making the
Interface friendlier to human brains.

------
ianstallings
It's so funny being an older developer in this industry. Watching it go back
and forth between paradigms. Whatever is popular is wrong, so we should
switch, again. It never ends.

For the life of me though I can't understand why describing something
accurately is deemed a waste.

------
dschiptsov
That was solved long ago and has already been forgotten, or was never noticed
by the ignorant.

When, for some specialized task, smart people need an object system they just
write one in their high-level language. They called it CLOS.) Why? Because
what we call OOP is just a set of conventions.

Almost every decent Scheme implementation has an OO system.

btw, having message passing and delegation along with a supervision hierarchy
of closures is what OO is meant to be by Alan Kay. He modeled hierarchies of
people. So did Joe Armstrong.)

Programming is modeling. Smart people model real world phenomena. Idiots model
AbstractFactoryInstantiatorBuilderSingletons.

------
stcredzero
_> OOP can be verbose and contrived, yet there's often an aesthetic insistence
on objects for everything all the way down._

It depends on the language you're using. In languages like Smalltalk, it's
quite natural to have a Color object rather than passing around a tuple of RGB
values. Then adding something like a "beta" value to Color (in addition to
RGBA) involves no change to most of the code base.

------
tomlu
> Then in the next assignment the simple three-element tuple representing an
> RGB color is replaced by a class with getters and setters and multiple
> constructors and--most critically--a lot more code.

How about using a simple struct/record? Not too much boilerplate, and if
you're using a statically typed language it can be nice to at least give these
things a type to help you out.
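For instance, a minimal Python sketch of such a record (the `RGB` type here is illustrative):

```python
from typing import NamedTuple

# A typed record: barely more code than a bare tuple, but the fields
# are named and the type shows up in signatures.
class RGB(NamedTuple):
    r: int
    g: int
    b: int

red = RGB(255, 0, 0)
```

It still behaves like a tuple where that matters, so no getter/setter boilerplate is needed.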

------
scastillo
OOP has strong ideas behind it, strong enough to call it a fundamental
practice of computing, specifically of software engineering. (One can have
opinions on what computing is and whether software engineering is part of it,
but let's say it could be.)

After reading all the way down, it seems that you see OOP as a simplifying
tool that makes code easier to understand, as you say in the very bottom
paragraph:

"That's too bad, because it makes it harder to identify the cases where an
object-oriented style truly results in an overall simplicity and ease of
understanding. "

The same goes for your argument that OOP is just making abstractions over
standard types. Yes, objects represent information by relying on the standard
types the underlying language provides, but the fundamental stuff is not there
either; the standard types themselves rely on the underlying machine, SLA,
etc. So they are all just "repurposing of what's already there".

So my point here is that the same arguments can be applied to any higher
abstraction we make to "make stuff simpler". But as far as I can see, what is
behind any of these abstractions or frameworks are the fundamental concepts of
_reuse_ , _coupling_ and _cohesion_. And here is really my point with this
replica: the discussion about whether something is fundamental to the
computing field, besides appearing to be a subjective matter, should be in
terms of these three core concepts and how much easier the "tool" under
discussion makes each one to achieve as needed.

Of course there are different tools for different jobs, and OOP could be a lot
of overhead for some kinds of problems (technically speaking, because OOP is
more than just defining a bunch of data and functions together). But if the
discussion is about computing, and specifically about software engineering,
where reuse is critical, let me say that the way OOP addresses the three core
concepts of software engineering works very well if your problem space can
handle the overhead, and most of our "engineering problems" and constraints
can handle that for sure, IMO.

More code is not necessarily a bad signal, not if you are getting real
benefits from it. And again, it's not just the simplicity of going and reading
the code; it's the simplicity of modeling your problem so you can reuse your
low-coupled, highly cohesive abstract data types.

I have posted this replica using the cool replica.la service ;)
<http://www.replica.la/discussions/37>

------
atas
> Corporate programmers in OOP-land often never get the architectural
> experience of writing systems (starting with small ones) from scratch.
> Instead, they spend their 40 hours maintaining and tweaking large monoliths
> other people wrote, and this is a big part of why they never improve.

You just described my 'career', sir.

------
akurilin
My question then is: where is the sweet spot? What's the right amount of OOP
before it becomes overkill? Is it perhaps a matter of domain, such that some
domains will benefit much less from OOP?

~~~
mks
OOP is great when your main focus is data structures. Many enterprise
applications are basically just transforming data from one system to another.
Being able to come to unknown code and figure out the format of the input and
output just by reading a couple of classes is a godsend. Even in this case,
use OOP features sparingly (beware of deep hierarchies, contrived
polymorphism, etc).

On the other hand, when your problem is more about algorithms and the
evaluation of data, you might choose a more functional approach.

The hybrid (object+functional) approach is making it even into the enterprise
world: SOA suggests having dumb (possibly immutable) data objects for business
entities (just like structs in C) and service objects that perform operations
on those data objects (not unlike functional programming). A similar pattern
shows up with dependency injection frameworks that rely heavily on singletons
(Spring) - many beans become just containers for stateless functions.

OOP is not a silver bullet (nothing is, for that matter), but it is a very
useful tool.
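A minimal Python sketch of the dumb-data-plus-service pattern described above (the `Invoice` type and `apply_discount` service are hypothetical names):

```python
from dataclasses import dataclass, replace

# Dumb, immutable business object (like a struct in C).
@dataclass(frozen=True)
class Invoice:
    customer: str
    amount: float

# Stateless "service" function operating on the data object;
# it returns a new value instead of mutating the input.
def apply_discount(invoice: Invoice, rate: float) -> Invoice:
    return replace(invoice, amount=invoice.amount * (1 - rate))
```

The data object stays a plain record; all behavior lives in functions that could just as well sit in a Spring-style singleton bean.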

------
zwieback
This discussion reminds me so much of the old OOP threads on comp.object and
on c2. I got nostalgic there for a minute.

Anyone here remember topmind? I think we should get him posting here. Then
again, maybe not.

------
jodosha
I think you're confusing classes with data structures: the former exposes
behaviors and hides the internal state (data) as much as possible, while the
latter is a low-level piece of information.

------
michaelochurch
One of the reasons discussions of OOP leave me feeling dissatisfied is that
OOP has a "blind men and the elephant" problem. One feels the tail and says,
"this Creature implements Rope". Another feels a leg and says, "this Creature
implements Tree". A third feels the side and says, "this Creature implements
Wall". We end up discussing dramatically different things.

My big issue with OOP is that it injects unnecessary complexity when used as a
catch-all. Alan Kay advocated OOP as something to do when complexity became
unmanageable: encapsulate it behind simpler interfaces. That's a good idea!
Unfortunately, the OOP fad dovetailed with the 1990s-ongoing attempt to
commoditize programming talent and we ended up with a generation of mediocre
programmers who took OOP to mean "Go out and build massive, complex, over-
featured objects", not "Here are tools to reduce complexity when needed". Alan
Kay's original message was hijacked, producing the monstrosity of corporate
OOP.

When I advocate FP, I usually put it like this. Instead of having 23 poorly-
understood and often badly implemented design patterns, functional programming
has two design patterns: Noun and Verb. Nouns are immutable data: from
integers to record types to OCaml's unions to Scala's tree-based Map and Set.
(Occasionally Nouns have to be mutable, which means they have attached Verbs.
I'm glossing over that for now.) Verbs are functions-- when possible,
referentially transparent. We can also use the latter as nouns, which gives us
cool "combinators" like map, reduce, and flatMap/mapcat that help us to
compose functions.
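A tiny Python sketch of the Noun/Verb idea and the combinators mentioned above (the data here is just illustrative):

```python
from functools import reduce

# Nouns: immutable data.  Verbs: functions.  Using Verbs as Nouns gives
# combinators like map and reduce, which compose without shared state.
words = ("object", "oriented", "programming")      # immutable tuple
lengths = list(map(len, words))                    # apply a Verb to each Noun
total = reduce(lambda a, b: a + b, lengths, 0)     # fold the results together
```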

There are two annoyances we hit with "programming in the large" when we're
restricted to nouns and verbs. One is namespace collision, and the object-
oriented solution (see: Scala) is to have _locally interpreted_ functions, or
methods. That has advantages and disadvantages, but is sometimes the right
solution. A related issue is the Adjective Problem-- how to handle
similarities, such as objects with a "close" method or collections with
"foreach"-- which Haskell solves with type classes, Ocaml with functors, and
Java with inheritance. The issue I have there is that it's hard to solve the
Adjective problem using "modern" OOP without injecting non-locality into code
comprehension, and we learned a lesson about extreme non-locality with Goto. I
am not the world's biggest fan of inheritance.

The core idea of object-oriented programming-- that you should hide complexity
behind simple interfaces and expect the application programmer only to know
the latter-- is solid gold. That's why you don't have to learn a whole new
language whenever you move to a different SQL database (or a later version of
the same one): the implementation changed, the interface (or, at least, most
of the parts you care about) didn't. The problem is that it's really hard to
get interfaces right, and average corporate programmers don't have the
ability.

Professional programming, by the way, is all backward. The way to become a
decent programmer for realz is to start on very simple, self-contained
projects like scripts for data analysis, and move upward to more complex
programs once you get the non-trivial architectural problems associated with
simple ones down. The Unix philosophy (small components, large software
solutions being respected as _systems_ rather than thrown together as one
giant single program) is superior in general, but especially when people are
learning how to write software. You don't learn software architecture except
by doing it, and starting small gives you the quick feedback cycle that helps
you learn. Corporate programmers in OOP-land often never get the architectural
experience of writing systems (starting with small ones) from scratch.
Instead, they spend their 40 hours maintaining and tweaking large monoliths
other people wrote, and this is a big part of why they never improve.

~~~
stonemetal
_I usually put it like this. Instead of having 23 poorly-understood and often
badly implemented design patterns, functional programming has two design
patterns: Noun and Verb. Nouns are immutable data: from integers to record
types to OCaml's unions to Scala's tree-based Map and Set. (Occasionally Nouns
have to be mutable, which means they have attached Verbs. I'm glossing over
that for now.)_

Then I would have to ask how that is different from OO, other than the straw
man at the beginning. It has Nouns and Verbs; sure, data is mutable by default
instead of immutable, but immutable is certainly an option. OO typically calls
its verbs methods instead of functions, but referential transparency is a good
thing on the OO side of the fence too. We tend to call map for_each, etc.

Frankly I am beginning to realize that FP vs OO is the biggest bike shed I
have ever seen in the real world.

~~~
michaelochurch
The problem with the mutability debate is that most examples show two
programs, one mutable and one immutable, on the scale of about 20 lines of
code. At the small scale, it's a toss-up which is better. Often the mutable
solution's more intuitive for most programmers, and sometimes just flat-out
superior: more clear, easier to understand. Mutable state isn't evil. It's
necessary in the real world. It's just that stateful actions don't compose
well, especially when you involve concurrency or Big Code problems. Good
programmers learn that they need to _manage_ (not eliminate) mutable state.
That's what FP is about.

So the aesthetic dominance that FP advocates hope to establish with their
20-line A/B depictions doesn't come through, because the truth is that the
problems with mutable state very rarely show up (except in contrived, over-
complex examples) at 20 LoC. At 20 LoC, the snippet you'll like better is
going to be the one you're most familiar with. The real differences show up at
2000 LoC, which can't be put in a PowerPoint.

Immutable programming is somewhat less prone (but not immune) to complexity
creep. For example, you see 500-line for-loops in corporate software all the
time. The conceptual integrity is gone because so many people (who never
learned what the others were doing) have added tweaks to it.

The difference between mutable and immutable programming is that making that
sort of change to an immutable program also changes the API, unless it's
purely a performance tweak (e.g. plus(2, 2) still returns 4, but does it
faster). If you add logging to plus in a purely functional world, you change
its signature from (Int, Int) => Int to something like (Int, Int) => (Int,
String). As you might guess, that's a double-edged sword. Sometimes you want
people to be able to add "purely stateful" (i.e. no API changes) effects
without changing a signature... but very rarely.
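A minimal Python sketch of the signature change described above, Writer-style (the function names are hypothetical):

```python
# Pure version: (int, int) -> int
def plus(a, b):
    return a + b

# Adding a log in a pure world changes the type: (int, int) -> (int, str).
# Every caller now has to be updated, so the change can't slip in silently.
def plus_logged(a, b):
    result = a + b
    return result, f"plus({a}, {b}) = {result}"
```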

So I think the major upshot of immutable programming is that it makes it
impossible to add many varieties of complexity that corporate engineers tend
to add silently (in pursuit of a short-term hack) without changing an API and
breaking the build. This slows down complexity creep, and that's a good thing.

It's the reduction of that externalized-cost/complexity-creep dynamic that
makes FP superior, in my opinion. A 60-line referentially transparent function
really isn't less evil than a 60-line method of an object. They're both
fucking incomprehensible, in most cases. You're just less likely to see the
60-liner in a mature FP codebase. Also, because functions compose better than
stateful actions, it's usually a lot easier to break large functions up long
before they get anywhere near 60 lines. (In my opinion, double-digits are
"warning" territory and 25+ lines means it should almost always be split, at
least into inner functions.)

~~~
seanmcdirmid
When you have state...I mean real state, it sure is nice to encapsulate that
state in an object rather than in what is basically an unencapsulated monad.
OO supports state encapsulation, pure FP basically does not, that is a big
deal. Immutable programming sort of sidesteps the issue that state is needed
at all; the idea that an interactive program can somehow be stateless is
ridiculous, and even many batch programs require some form of state (even if
it is unencapsulated in a monad).

Your 60-line function decomposed into nice small parts using beautiful
composable abstractions is an FP pipe dream. Yes, if the problem is well
understood, someone has thought about it for a long time and has come up with
some beautiful abstraction that works for a narrow set of related problems.
Now, as soon as you venture outside of a well-understood/nice abstraction
domain, your code is just as bad in FP, if you can figure out how to implement
it at all.

~~~
chousuke
Could you explain why a monad is unencapsulated? As far as I can tell, using
monads is a far superior way to encapsulate state, since it's _impossible_ to
use the state without also operating within the type universe that you've
defined for your stateful calculations. Therefore, you can trivially tell if a
piece of code is relevant to the state of your program, and it's enforced at
compile-time.

Further, since the actual state values are immutable, you can store references
to them indefinitely. In most OOP languages, the whole concept of a "state at
this point of execution" is completely unsupported, and that state is
inaccessible to the programmer.

Edit: I wrote this assuming we're talking about a sufficiently advanced static
type system here, such as Haskell's. I've implemented monads in Clojure and
while they can be useful, of course in a dynamic language they provide less
safety since it's trivial to escape the monad.

~~~
seanmcdirmid
The problem with monads is that they DO leak into the type system. What effect
the object has must be exposed for type checking reasons, and it can't be
encapsulated, hidden, changed transparently, and so on. Try iterative
algorithms, UI programming with views and models, an interactive code editor,
etc...you get trapped quickly by the type system. The point is, you often want
to be oblivious about what that object is doing and when it is doing it.

~~~
chousuke
I'm sorry, I don't see the problem. You can encapsulate away the state behind
an opaque type so that it can be only accessed by functions that you have
defined. Whether this is a good choice is up to the programmer, but it's
common practice in Haskell to use domain-specific types wherever it makes
sense.

For UIs and such, in addition to monads there are more powerful abstractions,
but at no point is it necessary to leak information to the client. The common
UI toolkits tend to have some impedance mismatch with FP because they've been
designed with OOP in mind, but this is not the fault of FP.

Of course, in FP it often doesn't even make sense to encapsulate everything.
Why hide useful data when it's guaranteed that any user will not be able to
misuse it?

Haskell provides tools for abstraction that are IMO vastly superior to
anything I've seen in an OOP language. If anything, you're more likely to have
leaky abstractions and failed encapsulation in Java or C++ compared to
Haskell, simply because of mutable state, closed classes, and limited
expressiveness of the type system.

~~~
seanmcdirmid
Ah, so they just didn't design their UI libraries right? I hate this argument,
because it's easily proven true (someone just has to design a "right" library)
and impossible to prove false (the "right" library could exist, it just hasn't
been built yet).

The problem with Haskell, which isn't a problem in an impure FP like Clojure,
is that you can't define objects at all (in the sense that objects completely
encapsulate state). Yes, you can do a lot of nice combinator tricks, but these
only work well for well understood problems, and so FP practitioners spend
most of their time trying to understand problems very well so that they are
amenable to their elegant abstractions. Whereas if I just use an OOP language
(or a pragmatic FP language), I just solve the problem in a hacky way without
needing to completely understand its essence (which, for me at least, only
comes after I've solved the problem N times in N different ways!).

Closed classes suck, but not all OOP languages restrict you to closed classes.
Scala does very well here, and traits are wonderfully expressive.

------
wissler
I think people generally prefer not to have answers to questions of this sort.

