
Rob Pike on Object Oriented programming - crawshaw
https://plus.google.com/101960720994009339267/posts/hoJdanihKwb
======
ianstallings
Can I ask a serious question? Why is there an argument about this every other
day on this forum? Do you guys honestly think that the language or paradigm
you choose is the most important decision you can make? I'm of the belief that
programmers are programmers. Procedural, OOP, or functional, doesn't matter.
What matters is the ability of your team to understand and build solid
software from it. Can the tool accomplish this? Good, then appreciate the fact
that _other_ tools can also do this. Is your hammer the best? Yes or No is an
opinion. There can be other hammers. And people using them aren't idiots. A
bad worker will screw up even with the best tools. And a good worker will
create the best software with even the worst tools.

I just feel like this entire industry gets caught up in trivial matters. What
language, what paradigm, what editor. Not that these aren't valid decisions
that need to be made but these are problems I can solve in a few minutes or a
few hours of thinking about it. The big problems I face are people problems.
Miscommunication. Lack of accountability. Developers going rogue. Management
not understanding. Users not being trained properly. I could go on and on and
I may sound like a grumpy old man. But I just do not get it. Choose your
platform and be okay with it. Understand there are others. We don't need one
platform or paradigm to produce great things.

~~~
bad_user
This industry has a tendency to make a religion out of certain tools and
techniques ... IDE people don't ever think about the possibility of working
with a simple, yet effective text editor. People that have been doing OOP for
the last 10 years won't even look at functional programming techniques that
have been known for decades, with firm theoretical underpinnings, and still
continue to produce hundreds of PeopleDaoFactoryFactoryImpl every day.

The biggest problem is that, as a developer, you simply cannot pick the right
tool for the job, because you're not doing your own thing as a solo developer.
You're always part of a team. Even when starting a business, you probably need
a good cofounder or first employee - finding a smart and dedicated individual
is hard enough, while also having preferences about the right tools and
techniques is a bitch.

So naturally there is a backlash ... some of us hate IDEs, some of us hate the
noun-centric and problematic designs that we see in large Java applications,
some of us are aware that there's a multi-processing revolution coming and
that mutable state does not scale, etc. And if you work in a team that gets
things done, you're really lucky and blessed.

But how can we avoid getting stuck in the status quo, other than expressing
our educated opinions on it?

And btw, I actually love OOP, but as a tool, not as a way of thinking about
the problems I have.

~~~
ditonal
I find it amazing you talk about mindsets that keep us in the status quo, and
then you disparage IDE users and write a post full of hacker news groupthink
regurgitation. I've noticed that the type of person who hates IDEs tends to
fall much more into the religious and close-minded crowd than those who don't.
Most
people I know who use IDEs are perfectly competent at an editor like emacs/vim
and simply prefer different tools, but that doesn't stop many people who
prefer text editors from trying to feel superior by stereotyping IDE users as
paint-by-number morons incapable of embracing the beauty of the command line.
It's a traditionalist and condescending attitude that I think is holding us in
the status quo more than something like OO's popularity, because there are all
sorts of cool programming language ideas you can come up with if you're
willing to sacrifice the source code being easily edited by a text editor. Yet
we see far less experimentation there than we do with functional programming
languages, and I think that's because functional languages are perceived as
cool and smart but IDEs are associated with the philistine class of
programmers.

~~~
bad_user
1\. I hate IDEs for concrete reasons, like being unable to work with tools
for which your IDE does not have a plugin - and sometimes that happens even
for really mainstream technologies ... how's the C++ support in IntelliJ IDEA
these days?

2\. Switching between IDEs and editors is a productivity kill, especially if
you do that switch a lot - instead of being a creator that bends the tool to
your will by customizing it to suit your needs, you're going to be just a
casual user that cluelessly clicks around.

That's not so bad; however, to be good at what you do you need a certain
continuity in the tools you use, otherwise instead of learning about
algorithms or design tricks or business, you'll be learning about tools all
day. And unfortunately this cannot be applied much to languages and libraries,
because these are optimized for different things - although if you've worked
on the same CRM for the last 5 years, I guess it's not that important ... and
I don't know what groups you hang out in, but an IDE user who switches a lot
or who is familiar with grep/sed is a rare occurrence in my world

3\. I love Smalltalk-like environments where the IDE is part of your virtual
machine and can see and work with live objects and continuations - but get
over it, because your IDE is not like that. Yes, I would love to escape a
little from the text-based world we live in; however, the current status quo
of IDEs is still text-based, and text editing isn't even something they do
efficiently

4\. HN groupthink should be natural, because the site has attracted users with
similar interests; that's not bad per se, considering that HN users are a
small minority - not necessarily because we are smarter, but because we have
slightly different interests ... also, I don't see much evidence of
groupthink, because I always see both sides of the coin in conversations here
(you're disproving your own point right now)

5\. I never implied that my opinions represent THE truth, and I like engaging
in such discussions ... instead of reading about the same old farts coming out
of the tech darlings of our industry, because in these conversations I might
actually learn something

~~~
goostavos
...Surely, you realize the irony of your position, no? One could easily
replace "IDE" with "Programming paradigm [X]" and you'd suddenly be the exact
person you were railing against in your original post..

How about this.. _both_ have their merits..? Statically typed languages do
benefit from a good IDE. That said, I personally prefer the cleanness of
Sublime Text over a proper IDE -- even at the expense of having to write my
own getters and setters! That doesn't mean the other is antithetical to
productivity.

Let's end this senseless arguing and just agree that PHP is terrible.

~~~
bad_user
The even bigger irony is that people using terrible languages and paradigms,
like PHP and the original Visual Basic, have historically gotten things done,
even if that meant shoving a square peg through a round hole :-)

I guess the curse of "enlightened" people might be that we think way too much
about such things.

~~~
olavk
I think PHP and the original VB were actually great platforms. At the pure
language level they were not elegant (but good enough); each was more than
just a language, though - a platform which, as a whole, was great for
developing a specific kind of app.

------
DanielBMarkham
I had a good chuckle at the end of the referenced material:

 _"The object-oriented programmers see the nature of computation as a swarm of
interacting agents that provide services for other objects. Further, the
sophisticated OO programmer lets the system take care of all polymorphic tasks
possible. This programmer sees the essence of object oriented programming as
the naive object-oriented programmer may not."_

Hey Dan! What's wrong with the program?

Sorry, my swarm of interacting agents had a polymorphic pile-up on aisle 7.
Dangling pointers everywhere. It's not pretty.

Snarky jokes about buzzword soup aside, I love OO. We simply need to be aware
that OO lets us "play" at building complicated things when 1000x simpler
solutions may be available. OO works best for large-scale, lots-of-people
projects. A lot of business projects are like that. Many personal and startup
projects are not. The trick in loving any particular tool in the toolbox is
knowing when not to use it. So the example is a little bit unfair -- it's
tough to create a real-world example program of sufficient complexity to use
in OO examples. All the examples look like architecture astronautry.

~~~
genuine
"OO works best for large-scale, lots-of-people projects."

Nice! But I think it is the other way around. Throw a lot of people at
something, along with some software architects, and you'll probably end up
with a large-scale, lots-of-people, "OO" project.

I worked at a company that was between small and mid size and had ~150
developers. We bought another company that basically did the same with 10x
fewer, and that group won out over ours. We had some excellent developers that
went on to other excellent shops, and I was proud of the work that we did
there and learned a lot about process. It was the best run development team I
ever worked for and probably ever will.

I was an "OO" developer, now I'm just a developer. Not because of that
learning experience, but because I found Ruby and I no longer see the benefit
in intentionally writing overly large applications. Ruby is truly more OO
than Java, imo, but I don't write like I used to which I think is what is
being called "OO" (lots of packages and interfaces, pattern usage, lots of
maven projects).

------
seanmcdirmid
I once had a heated argument with Rob Pike over lunch about Java when I was a
naive grad student; suffice it to say that I thoroughly got my butt whooped on
inheritance. My argument was, I think, that inheritance basically is
composition, just with some self recursion thrown in. Keep in mind that
programming is always about composition, and we are just arguing about
different styles of such.

OOP has recently been dragged through the mud in the academic community, where
it was never completely accepted. Industry, meanwhile, has always loved OOP -
not because it was new, but because it provided some stronger guidance on what
they were already doing (composing software out of stateful parts), and was
much more pragmatic than its older, more aloof sibling (FP). People were
already thinking in terms of objects, probably even without using Simula,
Beta, Smalltalk, C++, etc. ... vtables were already being crafted like crazy
in C.

I agree that object thinking is just another tool in your bag: sometimes you
really need lambdas and should use those; sometimes you want raw tables. Any
program worth its salt is going to incorporate many different styles, and
avoiding one style on ideological grounds is ridiculous.

~~~
gwillen
I think inheritance basically _is_ composition plus self-recursion.

You get into trouble because of the self-recursion. Any time a base class
method calls another base class method, that call is part of the class's
interface, because when you extend the class, the call will be redirected to
the subclass' method.

But how many base classes have documentation for every self-call that can be
redirected in such a manner?

See e.g. the "hashtable with plurals" example:

<http://norvig.com/java-iaq.html#super>
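To make the hazard concrete, here is a minimal sketch of the same trap (in Python for brevity; the class and method names are invented, not taken from Norvig's example):

```python
# Sketch of the fragile base class problem: a base method that
# self-calls another overridable method makes that call part of
# the class's interface, whether documented or not.

class Counter:
    def __init__(self):
        self.count = 0

    def add(self, item):
        self.count += 1

    def add_all(self, items):
        # Undocumented detail: add_all() self-calls add().
        for item in items:
            self.add(item)

class DoubleCounter(Counter):
    # The subclass author weights every item twice, and overrides
    # add_all() too, assuming it counts items on its own.
    def add(self, item):
        self.count += 2

    def add_all(self, items):
        self.count += 2 * len(items)
        super().add_all(items)  # ...which self-calls OUR add()!

c = DoubleCounter()
c.add_all([1, 2, 3])
print(c.count)  # 12 - double-counted, because the self-call was redirected
```

This is the same shape as the "hashtable with plurals" bug: the subclass cannot be written safely without knowing every internal self-call of the base class.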

~~~
georgeorwell
My understanding is that composition can be changed dynamically whereas
inheritance cannot. It has always seemed to me like composition is more
flexible.

To give concrete examples: using composition, if instance A has a B, then at
runtime you can replace the pointer to the B with a pointer to a C such that
now A has a C. You basically change the type of A by changing where messages
get sent / delegated.

With inheritance, you'd have B is an A and C is an A, and the relationships
here are static unless you start messing around with reflection and dynamic
class loading and stuff.

The tradeoff for the flexibility of composition is more verbose code, I think.
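As a toy sketch of that swap (Python for concreteness; the `greet()` message and these minimal classes are invented for illustration):

```python
# Composition lets you change where messages are delegated at runtime.

class B:
    def greet(self):
        return "hello from B"

class C:
    def greet(self):
        return "hello from C"

class A:
    def __init__(self, delegate):
        self.delegate = delegate  # "A has a B"

    def greet(self):
        # A forwards the message to whatever it currently holds.
        return self.delegate.greet()

a = A(B())
print(a.greet())  # hello from B
a.delegate = C()  # swap the collaborator at runtime
print(a.greet())  # hello from C
```

The verbosity mentioned above is visible here: every delegated message needs its own forwarding method.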

~~~
seanmcdirmid
Dynamic inheritance is not unthinkable - I've used it in languages I've
designed (or see research languages like Cecil). Of course, you can do this
easily in dynamic languages like Ruby.

~~~
jrochkind1
Actually, can you really do dynamic inheritance in ruby? I don't _think_ so.
There are ways to apply inheritance dynamically at runtime of course
(including with module mix-ins, which are basically just inheritance even
though ruby pretends it isn't), but I don't think you can _undo_ inheritance
at runtime.

You can easily simulate dynamic inheritance in ruby.... with composition,
using delegate-like patterns.

~~~
draegtun
_but I don't think you can _undo_ inheritance at runtime_

I'd be surprised if you couldn't do it in Ruby. You certainly can do it in
Perl, because Perl uses a package (class) variable called @ISA for its
inheritance lookup.

And because package variables are dynamically scoped you can do this:

    
    
      {
        # remove everything except father from inheritance
        local @Some::Class::ISA = $Some::Class::ISA[-1];

        $some_object->foo;   # finds father foo() only
      }

      $some_object->foo;     # runs first foo() found in inheritance

------
btilly
I still believe that the book which taught me the most useful lessons about
object oriented programming was _Code Complete_ (first edition).

It was written before OO programming was popular. The concept is not described
there. But if you've read and understood its description of things like
abstract data types, it is obvious where and when OO is going to be an
extremely good hammer to use. And - just as important - you're not going to
wind up endlessly searching for nails for your OO hammer.

When I see things like <http://www.csis.pace.edu/~bergin/patterns/ppoop.html>
it is clear to me that someone does not understand the value of simplicity. I
don't care what complicated theories you have about what kinds of code are
easier to refactor. Less code is generally going to be easier to change later.
If need be you just rewrite it.

~~~
bad_user
> _Less code is generally going to be easier to change later. If need be you
> just rewrite it._

That's in general an unhealthy attitude: later might be too late to rewrite
it, because complexity has a way of creeping in, and simplicity requires
eternal vigilance and leadership with an iron fist - something which most
teams lack.

You should watch the Hammock Driven Development presentation, by Rich Hickey,
in which he makes a case for the value of thinking about the problem before
acting. This man is in fact brilliant in how he delivers presentations, so
while you're at it, watch Simple Made Easy, in which he argues that simplicity
ain't easy.

TL;DR - easy-to-write code is not necessarily simple. But simple is objective,
so you know you have it when you achieve it.

~~~
coliveira
Another reason why simple code is not always better is that in a typical
project there are LOTS of simple decisions that need to be made. If you leave
them for later, they will become hard to find, hard to integrate with the
other "simple" decisions you made, etc.

~~~
bad_user
Indeed - and that's because in order to tackle complexity and scale the
development process, you need layering and composition, and you need to avoid
cyclic dependencies and complecting too many things at once.

The Unix philosophy, in which things should do one thing and do it well, is a
good case study of good design, with the ugly parts being the instances in
which this philosophy wasn't respected (not to mention all the wheel
reinvention going on).

However, it's not easy to build things that do simple things, do them well,
and then build on top of that. You need experience, forward thinking and
resources.

And OOP sometimes helps, but sometimes it makes things worse. For instance it
encourages a bottom-up design. But other times a top-down process for
development is better - in which you start at the top by outlining/creating a
domain specific language and then implement the layer below that knows how to
communicate with that language, then rinse and repeat until the layers are as
simple as possible and you need no more layers and the implementation works.

------
coliveira
Rob Pike is a very smart guy, but he works in relative isolation from other
programmers. He usually creates his own tools and works with people who are
like-minded. That is why he thinks it is so easy to just go directly to the
code and achieve what he wants, writing the right function to solve the right
problem.

OO was created to deal with the general issue of organizing lots of code
around a reasonable design. It is a tool for industrial level programming,
where there are thousands of programmers, many of them with below average
skills, contributing to a single codebase. In that aspect I think OO has been
very successful, because it provides a framework to simplify design decisions.

~~~
dsymonds
Uh, no. Rob works with a number of other programmers on code that is directly
used by thousands of other programmers.

~~~
shadowmint
Does he really?

I was under the impression he worked in a relatively small team on code that's
_used_ by lots and lots of people, but only _written_ by a few people.

The OP's point is that when you have people you _don't trust_ working on your
code (and does he really?), there need to be controls somewhere to keep
_everything from getting screwed up_.

OO is one way of doing that; a good set of unit tests + CI is another.

~~~
dsymonds
I work with him at Google. Google has a single codebase shared by tens of
thousands of engineers. People have areas of it that they own, but it is a
mischaracterisation to say Rob "works in relative isolation from other
programmers".

~~~
Evbn
At Google, how large is the Go community compared to the C++ or Java
community, or the community working on a shared application code base like the
Search engine?

~~~
dsymonds
It's small, but quickly growing.

------
aufreak3
OOP vs FP is better seen, I think, as a duality rather than as a dichotomy.
Sort of like wave-particle duality. Sometimes you find it convenient to think
"wave", and at other times "particle", but the reality of the system is
neither. These are just convenient and equivalent constructs we use.

Some other such dualities are - code vs data, data structures vs algorithms,
closures vs objects. Enlightenment lies in seeing the false nature of these
dualities. (Now, say "Om" people :)

For another fun view on data structures, check out the "numerical
representations" chapter of Chris Okasaki's "Purely Functional Data
Structures" [1], where he draws parallels between number representations and
data structures - which I found fascinating.

[1] <http://www.cs.cmu.edu/~rwh/theses/okasaki.pdf>

~~~
hackinthebochs
Probably the most insightful comment on this thread. Well put.

------
acqq
Re:

<http://www.csis.pace.edu/~bergin/patterns/ppoop.html>

I can only read it as a joke -- the title is almost 'poop', and it's insane to
write all those classes instead of the initial few lines. But I do see that
one of the authors has more 'OOP forewa' articles where he's fully serious:

<http://csis.pace.edu/~bergin/>

~~~
chimeracoder
> I can only read it as a joke -- the title is almost 'poop'

Don't be so sure!

POOP is the accepted term for Perl Object-Oriented Persistence - without any
trace of irony, as far as I have ever been able to tell.

~~~
acqq
Actually, I have the impression that I relate very well to most things Perl -
I use it very often for small programs. I believe that most serious Perl
people have a sense of humor, if you know what I mean. And exactly because of
that bias, I at first believed that the article absolutely must be a very
successful joke, and didn't understand why Rob wasn't sure. Only after seeing
the rest of the material was I not so sure myself. Maybe somebody should
actually ask the authors.

Do read the paper! Note that I would consider this an elegant solution
(string literals replaced with their names):

    
    
        static String judge()
        {
            String s = System.getProperty( p );
            if ( s.equals( t11 ) || s.equals( t12 ) ) {
                return m1;
            } else if ( s.equals( t21 ) || s.equals( t22 ) ) {
                return m2;
            }
            return m3;
        }
    
        public static void pjudge()
            { System.out.println( judge() ); }
    
    

Then read what they produced instead.

------
bcoates
The big problem with the ppoop paper isn't that OO is bad, it's that it
confuses OO vs Procedural with YAGNI vs Extensibility. (also, both the
"hacker" and "OO" solutions are lame as OP points out)

If you actually have a reason to believe you need a gold-plated general-case
OS identification system then throwing all those patterns at it is no worse
than the nested-if spaghetti procedural code that would be the naive
procedural solution.

But in both cases it's just a stupid answer to a stupid question ("How do I
overengineer a string->string table lookup?")

~~~
ajross
You're dressing up the scotsman here (the tell being "it confuses OO ..."),
and I think completely missing the point of the paper.

Obviously truly complicated systems will need complicated solutions, and OO
has some not-completely-insane things to say about solutions like that.

But the real world doesn't see things like that very often. Real world
programming is made up of thousands of tiny problems not altogether unlike the
hack shown here. And real-world OO nuts, faced with these real-world problems,
tend to solve them badly.

So sure, "How do I overengineer a table lookup" is a dumb question, but that's
not the question posed. The question is "How do I _avoid_ overengineering a
table lookup", and the answer is "avoid OO".

~~~
bcoates
But the right answer _isn't_ "avoid OO", at least not any more than the answer
is "avoid if statements". The answer is to use the library function your
language already provides to solve the problem in front of you and get on with
your life. This is something applicable to any paradigm or language.

------
tsahyt
After reading the article referred to by this, I just had to do this. You know
that urge ;)

    
    
    def osdiscriminator(string):
        good  = "This is a UNIX box and therefore good"
        bad   = "This is a Windows box and therefore bad"
        unkn  = "This is not a box"
        boxes = {"Linux"      : good,
                 "SunOS"      : good,
                 "Windows NT" : bad,
                 "Windows 95" : bad}

        if string in boxes:
            return boxes[string]
        else:
            return unkn
    

Easy to extend, simple and therefore maintainable without any inheritance or
GoF design patterns. Oh, and about one minute of work.

Edit for formatting

~~~
qznc
A little trick to spare a few more lines:

    
    
        return boxes.get(string, unkn)

------
swanson
Matt Wynne raised a good point in a recent talk about hexagonal (Ports &
Adapters) architecture that I agree with.

People are exploring new ideas for building software. Why is that so wrong?
Instead of attacking people for adding unnecessary complexity or doing it
"wrong", why aren't we praising them for thinking about new solutions and
approaches to problems in software?

~~~
mathgladiator
Basically, we learn from the people before us_. I've noticed this: you learn
through code review that "that's a bad thing to do", and if you are lucky, you
learn interesting edge cases.

I think people learn what is "right" by a combination of cleaning up other
people's crap and dealing with their own crap; the difficult thing is to
purposely try "wrong" things and push boundaries.

_Stephenson, G. R. (1967). Cultural acquisition of a specific learned response
among rhesus monkeys. In: Starek, D., Schneider, R., and Kuhn, H. J. (eds.),
Progress in Primatology, Stuttgart: Fischer, pp. 279-288.

------
frustratedOOP
Why is it OK to criticize OOP using non-pure OO languages like Java, C#, or
C++? Pure functional nut cases like Don Stewart and Simon Peyton Jones (whose
favorite pet example of a dangerous side effect is a nuclear holocaust) don't
have to defend FP from critics whose only experience with FP comes from non-
pure functional languages like Python, JavaScript, and now Java and C++. Yet
here we are again, with another under-informed, overly-prominent person
publicly airing his grievances with Java as if all OO languages bear some
collective guilt for them.

Smalltalk is not a hard language to learn. Haskell is far, far more
complicated. You can download Squeak/Pharo/whatever and learn the language in
a few hours. Why in 2012 is Java still such a potent argument against the
entire paradigm of OOP, both class-based and prototype-based? Why does OOP
alone have to put up with this sort of scurrilous, intellectually lazy and
dishonest propaganda against it?

~~~
emp
I agree fully - reading through these comments is frustrating, as the
languages mentioned can hardly be called Object Oriented. Learning Smalltalk
is something everyone who really wants to understand OOP should do. C++, Java
and the like really should be called Class Oriented languages, as demonstrated
by the often very deep class hierarchies. It's not about the classes, it's
about the communication between objects. Objective-C is far closer to
Smalltalk than most of the languages mentioned in these comments.

The downside to learning Smalltalk: the realization that a 30-year-old
environment is more advanced than whatever modern tools you will need to
return to in order to earn a living.

------
ww520
I don't get this hate toward OOP. OOP is just a tool to organize your code,
like a function is a way to organize your code. When the whole program is 5
lines long, putting those lines in functions would be unnecessarily
complicated. Would someone seeing that conclude that functions are
unnecessarily complicated in general?

~~~
gizmo686
My dislike of OOP is its emphasis on state. If you do foo.bar(), then you have
changed the state of foo, even for other references to it. If you do
foo=bar(foo), then the state of the original data is unchanged, while code
below behaves the same way. In most cases, I find minimizing state minimizes
bugs.

I do however like objects, and think that they can have a very good role in
code.

~~~
ww520
There's nothing wrong with state. State is a fact of life in programming. Even
a pure functional program has state - the parameters passed among functions.

I guess you mean mutable state. You don't have to use mutable state with OOP -
just create a class that allows state initialization in the constructor but
nothing else, with none of the methods changing the state.
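A minimal sketch of that style in Python (the `Account` class is invented for illustration; `dataclass(frozen=True)` makes any later mutation raise an error):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Account:
    owner: str
    balance: int

    def deposit(self, amount):
        # No mutation: return a new Account with the updated balance.
        return replace(self, balance=self.balance + amount)

a = Account("alice", 100)
b = a.deposit(50)
print(a.balance)  # 100 - the original is unchanged
print(b.balance)  # 150
```

Methods become functions from old state to new state - gizmo686's `foo = bar(foo)` shape, expressed with objects.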

------
hcarvalhoalves
The given example is pretty damn stupid, though. I can't believe anyone is
taking a factory that returns a string seriously. The procedural example isn't
a solution either (maybe if you're stuck in the 80's).

Any "given X return Y" is a mapping problem, thus all you need is a hashtable
and associated map function. It can be implemented equally well in ANY
paradigm.

~~~
stevoski
I agree, especially as the example problem is "given String x return String
y".

------
olaf
I think OOP polymorphism added some value to programming languages; people who
criticise OOP itself (and not its more or less useful application) have mostly
not understood it. One can use it where it's useful; elsewhere one can use
better techniques.

------
robomartin
C'mon kids. Not again.

Tools in your toolbox. Use them as you wish based on whatever criteria fits
the moment and the project.

I made hundreds of thousands of dollars with a program I wrote in 8051
assembler. To be fair, it was part of a larger hardware solution. Still, the
UI portion of the code was all assembly language.

It wasn't until well after the product was in the market and selling very well
that I converted it to C. I did so mainly to make it easier to maintain and
expand.

Could this have benefited from OO? Who cares?

To add insult to injury, the workstation portion of the solution was written
in --sit down-- Visual Basic! Yeah! VB. Did it matter that it wasn't Visual
C++? Nope. Was it ever converted to VC++? Are you friggin' kidding me? Nope.
It
was making plenty of money as it was.

Plenty of other projects were done using other languages, such as APL, Forth,
Lisp and, yes, C++.

My point is that none of this really matters. People have gone to the moon
without OO. Whole banking systems have been run without OO. OO has its place.
And, when applied correctly, it can be a lot of fun to work with.

Digging through one of the links in the posted article there's an article that
suggests new programmers should be taught Python without the OO stuff. What?
Crazy.

Every new programmer needs to start with C. In fact, I am convinced that every
new programmer needs to start with C and be tasked with writing an RTOS on a
small memory-limited 8 bit processor. And then write several applications that
run within that RTOS.

Then give them a budget of n clock cycles and m memory bytes and have them
create a solution for a particular problem that barely fits within these
constraints.

I would then expose them to Forth and ask that they re-write the same RTOS and
applications.

Then I'd move them up to Lisp.

From there move into one of the OO languages. My first OO language was C++,
but I suppose today I might opt to teach someone Java or something like that.
Definitely not Objective-C. Keep it simple.

The above progression will expose a new programmer to tons of really valuable
ideas and approaches to solving problems.

Then I'd get serious and ask them to write something like a genetic solver on
a workstation in all of these languages and optimize each solution for
absolute top performance (generations per second) first and absolute minimal
memory footprint as second batch. Lots of invaluable lessons in that exercise.

Now you have a programmer that can identify which technology to use under what
circumstance and for what reason. This is a programmer who knows how to get a
100x or 1000x performance gain out of a piece of code or how to get something
done 10x faster at the expense of raw performance. Here's a programmer who
understands exactly what is happening behind the code.

And, in the end the most important thing still is data representation. You can
make a program 100 times harder to write if you choose the wrong
representation for the problem being solved. Just like the first article
points out: search a small table and the "hacker" solution is almost trivial.

~~~
dxbydt
User1: Lets talk about OOP vs FP

User2: I made hundreds of thousands of dollars with assembler

User1: umm...

User2: Then I made hundreds of thousands of dollars with VB

User1: umm...

User2: Then I made hundreds of thousands of dollars with C

User1: umm...

User2: Then I made hundreds of thousands of dollars with VC++

User1: umm...

User2: Then I made hundreds of thousands of dollars with APL

User1: umm...

User2: Then I made hundreds of thousands of dollars with Forth

User1: umm...

User2: Then I made hundreds of thousands of dollars with Lisp

User1: umm...

User2: Then I made hundreds of thousands of dollars with Python

User1: umm...

User2: Then I made hundreds of thousands of dollars with RTOS

User1: Does HN stand for Hacker-News or Hundreds-Of-Thousands-Of-Dollars-News
?

User2: umm...

~~~
jwdunne
Why is this comment ranking above the much more insightful and beneficial
thread started by malandrew? It's partly a question of how HN works, but I
assumed comments with the highest karma float to the top? If that's true, I
must also question the community: why has this received so many votes?

It only serves to make robomartin look like an idiot by making fun of him. He
is most definitely not an idiot, and this commenter makes it clear that they
haven't read the entire comment.

I have a lot to take away from his comment and from the thread I mentioned.
I'm sure there are a few here who will also have big takeaways. It's content
and comments like these that make me love HN. The negativity from the
commenter can spiral out of control and fatally harm a community if endorsed.
I've seen it happen before to another community I deeply loved and it pains me
to be reminded of such negativity.

~~~
saurik
While I agree that the form of this comment was too harsh (so, your second
paragraph largely rings true for me), I was wondering why robomartin's comment
was at the top in the first place: it starts with a general insult to
commenters ("C'mon kids. Not again.") and then continues with a "proof by how
much money I made using it" against the apparent strawman "if you don't use
OOP you can't do things that are useful (such as make a lot of money)".

The article that Rob Pike was responding to, and Rob Pike's response, were
about whether people who use (or do not use) OOP somehow fundamentally better
understand "the nature of computation". There are people out there, some of
whom I have on my list of "personal heroes", who are quite clear when asked
that they know very little about computation or computers, and yet they wrote
a bunch of code and made tons of money anyway; that is simply irrelevant to a
discussion about understanding "the nature of computation".

(Yes: I have purposely ignored all of the mentions of an improved CS
curriculum in my primary comments here. All of that conversation was off-topic
for the argument being made by both the original paper and Rob's response: it
doesn't contribute in any way to the argument about whether knowing OOP or not
knowing OOP has anything to do with how to best understand "the nature of
computation", if nothing else as the things you learn first are often, in
pedagogic contexts, either approximations or downright incorrect, and are
later updated or replaced by later teachings.)

------
bsaul
Rob Pike does systems and middleware design, where the concepts are not very
numerous and are often purely technical.

OOP is made for business and real-world modeling, where the first part of the
job is to find a good definition/representation of the concepts you're talking
about. When you're talking about a banking system, you really don't care
whether the underlying memory representation of credit card properties will be
a hash dictionary or a struct. Your first concern is to clearly define what it
is using the correct words, so that you'll establish a clear mapping of
real-world concepts into programming structures.

When Rob Pike talks about data, he only sees memory and the related
algorithmic structure, because in his field those really are the only things
that matter. The fact that correct naming and proper conceptual representation
is sometimes the most important thing only speaks to someone who does business
or real-world modeling.

------
marshray
The format of the ppoop article is sufficiently similar that I thought it
might be a riff on the classic "The Evolution of a Haskell Programmer"
<http://www.willamette.edu/~fruehr/haskell/evolution.html> (itself derived
from a similar work).

------
k3n
> But there's good news. The era of hierarchy-driven, keyword-heavy, colored-
> ribbons-in-your-textbook orthodoxy seems past its peak. More people are
> talking about composition being a better design principle than inheritance.

Huh? Is he trying to say that composition is _not_ OOP, whereas inheritance
is?

I hate to break it to him, but composition is just an expression of the
composite design pattern, of which OO is part-and-parcel. You can't do either
inheritance or composition without using OO principles.

edit: Ok, I concede that I'm an idiot, but at least it resulted in a lot of
genuine discussion.

~~~
btilly
Consider carefully that Rob Pike is co-author of a well-known programming
language which doesn't really have inheritance, but in which composition is
trivially easy.

I think that he may know more about this particular topic than you do.

~~~
seanmcdirmid
Not only that: Rob Pike is one of the original Unix guys, and he is also a
principal of Plan 9/Inferno and their languages Alef/Limbo.

But appeals to authority are very shallow arguments. He should still be able
to out-debate someone who doesn't know who he is.

~~~
btilly
Agreed. If Rob Pike were in this discussion, he would be able to do so. Heck,
I could debate the merits as well.

However sometimes it does make sense to appeal to authority, if only to let
someone know how far out of their depth they are.

~~~
abraininavat
No, it doesn't. You show someone he's out of his depth by attacking his
argument, not by pointing out that the counter-argument comes from someone
famous. Are you seriously claiming that no one you've never heard of can
possibly argue against a point made by someone you've heard of? That's a
sheepish mindset.

~~~
btilly
I actually did address the point as well.

The fact that Rob Pike wrote a usable language that is not OOP in the sense
that the commenter thinks of OOP, in which you have composition without
inheritance, is direct evidence against the commenter's point of view that to
do composition you need to be doing OOP.
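For readers unfamiliar with Go, composition without inheritance looks roughly like this (a hedged sketch; the type names here are invented for illustration, not taken from any real codebase):

```go
package main

import "fmt"

// Logger is a small reusable behavior.
type Logger struct{ prefix string }

func (l Logger) Log(msg string) string { return l.prefix + ": " + msg }

// Server gains Log by embedding Logger: pure composition, with no
// inheritance relationship and no class hierarchy.
type Server struct {
	Logger
	addr string
}

func main() {
	s := Server{Logger{prefix: "srv"}, ":8080"}
	// Log is "promoted" from the embedded Logger field.
	fmt.Println(s.Log("started")) // srv: started
}
```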

------
michaelfeathers
As a guy who does a fair bit of reading and teaching, I can only sympathize
with the writer of the paper Rob is criticizing.

When you come up with examples you have to deal with two conflicting forces.
On one hand, they have to be simple enough not to be skipped over. On the
other hand, they have to be complex enough to seem real. The balance is never
right. It doesn't seem fair to criticize on that account. It's too easy.

------
oboizt
I'm glad for OOP. I'm also glad for functional features in the "cool"
languages. Thank you, Scala, for combining the best of both worlds. :)

------
ufo
> Every if and every switch should be viewed as a lost opportunity for dynamic
> polymorphism.

The truly sad thing about OOP is people not embracing the duality between if-
statement dispatching (pattern matching) and oop-style dynamic dispatching.

In situations where you have a fixed set of types it's better to use switch
statements, and even in other cases it is sometimes better to use if
statements to avoid scattering your code all over the place (especially if
your compiler warns you when you forget to handle one of the cases while
updating things).

<http://www.c2.com/cgi/wiki?ExpressionProblem>
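In Go terms (a hedged sketch; the shape types are invented for illustration), the closed-set case is a type switch that keeps all the logic for one operation in one function:

```go
package main

import "fmt"

// A fixed, closed set of shape types.
type Circle struct{ r float64 }
type Rect struct{ w, h float64 }

// area dispatches with a type switch: every case lives in one place
// instead of being scattered across per-type methods. The trade-off
// (easy to add operations, hard to add types, or vice versa) is
// exactly the expression problem linked above.
func area(shape interface{}) float64 {
	switch s := shape.(type) {
	case Circle:
		return 3.14159 * s.r * s.r
	case Rect:
		return s.w * s.h
	default:
		return 0 // a true closed sum type would rule this case out
	}
}

func main() {
	fmt.Println(area(Rect{3, 4})) // 12
}
```
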

~~~
Peaker
if-statement dispatching is not pattern matching.

Pattern matching yields more type information than if statements and can match
recursively on multiple arguments (which even multi-method dynamic dispatch
cannot). However, it always dispatches on a closed sum type, whereas OOP-style
dynamic dispatch is on an open sum type, so the mechanisms are useful in
different circumstances.

~~~
colomon
Can you expand on this difference?

~~~
Peaker
"If/switch" branching is sometimes called "boolean blindness":

<http://existentialtype.wordpress.com/2011/03/15/boolean-blindness/>

The reason is that no new type information is gained when you branch.

When you pattern match, however, you gain new names in scope that have new
types. This is new type information such that the branch choice represents new
information not only in the program position, but also at the type level.

For example:

    
    
      if(x != NULL) {
         .. compiler does not know if x is null or not ..
      } else {
         .. ditto
      }
    

Whereas with pattern matching:

    
    
      case x of
        Nothing -> .. Scope gets no new value.
                      x is a Maybe type, not usable as a direct value
        Just y -> .. Scope gets "y" as a value of the type
                     inside the maybe, which is directly usable.
    

Also, you can define a function like:

    
    
      f (Just (Right x)) (y:ys) = ...
      f (Just (Left e)) [] = ...
      f _ xs = ...
    

which pattern-matches multiple arguments at the same time, including recursive
pattern matching (Right inside Just, Left inside Just, etc).

If you meant the difference regarding open/closed sum types, I can expand on
that.

------
georgeecollins
I don't want to argue with anybody-- whatever you think is the right way to
program is fine with me but..

The article he pointed to was really funny. I think I worked with guys like
that who were so over the moon about OO that they made everything an object,
encapsulated a bunch of objects inside an object with no polymorphism. No
advantage that I could see except that it became a habit to make everything an
object.

Objects did a lot to advance programming and they still can be very useful.
Like many people have said here already: use the tool that is appropriate, and
keep an open mind.

But that is a funny code example.

------
teeja
Javascript's best feature is that almost any routine can be written without
objects. And those that have to be there are hidden from sight like objects
should be.

~~~
gizmo686
You do know that Javascript considers almost everything (including functions)
to be objects?

------
nsxwolf
There's nothing ridiculous about the OO pattern in the article Pike is talking
about.

When you're trying to demonstrate OO concepts, OO has a disadvantage because
it is needlessly complicated for the simple example you're trying to
illustrate. The hacker approach is always going to look more sensible than the
OO approach.

Once you get into very large enterprise systems the a-ha OO moments really
start to pile up.

------
dman
Can someone paste his comment inline here? Google+ is blocked here.

~~~
k3n
Rob Pike 10:31 AM - Public

A few years ago I saw this page:
<http://www.csis.pace.edu/~bergin/patterns/ppoop.html>

Local discussion focused on figuring out whether this was a joke or not. For a
while, we felt it had to be even though we knew it wasn't. Today I'm willing
to admit the authors believe what is written there. They are sincere.

But... I'd call myself a hacker, at least in their terminology, yet my
solution isn't there. Just search a small table! No objects required. Trivial
design, easy to extend, and cleaner than anything they present. Their "hacker
solution" is clumsy and verbose. Everything else on this page seems either
crazy or willfully obtuse. The lesson drawn at the end feels like misguided
epistemology, not technological insight.

It has become clear that OO zealots are afraid of data. They prefer statements
or constructors to initialized tables. They won't write table-driven tests.
Why is this? What mindset makes a multilevel type hierarchy with layered
abstractions better than searching a three-line table? I once heard someone
say he felt his job was to remove all while loops from everyone's code,
replacing them with object stuff. Wat?

But there's good news. The era of hierarchy-driven, keyword-heavy, colored-
ribbons-in-your-textbook orthodoxy seems past its peak. More people are talking
about composition being a better design principle than inheritance. And there
are even some willing to point at the naked emperor; see
<http://prog21.dadgum.com/156.html> for example. There are others. Or perhaps
it's just that the old guard is reasserting itself.

Object-oriented programming, whose essence is nothing more than programming
using data with associated behaviors, is a powerful idea. It truly is. But
it's not always the best idea. And it is not well served by the epistemology
heaped upon it.

Sometimes data is just data and functions are just functions.

------
commentzorro
Gotta agree with Rob Pike here on this. The path to salvation comes through
simplicity, not through complexity. Austerity is the way forward. Making do
with less is more.

------
dschiptsov
This is not a new realization. Some enlightened people never allow themselves
to be deluded. Brian Harvey is one of them.

OOP is just a set of conventions which could be implemented _efficiently_ even
in Scheme. CLOS is another canonical example which people prefer not to notice
to maintain their comfortable reality distortion.

Everything was solved long ago by much brighter minds than those now
populating the Java/JavaScript world. Just imagine (though almost no one can)
how much cleaner, more efficient and more natural it would be to implement
something like Hadoop in Common Lisp or Erlang, passing data _and_ functions
as first-class S-expressions or even packed Erlang binaries. Instead they
re-implemented a few concepts from FP in a Java way.

Here is a heretical video about what OOP really is:
<http://www.youtube.com/watch?v=qbUJXsKAtU0&feature=edu&list=PL6879A8466C44A5D5>
;)

~~~
jlgreco
I think you are on to something, but I don't think the root of the issue is
OOP but rather a "C-like" syntax and all the baggage that seems to come along
with that.

C is great (I absolutely adore it) but despite the numerous reasons that C++
did it I think we would be in a better position today if the fad of making new
languages "C-like", even if just superficially, never took off. At each step,
it is hard to point the finger at any one person (even in retrospect, it is
hard to really _fault_ Stroustrup), but I feel nevertheless too many prior
advances were ignored along the way to modern Java for far too long.

A terribly superficial but, I think, potent example of how "C-like" has done
harm is that languages have kept using its abusive declaration syntax for so
long. It is so clearly absurd and unnecessary that it is a wonder people
didn't start dropping it sooner. Instead, as in the case of Java, they seem to
have just redefined what is idiomatic in order to avoid the harsher cases seen
in idiomatic C. At least Go strays from the example set by C, though it still
falls a bit short, I think.

Basically I see the primary driving force of many trends in programming,
including to some limited degree OOP, to be pain inherited from C.

~~~
mamcx
Agree. That is something I dislike about Go: a lot of right things, but
ugly-as-hell C syntax. For people who love C-like syntax it is hard to
understand how bad it tastes to people who love something else. It's like OO
vs. functional, or C-like vs. anything else.

~~~
jlgreco
Yeah. Go strayed far enough (compared to, say, Java; it is absurd how closely
_they_ toe the line...) that I can enjoy it, but further still would be nice.
Rehashed declaration syntax and multiple return values are welcome changes.
The rest? Eh, I would still prefer s-exps. Oh well.

------
xakshay
This is just a flame war. Obviously using the right tool for the right problem
makes sense. There is no silver bullet.

~~~
bryanlarsen
This was not so obvious a few years ago. A lot of people believed that OOP
_always_ made programming better, and was a requirement to creating
maintainable programs. Some still do.

~~~
oinksoft
FP had its little Joe-programmer renaissance ca. 2007 slowing in 2009. It's
been a while since anything approaching a majority thought OO-or-bust for all
purposes, perhaps 2004-ish.

------
frozenport
OOP's unique features can't deal with massively parallel CPUs. In the future
the following will not be allowed for the core data-processing code in your
program:

    
    
        Recursion.
        Variables declared with the volatile keyword.
        Virtual functions.
        Pointers to functions.
        Pointers to member functions.
        Pointers in structures.
        Pointers to pointers.
        goto statements.
        Labeled statements.
        try, catch, or throw statements.
        Global variables.
        Static variables. Use the tile_static keyword instead.
        dynamic_cast casts.
        The typeid operator.
        asm declarations.
        Varargs.
    

<http://msdn.microsoft.com/en-us/library/hh388953.aspx>

OOP may live at the top level of granularity, but when it comes to working
with your data, OOP is not compatible. You can choose which module to run with
polymorphism, but your data can't be processed with virtual functions.

~~~
wladimir
With "in the future" you mean "currently, in a restricted subset of C that
runs on massive numbers of very simple cores". Massive parallelism is also
possible with somewhat less simple cores; CUDA and OpenCL also started from
the "bare subset" philosophy but gradually expanded to allow more flexibility
because developers demand it.

And of course massive parallelism is also possible with normal CPUs, in a
cluster or in the cloud, and there an entirely different set of restrictions
holds, not so much on the programming language but on higher-level design.

Whether or not you need to restrict yourself to a subset of C completely
depends on your requirements. The future is heterogeneous, not homogeneous [1].

1\. <http://herbsutter.com/welcome-to-the-jungle/>

~~~
frozenport
Notice that C++AMP is a language extension designed specifically for
heterogeneous computing, and it is where this list comes from.

The article you posted was rather extensive, and as somebody who works in HPC
I can say that I disagree with many points in the link provided. Also it was
too god damn long to read. Most notably, somebody needs to actually write the
cloud implementation, which will still require models like OpenACC or CUDA.

There is no reason that whatever mechanism was used to push parallelism onto
the GPU can't be used for moving it onto the cloud.

There is an infinite amount of musing that can be made against, and possibly
in favor of, expanding the acceptable language features in threading kernels.
Yet it is safe to say three things.

1) Simple kernels run faster

2) Current specifications are closer to C++AMP's restrict(amp)

3) Cloud computing uses GPUs for the data crunching

~~~
wladimir
My point is that you're focusing on low-level kernels only, which obviously
need to be simple and highly optimized. However, the number of people actually
writing low-level HPC code (the super-optimized number-crunching inner loops,
usually embarrassingly parallel), compared to high-level code, is very small,
and it certainly isn't the only focus of "the future". It's safe to say that the
number of platforms that support more advanced programming features (be it
object orientation or closures or message passing or...) will only increase,
not decrease. Of course no one wise will be calling virtual functions in inner
loops, but they are perfectly fine to use for control flow, configurability,
modularity, etc.

