Why Is Object-Oriented Programming Useful? With a Role-Playing Game Example (inventwithpython.com)
75 points by AlSweigart on Dec 2, 2014 | 85 comments



Since this is aimed at new developers, I would encourage anyone who falls into that category and is reading this to also look at alternate programming methodologies. The key component of OOP is the mixture of code and data into "objects". This can be very useful for physical simulations such as games, etc., since the real physical world is actually made of objects.

However, many feel that applying OOP to software that is not for physical simulation has led to a huge amount of wasted effort over the past few decades. The problem is with code reuse. The only real form of code reuse that OOP addresses is direct object inheritance. If you want to reuse a piece of code, the way to do it is to make an object that is a 'subclass' of an object that has the code you want to use. The actual relationship of these objects is often not that simple, and people often create baroque inheritance trees to force their logic into the OOP pattern. Worse, OOP is built into many languages, which leaves you no choice.

An alternate pattern is functional programming, where a program is modeled as a series of data transformations, instead of a universe of interacting 'objects'. To write the code, one looks at what data will go into the software, and what data will need to come out. After all, if the right data is coming out of the program at the right times, it works.

Instead of breaking code into objects, you write functions that process the data correctly, and assemble them into larger structures, often resembling pipelines. The nice thing is that a lot of these functions are very reusable across projects and within a project. Avoiding the messy mixture of code and data allows you to identify the common patterns in your code, and refactor and reuse more easily.

EDIT: Here are a couple of libraries which have really facilitated functional programming for me in node.js

- http://ramda.github.io/ramdocs/docs/

- https://github.com/dominictarr/pull-stream
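
To make the pipeline idea concrete, here's a toy sketch of that style in Rust (made-up data and names, just to show the shape of a data-in, data-out pipeline):

  // A program as data in -> transformations -> data out.
  // Each step is a small, reusable pure function; no object holds state.
  fn main() {
      let readings = vec![3.0_f64, -1.0, 4.5, 9.2, -0.3];

      let report: Vec<String> = readings
          .iter()
          .filter(|r| **r >= 0.0)         // drop invalid (negative) readings
          .map(|c| c * 1.8 + 32.0)        // transform: Celsius -> Fahrenheit
          .map(|f| format!("{:.1} F", f)) // format for output
          .collect();

      println!("{:?}", report); // ["37.4 F", "40.1 F", "48.6 F"]
  }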


This criticism against OO programming is more of a criticism against junior-level OO programming. A lot of beginning OO programmers go crazy with complicated object models that aren't necessary. Composition is often a better way to go than inheritance.

http://en.wikipedia.org/wiki/Composition_over_inheritance
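
A rough sketch of the idea (invented types): instead of deriving Motorcycle from some WheeledVehicle base class, you build it out of shared parts.

  struct Engine { horsepower: u32 }
  struct Wheels { count: u8 }

  // Car and Motorcycle share components, not an ancestor.
  #[allow(dead_code)]
  struct Car        { engine: Engine, wheels: Wheels, trunk_liters: u32 }
  struct Motorcycle { engine: Engine, wheels: Wheels }

  impl Motorcycle {
      fn describe(&self) -> String {
          format!("{} hp on {} wheels", self.engine.horsepower, self.wheels.count)
      }
  }

  fn main() {
      let bike = Motorcycle {
          engine: Engine { horsepower: 75 },
          wheels: Wheels { count: 2 },
      };
      println!("{}", bike.describe()); // 75 hp on 2 wheels
  }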


To build on what you're saying, one could argue that functional programming is also a way to model the world; that is, if your model of the world is a series of snapshots of instants, as opposed to mutable stateful objects. Abelson argues this in one of the SICP lectures:

http://ocw.mit.edu/courses/electrical-engineering-and-comput...


Like you say, the main benefit of OOP is that it matches the way we perceive reality. That can make it intuitive to design software. However, this often comes at the expense of performance since the hardware that executes your system bears no resemblance to the way we perceive reality.

Take your physical simulations, for example. You may have a game scene with hundreds or thousands of entities. It's natural to try to model these as objects. In every frame you will then call a method on each object that applies a translation to its position. Very intuitive and simple. But unfortunately extremely inefficient for a computer to execute. You will likely pay a number of cache misses while jumping to each of your objects. And once you get there, the rest of the cache line where your coordinates sit is full of irrelevant stuff and therefore wasted (the other fields of your object).

For performance, you would want to have all your entities' coordinates in contiguous memory one after another, and then apply a function to translate them all at once. Pretty much the opposite design from where OOP naturally leads you.
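
Roughly the two layouts being contrasted (a toy sketch with hypothetical fields):

  // Array of structs: each position sits next to unrelated fields, so a
  // pass over positions drags the rest of every object through the cache.
  #[allow(dead_code)]
  struct Entity {
      x: f32,
      y: f32,
      health: u32,
      name: String,
      // ...many more fields
  }

  #[allow(dead_code)]
  fn update_positions_aos(entities: &mut [Entity], dx: f32, dy: f32) {
      for e in entities.iter_mut() {
          e.x += dx;
          e.y += dy;
      }
  }

  // Struct of arrays: all x's and y's are contiguous, so the same update
  // streams straight through memory (and is friendly to SIMD).
  struct Positions {
      xs: Vec<f32>,
      ys: Vec<f32>,
  }

  fn update_positions_soa(p: &mut Positions, dx: f32, dy: f32) {
      for x in p.xs.iter_mut() { *x += dx; }
      for y in p.ys.iter_mut() { *y += dy; }
  }

  fn main() {
      let mut p = Positions { xs: vec![0.0, 1.0], ys: vec![0.0, 1.0] };
      update_positions_soa(&mut p, 0.5, 0.5);
      println!("{:?} {:?}", p.xs, p.ys); // [0.5, 1.5] [0.5, 1.5]
  }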


I kind of stopped understanding this explanation of inheritance (when I switched to composition, of course ;) ). When I look at the world I see a universe of composition: we get molecules by composing atoms, atoms by composing protons, neutrons, etc., little "modules" that come together. More practically, everyday objects seem to follow this too. A motorcycle and a car aren't both "refined" from some "wheeled vehicle" that car companies buy; rather they share some similar components that car companies buy. Their similarities are thus either emergent (interface/protocol) or a direct result of their shared components.

In Unity, for example, you don't inherit every hit-testable thing from some class (like you would in Cocoa), but rather attach a collider to things that can be hit tested. The collider worries about one concern, and you can therefore conveniently attach multiple colliders to one object (which makes it easy to form complex hit regions). Similarly, things that are visible have renderers attached. This matches my intuition of working with real objects, where I buy parts and put them together, not think to myself which "master" part my new thing derives from, purchase it, then rip it open and start modifying it as step 1.


It's debatable whether OOP "matches the way we perceive reality". Many have come to believe this, but is it true? I don't perceive all of reality as a strict hierarchy of things constructed out of templates, with "is a" and "has a" relationships between them.

I don't think the main problem with OOP is performance, either. The problem is that it's not always the right or the most natural design approach, regardless of any performance considerations.


> It's debatable whether OOP "matches the way we perceive reality". Many have come to believe this, but is it true?

No, it isn't true. But neither do we perceive all of reality as a collection of entities that never change but are used to create new unchanging entities by a series of idempotent mappings from domains to codomains.

But both abstractions are useful as ways to organize your thoughts, or as languages we can use to describe processes in an organized manner. Which one is more useful is often case specific - with the case covering not only the problem space but also the individual psychology of the person doing the work.

Many find that neither is so useful that they would be comfortable using it to the exclusion of the other, which is why there are so many functional languages with object-oriented features such as Caml and why so many major object-oriented languages such as C++, C# and Java have been acquiring increasingly many functional features over the past decade.


I'm not sure why immutability is being lumped in with OOP alternatives; it seems orthogonal to me. You can have immutable AND mutable versions of OOP, after all. Similarly, you can have immutable and mutable composability. To me the main issue with OOP not matching reality is having people put things into hierarchies instead of thinking of the common elements found in them. For example, to me, thinking about composing objects in Go with its nice auto-applying interfaces is similar to Haskell's data with typeclasses, despite one being totally mutable and the other immutable (obviously these feature sets are quite different; my point is in how you think about "the world", i.e. a circle in both Go and Haskell doesn't inherit from anything but may abide by a contract thanks to either interfaces or typeclasses, getting you to talk about things in terms of their features and not their inheritance).


Inheritance is not necessary for OOP.

For example, if you have:

  struct Cat {
    meow: String
  }

  impl Cat {
    fn talk(&self) {
      println!("{}", self.meow);
    }
  }

  fn main() {
    let bob = Cat { meow: String::from("Mroowwwww!") };
    bob.talk(); // Prints "Mroowwwww!" Good job, bob.
  }
Cat lumps the data and its associated functionality together into an object. This is object-oriented programming even if you never bring inheritance into the picture.

Both Go and Rust have objects, they simply go by the name struct. Haskell does not follow the object-oriented approach.

I think the confusion surrounding OOP is that people associate OOP with a particular implementation of it, like Java.


Sure, but with that wide a definition it's really hard for a programming language not to be object oriented. C can be just as object oriented with structs, unless you consider bob.talk() to be completely conceptually different from talk(bob), where talk's definition expects a Cat struct. Similarly, under your example, Haskell is also object oriented, since you have data types with fields and functions that can only operate on that kind of data (again, at this point you'd really be arguing that the order of function/caller is the "OOP differentiator", since, ignoring inheritance, there is no difference between Cat's talk method and a talk function that takes in a Cat).

In other words, I don't think anyone is ever arguing against organization of data and functions into more abstract "types". Pascal does this with records, C with structs, Haskell with data types, C++ with classes, JavaScript with prototypes, etc. So if that's all it takes to be "OOP" (in other words, not forcing you to only use ints and floats), then I guess I agree that OOP is a better representation of the world. But now I'd argue that the more interesting discussion is between the has-a and is-a versions of this.
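
Spelling out that equivalence with the Cat example from above (just a sketch):

  struct Cat { meow: String }

  // "Bound" as a method...
  impl Cat {
      fn talk(&self) { println!("{}", self.meow); }
  }

  // ...or as a free function whose definition expects a Cat. Same
  // information, different call syntax.
  fn talk(cat: &Cat) { println!("{}", cat.meow); }

  fn main() {
      let bob = Cat { meow: String::from("Mroowwwww!") };
      bob.talk();  // method syntax
      talk(&bob);  // free-function syntax
  }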


You have to draw the line somewhere. The structure of a typical program in an OOP language such as Go, Rust, Java, C++, C#, etc. is not comparable to that of a typical Haskell program.

All of the languages that are considered OO rely heavily on the binding of data and functions into objects.

Others will have deeper ideas of what an OO language needs to have, but the basic definition is about objects. That is, the binding of data and functions into an object.

In modern OOP discussion, it seems many already have come to the conclusion that inheritance is something to be used very sparingly, and can be done away with in favor of composition in most use cases.


I agree that you have to draw the line somewhere, but I believe the is-a/has-a dichotomy more accurately separates the different modes of thinking. To me the structure of a Go program is completely distinguishable from one in C++ (precisely because of the lack of inheritance). When I look at the design of a C++ program (or, say, Obj-C), it's all about class hierarchies. The docs are all about the class diagrams, and step 1 of most of those programs is usually "subclass ___". It immediately drops you into that view of the world, and I believe that with that view comes the guiding hand of your program's design.

Compare that to Go, which focuses on the traits of objects instead of their incidental ancestry. In Go you'd define a function or method applying to an abstract interface that has certain properties. For example, I would say "give me an object that has a show method", not "give me something inheriting from Printable". This is completely analogous to the very abstract typeclass-style programming you do in Haskell ("give me something that derives Show"). Haskell object architecture is all about defining an abstract typeclass and reasoning about what you can do given these existing methods. You then supply an implementation that fits the typeclass definition: exactly the same abstract analysis of fundamental properties, divorced from their specific owners, as inheritance-less interface/protocol programming in a language like Go.

Again, there is quite literally no difference in "binding" a function to an object through the dot syntax vs. through its type. If in Haskell you say the meow function applies to the Cat data type, it's not any different from having a meow() method on a Cat in C++; neither can be called on anything else, so it's quite bound.


Also, I agree with your point about C being used in an OO fashion. I think this goes to show that a language doesn't need to specifically support objects/methods to be wielded in an OO manner; it is just more inconvenient.

Haskell lacks basic OO features but that in itself is a feature. Methods are often procedural in nature. They're also not generalized since they are a concrete implementation for a specific type. Both of these are completely contrary to Haskell style.


Rust's trait system is essentially identical to Haskell's typeclasses, and Haskell allows infix operators which work exactly like method calls, except that you don't have to put a period between the data and the function. So it seems to me that by your definition Haskell is also object oriented.
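
For example, the "give me something that has a show method" idea from upthread looks about the same on the Rust side (a hypothetical Show trait for illustration, not the standard library's Display):

  // A trait describes a capability; any type can opt in after the fact.
  trait Show {
      fn show(&self) -> String;
  }

  struct Circle { radius: f64 }

  impl Show for Circle {
      fn show(&self) -> String {
          format!("circle with radius {}", self.radius)
      }
  }

  // Accepts anything that implements Show; Circle inherits from nothing.
  fn print_it<T: Show>(value: &T) {
      println!("{}", value.show());
  }

  fn main() {
      print_it(&Circle { radius: 2.0 });
  }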

(As for my personal opinion: I think the confusion around OOP is largely a product of it not having a precise definition).


Agreed. I kind of miss the good old days when the dichotomy was functional/imperative (which is much more valid), and the prevalent rhetoric didn't seem quite so eager to erase the likes of Caml and Dylan from the pages of history.


> Many have come to believe this, but is it true? I don't perceive all of reality as a strict hierarchy of things constructed out of templates, with "is a" and "has a" relationships between them.

The strict hierarchy thing is more a feature of the class-oriented programming of C++ and its close relatives (an artifact of what made sense when approximating OOP in something merged into the existing syntax and type system of C) than it is inherent to OOP.

Unfortunately, the popularity of C++ and Java means that it's hard to tell, when people are talking about "Object Oriented", whether they mean it in the general sense that existed prior to those languages (which are just one of many approaches to facilitating it), or in the narrow, static-class-oriented sense which evolved out of the particular approaches of those languages and their close relatives, which is something different.


At a high level a lot of systems are made of some kind of objects or actors: things that transmit and receive signals or messages. Telecoms, the internet, Unix processes, microservices, smoke signals, human conversation and instruction. So maybe you can do pure functional down below, but at a high level your system will always have some kind of objects (for want of a better word). Plus, at a high level these systems don't share state; they can only communicate via signals. This turns out to be a good enough solution for concurrency. People can demonstrate how you can still get deadlocks etc., but if the granularity is large enough this rarely turns out to be a problem in practice.


Is-a and has-a, and more to the point, naming things, are just an outgrowth of our natural linguistic capabilities. The alternative is anonymous immutable values with structural comparisons (aka functional programming), which we've only been studying as math for a couple of thousand years or so (vs. around 100k years for language).


I don't know if the alternative is functional programming or structural comparisons, and I wasn't arguing that in this particular case. I agree about naming things, but I don't consider it an OO exclusive.

What I'm saying is that there are a couple of assertions, more or less accepted in the industry, that may be flawed or simply false, and that may have led to the current mess of software design:

1- That OO decomposition and design reflects "how we perceive the world".

2- That if there is merit to this notion that we naturally decompose/understand the world in terms of objects, has-a and is-a, it maps to the set of formalisms we usually call OO Design & Programming. For example, it's obvious to me that "message passing" is a (sometimes useful) abstraction completely alien to how we perceive the world -- nobody thinks in terms of sending messages to objects. A message is something you send to another sentient being, preferably a person.


> I agree about naming things, but I don't consider it an OO exclusive.

It is not an OO exclusive but it is at the foundation of OO (naming things, relating them, and such). Math (outside of maybe graph theory) is the exact opposite (names lie about fundamental truth, let's work with truth); that is the direction that pure functional programming really takes (many reduce it to immutability, but that is just a consequence of not having identity and aliases).

> 1- That OO decomposition and design reflects "how we perceive the world".

That is not really how it's pushed. It is more about how we "talk" about the world. After all, humans are writing the programs, probably together, talking to stakeholders and all. OO is not the best way to talk to the computer, but it might be the best way to talk to humans about programs, until we can evolve into more Vulcan-like creatures.

Also, our evolved capability for language has greatly influenced how we think and solve problems. People find it easy to talk about things, ascribe names to them, and arbitrarily relate them to other things. This really gets in the way of learning math for most people (names and ascribed relations are informal; they can lie), but it is very accessible to deal with the computer world like you would the physical world (even if it is often technically wrong).

> For example, it's obvious to me "message passing" is a (sometimes useful) abstraction completely alien to how we perceive the world -- nobody thinks in terms of sending messages to the objects. A message is something you send to another sentient being, preferably a person.

Many people anthropomorphize their objects, which again is naturally human; again, objects are not for computers, they are for people.


> [Naming things] is not an OO exclusive but it is at the foundation of OO (naming things, relating them, and such).

Naming things is at the foundation of most human activities, making it less than useful to define OO.

> [That OO decomposition and design reflects "how we perceive the world"] is not really how it's pushed.

Sorry, but I disagree. It is. In this very HN thread we're replying to, for example.

> Many people anthropomorphize their objects, which again is naturally human; again, objects are not for computers, they are for people.

Now you are trying too hard. No, it's unnatural to speak of "passing messages to objects". This isn't how people understand the real world (except for old ladies who speak to their plants, but they are not the target audience of Smalltalk or Objective-C), and in fact message passing is one of the most alien (and harder to understand) aspects of the OOP abstraction. So alien, in fact, that some OO languages do away with this terminology.


> Naming things is at the foundation of most human activities, making it less than useful to define OO.

Yep, thinking in terms of "objects" is kind of broad and fundamental. It is preferable to the "OO is Java" definition that is often pushed.

> Sorry, but I disagree. It is. In this very HN thread we're replying to, for example.

I was presenting what I thought was what really is seen as the benefits of OOP.

> Now you are trying too hard. No, it's unnatural to speak of "passing messages to objects". This isn't how people understand the real world (except for old ladies who speak to their plants, but they are not the target audience of Smalltalk or Objective-C), and in fact message passing is one of the most alien (and harder to understand) aspects of the OOP abstraction. So alien, in fact, that some OO languages do away with this terminology.

Anthropomorphisms are as old as Aesop; heck, before we had much formal science, they were our only way of understanding things (read Plato and Aristotle). Message passing is just communicating with something; you might not ever think "we have to tell that object to update itself", but plenty of people do. Note that I'm not really a big fan of message passing, and it is hardly something exclusive to OO (my colleagues are very much into RPC without an object in sight).


> Anthropomorphisms are as old as Aesop; heck, before we had much formal science, they were our only way of understanding things (read Plato and Aristotle). Message passing is just communicating with something; you might not ever think "we have to tell that object to update itself", but plenty of people do.

I understand what you're saying, I just disagree with it. No-one I know thinks in terms of "I have to send the teapot the message to pour tea"; it's just an unnatural way of thinking. Our old lady from the example may think of her lovely Chinese teapot as a "she", might even name it, but she still won't think in terms of sending messages to pour tea. People don't think that way. And that's alright: OOP is a formalism (like Math, only probably less formal), not a "natural" description of the world.

> Note I'm not really a big fan of message passing, and it is hardly something exclusive to OO (my colleagues are very much into RPC without an object in sight).

Message passing was defined by the inventor of OOP as its defining feature. Of course, Java, C++ et al. then subverted this, but that's an entirely different debate.

----

To make this debate more constructive: I think OOP is valuable as a way to do modularization. Modularization is a worthy goal, but OOP is just one way to do it. Not the best way, but the one most programmers are familiar with, regrettably to the exclusion of other approaches.


I disagree with Kay on his assessment of OO, I disagree that he's even the inventor of it (though he coined the term, word out to the Scandinavians and even Sutherland).

I would claim that OOP is not about modularization at all; it is a way of thinking meant to help humans solve problems with a computer. It's a crutch, it is easy to apply, and it has lots of limitations. FP (thinking in terms of anonymous values vs. named objects) is an alternative, though more experienced programmers often use both ways of thinking where appropriate (which is why you'll see OOP entity interfaces in languages like Clojure, and lots of immutable structs in C#). Neither can claim a decisive benefit in modularity or code reuse; you have to work extra hard for those.

We can definitely agree to disagree on this.


OOP is not that slow in a typical application. Pure functional programming's emphasis on copying everything, which causes excessive garbage, has an impact on performance, too.

OOP's problem with performance is that it doesn't play nicely with a modern CPU's L1/L2 cache, since most typical OOP implementations don't allocate objects in contiguous memory. But only a very narrow set of problem niches, such as high-data-volume, high-performance simulation, requires packing data tightly to take advantage of the L1/L2 cache. The hundreds or thousands of entities using OOP in a game won't break a sweat. The cache problem would have an impact when there are hundreds of thousands or millions of entities and you need to process them 60 times per second.

The L1/L2 cache performance hit can exist in a functional program, too. A list and its cells in Lisp are not allocated contiguously. Basically, any non-array data structure will not play nicely with the cache.

That just means using the right data structures for the right job. For high-data-volume, high-performance processing, use arrays. Whether they're used in the context of OOP is irrelevant.


Ugh, pure functional programming language implementations don't generally actually copy everything. Because data structures are often immutable, pointers can be used rather than copying.

Just a minor nit.


You're assuming that an object is always going to be modelled as a heap-allocated struct-like block of memory containing all of its state. There's no reason that has to be the case, though. You could create a system with an object-oriented programming model whose data storage was array-backed/table-oriented. You could probably even code something like that up in a fully GC'd OO language like C#. The way Java and C# manage string intern tables is an example of the kind of specialized data backing that's possible behind externally OO interfaces.
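
A toy sketch of that idea (hypothetical names): callers see an object with methods, while the storage behind it is plain contiguous arrays.

  // Externally: an object you ask about entities.
  // Internally: table-oriented storage, one contiguous column per field.
  struct World {
      xs: Vec<f32>,
      ys: Vec<f32>,
  }

  impl World {
      fn new() -> Self {
          World { xs: Vec::new(), ys: Vec::new() }
      }

      // Returns an entity id (an index into the tables), not a heap object.
      fn spawn(&mut self, x: f32, y: f32) -> usize {
          self.xs.push(x);
          self.ys.push(y);
          self.xs.len() - 1
      }

      fn position(&self, id: usize) -> (f32, f32) {
          (self.xs[id], self.ys[id])
      }

      // The batch update runs over contiguous memory, cache-friendly.
      fn translate_all(&mut self, dx: f32, dy: f32) {
          for x in self.xs.iter_mut() { *x += dx; }
          for y in self.ys.iter_mut() { *y += dy; }
      }
  }

  fn main() {
      let mut world = World::new();
      let hero = world.spawn(0.0, 0.0);
      world.translate_all(1.0, 2.0);
      println!("{:?}", world.position(hero)); // (1.0, 2.0)
  }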


Thank you for raising this. All too often debates about OOP get hung up on the overuse of inheritance. That’s a fair point, to be sure, but IMHO there are two much more fundamental limitations that are inherent in the OOP way of doing things.

1. In OOP, one object is special.

Whether it’s the object that is receiving a message or the object that gets a special designation like `this` or `self`, something is always given a special emphasis in a purely OOP design.

However, many useful algorithms take multiple inputs where none needs to be singled out in that way. Where do such algorithms live?

Trivial example: A symmetric binary operator like +.

Grown-up example: You don’t really model funds transfers by sending a `debit` message to one instance of a `BankAccount` class and a `credit` message to another instance.

2. In OOP, one object is special.

This is a generalisation of the point that zmb_ made. Usually in programming we don’t work only with single, self-contained data points. Most interesting things happen when we manipulate structured data and consider relationships between data points.

In purely OOP designs, we often see classes representing single data points, and then further classes to represent containers of that type. However, if the implementation of each data point is locked up behind the interface to a particular class, and then we have to access the points in each container through the container’s own generic interface, we are constrained in the access patterns we can use. Emphasizing individual, self-contained data points is often the wrong level of granularity for promoting either code reuse or efficient designs.

Example 1: What if we want to implement a more efficient representation of a data set that supports a different access pattern, such as the kind of contiguous memory case zmb_ mentioned?

Example 2: What if we want to enforce constraints on a whole set of data, or model relationships between structured data of different types?

In each case, it may be very difficult to reuse existing algorithms that are locked up in methods on existing single-data-point classes. We might want to store the underlying data in a different format, and converting between formats just to access functionality artificially tied to a specific variation is likely to be awkward and inefficient.

If we instead build our modules as a set of fundamental data types and a set of accompanying algorithms — which could be as simple as a library of C structs/enums and functions using them — then the artificial barrier doesn’t arise. We can still present a clean interface/implementation for each module as a whole, but we aren’t forcing the implementation details to be separated just because Everything Must Be A Class.
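
To make the funds-transfer example from point 1 concrete: neither account has to be the special receiver if the transfer is just a function over both of them plus an amount (a sketch with invented types, nothing like a real banking API):

  struct Account { balance_cents: i64 }

  // The algorithm takes two accounts; no single object "receives" it.
  fn transfer(from: &mut Account, to: &mut Account, amount_cents: i64) -> Result<(), String> {
      if from.balance_cents < amount_cents {
          return Err(String::from("insufficient funds"));
      }
      from.balance_cents -= amount_cents;
      to.balance_cents += amount_cents;
      Ok(())
  }

  fn main() {
      let mut a = Account { balance_cents: 10_000 };
      let mut b = Account { balance_cents: 0 };
      transfer(&mut a, &mut b, 2_500).unwrap();
      println!("{} {}", a.balance_cents, b.balance_cents); // 7500 2500
  }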


In #1, your "trivial" example is actually more of a real issue with OOP than your "grown up" one, since the the latter just illustrates poorly chosen objects and messages, not an issue with OOP as such.

In #2, you again just illustrate a potential poor choice of objects. In both examples for this point, the answer to the "What if" is "you create an object that models the data set, rather than the individual data points". This is not only consistent with OOP, it's fairly routine.


> In both examples for this point, the answer to the "What if" is "you create an object that models the data set, rather than the individual data points". This is not only consistent with OOP, it's fairly routine.

And how are you going to implement those “data set” objects, if the only tool you have in your armoury is more objects? Where are you going to put code that works with the underlying data and is useful regardless of the specific structure that data happens to be kept in at the time?


You can model functions as objects and run all sorts of function.call(args) methods, but if it is done out of necessity, it doesn't matter if it is done regularly or consistently with OOP -- OOP is just not the best way to go with this sort of thing.


>The only real form of code reuse that OOP addresses is direct object inheritance.

Composition is a far more useful and effective means of code reuse in OOP.


Yet composition in the OOP sense seems like it can be easily achieved in any other paradigm - so OOP does not seem to address composition in any distinct way.


I disagree. I think the criticisms in the GGP post are largely targeted at a very particular form of OOP (which is granted probably the most popular form): That popularized by C++ and later doubled down on by Java. Inheritance heavy, overly mingling interface and structure, etc.

The Smalltalk/Ruby/arguably Erlang form of OO does not really treat composition and inheritance as things at odds -- the way the object you're talking to has achieved its composition (where inheritance is a specialized form of composition) is basically irrelevant to you as the caller. That form of OO does address composition, and quite well IMO.


I agree with you but it is worth noting that many people I've come across will claim that the other forms of OO aren't really OO. For instance the OO system that we see in CLOS, Dylan, R's S4, etc. I vehemently disagree with them but my point is that for many people OO is by definition the one "popularized by C++ and later doubled down on by Java"


Indeed. For many people, the question "what is OO programming?" has an answer that mentions the word "class" a few times before it mentions the word "object".

That's why I call that subset of OO "class-oriented programming", instead.


I'm stealing that


> I agree with you but it is worth noting that many people I've come across will claim that the other forms of OO aren't really OO.

Any definition of "Object Oriented Programming Language" which excludes Smalltalk and Self is...disturbing.


Fully agreed. But how many Java/C++ programmers doing "OOP" would you guess are familiar with Smalltalk and Self? I'm not talking about people interested in PL theory, but the thousands of programmers out there working for the mainstream industry and "doing OOP".

I'm going to guess: not many.

Lest I seem dismissive or condescending: I shamefully count myself among those industry programmers. I'm not at all familiar with Self, and my only knowledge of Smalltalk comes from one CompSci course and a presentation at OOPSLA.


Recently I was making a case to coworkers that the typeclass pattern in Scala (and presumably typeclasses in Haskell, I'm not personally familiar) was a completely OOP construct in the vein of CLOS/Dylan. I was met with raised eyebrows all around, and some of the people in the room were pretty far beyond "those industry programmers" so the problem isn't just one of naiveté


"Smalltalk and Self" more dramatically illustrates why its disturbing (to people familiar with the history of OOP), but what applies to them also applies to their more-currently-popular descendants, Ruby and JavaScript (among others).


Agreed fully


Composition in this sense is talking specifically about composition of objects. By definition, that's only going to be possible in languages with objects. No other paradigm achieves that (for better or worse).


From what I understand, composition in OO basically means that I put another object as a field in my new object. Is that correct? If so, I don't really see the fundamental difference from doing the same thing with a struct/record: put another struct/record inside another struct. Especially if I can use encapsulation to hide the internal structure of the struct/record.


Game developers have been big on data-oriented design (http://www.asawicki.info/news_1422_data-oriented_design_-_li...) in recent years. It's funny, because data-oriented design examples tend to look like the original ("bad"?) array-of-values example in the blog post. Most articles on DOD focus on performance, which seems of least concern to most software engineers, but I think it helps with understanding how your data is transformed and results in easier-to-understand code.

I don't see how OOP is any more applicable to games than it is to other software. In a typical business or social CRUD app you can represent things in the same way you could in an OOP game. "Hero" becomes "User", "Monster" becomes "Friend", "Inventory" becomes "Account", etc.

Otherwise I strongly agree with you. I think all software should be written in a more functional or data-oriented style. The biggest reason I'd write OOP code now is that many game designers and programmers, and most popular game engines and frameworks, prefer OOP.


> The only real form of code reuse that OOP addresses is direct object inheritance.

Not really, or at least, not with a well-designed system. Give me a system with small, composable objects. I can reuse code by creating a new object for one of the components of the system, or by writing an object that knows how to talk to previously existing objects. Sandi Metz's POODR [1] should be required reading for anyone who is writing object-oriented code.

Functional programming is also worth learning, and Martin Odersky's Coursera course [2] shows how Scala can be used for code that is both object oriented and functional.

[1]: http://www.poodr.com

[2]: https://www.coursera.org/course/progfun


I've always had a different view about OO vs Functional.

OO is beneficial to the programmer.

Functional provides benefits in performance, footprint and memory utilization.

You can solve any problem with either approach. Microprocessors don't care. I've written genetic evolver code in machine language (don't ask) as well as C, C++, Python, and Objective-C. The OO languages make it easy for the programmer, while the functional approach requires more thought, planning, and perhaps attention to detail.

I tend to prefer the functional approach, probably because I did tons of real work using this model before OO came into my world. However, I certainly appreciate how much easier some things can be in OO. What I object to is this idea that everything today has to be an object; this is wasteful and slow.

Of course, I am biased. When you learned programming starting with machine language, then Forth, and then C before touching C++, your view of how to solve problems computationally is bound to be quite different than that of someone who started life with, say, Java.


> [...] The problem is with code reuse. The only real form of code reuse that OOP addresses is direct object inheritance. [...]

Inheritance is a form of code reuse in OOP. Composition is another one which is often preferred since it does not have the issues you mention here.

Note that just a couple of days ago this was posted on HN: http://userpage.fu-berlin.de/~ram/pub/pub_jf47ht81Ht/doc_kay... It's the definition of OOP given by basically the inventor of Object Oriented Programming himself. One of the nice things he says is that he left out inheritance on purpose because he "didn't like" the way it was done and wanted to "understand it better".

Regardless, I think your advice of learning other programming paradigms is great.


> The only real form of code reuse that OOP addresses is direct object inheritance.

This is not true.

COM, Smalltalk, Eiffel, BETA, Self, CLOS offer different OOP models than the average Java and C# programmers are aware of.

You can have code reuse via composition, delegation, sub-typing, genericity, mixins.

OOP is nothing more than modules on steroids for those of us who started with languages like Modula-2 and Turbo Pascal, doing Abstract Data Types[1][2] programming.

[1] - http://cs.utexas.edu/~wcook/papers/OOPvsADT/CookOOPvsADT90.p...

[2] - http://www.amazon.com/Data-Structures-Abstract-Types-Modula-...


"OOP" means different things to different people. I think we can all agree that beginners should be encouraged to look beyond the C++/Java/Python object models and consider alternatives to direct inheritance, which is the core point here.


That I agree, yes.


> To write the code, one looks at what data will go into the software, and what data will need to come out.

What if your program has no output? The applications I write seldom have one. Instead they save stuff or send it away here and there during the lifespan of the program. When they're done they just exit without any output. FP seems to be focused on one input going through transformations, ending up in one output.

(I'm not a good programmer and barely know how to do OOP, much less FP.)


> What if your program has no output? The applications I write seldom have one. Instead they save stuff or send it away here and there during the lifespan of the program.

What is saved and what is sent are outputs.

> FP seems to be focused on one input going through transformations ending up in one output.

That's not entirely untrue, but that one input can be a set of simpler values, and processes can be values, too.


Functional programming is nothing more than applying the rigour of math to programming.


> This can be very useful for physical simulations such as games, etc., since the real physical world is actually made of objects.

Actually, OOP breaks down for this case (simulation) too.


I think a lot of developers don't really understand OOP, and as a result they mess it up. In many ways functional programming is more straightforward, easier to get your head around, but you can still mess it up just as much.

I feel your statement above is a little naive. You should read some Martin Fowler. Be less reactive and more informed. Find balance.

Software systems, regardless of methodology, need good, experienced designers. There is no silver bullet.


I wouldn't just blame developers; I feel like the OOP languages most of us have been stuck with deserve some scorn as well.

The major functional languages all come from academic backgrounds, and it shows. They're very principled, and a lot of thought is put into designing languages that are clean and actively encourage you to write clean code.

The major object-oriented languages, by contrast, tend to come out of non-academic environments. Their design was often compromised by pragmatic concerns (C++, Java), or by their being hobby languages designed by folks for whom Barbara Liskov maybe isn't a household name. I don't want to hate on these languages too much - C++ and Java and Python and Ruby make the world turn, after all, and I suspect that's partially because they let you get away with so much. But they are what they are.


> The only real form of code reuse that OOP addresses is direct object inheritance.

That is incorrect, and it's misinformation about OOP. The most important mechanism for code reuse is the function or method, and it's an essential part of OOP: you put code into a function/method that can be used repeatedly. Functions exist in most languages and are not the exclusive domain of functional programming.


Your reaction was very similar to mine. I thought, "What benefit here is not available via functional programming?"


Best description of functional programming I've ever heard, thank you


Unfortunately for the OP, when it comes to game development, it's widely understood that OOP eventually ends up painting the developer into a corner. Yes, using objects, inheritance, and polymorphism works well for a long time, but the more objects there are and the deeper the inheritance tree grows, the more inflexible the system becomes. You also run into problems with reality: when the hot path calls across multiple object types and invokes a bunch of polymorphic calls every frame, the code is going to be slow and there's no obvious bottleneck. The code is slow because the CPU is constantly waiting for its instruction and data caches to catch up.

Game development today is moving quickly towards components, composition over inheritance, and Data Driven Design techniques. Check out the myriad of posts on Entity/Component systems that have sprung up over the past 10 years. It can be harder to grasp at first but has huge wins over the traditional OOP model.


You're making a false equivalence here. Yes, the game industry is moving away from inheritance and for the reasons you note and others.

That doesn't at all mean the game industry is moving away from OOP. Classes and methods still abound and work perfectly fine. Even if you go whole-hog towards ECS, the stuff you're passing around is usually still an instance of some class and not a bare public struct.

Encapsulating data and providing higher-level methods to operate on it works really well. Polymorphism is also powerful and useful in many places. Even subclassing is a good fit for some things. But just realizing that you need to dial back on subclassing doesn't imply that the first one or two things on my list are broken.


This "huge win" depends heavily on what you're making. If you're making simple games where performance is not that much of a problem (like most indie games) then moving to entity/component systems makes no sense and tends to make everything more complex than it needs to be. A good middle ground in those cases is to just stick with normal OOP with shallow trees (no more than 2-3 levels) and favor simpler types of composition like mixins whenever needed.


As someone who has made indie games with both traditional OOP and component systems-- I vastly prefer component systems. I disagree that they make things more complex. I found that they made things much simpler. Although, it does require learning about and possibly developing a component system.

If you are talking about very very simple games, it would perhaps not be worth the effort, but at that level basically any architecture will work.


Entity component systems are just another form of OO that rely heavily on ontology and named object instances. Unless you mean OOP must be Java, then of course, it's not.


Even indie games can benefit a lot from components, and a basic component-entity system is very easy to implement. Make component and entity classes, give the entities a list of components, put the update method in the component, and put most of your code there.

This is fairly different from the most popular/performant ways of implementing ECS's (there are no "Systems"), but keeps a lot of the benefits (much more flexible than traditional class hierarchies, easier to develop and design content for without writing new code, etc), while avoiding a lot of the complexity which makes component systems overkill for smaller games.

You lose a bit of performance (compared to other methods of implementing component systems), but in higher level languages this will likely be faster than trying to imitate C++ patterns (I could say more about this, as I've seen some travesties, but I won't).
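
A bare-bones sketch of the approach described above -- component and entity types, entities holding a list of components, update living on the component -- with made-up names and deliberately no "systems":

  // Minimal component-entity sketch: no systems, no inheritance hierarchy.
  trait Component {
      fn update(&mut self, dt: f32);
  }

  struct Spinner { degrees_per_sec: f32, angle: f32 }

  impl Component for Spinner {
      fn update(&mut self, dt: f32) {
          // In a real game this would touch the owning entity's transform.
          self.angle += self.degrees_per_sec * dt;
          println!("angle is now {:.1} degrees", self.angle);
      }
  }

  struct Entity {
      components: Vec<Box<dyn Component>>,
  }

  impl Entity {
      fn update(&mut self, dt: f32) {
          for c in self.components.iter_mut() {
              c.update(dt);
          }
      }
  }

  fn main() {
      let mut prop = Entity {
          components: vec![Box::new(Spinner { degrees_per_sec: 90.0, angle: 0.0 })],
      };
      prop.update(1.0 / 60.0);
  }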


For a higher-level language (I use Lua) I prefer a looser type of system where I skip the part of creating components and putting them in a list, and just inject methods/attributes into my objects directly in the form of mixins. At this point I wouldn't say that this is an ECS; it's just normal OOP favoring composition wherever it makes sense. From what I've seen, though, most people do tend to go for the system type of ECS, and that's what I was referring to when I said that it makes things more complicated than they need to be.


I don't know...the Gold Source engine and Half-Life 1 were heavily OOP, and they did just fine, even with some mods placing dozens of fully AI enemies onscreen at once.

In some parts--like rendering--OOP is pretty terrible. Outside of that, though, it's not inherently bad.


I feel like a lot of the problems OOP solves are working around inadequate type systems. If you can make a distinction between Int @@ CharacterHP and Int @@ WeaponDamage such that you simply can't subtract one from the other without going via the appropriate function, then the argument for having private member variables and object methods to mutate them (which is the controversial part of OOP, I think everyone agrees with having data structures and some form of polymorphism) goes away, doesn't it?
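
A sketch of that distinct-types idea using plain newtype wrappers (hypothetical names; Rust here, though the same trick works with tagged types as in the parent's notation): the compiler refuses to mix the two integers, and no private fields or mutating methods are needed.

  // Both are "just an int" underneath, but the types keep them apart, so
  // you can't combine them except through an explicit function.
  struct CharacterHp(i32);
  struct WeaponDamage(i32);

  fn apply_damage(hp: CharacterHp, dmg: WeaponDamage) -> CharacterHp {
      CharacterHp((hp.0 - dmg.0).max(0))
  }

  fn main() {
      let hp = CharacterHp(30);
      let dmg = WeaponDamage(12);

      let hp = apply_damage(hp, dmg);
      println!("hp = {}", hp.0); // hp = 18

      // let oops = hp.0 - dmg;            // does not compile: i32 minus WeaponDamage
      // let oops = apply_damage(dmg, hp); // does not compile: arguments swapped
  }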


Right, but you should also include notions of immutability and ownership in your explanation. OOP is partly about managing complexity. If the mutation and sharing in your data is managed by the type system, private data is much less important.

That being said, OOP is also about managing the interfaces between the different parts of your code. Adding a little more type info to your variables doesn't help you if you need to drastically change how a variable is accessed and mutated (memoized, retrieved from a service, persistence layer, etc.) while avoiding running a giant find/replace across your entire project.


> Adding a little more type info to your variables doesn't help you if you need to drastically change how a variable is accessed and mutated (memoized, retrieved from a service, persistence layer, etc.) while avoiding running a giant find/replace across your entire project.

I feel like if you have a generic "context"-like notion then you can have a lot of this pass directly through. E.g. I recently changed where a Scala report gets most of its data, making it use an (async) web service rather than a database call, and I only really had to change the "top and bottom" - all the intermediate code handles any kind of context as a generic "F[_]", sometimes requiring particular typeclasses (Applicative, Comonad) depending on what it does with it.


The problem is that object-oriented programming is a premature optimization that has outlived its usefulness.

Object oriented programming enshrines mutation.

Mutation is a good optimization when memory is expensive and fast relative to computation. Memory is now cheap and slow relative to computation: you can execute hundreds to thousands of operations in the time it takes to chase a pointer that is not cached.


If OOP encourages mutation, can you explain to me why the String class in Java is immutable?


It's a holdover from when Java was supposed to be a language for small machines.

However, if you're being snide, let me retort "So why does everybody in Java immediately recommend that you use the StringBuilder class instead of String?"


So yeah, in games OOP makes perfect sense but I have a hard time applying it in other code. I mostly build applications that make some API calls, save the result to a database etc. What is an object in that context?

The same goes for the classical Car and Animal OOP examples. Yeah, that makes sense but how often do you have such entities in a real code base?


The problem with object-oriented programming is that it makes objects the most important abstraction, which they're not. I'd say states are the most important abstraction of a program.

I think I've grown so much as a programmer since I started thinking more about states and less about objects.


I was talking to a friend who started doing web development after transitioning from game development. I came from a JavaScript background heavily based around Lodash and a 'nearly' functional paradigm, and I was discussing my interest in more purely functional languages like Haskell and Clojure. I kind of rattled off my distaste for OO programming and how it just doesn't make sense to me; it seems like a lot of fluff and boilerplate when our most common use case is moving data from a form to a data store.

He made a lot of good points that OO makes a ton of sense for game development, and I kind of agree. OO works amazingly well with game development, but with application development I'm really finding the functional paradigm so much more logical.


Functional programming in JS is possible; ramda / wu (others?) make it very easy to get started. Of course it is not as great as with functional programming languages, but it still allows you to do pretty cool stuff. It's like lodash but with the callback moved to the first argument, turning something like this:

  var isMultilanguage = function(field) {
    return field.multilanguage === true;
  };

  var isNotMultilanguage = function(field) {
    return field.multilanguage !== true;
  };

  var getMultilanguageFields = function(fields) {
    return _.filter(fields, isMultilanguage);
  };

  var getNonMultilanguageFields = function(fields) {
    return _.filter(fields, isNotMultilanguage);
  };

into this

  var isMultilanguage = R.where({multilanguage: true});
  var isNotMultilanguage = R.not(isMultilanguage);
  var getMultilanguageFields = R.filter(isMultilanguage);
  var getNonMultilanguageFields = R.filter(isNotMultilanguage);

https://github.com/ramda/ramda

https://github.com/fitzgen/wu.js


You also have those in Underscore / Lo-Dash:

  _.filter(fields, {multilanguage: true});
  _.where(fields, {multilanguage: false});


The current stable version of Lo-Dash supports _.curry. In 3.0, Lo-Dash will add support for _.rearg, _.ary, & _.curryRight. Using a combination of them you can easily create auto-curried functions.

  var where = _.curry(_.rearg(_.where, 1, 0));
  var isMultilang = where({multilanguage: true});


The weird thing is, many if not all of the arguments apply to C with structures, unions, and function pointers... and I don't know that I consider that properly object oriented.


<unrelated> Al, could you make the fonts on your site bigger, do the styling via CSS instead of on each element, and introduce some nicer Google fonts? Also, if you could use `<code>` and `<pre>` tags, that'd be a solution where you could ignore the rest of what I said.

Reason being that I save these articles for later to show to people I'm trying to teach programming, and my attempts to save this page were unsuccessful.


Wow, I haven't seen that character record sheet in 30 years. Somewhere I have a 10th-level elf fighter with 18s for every trait (17 for Charisma, just to break it up).

Translation for non-D&Ders: I was so nerdy I made my own characters, for my own games I was DM for, with completely invented characteristics rather than rolling dice for traits like the rules say.


The author should really consider some proper way of displaying code, instead of a textbox.

Also, for some reason, on Safari the code is on one line, while on Chrome it's on multiple lines, like it should be.


The highest-priority rule is the inline one, which specifies white-space: nowrap and doesn't preserve multiple whitespace. Firefox and Safari do what he says. I'm not quite sure why Chrome is displaying it differently.



