
>> I might be alone on this, but whenever I read things by John Carmack I get a vague sense that he doesn't really get object oriented programming.

I'm starting to think object-oriented programming is a bit overrated. It's hard to express exactly why, but I'm finding that plain functions operating on data can be clearer, less complicated, and more efficient than methods. Blasphemous as it may seem, a switch statement does the equivalent of simple polymorphism and can be kept inline.
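
Roughly, a minimal sketch of what I mean in C++ (the Shape type and its fields are just made up for illustration):

    enum class ShapeKind { Circle, Square };

    struct Shape {
        ShapeKind kind;
        float size;   // radius for circles, side length for squares
    };

    // "Simple polymorphism" as a switch, kept inline at the call site
    // instead of spread across a class hierarchy.
    float area(const Shape& s) {
        switch (s.kind) {
            case ShapeKind::Circle: return 3.14159f * s.size * s.size;
            case ShapeKind::Square: return s.size * s.size;
        }
        return 0.0f;
    }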




Many game programmers decided long ago that object-orientedism is snake oil, because that kind of abstraction does not cleanly factor a lot of the big problems we have.

There isn't anything close to unanimous agreement, but the dominant view is that something like single inheritance is a useful tool to have in your language. But all the high-end OO philosophy stuff is flat-out held in distaste by the majority of high-end engine programmers. (In many cases because they bought into it and tried it and it made a big mess.)


As a fellow game developer, I have to agree. I find that inheritance is a form of abstraction which sounds nice on paper and may work well within some domains, but in large and diverse code bases (like in games), it makes code harder to reason about and harder to safely modify (e.g. changing a base class can have a lot of subtle effects on subclasses that are hard to detect). The same goes for operator overloading, implicit constructors... Basically almost anything implicit that is done by the compiler for you and isn't immediately obvious from reading the code.


That suspicion of snake-oil paradigms is why it's interesting that game developers seem to be much more open to functional programming. Compare e.g. Tim Sweeney's "The next mainstream programming language" (https://www.st.cs.uni-saarland.de/edu/seminare/2005/advanced...)


>Blasphemous as it may seem, a switch statement does the equivalent of simple polymorphism and can be kept inline.

In the statically compiled languages that most people think of when they hear "OO" (C++ and Java), yeah, switch statements vs. virtual methods (performance differences aside) is basically a matter of code style (do you want to organize by function/method, or by type/object?)

However, the original proponents of OO intended it to be used in dynamically compiled languages where it could be used as a way to implement open/extensible systems. For instance, if a game entity has true update, animate, etc. methods, then anyone can implement new entities at run time; level designers can create one-off entities for certain levels, modders can pervasively modify behaviors without needing the full source code, programmers trying to debug some code can easily update methods while the game is still running, etc. You can get a similar effect in C or C++ with dynamic linking (Quake 2 did this for some large subsystems), but it's a pain and kinda breaks down at fine (entity-level) granularity.

The other, "dual" (I think I'm using that word correctly?) approach famously used by emacs is to create hooks that users can register their own functions with, and extend the program that way. Like switch statements, it basically amounts to storing functions at the call site instead of inside the object, except with an extensible data structure rather than burning everything directly into the machine code.

Obviously you can't really take advantage of any of this if you're writing some state of the art hyper-optimized rendering code or whatever like Carmack, I'm just saying that OO's defining characteristics make a lot more sense when you drift away from C++ back to its early days at Xerox PARC.


As Wirth puts it, Algorithms + Data Structures = Programs

What OOP nicely brings to the table is polymorphism and type extension. Two things not doable just with modules.

Although generics help with static polymorphism.

The problem was that the IT world went overboard with Java and C#, influenced by Smalltalk, Eiffel and other pure OO languages.

Along the way, the world forgot about the other programming languages that offered both modules and objects.

> Blasphemous as it may seem, a switch statement does the equivalent of simple polymorphism and can be kept inline.

Except it is not extendable.


Switch statements are extensible in that you can add extra switch statements to your program without needing to go back and add a method to every class you coded, spread over a dozen different files. It's the old ExpressionProblem tradeoff.
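
Rough sketch (hypothetical Tag/Shape types): the new operation lives entirely in one new function, and none of the type definitions have to change.

    enum class Tag { Circle, Square };
    struct Shape { Tag tag; float size; };

    // A brand-new operation over the existing closed set of types,
    // added later in its own file; the types themselves are untouched.
    float perimeter(const Shape& s) {
        switch (s.tag) {
            case Tag::Circle: return 2.0f * 3.14159f * s.size;
            case Tag::Square: return 4.0f * s.size;
        }
        return 0.0f;
    }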


All code is extensible if you have the source code and can recompile the whole thing then restart the program. I think pjmlp meant extensible in the "extensible at runtime" sense.


Switch statements and method calls are kind of duals of each other. Method calls make it easy to add new classes but fix the set of methods; switch statements make it easy to add new methods but fix the set of classes. It doesn't have to do with runtime.

http://c2.com/cgi/wiki?ExpressionProblem


Yeah, I said as much (even calling them duals) in a sibling comment.

https://news.ycombinator.com/item?id=8375910

Method calls don't need to be fixed either. Just because C++ stores virtual methods in a fixed-sized table doesn't mean Lua/Javascript/etc can't store them in hash tables. And a list of hooks is sort of like an extensible switch statement, but bare switch statements like you were describing obviously don't have that kind of runtime flexibility.
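
You can sketch that hash-table style even in C++ (names made up, not how any particular engine does it):

    #include <functional>
    #include <string>
    #include <unordered_map>

    struct Entity {
        // Methods live in a hash table instead of a fixed vtable, so
        // they can be added or replaced while the program is running.
        std::unordered_map<std::string, std::function<void(Entity&)>> methods;

        void call(const std::string& name) {
            auto it = methods.find(name);
            if (it != methods.end())
                it->second(*this);
        }
    };

    // e.g. a mod or a live-coding session swapping in new behavior:
    //   player.methods["update"] = [](Entity& self) { /* new logic */ };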


If you get tired of the switch statement, pattern matching is the functional dual of polymorphic dispatch.
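
In C++ terms the closest approximation (a rough sketch, since the language has no real pattern matching) is a sum type plus a visitor, i.e. std::variant and std::visit:

    #include <type_traits>
    #include <variant>

    struct Circle { float radius; };
    struct Square { float side; };
    using Shape = std::variant<Circle, Square>;

    // Dispatch on whichever alternative the variant holds, roughly the
    // way a functional language pattern-matches on a constructor.
    float area(const Shape& s) {
        return std::visit([](const auto& v) -> float {
            if constexpr (std::is_same_v<std::decay_t<decltype(v)>, Circle>)
                return 3.14159f * v.radius * v.radius;
            else
                return v.side * v.side;
        }, s);
    }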


Though pattern matching (and algebraic datatypes) would work just as well in an imperative language as in a functional setting.

(Not sure whether you'd need garbage collection to make pattern matching really useful, though.)


Pattern matching is welcomed everywhere; it saves conditionals and keeps the code clean.


If you have control of the code.


No, you can add new functions that switch over the different types without having control over the code. That way you don't have to add the same method to each of the classes.

It's a different dimension of extensibility.


> Except it is not extendable.

Function pointers are a good way to achieve extensibility in such cases.
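
A minimal C-style sketch (the entity type and the think function are hypothetical): behavior hangs off a function pointer that callers can point at their own code.

    struct Entity;
    typedef void (*ThinkFn)(Entity*);

    struct Entity {
        float   health;
        ThinkFn think;   // behavior supplied by whoever creates the entity
    };

    void zombie_think(Entity* e) { e->health -= 1.0f; }

    void run_frame(Entity* e) {
        if (e->think)
            e->think(e);   // extended without any class hierarchy or vtable
    }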


You are basically doing a VMT implementation by hand.

I'd rather let the compiler do the work for me.


No, using function pointers does not necessarily mean implementing a vtable.


No, but it feels like it.

Oberon and Modula-3 only provide record extensions for OOP, with methods being done via function pointers.

In case you aren't familiar with these languages, here is some info.

http://www.inf.ethz.ch/personal/wirth/ProgInOberon.pdf (Chapter 23)

http://en.wikipedia.org/wiki/Modula-3#Object_Oriented

In Oberon's case, which was lucky to have survived longer at ETHZ than Modula-3 did at DEC/Olivetti, all successors (Oberon-2, Active Oberon, Component Pascal, Zonnon) ended up adding support for method declarations.


For quite a while, there was significant buzz around OOP, and some people did, in fact, overrate it. But I get the impression that over the last couple of years, more and more people have begun to realize that OOP is not the solution to every problem and started looking in new directions (such as functional programming).

I think this is why Go has become so popular. It deliberately is not object-oriented in the traditional sense, yet it gives you most of the advantages of OOP (except for people who are into deep and intricate inheritance hierarchies, I guess). (I don't know how many people are actually using it, but now that I think of it, the same could be said of Erlang - the language itself does not offer any facilities for OOP, but in a way Erlang is very much OO, if you think of processes as objects that send and respond to messages.)

So I think there is nothing blasphemous about your statement (in fact, Go allows you to switch on the type of a value).

(I am not saying that OOP is bad - there are plenty of problems for which OOP-based solutions are pretty natural, and I will use them happily in those cases; but I get the feeling that at some point people got a bit carried away by the feeling that OOP is the solution to every problem and then got burnt when reality asserted itself. The best example is probably Java.)


I've found the rise and fall of OOP to line up well with various trends in ontologies and semantic-like KM tools (after all, OOP is basically designing an ontological model you code inside of). Outside of various performance issues, the idea that you can have one master model of your problem that solves everything hopefully seems to be running out of steam.

In the area I've worked in, I've seen numerous semantic systems come and go, all built around essentially one giant model of the slice of the world it's supposed to represent, and the projects all end up dead after a couple years because:

a) the model does a poor job representing the world

b) nobody seems to have a use-case that lines up perfectly with the model (everybody needs some slightly different version of the model to work with)

c) attempts to rectify (b) by adding more and more to the model just leave you with an unusably messy model.

More recent systems seem to be working at a meta-model level, with fewer abstract concepts and links, rather than getting down into the weeds with such specificity, and letting people muck around in the details as they see fit. But lots of the aggregate knowledge that large-scale semantic systems are supposed to offer gets lost in this kind of approach.

I think OOP at its heart is just another case of this -- it's managed to turn a programming problem into an ontology problem. You can define great models for your software that mostly make sense, but then the promise of code reuse means you're suddenly trying to shoehorn a model meant for a different purpose into this new project. The result is either code that ignores "good" OOP practices to just get the damn job done, or over-specified models that end up so complicated nobody can really work with them and that introduce unbelievable organizational and software overhead.

It's not to say that OOP and other semantic modeling approaches don't have merit. They're very useful tools. I think the answer might be separate models of the same problem/data, each representing one facet of the problem. But I haven't quite gotten the impression that anybody else in industry has arrived at this; instead they're going for higher levels of abstraction or dumping the models altogether.

Again, OOP manages to turn programming problems into ontology problems -- hardly a well-understood field -- while the goal is, and always has been, to turn programming problems into engineering problems, which are much more readily understood.


You clearly don't work on a large code base =)

Most programming concepts are really about code organization and not expressiveness or the ability to express an algorithm clearly.

Object-oriented programming only really starts to make sense when you are working on something that will take thousands of man-hours. If you are working alone, or on a small project, it can be completely irrelevant.

The workflow you are describing is what MATLAB guys do. It's an absolute nightmare once the project gets too large. It is, however, very fast and flexible for prototyping.


> You clearly don't work on a large code base =)

Whatever its other pros and cons, I find OO style tends to result in significantly larger code bases.

> Most programming concepts are really about code organization and not expressiveness or the ability to express an algorithm clearly.

You imply a dichotomy where none exists. For non-trivial algorithms, the ability to express them clearly and the ability to organise code at larger scales are very much related.


Even on large projects, I think there are often better ways of managing complexity. A lot of the reasons for encapsulating internal state disappear if that state is immutable.


I would say that OOP makes a lot of sense on large code bases, but that it's also very easy (and dangerous) to get overzealous with object inheritance, interfaces, abstract base classes, etc.


I honestly find that a good module system (like OCaml's) is much better for organizing code than objects.


You have clearly never worked on a large, non-OO code base designed by competent engineers.


> Most programming concepts are really about code organization and not expressiveness or the ability to express an algorithm clearly.

You could also separate your functions into different scripts with closely related naming. You've pretty much achieved code organization without the hidden scaffolding that comes with an OO codebase, which only a chosen few with ridiculously large salaries understand, and which newer developers largely have to pretend to praise with terminology straight from a CS textbook because their monthly check depends on it.

Why do we need a thousand different ways to write a simple CRUD web application in a language? Obviously OOP hasn't really delivered what it advertised, namely bringing code organization to its most efficient state, any better than functional coding has.

If Java was supposedly so great with its OOP as a core feature, where as humans we are supposed to model the extremely complex systems of reality into fictional objects in our brains, where is Sun Microsystems now? It ended up as snake oil for a lucrative proprietary enterprise software company aimed at selling simple CRUD apps to please the business/government crowd. Have you seen the range of Oracle's suite of crap? It's literally insane: you have to pay to learn how to reinvent the wheel in their own terms, and then pay a percentage back to them for speaking their language, as if landing business deals isn't hard enough already. Absolute garbage Java turned out to be. Even Android pisses me off. I used to hate on Objective-C but I applaud Swift; there's no such innovation taking place on the JVM because Java is built entirely on a failing founding philosophy: that the whole world is some simple interaction of objects with each other. No it is not: there's quantum mechanics in play, with myriads of atoms interacting with each other in a chaotic fashion that gives rise to patterns our brain is supposedly good at finding.

Throw sand on a blank piece of paper and people will claim they see Jesus, then sell it on eBay.


Big things like, say, the Linux kernel? Which is not OO and is better off for it?


Ultimately our biases towards where the paradigms belong are a result of how the history has developed so far.

But hopefully we've learned that the guy selling OOP as the answer to everything is full of shit


"But hopefully we've learned that the guy selling OOP as the answer to everything is full of shit"

Replace OOP in your statement with "anything" and I'd say you're spot-on.


I agree. The simplicity comes from the fact that you are focusing on different aspects at different times. I find that I start off by defining my data structure and focusing only on the data structure: what information do I need, what is the best way to organize the data, those sorts of issues. Once I have the data structure, I focus on what I want to do with it. This may result in some functions attached to the data structure using the object-oriented features, and sometimes the functions live apart from the data structure. The benefit comes from mentally decoupling the data from the functions.



