Hacker News

In the 1990s I did research on the efficacy claims of object-oriented programming versus procedural programming. This article bears a striking resemblance in its claims. Case study after case study showed that object-oriented code had fewer bugs due to compilers catching errors, etc. However, almost every study was similar to this report: it was a rewrite from procedural to object-oriented. There is strong evidence to suggest that simply throwing away one's first prototype produces the same benefits that these comparisons attribute to object-oriented over procedural programming. The conclusion of my research was the same as others': unless one compares the same project written from scratch, without any sharing of design or code, these studies' claims are correlation, not causation.



There's a lot of truth in that. A few days ago I wrote a comment about rewriting a program from C++ to C and coming in at about a quarter of the code size. However, I didn't mention C or C++ in that comment because the real savings came from reevaluating requirements, lessons learned, refactoring, and so on.

Some time back, someone here on HN posted a comment about rewriting a Java app in... Java. Same basic story of large savings, without changing language at all.

Studying the actual effects of language choice is hard (it takes time, effort, and budget beyond typical constraints), so people don't do it.


I don't know how I came across it, but Thomson's Rule for First-Time Telescope Makers seems relevant here:

: "It is faster to make a four-inch mirror then a six-inch mirror than to make a six-inch mirror."

http://wiki.c2.com/?TelescopeRule


There is another aspect. Changing languages. Rewriting in a language that is sufficiently different from the original forces you to look at the problem with new eyes. You effectively have greater mental coverage of the problem domain.


Articles like "We switched from language X to language Y", with the conclusion that language Y is better, make me smile.

My favorite is when they make another post half a year later saying they changed from Y to Z.

It seems that following the trend and using the latest and greatest tools doesn't necessarily mean the new tools are better, but that you get to rewrite your applications from scratch, with much more knowledge about the problem domain than before.


> My favorite is when they make another post half a year later saying they changed from Y to Z.

Sometimes I wonder if the real subtext is "we have such high turnover that almost nobody was around when the first system was designed. We rewrote it in a new language, and now we're all much happier because we all understand it much better, having been involved in its design." Wait a year, repeat.


Being a holdover employee through 4 rewrites, I'd say that can be one factor. Another is that the business landscape changes too. All these factors combined make it difficult to say for certain which of them had an objective impact.


And they inevitably fail to mention the pissed-off product or QA manager or customer who realizes their favourite feature or corner case or bugfix is missing from the rewritten version because the developer didn't understand or notice that aspect of the original.


Instead of blaming the developers every time a feature is missing, consider who is setting the priorities for development. In places where developers are being paid for their work, this is rarely done by the developers themselves. It's always interesting to see how readily end users blame developers for whatever's going on, without understanding that there are usually layers of managers, who may understand neither the software nor the users, pushing developers to crank out features as quickly as possible.


Now someone has to sound exasperated at everyone jumping to blame managers.


> It hardly seems worth even having a bug system if the frequency of from-scratch rewrites always outstrips the pace of bug fixing. Why not be honest and resign yourself to the fact that version 0.8 is followed by version 0.8, which is then followed by version 0.8?


LinkedIn: Our iOS app is web-based! Simple and fast! We save money!

2 years later...

LinkedIn: Our iOS app is native! Simple and fast! We save money!


This is so true. I usually write things three times: the first time to get a feel for the space, the second to solve the problem at hand. That gives me actual understanding, and then I'm ready to really solve the problem, usually with a fraction of the memory requirements, one or more orders of magnitude in speed, and less code to boot.

Very rarely do I hit anything near to an optimal solution the first time out of the gate.


Bingo.

1st iteration: does it (barely), but kinda looks hacky.

2nd iteration: better; actually works.

3rd iteration: looks clean, concise, pro, something to be proud of!

3rd time is a charm in software.

It's unbelievable how much clearer and more concise the code is 3rd time around.

If you can ever actually get away with this on a project ... highly recommended.


I would love to have the opportunity to do this in my working-life projects, as most of the time while I'm doing iteration 1, I have various others breathing down my neck about getting the next list of tasks in the sprint done as well. Time, and keeping others away, is my hurdle.


The interesting thing is that in the longer term the third iteration will be the winner because it is more maintainable and most likely much easier to understand.


So, ship version one, early, and figure out whether it's worth keeping at all, or whether to kill it and not waste time on versions 2 and 3.


Given the awfulness of so many OOP-based frameworks that I encountered in the 90s, I began to seriously consider that OOP wasn't a better paradigm for most things and that instead it was worse, and simply caused people to rewrite code to the point that it overcame the procrustean bed of inheritance and whatever other tools the language of choice was providing.


It is interesting that Go and Rust eschewed OOP inheritance for simple encapsulation. Information hiding, not implementation inheritance, turned out to be the big practical benefit of OOP. Inheritance leads to fragile code because it allows unseen dependencies on private implementation details in far-flung classes and files.

Many C++ game engines also moved away from huge class hierarchies of game objects to property-based game objects that were collections of behavior handlers.
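For anyone who hasn't seen the pattern, here's a minimal Rust sketch (all names made up for illustration) of a property-based game object: instead of a deep class hierarchy, an entity is just a bag of behavior handlers, and new combinations need no new subclasses.

```rust
// A behavior handler: one small capability an object can carry.
trait Behavior {
    fn update(&mut self) -> &'static str;
}

struct Physics;
impl Behavior for Physics {
    fn update(&mut self) -> &'static str { "physics step" }
}

struct Render;
impl Behavior for Render {
    fn update(&mut self) -> &'static str { "draw sprite" }
}

// The "game object" is just a collection of behaviors, not a subclass.
struct GameObject {
    behaviors: Vec<Box<dyn Behavior>>,
}

impl GameObject {
    fn update_all(&mut self) -> Vec<&'static str> {
        self.behaviors.iter_mut().map(|b| b.update()).collect()
    }
}

fn main() {
    let mut obj = GameObject {
        behaviors: vec![Box::new(Physics), Box::new(Render)],
    };
    println!("{:?}", obj.update_all());
}
```

The composition is decided at runtime by which behaviors you push into the vector, which is roughly what the property-based engines do in place of an inheritance tree.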


There are many calls for, and proposals to include, some form of inheritance in Rust. Among other things, people have found that Rust is poorly suited to modeling HTML/XML hierarchies, retained-mode GUIs, industrial simulations, code specialization, and some forms of video games. Inheritance is a feature people miss.

What is clear to me is that Rust does incredibly well without it, but wouldn't be hurt by having it. Most of the egregious sins of OOP are mitigated by having far better abstractions available in Rust: ML-style modules, ADTs with pattern matching, type classes, first-class functions, etc. Much like in Scala, if you give people capable FP, OOP stops being abused and starts being used appropriately.
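A tiny example of what ADTs with pattern matching buy you (hypothetical types, just for illustration): the compiler rejects any match that forgets a variant, which is the guarantee people used to approximate with visitor class hierarchies.

```rust
// A closed set of variants: the compiler knows every possible case.
enum Shape {
    Circle { radius: f64 },
    Rect { w: f64, h: f64 },
}

fn area(s: &Shape) -> f64 {
    // If a new variant is added to Shape, this match stops compiling
    // until the new case is handled. No silently missed subclass.
    match s {
        Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
        Shape::Rect { w, h } => w * h,
    }
}

fn main() {
    println!("{}", area(&Shape::Rect { w: 3.0, h: 4.0 }));
}
```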


Aggregations and interfaces cover 99% of what you need. Implementation inheritance and virtual variation points create too tight a coupling to be workable for evolving systems.


Maybe 99% of what you need, but more like 70% of what I need. I'm not sure what you mean by aggregations, but I design industrial simulations relatively frequently and I've seen the suggestions for replacing inheritance with composed typeclasses and they are not even close to a true alternative. Rust's current troubles with GUI development would suggest the same pattern there too.


Can you elaborate on that? What's an industrial simulation (compared to a "regular" simulation) and why do you like to solve it with inheritance?


An industrial simulation, in my context, is modeling the stateful flow of things through industrial processes. It typically involves millions of instances of thousands of types. There are typically far fewer very well defined processes, most of which would be trivial to implement with a simple impl trait with default methods. Some, however, require stateful members in order to implement a default behavior, and Rust traits do not have fields/members, just functions.

Rust traits and Scala traits (or abstract classes) are more or less equivalent with this one exception. I was just pointed to a proposal that would allow for this in Rust, but it is not yet implemented [0]. Essentially, you would be able to define type members, and then "link" them to a member of the struct/enum that is being "impl"-ed. While ergonomically not quite the same as scala, it is effectively the same as inheriting an abstract class with a constructor parameter.

[0] https://github.com/rust-lang/rfcs/pull/1546
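Until something like that RFC lands, the usual workaround is to route the default behavior through required accessor methods, so the state lives in each struct but the shared logic lives in the trait. A rough sketch (types invented for illustration):

```rust
// Rust traits can't declare fields, so shared default behavior that
// needs state is written against accessor methods the implementor
// must provide.
trait Process {
    // Each implementor exposes its own counter...
    fn count(&self) -> u32;
    fn count_mut(&mut self) -> &mut u32;

    // ...which the shared default behavior then uses.
    fn step(&mut self) -> u32 {
        *self.count_mut() += 1;
        self.count()
    }
}

struct Furnace { items: u32 }

impl Process for Furnace {
    fn count(&self) -> u32 { self.items }
    fn count_mut(&mut self) -> &mut u32 { &mut self.items }
}

fn main() {
    let mut f = Furnace { items: 0 };
    f.step();
    println!("{}", f.step());
}
```

It works, but with thousands of types the per-type accessor boilerplate is exactly the ergonomic gap the RFC is trying to close.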


There was an interesting talk about industrial simulations at the International Conference on Functional Programming this year.

(The gist was that avoiding mutation and keeping everything purely functional allowed them to roll back and forth in time with no problem, or try out multiple different futures. That's somewhat orthogonal to inheritance.)


It's the same story in Java - inheritance used to be the first sledgehammer you reached for whenever you saw a nut, but these days it's barely used. Effective Java, the definitive text on the proper way to write Java, tells you to avoid inheritance [1], and that was published in 2008.

[1] https://books.google.co.uk/books?id=ka2VUBqHiWkC&lpg=PA81&ot...


My personal suspicion is that the problem there wasn't OOP, it was frameworks.


… and dogma. There was a lot of blather about how Everyone Serious needed this complex structure, and a lot of people superstitiously followed it without asking whether, e.g., advice from a 2k-engineer megaproject in a different industry was applicable to their 3-person in-house app.

The modern counterpart is probably scalability — I see a lot of parallels in all of the Google/Facebook-envy applied to “big data” problems which can fit on an iPad.


I think OOP will always run into the problem that it requires a taxonomical theory about your problem, which never holds up in reality. (This is just another way of saying, inheritance has to be a forest/bundle of disjoint trees. Yes, there's multiple inheritance, but I'm pretty sure acknowledging that is a mortal sin among OO types.) Haskell is slightly better, since typeclasses are less topologically constrained, but I suspect requiring ANY categorical theory about your problem is going to create mischief in the long run.


The problem is people not using abstract taxonomies.

If people keep trying to force code organization around the business jargon, they will keep getting the same awful result. It does not matter whether they are writing OOP, abstract data types, FP, or direct bit manipulation in assembly.


But then you end up growing your taxonomy ad-hoc. Also, Python does multiple inheritance sanely, and so does Eiffel.


Python might do nearly as well as any language could do with multiple inheritance, but "sanely" is relative, like finding the sanest way to drive two cars at once.


Multiple inheritance is perfectly sane; what isn't (or at least is less) sane is implicit invocation of methods defined in interfaces (and inheriting from a parent class is, among other things, taking an interface with a default implementation).

Unfortunately, explicit interfaces were a very late idea in OOP implementations (at least in terms of implementation in a major language; not sure if the idea was around earlier), and weren't in v1 of any major language as a core approach, so we've got a bunch of languages where we either accept multiple inheritance being messy or don't have MI at all to avoid the mess.

But if you had exclusively explicit access to interface (including inherited) methods, MI would be clean.
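Rust's trait system is a working example of this: when two traits declare the same method name, the call site must name the trait explicitly, so the ambiguity that plagues implicit MI can't happen silently. A toy sketch (all names invented):

```rust
// Two traits that happen to declare the same method name.
trait Logger {
    fn id(&self) -> &'static str { "logger" }
}
trait Cache {
    fn id(&self) -> &'static str { "cache" }
}

struct Service;
impl Logger for Service {}
impl Cache for Service {}

fn main() {
    let s = Service;
    // `s.id()` would be a compile error ("multiple applicable items
    // in scope"); the caller must pick the interface explicitly.
    println!("{}", Logger::id(&s));
    println!("{}", Cache::id(&s));
}
```

That's roughly the "exclusively explicit access to interface methods" being described: the conflict is surfaced at the call site instead of being resolved by a linearization rule the reader has to know.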


Yeah, pretty much everyone and his brother had a framework:

- Apple, MacApp

- Microsoft: MFC, and others

- Borland: I forget what they called their stuff

- Anyone remember Taligent? Apple, HP and IBM, all getting together in a money-burning party...

- NeXT / Apple revival: NextStep / OpenStep, etc.

- Any number of minor players, including folks with Honest to Goodness Smalltalk implementations (none of which have survived to this day, I believe)

- Java stuff that I have mercifully forgotten

... they were nearly all crazy, and nothing was portable. So much for the promise of OOP :-)


That's all well and good except for the fact that your standards are impossible to reach. Even if I do write the exact same project with two different languages, there will always be differences in either the skill of the teams or, if the teams are the same, the amount of experience the team has when they tackle the project.

What we can do is compare similar types of projects in different languages. And the things we can say there are pretty significant. For instance, at my last job using Angular we experienced a particular bug in production a couple times. In my current job our frontend is written in Haskell. I don't make definitive statements that often, but in this case I can definitively say that there is a ZERO chance that bug will happen in our codebase. I can say that because the type system guarantees that that class of bug can't possibly happen.
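The comment doesn't say which bug it was, but a common example of a bug class a strong type system rules out is using a possibly-absent value without checking (the "undefined is not a function" family in JS). A small illustration in Rust, which makes the same guarantee as Haskell here (names hypothetical):

```rust
// Absence is encoded in the type: the caller cannot pretend the
// user always exists.
fn find_user(id: u32) -> Option<&'static str> {
    if id == 1 { Some("alice") } else { None }
}

fn greeting(id: u32) -> String {
    // The compiler rejects any attempt to use the name without
    // first handling the None case.
    match find_user(id) {
        Some(name) => format!("hello, {}", name),
        None => "no such user".to_string(),
    }
}

fn main() {
    println!("{}", greeting(1));
    println!("{}", greeting(2));
}
```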


It seems pretty easy to control for this: just write the functional version first, and if it comes out bug-free, your results obviously sidestep this criticism.


Not really. Even then you'll get criticism that the teams were more skilled in one language, or that they didn't use the best practices of the other language. Also, even if you could manage to normalize for all these variables, you're still only going to be comparing the cost to create the project. You're never going to get someone to maintain the two projects side by side for years and compare the total maintenance costs. That is where the biggest benefit of languages like Haskell comes from.

The fact of the matter is there never will be absolute proof of these questions. The naysayers will always have straws to grasp at.


Doing a rewrite in a different language and justifying language choice by a successful outcome of the rewrite is grasping at straws to justify the language change.

Every time I've been involved in a rewrite, including ones using the same language before/after, the outcome has been good. The act of doing a full rewrite is where the benefit comes from, it's hard to separate that from a language switch.


It wasn't a rewrite. It was two separate web apps. Type safety categorically prevents classes of bugs that I've seen in real Javascript apps on multiple occasions. Haskell frontend apps that I worked on both before and after the JS app never suffered from that class of bug.


On a political level, it may often be easier to justify a rewrite which switches languages to take advantage of bullet point features (promotional claims like 10x faster, totally secure, no more bugs, etc.) than it is to simply ask for time to do a rewrite, which will make many average managers start fantasizing about firing you. If you write a blog post afterward, that seems to make the company look good.


I'm not sure what you're getting at. He's saying Haskell's type safety makes impossible the bug that was in his JavaScript project. As far as I can tell, it's impossible to write a type safe system in pure JavaScript.

edit: To clarify this thought a bit, obviously you can transpile code in a type safe language to JavaScript. At the abstraction level of the code written, the project is type safe. The generated JavaScript itself is not; however this isn't a problem if the transpiler is correct. The same principle applies to Haskell compiling to non-typesafe machine code.


If Haskell is better than Python, then writing the first version in Haskell and the second version in Python should yield two versions that are closer in quality than if you did the reverse order. Assuming you can measure quality, this is a totally valid hypothesis to test!


Additional hypothesis: the result of this test depends strongly on which one you are more familiar with and which one you prefer. If I hate writing Cobol, I'm going to spend that part of the test pining for the other language and not using whatever the up-to-date idioms are for making Cobol manageable, but just fighting Cobol until I can switch to the language I really wanted to use.

So anyone doing this should consider trying to get a sample of people who are more or less indifferent between the options, so they don't sabotage the test.


> That's all well and good except for the fact that your standards are impossible to reach.

Right, yes. Perhaps this entire exercise is a complete waste of time. Perhaps we should be investing in skilled people who understand their problem domain and then just trust them to do the best they can, rather than trying to find silver bullets inside programming languages.


If it was a complete waste of time, we might as well write our web apps in machine language. There's never been a side-by-side study comparing machine language to modern languages that meets the scientific bar put forth. So even though you can't definitively prove causation, the history of programming languages has demonstrated that multiple substantial advances have absolutely happened. There's no reason to think there can't be more.


How about we do both? Both are valid, especially when there are silver bullets like type safety available. Hire the best domain experts and teach them the best tools.


To be clear, our new application is better (faster, scales better, more maintainable), for the most part because of the improved architecture.

As we were going to do the rewrite anyway, we considered more alternatives than just Python. For programs that have to be reliable and maintainable, I prefer a language with a strong static type system. Haskell has that, and it has a few practical benefits over other tools that made it an excellent choice for our use case.


I remember just three things about the book "Show Stopper!: The Breakneck Race to Create Windows NT and the Next Generation at Microsoft" (1994): the death march; that they tested the OS by writing a file over and over (!); and that they thought the graphics part, written in the new object-oriented C++, would save them a lot of time (it didn't).


Yeah, fascinating book; weren't the graphics issues due in part to the lack of maturity of C++ tools at the time though? (I don't remember exactly because it has been a while since I read it)


Sounds right. Additionally, they didn't even reimplement what they had, the article gives the impression that they basically gave up on trying to reinvent relational query optimization ("We heavily modified pyDatalog to query Redis as its knowledge base") and focused on just solving the problem at hand.


Wouldn't a complete rewrite of a module usually be a lot easier if the program is procedural?

I think that is one of the big factors stopping that from actually happening with OO code. A rewrite is a lot harder if you tap into what exists elsewhere in the codebase.

I bet you have a lot more insight into this than me though.


I don't really think so. With OOP you can easily replace whatever part of the system you want. Obviously that applies if you have written OO code, not if you have written mixed procedural-spaghetti code in an OO language. You "just" write an implementation of your interfaces that makes all the unit, integration, and acceptance tests green. With procedural programming you have no interfaces, and more importantly, testing was often non-existent. Seriously, I thought that in 2017 the advantages of OOP over procedural were obvious; I feel like I have time-traveled 15-20 years back to when it was still debatable.


Procedural code can also be well compartmentalized and tested.

I was also until recently quite convinced OOP was the only way to go, but I'm seeing signs everywhere that a lot of the design-problems I've met over the past few years are at least magnified by OOP.

The abstraction promised by OOP is a good thing; however, very few people are able to consistently make good, reusable, and maintained abstractions. To the point where it becomes a weakness rather than a strength. I don't want a billion wrapper objects that obfuscate my code and make its surface area bigger than it has to be. Often I struggle to understand code more because of how it was separated than because of the complexity of what it actually does.

I liked Rich Hickey's talk "Simple Made Easy" [1] and "Brian Will: Why OOP Is Bad" [2].

[1]: https://www.youtube.com/watch?v=rI8tNMsozo0

[2]: https://www.youtube.com/watch?v=QM1iUe6IofM


I'm not so sure about that. I think a complete rewrite is easier if you correctly separated concerns and compartmentalized needs. OOP is supposed to enforce/encourage that, but it's fairly easy not to treat your objects as black boxes with APIs, and then you've put constraints on replacing a component. Procedural code doesn't necessarily tout its ability to encourage that, but well-planned and implemented functions can give you the same benefit.

In the end, it's all up to the programmer.


Absolutely! Thanks for standing up for rigor here.

(The actual argument for functional programming is its adoption by elite programmers like Standard Chartered's Strats team, Facebook's anti-spam group, and Jane Street as a whole.)


On the one hand, I have definitely had this experience - I rewrote a library three times, with large gains in performance, readability, reliability, etc, each time (but mostly on the third time, which was from scratch).

That said, isn't it also fair to say that with a better understanding of the requirements and problem, you may determine that a different language / paradigm is a better choice, in the same way you may decide a different class division is better?


What about skill too? Assuming the same programmer(s) worked on the same projects, they would be more skilled when coming to do the second project. Experience.


So what was the result of your research? I am no fan of OOP style, but I believe it has advantages over procedural style, and I also believe that functional style has advantages over OOP style.


Do you have anything published? I'd be very interested in reading.


Sorry, this was course work for a graduate class.





