The false assumption is that "good code" takes longer to write than "good software." In reality "good code" only takes longer if you don't know what you're doing. If you haven't internalized good OOP, and you haven't applied good OOP principles enough to be efficient and judicious in your application of those principles, then yes, you'll probably do more harm than good, and you'll take longer to get there.
If that's the experience level your team has, then you're faced with two bad options: 1) write a lot of duplicated, procedural-style code that you'll despise in 3 months and be begging the Software Gods for 6 weeks of clear time just to clean that crap up; or 2) attempt to design some nice DRY-ed up, SRP classes with all the right GOF patterns applied, more than likely get it wrong and really end up in the same place as 1).
But if you understand and have experience (there's the rub) with good design (see: POEAA, GOF, Clean Code, Effective Java, etc.) then there's no choice to make, because it's much faster to write clean, well-architected code. It's much faster now, and it will be much faster later.
Of course there will always be that guy on the team who doesn't think in OOP, still writes 600-line methods, and glazes over if there are interfaces or abstractions involved. That guy is just as bad as Hey Look At My Handy Dandy Patterns Guy.
The solution to over-architecture is not bad architecture. It's understanding architecture and developing experience with it. While I have encountered OP's situation often, even more often I've encountered the consequences of haphazard design for those very same projects that "shipped fast": two guys in a back room refactoring for six weeks so that -- please please dear God -- we can get our bug rates down and start shipping features "like we used to."
However, if you lack experience with certain kinds of problems, the initial investment of designing good code requires quite a bit of thought and care to do right, otherwise there's the risk of introducing lots of unnecessary complexity by adding lots of unnecessary classes and relationships.
And that's exactly the problem Alex is addressing in his blog post, because his fellow programmers don't share those same principles of good code. As a result, they get caught up in making large, cluttered interfaces with lots of interwoven objects that have shared responsibilities, and then they call that good Object Oriented code (because there's lots of objects, right?). They might even apply as many acronym riddled patterns as they can, because that's the way some really smart guys who wrote some books suggested, and they want to do things the right way.
In other words, there's a cargo cult of OOP, and it loves complexity.
In contrast to that, it sounds like what Alex does when he comes across something where the necessary architecture isn't very clear is essentially what I do: carefully write code from the bottom up and refactor it as I go, if at all. That way, the relationships inherent in the code become clear, and you can create the minimal number of classes and abstractions necessary to decouple them.
And so what he's suggesting is that other programmers do the same: don't waste time trying to develop a solid architecture from the top down when it isn't immediately obvious; start working from the bottom up instead.
Of course, in ten years, he'll probably be writing most of his software from the top down, finally capitalizing on all those years of hard-earned experience writing code from the bottom up, but that'll be then, not now.
The old c2 discussion page for this is still good (and particularly relevant here at HN because of pg's contribution):
Also others have weighed in:
Let's take C#'s advancements over Java as an example. The LINQ features are clearly imported from the functional and/or dynamic world as replacements for iterator patterns. C#'s closures make Strategy and Command patterns basically irrelevant. Events and Delegates replace wiring up an Observer pattern.
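The same collapse happens in post-Java-8 Java. Here's a minimal sketch of the Strategy point (hypothetical `SortStrategy`/`sorted` names, not from any particular codebase): the class-per-algorithm ceremony reduces to passing a function value.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class StrategyDemo {
    // Classic GOF Strategy: a one-method interface plus a class per algorithm.
    interface SortStrategy { int compare(String a, String b); }
    static class ByLength implements SortStrategy {
        public int compare(String a, String b) { return a.length() - b.length(); }
    }

    // With lambdas, the "strategy" is just a function value -- no class ceremony.
    static List<String> sorted(List<String> in, Comparator<String> strategy) {
        List<String> out = new ArrayList<>(in);
        out.sort(strategy);
        return out;
    }

    public static void main(String[] args) {
        List<String> words = List.of("pear", "fig", "apple");
        // The entire Strategy "pattern" collapses to one expression:
        System.out.println(sorted(words, Comparator.comparingInt(String::length)));
    }
}
```

Everything the pattern's UML diagram promises (swappable algorithms behind a stable interface) is still there; it's just carried by the method reference instead of a class hierarchy.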
A meta-concern, if you will, is that general OOP principles don't apply in dynamic environments the same way they do in statically typed environments. When I'm in Ruby I'm not nearly as concerned with coupling and some of the larger architectural concerns the way I am in Java/.NET. Because the language is dynamic, to some degree everything is an interface, and there's no need for complicated machinery like IoC containers, etc.
There's also no need for all the overhead of learning and managing generics/parameterized types. All those angle brackets and abstractions just to support what amounts to a more robust compile-time unit test. Makes me tired just thinkin about it.
Java is Java and I don't know if I would say it "sucks." I do know between it and Ruby/Rails which one makes me about four times more productive.
Some of the code-browsing and refactoring tools aren't as smooth, but they're also not as necessary.
I suspect you're dealing with concurrency issues? In that case, yes the JVM is brilliant at that and JRuby might be a great option for you. Ruby concurrency is coming along though. 1.9 is no longer green threads, and Rubinius has gotten rid of the GIL altogether.
So, when looking at productivity, I'm not necessarily looking at the language itself, but the entire environment I need to work in. I may just be doing something grossly wrong, which is why I inquired.
If it helps frame things at all, my primary project is coming up on 2 years old and started as a Rails 2.3 app that recently had a very painful upgrade to 3.1 (took on the order of 75 hours). And I had a couple years experience prior to that doing Ruby & Rails stuff.
This is just a shot in the dark, but if you're on REE/Passenger and getting thread/segfault issues, you might try Apache's Passenger config with the conservative spawn method. I've had similar issues in the past resolved by simply not using REE's shared memory.
Thanks for the feedback though. I find it helpful to do periodic sanity checks.
In fact, if you don't have doubts, you're probably in trouble and need to bring in other coders. If you're not using UML or at least sketching class relationships out in some visual form, you're likely to get it wrong.
Good OOP is hard, and requires experience and a lot of reading and concentration to get right. And even then, it usually requires a good bit of collaboration, even for very experienced architects. But once a good clear design has been identified, it's much faster to code and much, much easier to maintain.
Bonus: It's already been done!
I heard that when they wrote the book they basically mailed a whole bunch of people in the field to find out all the patterns people were using, and they weren't able to find more than 23. After getting to 23, every other pattern they heard about was just a variation of one of the ones they already had.
Full Disclosure: I haven't actually read the book yet. But I'm planning to...
It impressed me 4 years ago; now I honestly think it's a relic of a bygone era. Language advances, different API design, dynamic programming, and anonymous functions have gotten rid of a lot of the problems that you actually had to use these shitty patterns for.
UML also sucks and is dead, again, not sure why anyone would bother with it.
When is the last time you saw a UML article or GoF article on HN?
Wake up, they're both dead concepts. I'm not even sure why patterns died; they just have. Probably because people just program that way now anyway. Yes, they were useful, but they're not needed any more. People don't write code like that any more because they don't really have to.
I guarantee you've used Proxy, Observer, Factory, Abstract Factory, Facade, Bridge or some approximation of one of these if you've coded more than 100 lines of Java in the last year.
If I say "ActiveRecord" or "DataMapper" you probably think Ruby on Rails, not Chapter 10 of Patterns of Enterprise Application Architecture. If I say "Factory" you're probably not thinking Chapter 3 of GOF. When you think about node.js or EventMachine, do you think about the definition of the Reactor Pattern in POSA Volume 2?
Patterns aren't dead. They won.
That's just it. If you're a regular here, you're less likely than the average programmer to have done that, and it's probably not really the code you want to talk about if you have. Java isn't especially popular for personal projects, startups or most other things people want to talk about or show off here.
Given some combination of first-class functions, various metaprogramming capabilities, dynamic typing, and paradigms other than class-based OO, many of these patterns disappear.
Honest question, what does this pattern accomplish? I'm asking this because I just saw a bunch of Factory-like classes in some PHP code I was trying to port to Python, and for the life of me I couldn't understand why the original programmer had made it so complicated and convoluted, when he could have done it all in 30 lines of code.
And a second question for whoever might have the free time to answer it: does anybody actively use inheritance (or, why not, multiple inheritance) on a daily basis, and at the same time feel like it helps them? (As in: does the size of their code base get significantly smaller? Does the code fit better in one's head? Things like that.)
Car car = CarFactory.newCar(someLocalContext);
Since PHP is dynamically typed there's not as much reason to use this pattern, although in certain cases it might be the right choice.
The Factories in question might actually be more like Builders -- classes used to hide complex construction processes.
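To make the question above concrete, here is a minimal sketch of what a plain Factory buys you in a statically typed language (hypothetical `Car`/`newCar` names, loosely following the `CarFactory` snippet): callers depend only on an interface, and the decision about which concrete class to build lives in exactly one place.

```java
public class FactoryDemo {
    interface Car { String describe(); }
    static class ElectricCar implements Car {
        public String describe() { return "electric"; }
    }
    static class GasCar implements Car {
        public String describe() { return "gas"; }
    }

    // The factory centralizes the "which concrete class?" decision,
    // so callers never name ElectricCar or GasCar directly.
    static Car newCar(boolean hasChargingStations) {
        return hasChargingStations ? new ElectricCar() : new GasCar();
    }

    public static void main(String[] args) {
        Car car = newCar(true); // caller codes against Car only
        System.out.println(car.describe());
    }
}
```

If you swap in a new implementation, only the factory changes. In a dynamic language you get much of this for free, since any object with the right methods will do, which is why the ported PHP code probably looked needlessly convoluted.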
I use inheritance all the time in Ruby and Java these days. With Ruby you get Modules and the Mixin approach, which allows for what is essentially multiple inheritance. I try to keep the level of inheritance close to 1 (I can't remember the last time I went past that) but yes, it's an essential tool in the toolkit.
I'm apparently one of the last coders on earth that still uses UML, but I find it helps me clarify architectures and visualize my code. OOP was meant to be visual in nature -- object relationships are, imho, best understood visually rather than through linear code. My suggestion would be to get used to UML or some hacked derivative thereof and express your dependencies visually, and you may find inheritance starts to make more sense.
And even then I'll come back to it the next day and see if I can get rid of it or it really does still make sense.
Their main effective point can be summarized sarcastically as
"Here is how to implement Greenspun's 10th Rule in a way that is allegedly more maintainable, and we can say is object oriented. Buy more books, and also we have some modelling tools you can spend time using instead of writing code that does something."
Circles are cool, I agree. Sometimes things are elliptical though, and it's much better to recognize that and use ellipses to model them than to cleave to the circle industry's way of doing business.
In summation, this and most industry (software or otherwise) gripes can be best understood by meditating on this message from beyond the grave from Christopher Latore Wallace Smith
>Do you really need a full object-oriented API right now? Do you really need to make a dozen interwoven classes, when it’s possible just a hundred or so lines in one class will do fine? Can you do all the same error checking and unit tests in a much smaller code base?
This is not necessarily "good code". That's code you think is good. Excessive or complex code is not good code, and I think the author should redefine his usage to what programmers sometimes perceive as good code.
Good code does exactly what it needs to, but will be maintainable and won't have to be thrown away when that "fantastic" product ends up getting popular.
I also tend to believe that doing it right doesn't necessarily take much longer. The code I write now is much better than the code I wrote 10 years ago for a number of reasons, but I still get a lot more done, and a lot faster than I did back then.
Do you really need to split your data into small pieces, when you can use global variables and have a code-base that eliminates the need for function arguments and complicated function invocation strategies in a much smaller code base?
While the fight for proper functions has been largely won thanks to unit testing, a lot of people think that the database gives them back the license to code against global state and believe a code-base should be judged by the number of keystrokes needed to type it.
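A tiny Java sketch of the difference being argued here (a hypothetical `taxRate` example): the global-state version is shorter to call, but every call site silently depends on shared mutable state, while the explicit version is a pure function you can test in isolation.

```java
public class GlobalState {
    // The "global state" style: any code may read or write this field,
    // so no call to totalWithGlobalTax can be understood in isolation.
    static double taxRate = 0.5;
    static double totalWithGlobalTax(double subtotal) {
        return subtotal * (1 + taxRate);
    }

    // The explicit style: every input is a parameter, so the function
    // is a pure mapping you can unit-test with no setup or teardown.
    static double totalWithTax(double subtotal, double rate) {
        return subtotal * (1 + rate);
    }

    public static void main(String[] args) {
        System.out.println(totalWithTax(100.0, 0.5)); // 150.0
    }
}
```

A database used as a dumping ground for shared mutable state has exactly the shape of `taxRate` here, just bigger and slower to set up in tests.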
Writing short, well-working code often requires a lot of thinking, redesign, and reworking. That is mostly not the case with longer code. I personally find such short code far more beautiful than generic and expandable object-oriented frameworks. I usually find that making things generic is futile, because later requirements most often become something entirely different from what you thought; then the generality just gets in the way. Unfortunately a lot of people seem to disagree with that.
I agree with the notion of focusing on the big picture when designing a product, but this article seems to assume that beautiful code means more lines and more dependencies, and that by cutting some corners you greatly decrease both while making your software more robust. That is a direct contradiction. You cut corners to ship faster, test the market, deliver on deadline, but not to make your code easier to work with.
As engineers, we're perfectionists, and each time we design a piece of software we try to make it a little better, easier to read/maintain, more elegant. That is not something that is achieved by cutting corners. 100 lines of code is fine for a prototype, but that's not what great products are built on, and certainly not how you achieve fewer dependencies.
I once had an algorithm composed of 3 functions with immutable inputs and outputs; it was easy to understand and easy to debug. Then a coworker took it and rewrote it into a class, which was mutable for no reason. The result: code that was 5 times larger than the original, and way harder to understand and debug.
OOP is cool and enterprisish and all that but some people really need to understand that it's not the only way and in many cases it's certainly not the best way.
I think making code as short as possible without obfuscating it is one of the best ways to start out; all the extra layers can be added later on if required.
My design process has gone from "write code that does something" to "figure out what data you want to store and how to validate it, then write code".
Once the data is in a defined format, it's much easier to change how the code works without breaking things. That's the way to go about API design - designing callable libraries that work in only one language and have only one implementation is a quick way to cut off a huge swath of developers from interacting with you.
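As a rough illustration of that data-first workflow (a sketch with a hypothetical `User` record; Java 16+ for record syntax): validation lives with the data definition, so everything downstream can assume the invariants hold.

```java
import java.util.Objects;

public class DataFirst {
    // Step 1: define the data and its validation rules first.
    // The compact constructor rejects invalid states at construction time.
    record User(String email, int age) {
        User {
            Objects.requireNonNull(email);
            if (!email.contains("@")) throw new IllegalArgumentException("bad email");
            if (age < 0) throw new IllegalArgumentException("bad age");
        }
    }

    // Step 2: write the code. Downstream functions never re-check the
    // invariants, because no invalid User can exist.
    static String greeting(User u) {
        return "Hello, " + u.email();
    }

    public static void main(String[] args) {
        System.out.println(greeting(new User("a@example.com", 30)));
    }
}
```

Because the format is pinned down up front, you can rewrite `greeting` (or port it to another language against the same serialized shape) without breaking anything.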
I always have been a fan of writing clean code, and especially of achieving appropriate separation of concerns, but previously it has mostly been for aesthetic reasons. When your codebase is small and new enough that the people working on it are the original authors, it's fairly easy to deal with messy code. Now, I'm really beginning to see the impact crappy code has on an organization as it scales. The code at my company is actually mostly not too bad, and the management is very supportive of refactoring, but it's absolutely clear to me that were the code cleaner, we would get more done, faster.
TL;DR: Writing bad code prevents you from writing good software (eventually).
My background is airline and finance enterprise software, where lots of highly skilled people are paid a high salary to do relatively simple and small coding tasks. (That's what they think, but in reality most just don't want to take the initiative to go beyond the status quo; that is another rant altogether.)
More often than not this leads to engineers taking every opportunity to over-engineer a solution to a simple problem so they'll be seen as top-notch engineers, when in effect they are taking a simple problem and turning it into a maintenance and support nightmare.
In these situations a simple change such as adding a field to a method signature impacts multiple classes and interfaces due to the Lasagna code effect.
In this environment engineers religiously spend additional time planning, designing, and coding their solutions to be adaptable to changes that have almost no chance of happening.
If you spend 4 days creating an IT system that can seamlessly switch between an Active Directory back-end and an RDBMS back-end (or anything unforeseen, for that matter), but the change never happens, then you just wasted 4 days. If instead you waited until the RDBMS support requirement actually materialized, it would probably still take the same 4 days, but they would not have been wasted.
On the other hand, it's an opportunity to rewrite the custom system into a product. Clients have shown interest in having their own copy multiple times, but the cost of implementing that horrorshow on their systems has always been a prohibiting issue.
It sounds like most custom IT business solutions. This brings to light something I have been pondering for a while now.
Basically, there are two types of culture in development teams.
Culture A treats the software as their business and respects it as much as they do their customers, employees, and partners.
In an "A" department, developers are empowered to take ownership of the code. This means they feel obliged to refactor code and improve comments, documentation, and tests as part of whatever change they are delivering. It is understood that bugs can come from this, but that the overall benefit of not ending up with an unmaintainable system is worth the cost of an accidental bug from time to time.
Culture B treats the software as a necessary evil, or at best a commodity. Developers are instructed to make the fewest changes needed to meet the changing business requirements. These departments are almost always in crisis mode, as the inflexibility of their bubblegum patchwork has put them into a reactionary relationship with the team driving the requirements.
In "B" departments you often see high turnover among the best engineers, since they feel something is wrong with the departmental management strategy. The high turnover in turn leads to less knowledge about the most complex components in the system, and even more reluctance from management and engineers to risk refactoring something that no one completely understands.
I've noticed the "B" shops are always looking for a new silver bullet: new servers to get the integration tests running in under 48 hours (since the code is not unit-testable), or changing frameworks every two years in hopes it will lead to a more flexible system.
Unfortunately, in my experience, the "B" companies tend to pay the highest salaries.
Don't knock OOP because your colleague doesn't understand encapsulation or basic principles like SRP (the Single Responsibility Principle).
Just my humble opinion: if a customer's business processes cannot be described by clean code, the business process itself is messed up.
And if the client wants to ship it now and doesn't care whether it's right, the process is broken.
I think this comes from the world of proprietary software, where agencies write messy code to fit current business practices exactly, and then continue to "iterate" on that mess, patching in new requirements from executives ad hoc. This leads to huge overhead in software. The only reason we don't see all this disgusting shit is that it's proprietary.
This approach is good for the agency, and sometimes for the client. But not for all of us. I am talking about banking/law/health/etc. systems built with exactly the ideas represented in this article.
In the open source world, such an approach simply will not work. Nobody will want to touch junk spaghetti code.
So... I believe this approach is flawed in the long run. And we as developers should be more than stupid machines producing code. We need to think ahead, and if something doesn't make sense, say so and push back if needed.
Just some random ranting after seeing my boss push a project that kinda fits the client's business schema, but then it starts to stagnate because it requires more changes... and these changes lead to even more mess.
Problems that don't fit into the above categories require more thought to be put into identifying the underlying abstractions, and for these, great software solutions emerge from:
a) Understanding the problem being solved
b) Identifying the abstractions which underpin the problem
c) Identifying how the problem is expected to evolve over time and verifying that the abstractions hold up for expected changes.
I think you are drawing too much from your experience of a single problem and code from a single programmer.
Not to mention, there is more than OOP in the world of programming. E.g., Hacker News is not written in some OOP environment, and it is a piece of good software (IMHO).
I always remember this saying:
// When I wrote this, only God and I understood what I was doing
// Now, God only knows
They don't care what you code in. They don't care about your frameworks. They don't care how you made your life easier to build something for them.
Good code is possible in almost every language. Every language has its pros and cons.
I would argue developers code less in languages nowadays for the web; they code in frameworks, so it's already one step removed. Frameworks (Rails, Django, Pylons, _______) are meant to do one thing: build great things faster.
Often we get so tangled up in focusing on the frameworks and tools that we forget the whole point: the users. Building software is not about building a mausoleum dedicated to ourselves.
It's about making the lives of others easier.
Are you as fanatical about making customers' lives easier as you are about your own coding?
You write good code to save time down the line. Sure, bad code is fine for one-use scripts, but if you are building a product that you know will be around for a long time (i.e., not a prototype; you've done that already), it pays to write good code. Good code will make it easier to fix bugs, easier to add features, and lead to better software.
The main problem I have with this article is its intent, which is based on a distinction between good code and good software. One intention of design and good code for large applications (in order to be good software) is to try to cope with and understand their complexity before it is too late, and not to underestimate Murphy, who is waiting for us. And I don't know how that is possible (good design and maintainable code that can be refactored more easily) without good code.
If it truly is so clever it must be saved, throw it on your github account and reference it another day when you may need it again. Regardless, get everything that isn't needed out of the code base.
What? So 3000 lines of code to do the same thing would be "good code", and 1000 lines of code would be "good software".
I meant 'good code' as seen by those that write 3000 lines for an API where only a simple 1000 was necessary. I work with plenty of people that see it this way, which is why I phrased it like so.
What happens when you need to fix a bug or add a new feature? I know he says "just change what needs to be changed," but sooner or later you will run into the situation where a shared resource, like a database table, needs to be changed....
Ah forget it he is talking about incurring technical debt to ship on time. Debt is debt, you will pay the price sooner or later.
My attempt at drawing a diagram of the Goldilocks zone:
It seems to me like you could skip the whole deadline-ship-now-ASAP nonsense and just write awesome software with awesome code, even if it takes longer.
Do you really need to make a dozen interwoven classes, when it’s possible just a hundred or so lines in one class will do fine?
The 100-line simple solution is better code. Complex solutions with clever object topologies are not "good code".
Instead of a ton of object dependencies where one change means having to find the references to it in all of your objects, you simply change what needs to be changed, knowing there’s no extra dependencies in your code for the sake of having beautiful code.
Dependencies are innately ugly, not "beautiful". Again, the code the OP describes is not "good code".
I agree with what the OP is saying. I just think the definition of "good code" he's using is bizarre, because I can't think of anyone who would call endlessly complex code "good". I think it's better to say, "stop writing clever code; start writing useful and maintainable software".
The Software Craftsmanship movement has been having this discussion for a while, i.e., assessing the costs of bad code and making those costs visible to the stakeholders. You might find it worthwhile to engage that conversation.
Code is like regulation. Everyone agrees that we need laws and regulations, and that some complexity is needed to get it right, but a 7500-page code is strictly worse than a 25-page code that achieves the same end.
OOP is an interesting paradigm. The original vision of it (seen in Smalltalk, Ruby, and with a completely different approach, Scala) was a middle ground between imperative and functional programming that can capitalize on the benefits of each, and that looked a lot like the message-passing paradigm we see in Erlang. Unfortunately, what's now called OOP is absurd scaffolding and misuse of "design patterns" that makes solutions much more complex than the problems they exist to solve.
A major problem occurs when OOP is expected to fill needs better-suited by functional approaches, as in C++ and Java. It's not that OOP is evil. It's that when you start seeing FooFactoryFactory classes, you should really have been using higher-order functions a long time ago.
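A small Java sketch of that last point (hypothetical `Widget` names): once functions are first-class values, the `FooFactoryFactory` shape is literally just a function that returns a function, with no extra classes.

```java
import java.util.function.Function;
import java.util.function.Supplier;

public class HigherOrder {
    interface Widget { String render(); }

    // The FooFactoryFactory shape, expressed with higher-order functions:
    // a function that, given a label, returns a factory for widgets.
    static Function<String, Supplier<Widget>> widgetFactoryFactory() {
        return label -> {
            Widget w = () -> "widget:" + label; // Widget is a functional interface
            return () -> w;                     // the "factory" is just a Supplier
        };
    }

    public static void main(String[] args) {
        // What would otherwise be two classes and an interface is two lambdas.
        Supplier<Widget> buttonFactory = widgetFactoryFactory().apply("button");
        System.out.println(buttonFactory.get().render());
    }
}
```

The pattern's intent survives intact; only the class scaffolding disappears, which is exactly the complaint about C++ and Java forcing functional needs through OOP machinery.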
Too many programmers forget the former, and too many managers forget the latter.