Stop writing good code; Start writing good software. (alexobenauer.com)
93 points by alexobenauer on Nov 19, 2011 | 79 comments

Of course this is a false dichotomy, but one I encounter quite a lot. It usually comes from younger inexperienced coders, or procedurally-minded coders who never took the time to understand OOP.

The false assumption is that "good code" takes longer to write than "good software." In reality "good code" only takes longer if you don't know what you're doing. If you haven't internalized good OOP, and you haven't applied good OOP principles enough to be efficient and judicious in your application of those principles, then yes, you'll probably do more harm than good, and you'll take longer to get there.

If that's the experience level your team has, then you're faced with two bad options: 1) write a lot of duplicated, procedural-style code that you'll despise in 3 months and be begging the Software Gods for 6 weeks of clear time just to clean that crap up; or 2) attempt to design some nice DRY-ed up, SRP classes with all the right GOF patterns applied, more than likely get it wrong and really end up in the same place as 1).

But if you understand and have experience (there's the rub) with good design (see: POEAA, GOF, Clean Code, Effective Java, etc.) then there's no choice to make, because it's much faster to write clean, well-architected code. It's much faster now, and it will be much faster later.

Of course there will always be that guy on the team who doesn't think in OOP, still writes 600-line methods, and glazes over if there are interfaces or abstractions involved. That guy is just as bad as Hey Look At My Handy Dandy Patterns Guy.

The solution to over-architecture is not bad architecture. It's understanding architecture and developing experience with it. While I have encountered OP's situation often, even more often I've encountered the consequences of haphazard design for those very same projects that "shipped fast": two guys in a back room refactoring for six weeks so that -- please please dear God -- we can get our bug rates down and start shipping features "like we used to."

As a younger programmer, I've come to the conclusion that good code shares good interfaces, decoupled relationships, appropriate abstractions, and clear responsibilities regardless of methodology or paradigm used, so that your code is easier to implement, change, and maintain. After all, the end result is less complexity, and you can stub out some interfaces and get coding fairly quickly.

However, if you lack experience with certain kinds of problems, the initial investment of designing good code requires quite a bit of thought and care to do right, otherwise there's the risk of introducing lots of unnecessary complexity by adding lots of unnecessary classes and relationships.

And that's exactly the problem Alex is addressing in his blog post, because his fellow programmers don't share those same principles of good code. As a result, they get caught up in making large, cluttered interfaces with lots of interwoven objects that have shared responsibilities, and then they call that good object-oriented code (because there's lots of objects, right?). They might even apply as many acronym-riddled patterns as they can, because that's what some really smart guys who wrote some books suggested, and they want to do things the right way.

In other words, there's a cargo cult of OOP, and it loves complexity.

In contrast to that, it sounds like what Alex does when he comes across something where the necessary architecture isn't very clear is essentially what I do: carefully write code from the bottom up and refactor it as I go, if at all. That way, the relationships inherent in the code become clear, and you can create the minimal number of classes necessary to decouple those relationships.

And so what he's suggesting is that other programmers do the same: don't waste time trying to develop a solid architecture from the top down when it isn't immediately obvious; start working from the bottom up instead.

Of course, in ten years he'll probably be writing most of his software from the top down, finally capitalizing on all those years of hard-earned experience writing code from the bottom up, but that'll be then, not now.

Experience with OOP is certainly essential, but mostly for the reason that the OOP language you're using sucks. Most of the "handy dandy patterns" that have sprung up are there to recover properties that should have come for free, but didn't. As a result, code isn't objectively good or bad -- you must consider how it will be used. All of the boilerplate crap you have to write has a flat cost and a benefit proportional to how things scale in certain ways. The author of this piece is advocating "don't write any boilerplate crap at all", which we agree isn't the right answer. But experience or no, you'll get the design wrong too, because you can't predict how your requirements will change. In summary, repeating my earlier point, your OOP language sucks.

I think over time we've seen that this is a good argument, as far as it goes.

The old c2 discussion page for this is still good (and particularly relevant here at HN because of pg's contribution):


Also others have weighed in:


Let's take C#'s advancements over Java as an example. The LINQ features are clearly imported from the functional and/or dynamic world as replacements for iterator patterns. C#'s closures make Strategy and Command patterns basically irrelevant. Events and Delegates replace wiring up an Observer pattern.
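
The same collapse happens in any language with first-class functions. A minimal Python sketch of a closure standing in for the Strategy pattern (the names here are illustrative, not from the thread):

```python
# Classic Strategy needs a one-method interface plus one class per
# algorithm. With first-class functions, each "strategy" is just a
# callable you pass in.

def checkout(price, pricing_strategy):
    """The context: delegates the varying step to whatever callable it's given."""
    return pricing_strategy(price)

# Each ConcreteStrategy class collapses into a plain function.
def holiday_discount(price):
    return price * 0.8

def no_discount(price):
    return price

print(checkout(100.0, holiday_discount))  # 80.0
print(checkout(100.0, no_discount))       # 100.0
```

The Command pattern degenerates the same way: the "command object" is just a closure capturing its arguments.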

A meta-concern, if you will, is that general OOP principles don't apply in the same way in dynamic environments the way they do in statically typed environments. When I'm in Ruby I'm not nearly as concerned with coupling and some of the larger architectural concerns the way I am in Java/.NET. Because the language is dynamic, to some degree everything is an interface, and there's no need for complicated frameworks like IOC, etc.

There's also no need for all the overhead of learning and managing generics/parameterized types. All those angle brackets and abstractions just to support what amounts to a more robust compile-time unit test. Makes me tired just thinkin about it.

Java is Java and I don't know if I would say it "sucks." I do know between it and Ruby/Rails which one makes me about four times more productive.

Have you found your productivity level stays the same as a project matures? My experience has been Ruby or Python allow me to ramp up quickly, which is great, because quick wins help boost morale early on. But after 6 months or so, I found productivity roughly evened out on several projects. Right now I'm fighting with a lot of things in Ruby & Rails that I know would be trivial in Java and on the JVM. FWIW, I'm trying to cut over to JRuby to see if I can somehow benefit from the best of both worlds.

I'd have to say yes, once I got comfortable with good Ruby practices I found it was as easy if not easier to keep the code clean as projects mature.

Some of the code-browsing and refactoring tools aren't as smooth, but they're also not as necessary.

I suspect you're dealing with concurrency issues? In that case, yes the JVM is brilliant at that and JRuby might be a great option for you. Ruby concurrency is coming along though. 1.9 is no longer green threads, and Rubinius has gotten rid of the GIL altogether.

Most of the issues I've run into are environment related. I can make 1.9.2 segfault with ease, but there's mounting pressure against supporting REE. Libraries seem to rarely support backwards-compatibility, so I spend an inordinate amount of time backporting fixes or trying to upgrade a large dependency graph. And as you noted, concurrency is problematic.

So, when looking at productivity, I'm not necessarily looking at the language itself, but the entire environment I need to work in. I may just be doing something grossly wrong, which is why I inquired.

If it helps frame things at all, my primary project is coming up on 2 years old and started as a Rails 2.3 app that recently had a very painful upgrade to 3.1 (took on the order of 75 hours). And I had a couple years experience prior to that doing Ruby & Rails stuff.

Ah, gotcha. Yeah, config issues can be painful in many environments, and Rails has its own brand of frustrations, particularly when upgrades are concerned.

This is just a shot in the dark, but if you're on REE/Passenger and getting thread/segfault issues, you might try Apache's Passenger config with the conservative spawn method. I've had similar issues in the past resolved by simply not using REE's shared memory.


Thanks. I had to do that recently because of issues with the prepared statement pool in 3.1 + Passenger. But I can segfault in 1.9.2 just by loading a console or running tests. It doesn't instill a grand sense of stability.

Thanks for the feedback though. I find it helpful to do periodic sanity checks.

But I'll also say that crappy OOP (CROOP?) is a language-independent concern. Those guys who I talked about taking six weeks to refactor? That actually happened and they were on a Rails project.

One problem with OOP is that it's highly dependent on getting it right. If your design is correct, that's great, but otherwise you end up in increasingly hot water. The problem I have with OOP for new projects is that it's hard to see the correct design until you have enough mileage.

Bad design is usually inexperienced design. 99% of the design problems you'll encounter have been encountered before and are probably cataloged. If you have doubts, read up, and bring in other experienced coders.

In fact, if you don't have doubts, you're probably in trouble and need to bring in other coders. If you're not using UML or at least sketching class relationships out in some visual form, you're likely to get it wrong.

Good OOP is hard, and requires experience and a lot of reading and concentration to get right. And even then, it usually requires a good bit of collaboration, even for very experienced architects. But once a good clear design has been identified, it's much faster to code and much, much easier to maintain.

> 99% of the design problems you'll encounter have been encountered before and are probably cataloged.

Bonus: It's already been done!


I heard that when they wrote the book they basically mailed a whole bunch of people in the field to find out what patterns people were using, and they weren't able to find more than 23. After getting to 23, every other pattern they heard about was just an iteration of one of the ones they already had.

Full Disclosure: I haven't actually read the book yet. But I'm planning to...

Have you read this recently?

It impressed me 4 years ago; now I honestly think it's a relic of a bygone era. Language advances (different API design, dynamic programming, anonymous functions) have gotten rid of a lot of the problems you actually had to use these shitty patterns for.

UML also sucks and is dead, again, not sure why anyone would bother with it.

When is the last time you saw a UML article or GoF article on HN?

Wake up, they're both dead concepts. I'm not even sure why patterns have died, they just have. Probably because people just program that way now anyway. Yes, they were useful, but they're not needed any more. People don't write code like that any more because they don't really have to.

Maybe it's that patterns are so commonplace we take for granted that someone had to name them.

I guarantee you've used Proxy, Observer, Factory, Abstract Factory, Facade, Bridge or some approximation of one of these if you've coded more than 100 lines of Java in the last year.

If I say "ActiveRecord" or "DataMapper" you probably think Ruby on Rails, not Chapter 10 of Patterns of Enterprise Application Architecture. If I say "Factory" you're probably not thinking Chapter 3 of GOF. When you think about node.js or EventMachine, do you think about the definition of the Reactor Pattern in POSA Volume 2?

Patterns aren't dead. They won.
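
Observer is a good example of a pattern that's been absorbed into everyday code: it's just callback registration now. A rough Python sketch (class and method names are mine, purely illustrative):

```python
# Observer without the ceremony: subjects keep a list of callables,
# and "observers" are whatever functions you hand them.

class EventSource:
    def __init__(self):
        self._listeners = []

    def subscribe(self, listener):
        # Each listener is just a callable -- no Observer interface needed.
        self._listeners.append(listener)

    def emit(self, event):
        # Notify every registered listener, in subscription order.
        for listener in self._listeners:
            listener(event)

seen = []
source = EventSource()
source.subscribe(seen.append)   # any callable works as an observer
source.emit("saved")
print(seen)  # ['saved']
```

Nobody writing this thinks "I'm applying GoF Observer", which is sort of the point.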

There is a fundamental problem with Design Patterns: the knowledge and effort required to apply them appropriately vastly exceeds the knowledge necessary to effortlessly re-create them on the spot.

> if you've coded more than 100 lines of Java in the last year.

That's just it. If you're a regular here, you're less likely than the average programmer to have done that, and it's probably not really the code you want to talk about if you have. Java isn't especially popular for personal projects, startups or most other things people want to talk about or show off here.

Given some combination of first-class functions, various metaprogramming capabilities, dynamic typing, and paradigms other than class-based OO, many of these patterns disappear.

Sure, and a lot of them get codified and built in to HN-friendly frameworks -- again node.js, EventMachine, ActiveRecord, DataMapper...

> If I say "Factory" you're probably not thinking Chapter 3 of GOF

Honest question, what does this pattern accomplish? I'm asking this because I just saw a bunch of Factory-like classes in some PHP code I was trying to port to Python, and for the life of me I couldn't understand why the original programmer had made it so complicated and convoluted, when he could have done it all in 30 lines of code.

And a second question for whoever might have the free time to answer it: does anybody actively use inheritance (or, why not, multiple inheritance) on a daily basis, and at the same time feel like it helps them? (As in: does the size of their code base get significantly smaller? Does the code fit better in one's head? Things like that.)

Factory classes make less sense in a language with first class functions, but Factory functions are handy sometimes. As you can see in this example of a Factory function in Python from the standard library[1], it can be useful to return different subclasses depending on the arguments, but it doesn't have to be heavyweight like in Java.

[1]: http://docs.python.org/library/collections.html#collections....
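
To make that concrete, here's a hedged sketch of a factory function in the same spirit: a plain function that picks which subclass to instantiate from its argument (the Connection classes are made up for illustration, not from the linked docs):

```python
# A lightweight factory function: returns different subclasses
# depending on the argument, with no Factory class in sight.

class Connection:
    scheme = None

class HttpConnection(Connection):
    scheme = "http"

class FileConnection(Connection):
    scheme = "file"

def open_connection(url):
    """Pick the concrete subclass from the URL; callers only see Connection."""
    if url.startswith("http:"):
        return HttpConnection()
    if url.startswith("file:"):
        return FileConnection()
    raise ValueError("unsupported scheme: %s" % url)

conn = open_connection("http://example.com")
print(type(conn).__name__)  # HttpConnection
```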

A Factory is used to create an object of a specific type when the calling code only has a reference to a supertype, or preferably an abstract type, of that object. This reduces coupling and therefore side effects in your code.

For instance:

  Car car = CarFactory.newCar(someLocalContext);
Might return a specific type of car for the given context, but the calling code's coupling is only to the supertype Car, and therefore can operate in the same way on any kind of Car.

Since PHP is dynamically typed, there's not as much reason to use this pattern, although in certain cases it might be the right choice.

The Factories in question might actually be more like Builders -- classes used to hide complex construction processes.

I use inheritance all the time in Ruby and Java these days. With Ruby you get Modules and the Mixin approach, which allows for what is essentially multiple inheritance. I try to keep the level of inheritance close to 1 (I can't remember the last time I went past that) but yes, it's an essential tool in the toolkit.

I'm apparently one of the last coders on earth that still uses UML, but I find it helps me clarify architectures and visualize my code. OOP was meant to be visual in nature -- object relationships are, imho, best understood visually rather than through linear code. My suggestion would be to get used to UML or some hacked derivative thereof and express your dependencies visually, and you may find inheritance starts to make more sense.

Perhaps it's just a difference in styles; I avoid inheritance like the plague. To me, something has to be really, really special to inherit from another class that I've written.

And even then I'll come back to it the next day and see if I can get rid of it or it really does still make sense.

The factory pattern exists to create jobs and products. Whether anyone wants or needs either is yet to be determined.

Sure, not only the GOF, but 5 volumes of POSA, Martin Fowler's POEAA, and reams of digested pattern books like Head First Design Patterns, etc. There are dozens.

If you do not see the correct design, you do not have enough information to start the project anyway. Usually this happens when the business process being modeled is itself a mess.

The trick is to use a prototyping-friendly paradigm, and mentally write it off as a research project.

Nice discussion. I agree whole-heartedly, when it comes to full systems and programs. As in the article, when it comes to a simple plug-in to a system however, rewriting an entirely new version of an API just for the plugin was simply unnecessary.

If your API is not flexible enough and requires you to write ad-hoc code, it is broken and requires refactoring. I know this doesn't line up with "the customer wants it now and doesn't care," but the code doesn't magically become less broken because of it.

API of objects, not public-facing API. The internal API objects had to be re-written in a new language.

Agreed, YAGNI is just as important as SRP. That's good design too. But hopefully if you do eventually need an API, your design was good enough to allow easy interface extraction, etc.

Absolutely; that's a fantastic point.

OOP isn't a panacea for bad design. It is a single design strategy that works well for some problems. We can descend deeply into the OOP way of viewing the world, and then construct really complicated, epicycle[0]-like efforts to model everything under the sun using OOP's ontological system; or we can recognize it as one useful ontology amongst many and use it when it's appropriate. All of the "OOP is the one true way" literature ends up making some good and valid points.

Their main effective point can be summarized sarcastically as

"Here is how to implement Greenspun's 10th Rule in a way that is allegedly more maintainable, and we can say is object oriented. Buy more books, and also we have some modelling tools you can spend time using instead of writing code that does something."

Circles are cool, I agree. Sometimes things are elliptical though, and it's much better to recognize that and use ellipses to model them than to cleave to the circle industry's way of doing business.

In summation, this and most industry gripes (software or otherwise) can best be understood by meditating on this message from beyond the grave from Christopher Latore Wallace Smith[\\]

I really think the author of the article has no clue about what "good code" is. I could start a whole rant about what's wrong with the post, but davesims summarizes it pretty well.

Completely agree... it takes experience to be able to quickly write code that is also good, but it's a worthwhile effort. The hype of trying to start a startup before really mastering the art of writing code leads to buggy products later on.

Thank you!

Thanks for this comment. I'm glad that some people understand OOP. It gets really annoying reading the whining of developers who don't realize that it is they who suck, not the patterns, language, or whatever they are attacking. Ruby programmers tend to do this a lot.

There's good code, which is something that good products build their foundation on, and near perfect code, which can get in the way of building realistic solutions on time.

> Do you really need a full object-oriented API right now? Do you really need to make a dozen interwoven classes, when it’s possible just a hundred or so lines in one class will do fine? Can you do all the same error checking and unit tests in a much smaller code base?

This is not necessarily "good code". That's code you think is good. Excessive or complex code is not good code, and I think the author should redefine his usage to what programmers sometimes perceive as good code.

I was just about to post the same thing. I agree with the underlying thesis that a good shipping product is better than perfect code that never ships, but the author seems to be confusing good code and over-engineering.

Good code does exactly what it needs to, but will be maintainable and won't have to be thrown away when that "fantastic" product ends up getting popular.

I also tend to believe that doing it right doesn't necessarily take much longer. The code I write now is much better than the code I wrote 10 years ago for a number of reasons, but I still get a lot more done, and a lot faster than I did back then.

That's a great distinction. As you saw, the definition of 'good code' for this post was definitely based on the 'good code' remarks I hear from coworkers, where it is often the notion that more objects, bigger APIs, etc. is 'good code' and a 100-line one-class add-on is 'bad code'. Thanks for bringing this up!

> Do you really need a full object-oriented API right now? Do you really need to make a dozen interwoven classes, when it’s possible just a hundred or so lines in one class will do fine? Can you do all the same error checking and unit tests in a much smaller code base?

Do you really need to split your data into small pieces, when you can use global variables and eliminate the need for function arguments and complicated function invocation strategies in a much smaller code base?

While the fight for proper functions has been largely won thanks to unit testing, a lot of people think that the database gives them back the license to code against global state and believe a code-base should be judged by the number of keystrokes needed to type it.

"I would have written a shorter letter, but I did not have the time." - Blaise Pascal

Writing short, well-working code often requires a lot of thinking, redesign, and reworking. That is mostly not the case with longer code. I personally find such short code far more beautiful than generic and expandable object-oriented frameworks. I usually find that making things generic is futile, because later requirements most often turn out to be something entirely different from what you expected. Then the generality just gets in the way. Unfortunately, a lot of people seem to disagree with that.

> Instead of 3,000 lines of code, you have 1,000. Instead of a ton of object dependencies where one change means having to find the references to it in all of your objects, you simply change what needs to be changed, knowing there’s no extra dependencies in your code for the sake of having beautiful code.

I agree with the notion of focusing on the big picture when designing a product, but this article seems to assume that beautiful code means more lines and more dependencies, and that by cutting some corners you greatly decrease both while making your software more robust. That is a direct contradiction. You cut corners to ship faster, test the market, and deliver on deadline, not to make your code easier to work with.

As engineers, we're perfectionists, and each time we design a piece of software we try to make it a little better, easier to read and maintain, more elegant. That is not something that is achieved by cutting corners. 100 lines of code is fine for a prototype, but that's not what great products are built on, and certainly not how you achieve fewer dependencies.

What annoys me about many OOP people is that they think designing class hierarchies is more important than making the code actually do what it needs to; it's like they get bonus points for every new class they introduce. I know sometimes this is the way to go, but most of them do it prematurely, and in the end all they have is more code to refactor.

I once had an algorithm composed of 3 functions with immutable inputs and outputs; it was easy to understand and easy to debug. A coworker took that and rewrote it into a class, which was mutable for no reason. The result: code that was 5 times larger than the original, and way harder to understand and debug.
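
The thread doesn't say what the algorithm actually was, but the shape being described looks something like this Python sketch (the normalization pipeline is a made-up stand-in):

```python
# Three pure functions with immutable inputs and outputs, composed
# into a pipeline. No hidden state to track while debugging, and
# each step is trivially testable in isolation.

def parse(raw):
    # "1,2,3" -> (1.0, 2.0, 3.0); tuples keep the data immutable.
    return tuple(float(x) for x in raw.split(","))

def normalize(values):
    # Scale the values so they sum to 1.0.
    total = sum(values)
    return tuple(v / total for v in values)

def summarize(values):
    # Reduce to a single statistic: the spread.
    return max(values) - min(values)

# The composition is the whole "class".
result = summarize(normalize(parse("1,2,3,4")))
```

Rewriting this as one object with mutable fields adds state transitions to reason about without adding any capability.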

OOP is cool and enterprisish and all that but some people really need to understand that it's not the only way and in many cases it's certainly not the best way.

I think making code as short as possible without obfuscating it is one of the best ways to start out; all the extra layers can be added later on if required.

Writing an API isn't the problem - being a perfectionist about internal design is.

My design process has gone from "write code that does something" to "figure out what data you want to store and how to validate it, then write code".

Once the data is in a defined format, it's much easier to change how the code works without breaking things. That's the way to go about API design - designing callable libraries that work in only one language and have only one implementation is a quick way to cut off a huge swath of developers from interacting with you.
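
As a sketch of that data-first process in Python (the record fields and validation rules are hypothetical, just to show "define and validate the data before writing behavior"):

```python
# Step 1 of the process: pin down the stored format and its invariants.
# Behavior code, in any language, can then target this validated shape.

def validate_user(record):
    """Return the record if it matches the agreed format, else raise."""
    if not isinstance(record, dict):
        raise TypeError("user record must be a dict")
    email = record.get("email")
    if not isinstance(email, str) or "@" not in email:
        raise ValueError("email is required and must contain '@'")
    age = record.get("age")
    if not isinstance(age, int) or age < 0:
        raise ValueError("age must be a non-negative integer")
    return record

# Step 2: only now write code against the defined format.
user = validate_user({"email": "a@example.com", "age": 30})
```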

Reminds me of one of the dev managers at my first "real job" -- this was in the 1990s -- who would say "get the data model right, and the code will almost write itself" and I have found in most cases (certainly for most "business" apps) that is true.

That design process is great, and I'll bet it prevents a lot of early mistakes that would otherwise take too much code to fix, etc.

I've recently started a job at a much larger, older company than anywhere I've worked before. Previously, I worked at companies with around ~5 developers, where most of the code was at the very most 2 years old. Now, I'm working at a company with 90 developers (I think, we keep hiring people) and the oldest bits of the code are as much as 6 years old.

I always have been a fan of writing clean code, and especially of achieving appropriate separation of concerns, but previously it has mostly been for aesthetic reasons. When your codebase is small and new enough that the people working on it are the original authors, it's fairly easy to deal with messy code. Now, I'm really beginning to see the impact crappy code has on an organization as it scales. The code at my company is actually mostly not too bad, and the management is very supportive of refactoring, but it's absolutely clear to me that were the code cleaner, we would get more done, faster.

TL;DR: Writing bad code prevents you from writing good software (eventually).

This is why, as developers, we need to remember the end user and optimize for their experience. I have often found a good exercise is dedicating time to deleting code. I am always surprised on how much this process ends up improving my overall UX.

I absolutely agree. I have too many coworkers in the 'more code, more objects' camp; which is usually bad news for the end user. That's exactly why I wrote the article.

I find the article to be in line with my recent rants at work.

My background is airline and finance enterprise software, where lots of highly skilled people are paid high salaries to do relatively simple and small coding tasks. (That's what they think; in reality most just don't want to take the initiative to go beyond the status quo, but that is another rant altogether.)

More often than not this leads to engineers taking every opportunity to over-engineer a solution to a simple problem so that they're seen as top-notch engineers, when in effect they are taking a simple problem and turning it into a maintenance and support nightmare.

In these situations a simple change such as adding a parameter to a method signature impacts multiple classes and interfaces due to the lasagna-code effect.

In this environment engineers religiously spend additional time planning, designing, and coding their solutions to be adaptable to changes that have almost no chance of happening.

If you spend 4 days creating an IT system that can seamlessly switch between an Active Directory back-end and an RDBMS back-end (or anything unforeseen, for that matter), but the change never happens, then you just wasted 4 days. If instead you waited until the RDBMS support requirement materialized, it would probably still take the same 4 days, but they would not have been wasted.

On the other hand, I am now looking at a system that has been fast-track maintained for multiple years, to the point that a simple change now takes multiple days of puzzling over mysterious side effects. I can't even begin refactoring it; I don't know where to start. It's a tightly interwoven ball of 'interesting' decisions, patches, changes, and obsolete requirements. The fact that none of the original developers still work here is an additional bonus. The only plausible solution seems to be to collect the current business needs and rewrite the damn thing.

On the other hand, it's an opportunity to rewrite the custom system into a product. Clients have shown interest in having their own copy multiple times, but the cost of implementing that horrorshow on their systems has always been a prohibiting issue.


It sounds like most custom IT business solutions. This brings to light something I have been pondering for a while now. Basically, there are two types of culture in development teams.

Culture A treats the software as its business and respects it as much as it does its customers, employees, and partners. In an "A" department, developers are empowered to take ownership of the code. This means they feel obliged to refactor code and improve comments, documentation, and tests as part of whatever change they are delivering.

It is understood that bugs can come from this but that the overall benefit of not ending up with an unmaintainable system is worth the cost of an accidental bug from time-to-time.

Culture B treats the software as a necessary evil, or at best a commodity. Developers are instructed to make the fewest changes needed to meet the changing business requirements. These departments are almost always in crisis mode, as the inflexibility of their bubblegum patchwork has put them into a reactionary relationship with the team driving the requirements.

In "B" departments you often see high turnover by the best engineers since they feel something is wrong with the departmental management strategy.

The high turnover in turn leads to less knowledge about the most complex components in the system and even more reluctance from the management and engineers to risk refactoring something that no one completely understands.

I've noticed the "B" shops are always looking for a new silver bullet: new servers so the integration tests can run in under 48 hours (since the code is not unit-testable), or a change of frameworks every two years in hopes it will lead to a more flexible system.

Unfortunately, in my experience, the "B" companies tend to pay the highest salaries.

Touching multiple files for micro-changes, eh?

Don't knock OOP because your colleague doesn't understand encapsulation or basic principles like SRP (the single responsibility principle).

I might be going against this site's motto, but...

Just my IMHO: if a customer's business processes cannot be described by clean code, the business process itself is messed up.

And if the client wants to ship it now and doesn't care whether it's right, the process is broken.

I think this comes from the world of proprietary software, where agencies write messy code to fit exactly the current business practices, then continue to "iterate" on this mess, patching in ad-hoc requirement after requirement from executives. This leads to huge overhead in the software. The only reason we don't see all this disgusting shit is that it's proprietary.

This approach is good for the agency, and sometimes for the client. But not for all of us. I am talking about banking/law/health/etc. systems built with exactly the ideas represented in this article.

In the open source world, such an approach simply will not work. Nobody wants to touch junk spaghetti code.

So... I believe this approach is flawed in the long run. And we as developers should be more than stupid machines producing code. We need to think ahead, and if something doesn't make sense, say so and push for it if needed.

just some random rant after seeing boss pushing project which kinda fits client's business schema, but then it starts stagnate, 'cos it require more changes... And these changes lead to even more mess..

The following kinds of problem lend themselves well to simple solutions that do a 1:1 mapping of the solution to the problem being solved (aka the 100-line script):

a) Problems that are simple

b) Problems that are well defined, static, and not expected to change over time

c) Problems about which the implementor knows so little that a more elaborate solution would be premature

d) Problems that appear complex but lend themselves to a simplistic solution (usually underpinned by some beautiful mathematical truth)

e) Problems that don't require long-lasting solutions, e.g. ones where you are trying to cash in on an ongoing meme that could run out at any moment

Problems that don't fit into the above categories require more thought to be put into identifying the underlying abstractions. For these, great software solutions emerge from: a) understanding the problem being solved, b) identifying the abstractions which underpin the problem, and c) identifying how the problem is expected to evolve over time and verifying that the abstractions hold up for the expected changes.

I think you are drawing too much from your experience of a single problem and code from a single programmer.

I think the problem of mess arises when you have a team whose members think so differently that none of them like how the others code.

Not to mention, there is more than OOP in the world of programming. E.g., Hacker News is not written in an OOP environment, and it is a piece of good software (IMHO).

I always remember this saying:

  // When I wrote this, only God and I understood what I was doing
  // Now, God only knows

It supports the idea that customers don't care about you.

They don't care what you code in. They don't care about your frameworks. They don't care how you made your life easier to build something for them.

Good code is possible in almost every language. Every language has its pros and cons.

I would argue developers code less in languages nowadays for the web. They code in frameworks, so it's already one step removed. Coding in frameworks (Rails, Django, Pylons, _______) is meant to do one thing: build great things faster.

Often we get so tangled up in focusing on the frameworks and tools that we forget the whole point: the users. Building software is not about building a mausoleum dedicated to ourselves.

It's about making the lives of others easier.

Are you as fanatical about making customers lives easier as your own coding?

"Are you as fanatical about making customers lives easier as your own coding?" Good quote.

Good code is succinct. OOP is not a prerequisite for "good code"; it depends on what you are using it for. It takes longer to write less code than to write more code. Not taking the time to write good code will eventually result in a very complex program that is hard to modify and hard to fix bugs in.

You write good code to save time down the line. Sure, bad code is fine for one-use scripts, but if you are building a product that you know will be around for a long time (i.e not a prototype, you've done that already), it pays to write good code; Good code will make it easier to fix bugs, easier to add features, and lead to better software.

For large and complicated applications, I do not understand what a good program with not-so-good code would look like; I personally find it difficult to imagine.

The main problem I have with this article is that its intent rests on a distinction between good code and good software. One purpose of design and good code in large applications (in order for them to be good software) is to cope with and understand their complexity before it is too late, and not to underestimate Murphy, who is waiting for us. And I don't know how that is possible (good design and maintainable code that can be refactored easily) without good code.

The best code is the code you don't need to write. Build only what you need (YAGNI) and introduce abstraction to avoid duplication (DRY). My rule is don't ever do the same thing three times (though I sometimes enforce it at two). Conversely, if you find you no longer need something (no matter how clever it was) delete it. It will only confuse you or whoever else is working on the system later.

If it truly is so clever it must be saved, throw it on your github account and reference it another day when you may need it again. Regardless, get everything that isn't needed out of the code base.
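The rule of three above can be sketched concretely (Python, hypothetical billing helpers; just an illustration):

```python
# Before: the same tax-and-round logic pasted into three call sites.
def invoice_total(items):
    return round(sum(items) * 1.08, 2)

def quote_total(items):
    return round(sum(items) * 1.08, 2)

def refund_total(items):
    return round(sum(items) * 1.08, 2)

# After the third duplicate appears, extract the shared rule once (DRY):
TAX_RATE = 1.08

def taxed_total(items):
    """The single place the tax rule lives; change it once, not thrice."""
    return round(sum(items) * TAX_RATE, 2)

invoice_total = quote_total = refund_total = taxed_total
```

The point is timing: the abstraction is introduced only after duplication has proven the need for it, not speculatively up front.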

> If you can, you will build a much more maintainable piece of software. Instead of 3,000 lines of code, you have 1,000. Instead of a ton of object dependencies where one change means having to find the references to it in all of your objects, you simply change what needs to be changed, knowing there’s no extra dependencies in your code for the sake of having beautiful code.

What? So 3000 lines of code to do the same thing would be "good code", and 1000 lines of code would be "good software"?

My apologies; and thanks for bringing this up! Someone above brought up the same thing.

I meant 'good code' as seen by those that write 3000 lines for an API where only a simple 1000 was necessary. I work with plenty of people that see it this way, which is why I phrased it like so.

Elaborate code? Is he talking about encapsulating things like database and web service implementation details? Is he talking about stuff like IoC?

What happens when you need to fix a bug or add a new feature? I know he says "just change what needs to be changed", but sooner or later you will run into the situation where a shared resource, like a database table, needs to be changed....

Ah, forget it: he is talking about incurring technical debt to ship on time. Debt is debt; you will pay the price sooner or later.

I think most of us agree that there is a Goldilocks zone of number of classes, given a certain number of lines of code. Alex points out, quite rightly in my opinion, that many of us developers have a bias that sometimes takes us out of that zone.

My attempt at drawing a diagram of the Goldilocks zone: https://plus.google.com/u/0/101149790069455088279/posts/CoB9...

Linkbait garbage... you can write good code without wasting a bunch of time. Having worked for companies whose products were started the way this guy describes, I can attest that no one should ever build software that way. At one point he seems to be suggesting writing procedural code rather than "bothering" to make it object-oriented from the beginning. A good developer can walk the line between wasting too much time initially and writing clean, object-oriented code that will be extendable later on. Inversion of control and programming against interfaces can also help with this; cutting corners from the beginning is just going to cause headaches later.

Why can't it be both? The claim in the article is that there are deadlines, "you have to ship", etc.

It seems to me like you'd skip the whole deadline-shipnow-ASAP nonsense and just write awesome software with awesome code, even if it takes longer.

Makes me think of "Stop writing good software; Start writing good solutions."

Just another sign people are realizing design is eating the world.

Author's definition of good code is wrong.

Didn't you see the comment I put above the article? I should have defined it better; I work with tons of people who view that as good code. More objects, more code, more everything. Then it's good code. It's a super-OOP methodology that I think many of them couldn't get out of. So if they look at a class of mine that doesn't make use of a ton of objects, that's 'bad code'. The definition of 'good code' in this article strictly pertains to those people who view it that way. By the title, I meant if you think tons of objects and extra stuff is good code, then stop writing good code, and focus on the software, the end result for the user.


> Do you really need to make a dozen interwoven classes, when it’s possible just a hundred or so lines in one class will do fine?

The 100-line simple solution is better code. Complex solutions with clever object topologies are not "good code".

> Instead of a ton of object dependencies where one change means having to find the references to it in all of your objects, you simply change what needs to be changed, knowing there’s no extra dependencies in your code for the sake of having beautiful code.

Dependencies are innately ugly, not "beautiful". Again, the code the OP describes is not "good code".

I agree with what the OP is saying. I just think the definition of "good code" he's using is bizarre, because I can't think of anyone who would call endlessly complex code "good". I think it's better to say, "stop writing clever code; start writing useful and maintainable software".

Thanks for your comment! I should have defined it better; I work with tons of people who view that as good code. More objects, more code, more everything. Then it's good code. It's a super-OOP methodology that I think many of them couldn't get out of. So if they look at a class of mine that doesn't make use of a ton of objects, that's 'bad code'. The definition of 'good code' in this article strictly pertains to those people who view it that way. By the title, I meant if you think tons of objects and extra stuff is good code, then stop writing good code, and focus on the software, the end result for the user.

While I appreciate the intent of your article, I think the title is regrettable and obscures the real distinction you were going for. There should never be a distinction between "good code" and "good software."

The Software Craftsmanship movement has been having this discussion for a while, i.e., assessing the costs of bad code and making those costs visible to the stakeholders. You might find it worthwhile to engage that conversation.


I think the issue is many engineers value things like tests, short functions, interfaces, dependency injection, pure functions, comments, DRY, etc. To them, these things are qualities of "good code", and they spend all their time on this stuff instead of the bigger picture objectives. When you get out of that mindset, then I think "good code" can be equal to "good software".

It sounds like you're talking about over-engineered code, not "good code". When people solve problems that aren't there, or prematurely optimize, or make solutions needlessly general at the expense of readability, they're making the code worse.

Code is like regulation. Everyone agrees that we need laws and regulations, and that some complexity is needed to get it right, but a 7500-page code is strictly worse than a 25-page code that achieves the same end.

OOP is an interesting paradigm. The original vision of it (seen in Smalltalk, Ruby, and with a completely different approach, Scala) was a middle ground between imperative and functional programming that can capitalize on the benefits of each, and that looked a lot like the message-passing paradigm we see in Erlang. Unfortunately, what's now called OOP is absurd scaffolding and misuse of "design patterns" that makes solutions much more complex than the problems they exist to solve.

A major problem occurs when OOP is expected to fill needs better-suited by functional approaches, as in C++ and Java. It's not that OOP is evil. It's that when you start seeing FooFactoryFactory classes, you should really have been using higher-order functions a long time ago.
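The FooFactoryFactory point can be made concrete; here is a deliberately overwrought factory chain next to the higher-order function that replaces it (Python, hypothetical parser example):

```python
# Heavyweight OOP: classes whose only job is to build other builders.
class ParserFactory:
    def __init__(self, delimiter):
        self.delimiter = delimiter

    def create(self):
        return lambda line: line.split(self.delimiter)

class ParserFactoryFactory:
    def create(self, delimiter):
        return ParserFactory(delimiter)

csv_parser = ParserFactoryFactory().create(",").create()

# Higher-order function: the same configurability in three lines.
def make_parser(delimiter):
    return lambda line: line.split(delimiter)

csv_parser2 = make_parser(",")

assert csv_parser("a,b") == csv_parser2("a,b") == ["a", "b"]
```

Both versions are "configurable parsing"; one of them just wraps a closure in two layers of class ceremony.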

Agreed. Good OOP is all about creating decoupled components that can be understood independent of the rest of the code. That's why it's so hard.

Agreed. Good code conforms to KISS. And "100 classes" is definitely not short and simple.

You're not paid to code, you're paid to ship product (create value), but one has to balance development speed with maintainability.

Too many programmers forget the former, and too many managers forget the latter.
