
Stop writing good code; Start writing good software. - alexobenauer
http://blog.alexobenauer.com/stop-writing-good-code-start-writing-good-sof
======
davesims
Of course this is a false dichotomy, but one I encounter quite a lot. It
usually comes from younger, inexperienced coders, or from procedurally minded
coders who never took the time to understand OOP.

The false assumption is that "good code" takes longer to write than "good
software." In reality "good code" only takes longer if you don't know what
you're doing. If you haven't internalized good OOP, and you haven't applied
good OOP principles enough to be efficient and judicious in your application
of those principles, then yes, you'll probably do more harm than good, and
you'll take longer to get there.

If that's the experience level your team has, then you're faced with two bad
options: 1) write a lot of duplicated, procedural-style code that you'll
despise in 3 months and be begging the Software Gods for 6 weeks of clear time
just to clean that crap up; or 2) attempt to design some nice DRY-ed up, SRP
classes with all the right GOF patterns applied, more than likely get it wrong
and really end up in the same place as 1).

But if you understand and _have experience_ (there's the rub) with good design
(see: POEAA, GOF, Clean Code, Effective Java, etc.) then there's no choice to
make, because it's _much faster_ to write clean, well-architected code. It's
much faster now, and it will be much faster later.

Of course there will always be _that guy_ on the team who doesn't think in
OOP, still writes 600-line methods, and glazes over if interfaces or
abstractions are involved. That guy is just as bad as Hey Look At My Handy
Dandy Patterns Guy.

The solution to over-architecture is not bad architecture. It's
_understanding_ architecture and developing experience with it. While I have
encountered OP's situation often, even more often I've encountered the
consequences of haphazard design for those very same projects that "shipped
fast": two guys in a back room refactoring for six weeks so that -- please
please dear God -- we can get our bug rates down and start shipping features
"like we used to."

~~~
johnrob
One problem with OOP is that it's highly dependent on getting it right. If
your design is correct, that's great, but otherwise you end up in increasingly
hot water. The problem I have with OOP for new projects is that it's hard to
see the correct design until you have enough mileage.

~~~
davesims
Bad design is usually inexperienced design. 99% of the design problems you'll
encounter have been encountered before and are probably cataloged. If you have
doubts, read up, and bring in other experienced coders.

In fact, if you don't have doubts, you're probably in trouble and need to
bring in other coders. If you're not using UML or at least sketching class
relationships out in some visual form, you're likely to get it wrong.

Good OOP is hard, and requires experience and a lot of reading and
concentration to get right. And even then, it usually requires a good bit of
collaboration to get right, even for very experienced architects. But once a
good clear design has been identified, it's much faster to code and much much
easier to maintain.

~~~
cop359
" 99% of the design problems you'll encounter have been encountered before and
are probably cataloged. "

Bonus: It's already been done!

<http://en.wikipedia.org/wiki/Design_Patterns_(book)>

I heard that when they wrote the book they basically mailed a whole bunch of
people in the field to find out what patterns people were using, and they
weren't able to find more than 23. After they had those 23, every other
pattern they heard about turned out to be just a variation on one of the ones
they already had.

Full Disclosure: I haven't actually read the book yet. But I'm planning to...

~~~
mattmanser
Have you read this recently?

It impressed me 4 years ago; now I honestly think it's a relic of a bygone
era. Language advances, different API design, dynamic languages, and anonymous
functions have gotten rid of a _lot_ of the problems that you actually had to
use these shitty patterns for.

UML also sucks and is dead; again, I'm not sure why anyone would bother with it.

When is the last time you saw a UML article or GoF article on HN?

Wake up, they're both dead concepts. I'm not even sure why patterns have died,
they just have. Probably because people just program that way now anyway. Yes,
they were useful, but they're not needed any more. People don't write code
like that any more because they don't really have to.

~~~
davesims
Maybe it's that patterns are so commonplace that we take for granted someone
had to name them.

I guarantee you've used Proxy, Observer, Factory, Abstract Factory, Facade,
Bridge or some approximation of one of these if you've coded more than 100
lines of Java in the last year.

If I say "ActiveRecord" or "DataMapper" you probably think Ruby on Rails, not
Chapter 10 of Patterns of Enterprise Application Architecture. If I say
"Factory" you're probably not thinking Chapter 3 of GOF. When you think about
node.js or EventMachine, do you think about the definition of the Reactor
Pattern in POSA Volume 2?

Patterns aren't dead. They _won_.
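
To the point about patterns being ambient now: every callback or listener API
is the Observer pattern in modern dress. A minimal sketch in Java (class and
method names are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Observer in its modern form: a subject keeps a list of callbacks
// and invokes each one when an event occurs.
class Subject {
    private final List<Consumer<String>> listeners = new ArrayList<>();

    void subscribe(Consumer<String> listener) {
        listeners.add(listener);
    }

    void publish(String event) {
        for (Consumer<String> l : listeners) {
            l.accept(event);
        }
    }
}

class ObserverDemo {
    public static void main(String[] args) {
        Subject subject = new Subject();
        List<String> received = new ArrayList<>();
        subject.subscribe(received::add); // any addListener/on(...) is the same idea
        subject.publish("click");
        System.out.println(received); // [click]
    }
}
```

Anyone who has wired up an event handler in Swing or a callback in node.js
has written this, whether or not they called it Observer.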

~~~
paganel
> If I say "Factory" you're probably not thinking Chapter 3 of GOF

Honest question, what does this pattern accomplish? I'm asking this because I
just saw a bunch of Factory-like classes in some PHP code I was trying to port
to Python, and for the life of me I couldn't understand why the original
programmer had made it so complicated and convoluted, when he could have done
it all in 30 lines of code.

And a second question for whoever might have the free time to answer it: does
anybody actively use inheritance (or, why not, multiple inheritance) on a
daily basis, and at the same time feel like it helps them? (As in: does the
size of their code base get significantly smaller? Does the code fit better
in one's head? Things like that.)

~~~
davesims
A Factory is used to create an object of a specific type when the calling
code only has a reference to a supertype, or preferably an abstract type, of
that object. This reduces coupling, and therefore side effects, in your code.

For instance:

      Car car = CarFactory.newCar(someLocalContext);

Might return a specific type of car for the given context, but the calling
code's coupling is only to the supertype Car, and therefore can operate in the
same way on any kind of Car.

Since PHP is dynamically typed there's not as much reason to use this pattern,
although in certain cases it might be the right choice.

The Factories in question might actually be more like Builders -- classes used
to hide complex construction processes.
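
To make the Car example above concrete, here is a minimal sketch of the
pattern in Java (the subtypes and the context check are hypothetical):

```java
// The calling code depends only on this supertype.
interface Car {
    String drive();
}

class Sedan implements Car {
    public String drive() { return "cruising"; }
}

class Truck implements Car {
    public String drive() { return "hauling"; }
}

// The factory inspects local context and picks the concrete type; callers
// never name Sedan or Truck, so swapping implementations has no ripple
// effect on them.
class CarFactory {
    static Car newCar(String context) {
        return "cargo".equals(context) ? new Truck() : new Sedan();
    }
}

class FactoryDemo {
    public static void main(String[] args) {
        Car car = CarFactory.newCar("cargo");
        System.out.println(car.drive()); // hauling
    }
}
```

The coupling point is the key design choice: only the factory knows the
concrete classes, so adding a new Car type touches one method, not every
caller.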

I use inheritance all the time in Ruby and Java these days. With Ruby you get
Modules and the Mixin approach, which allows for what is essentially multiple
inheritance. I try to keep the level of inheritance close to 1 (I can't
remember the last time I went past that) but yes, it's an essential tool in
the toolkit.

I'm apparently one of the last coders on earth that still uses UML, but I find
it helps me clarify architectures and visualize my code. OOP was meant to be
visual in nature -- object relationships are, imho, best understood visually
rather than through linear code. My suggestion would be to get used to UML or
some hacked derivative thereof and express your dependencies visually, and you
may find inheritance starts to make more sense.

~~~
mattmanser
Perhaps it's just a difference in styles; I avoid inheritance like the plague.
To me, something has to be really, really special to inherit from another
class that I've written.

And even then, I'll come back to it the next day and see if I can get rid of
it, or whether it really does still make sense.

------
huckfinnaafb
There's good code, which is something that good products build their
foundation on, and near perfect code, which can get in the way of building
realistic solutions on time.

> _Do you really need a full object-oriented API right now? Do you really need
> to make a dozen interwoven classes, when it’s possible just a hundred or so
> lines in one class will do fine? Can you do all the same error checking and
> unit tests in a much smaller code base?_

This is not necessarily "good code". That's code you think is good. Excessive
or complex code is not good code, and I think the author should redefine his
usage to what programmers sometimes perceive as good code.

~~~
abk
I was just about to post the same thing. I agree with the underlying thesis
that a good shipping product is better than perfect code that never ships, but
the author seems to be confusing good code and over-engineering.

Good code does exactly what it needs to, but will be maintainable and won't
have to be thrown away when that "fantastic" product ends up getting popular.

I also tend to believe that doing it right doesn't necessarily take much
longer. The code I write now is much better than the code I wrote 10 years ago
for a number of reasons, but I still get a lot more done, and a lot faster
than I did back then.

------
alephnil
"I would have written a shorter letter, but I did not have the time." - Blaise
Pascal

Writing short, well-working code often requires a lot of thinking, redesign,
and reworking. That is mostly not the case with longer code. I personally find
such short code far more beautiful than generic and expandable object-oriented
frameworks. I usually find that making things generic is futile, because later
requirements most often become something entirely different from what you
thought. Then the generality just gets in the way. Unfortunately, a lot of
people seem to disagree with that.

------
SonicSoul
" Instead of 3,000 lines of code, you have 1,000. Instead of a ton of object
dependencies where one change means having to find the references to it in all
of your objects, you simply change what needs to be changed, knowing there’s
no extra dependencies in your code for the sake of having beautiful code."

I agree with the notion of focusing on the big picture when designing a
product, but this article seems to assume that beautiful code means more lines
and more dependencies, and that by cutting some corners you greatly decrease
both while making your software more robust. That is a direct contradiction.
You cut corners to ship faster, test the market, or deliver on deadline, but
not to make your code easier to work with.

As engineers, we're perfectionists, and each time we design a piece of
software we try to make it a little better, easier to read/maintain, more
elegant. That is not something that is achieved by cutting corners. 100 lines
of code is fine for a prototype, but that's not what great products are built
on, and it's certainly not how you achieve fewer dependencies.

------
voidr
What annoys me about many OOP people is that they think designing class
hierarchies is more important than making the code actually do what it needs
to; it's like they get bonus points for every new class they introduce. I know
sometimes this is the way to go, but most of them do it prematurely, and in
the end all they have is more code to refactor.

I once had an algorithm composed of 3 functions with immutable inputs and
outputs; it was easy to understand and easy to debug. Then a coworker took it
and rewrote it into a class that was mutable for no reason. The result: he had
code that was five times larger than the original and much harder to
understand and debug.
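
In the spirit of the three-function version described above, here is a sketch
of that style in Java (the pipeline stages are hypothetical): each stage takes
an immutable input and returns a new value, so each can be understood and
debugged on its own.

```java
import java.util.List;
import java.util.stream.Collectors;

// Three small pure functions composed into a pipeline; no hidden state,
// so each stage can be tested in isolation.
class Pipeline {
    static List<Integer> normalize(List<Integer> xs) {
        return xs.stream().map(Math::abs).collect(Collectors.toList());
    }

    static List<Integer> dropOutliers(List<Integer> xs, int max) {
        return xs.stream().filter(x -> x <= max).collect(Collectors.toList());
    }

    static int total(List<Integer> xs) {
        return xs.stream().mapToInt(Integer::intValue).sum();
    }

    public static void main(String[] args) {
        int result = total(dropOutliers(normalize(List.of(-3, 7, 100)), 50));
        System.out.println(result); // 10
    }
}
```

Wrapping these three functions in a mutable class would add state and
ceremony without adding capability, which is the complaint above.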

OOP is cool and enterprisish and all that but some people really need to
understand that it's not the only way and in many cases it's certainly not the
best way.

I think making code as short as possible without obfuscating it is one of the
best ways to start out; all the extra layers can be added later if required.

------
zdw
Writing an API isn't the problem - being a perfectionist about internal design
is.

My design process has gone from "write code that does something" to "figure
out what data you want to store and how to validate it, then write code".

Once the data is in a defined format, it's much easier to change how the code
works without breaking things. That's the way to go about API design -
designing callable libraries that work in only one language and have only one
implementation is a quick way to cut off a huge swath of developers from
interacting with you.

~~~
ams6110
Reminds me of one of the dev managers at my first "real job" -- this was in
the 1990s -- who would say "get the data model right, and the code will almost
write itself" and I have found in most cases (certainly for most "business"
apps) that is true.
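
A sketch of the "data model first" approach in Java (the Order type is
hypothetical): validate at construction, and everything downstream can assume
well-formed data.

```java
import java.util.List;

// The data model is defined and validated first; code written afterward
// never has to re-check these invariants.
class Order {
    final String id;
    final int quantity;

    Order(String id, int quantity) {
        if (id == null || id.isEmpty())
            throw new IllegalArgumentException("id required");
        if (quantity <= 0)
            throw new IllegalArgumentException("quantity must be positive");
        this.id = id;
        this.quantity = quantity;
    }
}

class OrderDemo {
    // With the data shape fixed, logic like this nearly writes itself.
    static int totalUnits(List<Order> orders) {
        return orders.stream().mapToInt(o -> o.quantity).sum();
    }

    public static void main(String[] args) {
        System.out.println(totalUnits(List.of(new Order("a", 2), new Order("b", 3)))); // 5
    }
}
```

Because no invalid Order can exist, the logic that follows stays free of
defensive checks, which is what makes it easy to change later.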

------
notJim
I've recently started a job at a much larger, older company than anywhere I've
worked before. Previously, I worked at companies with around ~5 developers,
where most of the code was at the very most 2 years old. Now, I'm working at a
company with 90 developers (I think, we keep hiring people) and the oldest
bits of the code are as much as 6 years old.

I always have been a fan of writing clean code, and especially of achieving
appropriate separation of concerns, but previously it has mostly been for
aesthetic reasons. When your codebase is small and new enough that the people
working on it are the original authors, it's fairly easy to deal with messy
code. Now, I'm really beginning to see the impact crappy code has on an
organization as it scales. The code at my company is actually mostly not too
bad, and the management is very supportive of refactoring, but it's absolutely
clear to me that were the code cleaner, we would get more done, faster.

TL;DR: Writing bad code _prevents you from_ writing good software
(eventually).

------
southpolesteve
This is why, as developers, we need to remember the end user and optimize for
their experience. I have often found that a good exercise is dedicating time
to deleting code. I am always surprised by how much this process ends up
improving my overall UX.

~~~
alexobenauer
I absolutely agree. I have too many coworkers in the 'more code, more objects'
camp, which is usually bad news for the end user. That's exactly why I wrote
the article.

------
imechura
I find the article to be in line with my recent rants at work.

My background is airline and finance enterprise software, where lots of highly
skilled people are paid a high salary to do relatively simple and small coding
tasks. (That's what they think; in reality most just don't want to take the
initiative to go beyond the status quo, but that is another rant altogether.)

More often than not this leads to engineers taking every opportunity to over-
engineer a solution to a simple problem so they feel they are seen as top-
notch engineers, when in effect they are taking a simple problem and turning
it into a maintenance and support nightmare.

In these situations a simple change such as adding a field to a method
signature impacts multiple classes and interfaces due to the Lasagna code
effect.

In this environment engineers religiously spend additional time planning,
designing, and coding their solutions to be adaptable to changes that have
almost no chance of happening.

If you spend 4 days creating an IT system that can seamlessly switch between
an Active Directory back-end and an RDBMS back-end (or anything unforeseen,
for that matter), but the change never happens, then you just wasted 4 days.
If instead you waited until the RDBMS support requirement actually
materialized, it would probably still take the same 4 days, but they would not
have been wasted.

~~~
silversmith
On the other hand, I am now looking at a system that has been fast-track
maintained for multiple years, to the point that a simple change now takes
multiple days of puzzling over mysterious side effects. I can't even begin
refactoring it. I don't know where to start. It's a tightly interwoven ball of
'interesting' decisions, patches, changes, and obsolete requirements. The fact
that none of the original developers still work here is an additional bonus.
The only plausible solution seems to be to collect the current business needs
and rewrite the damn thing.

On the bright side, it's an opportunity to rewrite the custom system into a
product. Clients have shown interest in having their own copy multiple times,
but the cost of implementing that horrorshow on their systems has always been
a prohibitive issue.

~~~
imechura
Sad,

It sounds like most custom IT business solutions. This brings to light
something I have been pondering for a while now. Basically, there are two
types of cultures in development teams.

Culture A treats the software as their business and respects it as much as
they do their customers, employees, and partners. In an "A" department,
developers are empowered to take ownership of the code. This means they feel
obliged to refactor code and improve comments, documentation, and tests as a
part of whatever change they are delivering.

It is understood that bugs can come from this but that the overall benefit of
not ending up with an unmaintainable system is worth the cost of an accidental
bug from time-to-time.

Culture B treats the software as a necessary evil, or at best a commodity.
Developers are instructed to perform the least amount of change needed to meet
the changing business requirements. These departments are almost always in
crisis mode, as the inflexibility of their bubblegum patchwork has put them
into a reactionary relationship with the team that drives the requirements.

In "B" departments you often see high turnover by the best engineers since
they feel something is wrong with the departmental management strategy.

The high turnover in turn leads to less knowledge about the most complex
components in the system and even more reluctance from the management and
engineers to risk refactoring something that no one completely understands.

I've noticed the "B" shops are always looking for a new silver bullet: new
servers so they can run the integration tests in under 48 hours (since the
code is not unit-testable), or a change of frameworks every two years in hopes
it will lead to a more flexible system.

Unfortunately, in my experience, the "B" companies tend to pay the highest
salaries.

------
rvenugopal
Touching multiple files for micro-changes, eh?

Don't knock OOP because your colleague does not understand encapsulation or
basic principles like SRP (the single responsibility principle).

------
hippich
This might go against the spirit of this site, but..

Just my humble opinion: if a customer's business processes cannot be described
by clean code, the business process itself is messed up.

And if the client wants to ship it now and does not care whether it's right,
the process is broken.

I think this comes from the world of proprietary software, where agencies
write messy code to fit exactly the current business practices and then
continue to "iterate" with that mess, patching in new requirements from
executives ad hoc. This leads to huge overhead in the software. The only
reason we don't see all this disgusting shit is that it's proprietary.

This approach is good for the agency, and sometimes for the client, but not
for all of us. I am talking about banking/law/health/etc. systems built with
exactly the ideas represented in this article.

In the open-source world such an approach simply will not work. Nobody will
want to touch junk spaghetti code.

So... I believe this approach is flawed in the long run. And we as developers
should be more than stupid machines producing code. We need to think ahead,
and if something does not make sense, say so and push back if needed.

Just some random rant after seeing my boss push a project that kinda fits the
client's business schema, but then it starts to stagnate because it requires
more changes... and those changes lead to even more mess.

------
dman
The following kinds of problems lend themselves well to simple solutions that
do a 1:1 mapping of the solution to the problem being solved (aka the 100-line
script): a) problems that are simple; b) problems that are well defined,
static, and not expected to change over time; c) problems about which the
implementor knows so little that a more elaborate solution would be premature;
d) problems that appear complex but lend themselves to a simple solution
(usually underpinned by some beautiful mathematical truth); e) problems that
don't require long-lasting solutions, e.g. ones where you are trying to cash
in on an ongoing meme which could run out at any moment.

Problems that don't fit into the above categories require more thought to be
put into identifying the underlying abstractions, and for these, great
software solutions emerge from a) understanding the problem being solved; b)
identifying the abstractions which underpin the problem; c) identifying how
the problem is expected to evolve over time and verifying that the
abstractions hold up for the expected changes.

I think you are drawing too much from your experience of a single problem and
code from a single programmer.

------
itsnotvalid
I think the problem with mess is that, if you have a team whose members think
very differently, none of them will like how the others code.

Not to mention, there is more to programming than OOP. For example, Hacker
News is not written in an OOP environment, and it is a piece of good software
(IMHO).

I always remember this saying:

      // When I wrote this, only God and I understood what I was doing
      // Now, God only knows

------
j45
Supports the idea that customers don't care about you.

They don't care what you code in. They don't care about your frameworks. They
don't care how you made _your_ life easier to build something for them.

Good code is possible in almost every language. Every language has its pros
and cons.

I would argue that developers nowadays code less in languages, at least for
the web; they code in frameworks, so it's already one step removed. Coding in
frameworks (Rails, Django, Pylons, _______) is meant to do one thing: build
great things faster.

Often we get so tangled up in focusing on the frameworks and tools that we
forget the whole point: the users. Building software is not about building a
mausoleum dedicated to ourselves.

It's about making the lives of others easier.

Are you as fanatical about making _customers'_ lives easier as your own coding?

~~~
jccodez
"Are you as fanatical about making customers' lives easier as your own coding?"
Good quote.

------
meric
Good code is succinct. OOP is not a prerequisite for "good code"; it depends
on what you are using it for. It takes longer to write less code than to write
more code. Not taking the time to write good code will eventually result in a
very complex program that is hard to modify and hard to fix bugs in.

You write good code to save time down the line. Sure, bad code is fine for
one-use scripts, but if you are building a product that you know will be
around for a long time (i.e. not a prototype; you've done that already), it
pays to write good code. Good code will make it easier to fix bugs, easier to
add features, and will lead to better software.

------
louw
For large and complicated applications, I do not understand what a good
program with not-so-good code would look like; I personally find it difficult
to imagine.

The main problem I have with this article is that its intent rests on a
distinction between good code and good software. One purpose of design and
good code in large applications (in order to make good software) is to cope
with and understand their complexity before it is too late, and not to
underestimate Murphy, who is always waiting for us. And I don't know how that
is possible (good design and maintainable code that can be refactored easily)
without good code.

------
nahname
The best code is the code you don't need to write. Build only what you need
(YAGNI) and introduce abstraction to avoid duplication (DRY). My rule is don't
ever do the same thing three times (though I sometimes enforce it at two).
Conversely, if you find you no longer need something (no matter how clever it
was) delete it. It will only confuse you or whoever else is working on the
system later.
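
The rule of three in miniature (Java, all names hypothetical): the first two
copies of a snippet are tolerable, and the third is the signal to extract.

```java
import java.util.List;

class Report {
    // Extracted once the same "name: count" formatting appeared a third
    // time; before that, two inline copies were cheaper than an abstraction.
    static String label(String name, int count) {
        return name + ": " + count;
    }

    public static void main(String[] args) {
        for (String line : List.of(label("users", 42), label("orders", 7), label("errors", 0))) {
            System.out.println(line);
        }
    }
}
```

The converse also applies: when the third call site disappears, inline the
helper again and delete it.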

If it truly is so clever it must be saved, throw it on your github account and
reference it another day when you may need it again. Regardless, get
everything that isn't needed out of the code base.

------
jronkone
> If you can, you will build a much more maintainable piece of software.
> Instead of 3,000 lines of code, you have 1,000. Instead of a ton of object
> dependencies where one change means having to find the references to it in
> all of your objects, you simply change what needs to be changed, knowing
> there’s no extra dependencies in your code for the sake of having beautiful
> code.

What? So 3,000 lines of code to do the same thing would be "good code", and
1,000 lines of code would be "good software"?

~~~
alexobenauer
My apologies; and thanks for bringing this up! Someone above brought up the
same thing.

I meant 'good code' as seen by those who write 3,000 lines for an API where a
simple 1,000 would do. I work with plenty of people who see it this way, which
is why I phrased it like that.

------
Hominem
Elaborate code? Is he talking about encapsulating things like database and web
service implementation details? Is he talking about stuff like IoC?

What happens when you need to fix a bug or add a new feature. I know he says
"just change what needs to be changed" but sooner or later you will run into
the situation where a shared resource, like a database table, needs to be
changed....

Ah, forget it: he is talking about incurring technical debt to ship on time.
Debt is debt; you will pay the price sooner or later.

------
momander
I think most of us agree that there is a Goldilocks zone for the number of
classes, given a certain number of lines of code. Alex points out, quite
rightly in my opinion, that many of us developers have a bias that sometimes
takes us out of that zone.

My attempt at drawing a diagram of the Goldilocks zone:
[https://plus.google.com/u/0/101149790069455088279/posts/CoB9...](https://plus.google.com/u/0/101149790069455088279/posts/CoB9MSsFqjq)

------
devs1010
Linkbait garbage... you can write good code without wasting a bunch of time.
Having worked for companies whose products were started the way this guy
describes, I can attest that no one should ever build software that way. At
one point he seems to be suggesting writing procedural code rather than
"bothering" to make it object-oriented from the beginning. A good developer
can walk the line between wasting too much time initially and writing clean,
object-oriented code that will be extendable later on. Inversion of control
and programming against interfaces can also help with this; cutting corners
from the beginning is just going to cause a headache later.

------
ggwicz
Why can't it be both? The claim in the article is that there are deadlines,
"you have to ship", etc.

It seems to me like you'd want to skip the whole deadline-ship-now-ASAP
nonsense and just write awesome software with awesome code, even if it takes
longer.

------
Zolomon
Makes me think of "Stop writing good software; Start writing good solutions."

------
acgourley
Just another sign people are realizing design is eating the world.

------
gsap
Author's definition of good code is wrong.

~~~
alexobenauer
Didn't you see the comment I put above the article? I should have defined it
better; I work with tons of people who view that as good code. More objects,
more code, more everything. Then it's good code. It's a super-OOP methodology
that I think many of them couldn't get out of. So if they look at a class of
mine that doesn't make use of a ton of objects, that's 'bad code'. The
definition of 'good code' in this article strictly pertains to those people
who view it that way. By the title, I meant if you think tons of objects and
extra stuff is good code, then stop writing good code, and focus on the
software, the end result for the user.

------
michaelochurch
Bizarre.

 _Do you really need to make a dozen interwoven classes, when it’s possible
just a hundred or so lines in one class will do fine?_

The 100-line simple solution is better code. Complex solutions with clever
object topologies are not "good code".

 _Instead of a ton of object dependencies where one change means having to
find the references to it in all of your objects, you simply change what needs
to be changed, knowing there’s no extra dependencies in your code for the sake
of having beautiful code._

Dependencies are innately ugly, not "beautiful". Again, the code the OP
describes is not "good code".

I agree with what the OP is saying. I just think the definition of "good code"
he's using is bizarre, because I can't think of anyone who would call
endlessly complex code "good". I think it's better to say, "stop writing
clever code; start writing useful and maintainable software".

~~~
alexobenauer
Thanks for your comment! I should have defined it better; I work with tons of
people who view that as good code. More objects, more code, more everything.
Then it's good code. It's a super-OOP methodology that I think many of them
couldn't get out of. So if they look at a class of mine that doesn't make use
of a ton of objects, that's 'bad code'. The definition of 'good code' in this
article strictly pertains to those people who view it that way. By the title,
I meant if you think tons of objects and extra stuff is good code, then stop
writing good code, and focus on the software, the end result for the user.

~~~
davesims
While I appreciate the intent of your article, I think the title is
regrettable and obscures the real distinction you were going for. There should
never be a distinction between "good code" and "good software."

The Software Craftsmanship movement has been having this discussion for a
while, i.e., assessing the costs of bad code and making those costs visible to
the stakeholders. You might find it worthwhile to engage that conversation.

<http://scmanifesto.heroku.com/main>

~~~
danek
I think the issue is that many engineers value things like tests, short
functions, interfaces, dependency injection, pure functions, comments, DRY,
etc. To them, these things are qualities of "good code", and they spend all
their time on this stuff instead of the bigger-picture objectives. When you
get out of that mindset, then I think "good code" can be equal to "good
software".

------
georgieporgie
You're not paid to code, you're paid to ship product (create value), but one
has to balance development speed with maintainability.

Too many programmers forget the former, and too many managers forget the
latter.

