Hacker News
There's no shame in code that is simply "good enough" (phiz.net)
182 points by Brajeshwar on Jan 1, 2012 | 67 comments

This actually causes "coder's block" in me when I start a new project. If I have this cool app or site idea, it often never gets off the ground because I spend too much time trying to create some architectural framework that I assume I need.

This is REALLY bad for me in Cocoa/Cocoa Touch apps. Objective-C is so verbose and writing classes is so "mechanically cumbersome" (having to write .h and .m files and duplicate stuff, for example) that I sometimes lose interest before I ever really get off the ground.

The trick I use to beat this tendency is to totally eschew good software engineering principles at first. Typically all of my code lives in main.m (and I usually start out in CodeRunner rather than Xcode). Interfaces are done sloppily in Interface Builder or entirely in code. This really helps me get started.

Before long, of course, I have this unwieldy spaghettified mess of a single source file. By that time, though, I have some stuff WORKING, so doing some refactoring into a good architecture for the code I have while keeping the working stuff working is kind of fun.

I find that the limited free time (wife, kids, etc.) I have available for side projects forces me to prioritize. If I gold-plate everything, nothing will ever get done, so I simply don't write code that's not absolutely necessary.

I will code very thin "frameworks" if it becomes obvious that it will save time over the long haul, but there is no "this might be useful someday" code. If it doesn't move the project forward, it doesn't get done.

If your most precious resource -- time -- is limited, it forces ruthless prioritization.

I read a helpful change of perspective for gold-plating perfectionists: instead of writing "perfect" code, seek the "perfect" compromise. :)

Yeah, this is basically my approach too; my first priority when "banging out an idea" is /get it working/ and make a "minimum viable product" (or tool, utility, script...whatever). If it's awesome, I might consider re-writing it, learning from the mistakes or architectural flaws of my first design. Note that the very first revision can usually be dropped in favor of a paper design of the thing, where you can visualize the flaws before writing the first line, and correct them before you start typing.

There are also things that I have that technically work, but are ugly, but are never getting fixed. I learned from them and it's fun to look back at how much better I got over time. I could probably go back and turn everything into nice classes and remove big blocks of commented-out code, but...it'd be pointless. I'd rather work on something new and do it better the next time. Tackling exciting new problems is what makes coding interesting for me, and I'd hate to lose that "just" to be proper.

I like to think of my lang dirs in my homedir (ruby, c, python, js, php, etc) as language-specific "coding sketchbooks" where I'm developing recipes that I might borrow from later. It's kind of similar to being a chef, I suppose -- you finely craft and refine dishes for your day job ("staging/production"), maybe you're lucky enough to get some paid time to work on your ideas ("dev"), and you dedicate one night a week to culinary experiments on your own time, maybe with friends or family ("passion").

I feel like a good programmer is like a good chef, and if you lose the passion, you mostly stop getting better.

It's important to get your hands dirty and take a bottom-up approach, but it is equally important to think top-down as well. What I mean is: don't be ignorant of the "gold plate" while you are creating the minimum viable product, but be flexible enough that it doesn't get in your way.

I try to think in terms of interfaces, but implement quick & dirty.

So, for example, I may imagine that data sources for a page will come from several places: the db, Google maps, an api like Twitter etc. And I will create stub classes to capture this concept, but the methods will just return hard-coded elements. Eventually if the project goes anywhere I can build out the underlying methods as necessary.

I find this helps me feel like I'm doing real engineering but not get too lost in the abstractions.
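In code, this "think in interfaces, implement quick & dirty" idea might look something like the following Python sketch (the class and method names here are hypothetical illustrations, not from the parent comment):

```python
# Stub data sources: real-looking interfaces, hard-coded implementations.
# If the project goes anywhere, each stub gets built out for real.

class TwitterSource:
    """Would eventually hit the Twitter API; for now, canned data."""
    def recent_posts(self, user):
        return [{"user": user, "text": "hello world"}]  # hard-coded

class MapSource:
    """Would eventually hit a maps API; for now, canned data."""
    def coordinates(self, address):
        return (40.7128, -74.0060)  # hard-coded

def render_page(user, address):
    # The page logic is written against the stubs' interfaces, so
    # swapping in real implementations later doesn't touch this code.
    posts = TwitterSource().recent_posts(user)
    lat, lng = MapSource().coordinates(address)
    return f"{len(posts)} post(s); map pin at ({lat}, {lng})"

print(render_page("alice", "123 Main St"))
```

The page works end to end from day one; only the methods behind the interfaces are fake.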

Right there with you. I have started doing the same thing on my iOS projects. I just keep telling myself to plow ahead and not to make it perfect. Go with my gut instinct at that time and keep on moving forward.

What I also like to note to myself, mentally anyway, is that it is far, far, far... more rewarding to start and finish an app or project than it is to be "almost" finished with what you may consider perfect code.

Joel Schindall, an EE at MIT, tells a good story about this:

The ship date for a chip he was managing was just a few weeks away when he got a call from a supplier informing him that a key component would be delayed. Worried, he went to one of the engineers who designed the chip and told him of the problem.

The engineer was not concerned; "I thought they might not deliver that component, so I left extra space around it where we can add these additional parts that do the same thing."

Joel was happy, but surprised, and he took a closer look at the chip. "But you didn't leave extra space for other components on the chip."

"Yeah," the engineer replied, "I just didn't think those would be a problem."

One of the harder jobs in engineering and design is anticipating problems. That engineer took the time to think carefully about the problems that might arise with each of the components on the chip, prioritized the risk, and only spent the time to really carefully architect the parts of the circuit that were most likely to be trouble.

It seems that software is the same way; the best coders will assess the risk of all the code they write, and only spend their time and energy to protect against the problems that are most likely to arise. That leads to code that is better than "good enough" but is not over-architected. Of course, that intuition is really hard to develop and impossible to perfect.

"Would an engineer design a small, single lane bridge for a rural Northumberland village so that it could support the weight of a thousand double decker buses? No. So why do we, as software engineers try to do exactly this? That day will never come."

It comes every time a boss or client says "oh, and now we need it to do XYZ. And you can't rewrite or start over. We need it tomorrow. Build on what you have - reusable software is our goal." and so on.

No one is going to go to a bridge engineer after a two-lane bridge is up and running, one that took 2 years to build, and say "ok, add 4 more lanes by next week. Oh, and you have no budget". No one in their right mind would ever dream of doing that. But with software we face it all the time.

Basically, many of us are conditioned to have to extend on top of whatever our first iteration is, so we try to make the first iteration extendable. I'm not saying it's right or good, but I've fallen into this trap too many times in my career and seen it happen to too many other people to think it's not a contributing factor.

I can tell you from first-hand experience, though, that if you cannot scale immediately when the time comes, your business can drastically suffer for it.

I don't disagree, but a few points spring to mind. And in my earlier post, I wasn't really thinking about 'scaling', but new functionality/features.

Occasionally "immediately" really does mean "in the next 5 minutes". Oftentimes it actually means more on the order of days or a couple weeks. "Scaling" can mean different things, and anything beyond "scaling" web requests alone will likely mean a shift in business operations that can't/won't happen overnight either.

Most situations I've been in are like this: people request something, but the rest of the business unit really isn't ready to handle the change. "We need the new data reports now", but when you make the change you realize the other company that pulls the data to process it is on vacation and won't be back for two weeks, and if you make the change now, it'll break everything.

Yeah. I was only referring to scaling in the sense that the number of users increases. Often, it is easier to implement an application by ignoring scalability, and you can get it out the door quicker.

However, this can be a terrible mistake if your users are not resilient to downtime or lost data (e.g. in the case of Facebook games). Even just a few hours of downtime or slowdown can cause a significant, permanent drop in users.

Always keep in mind the great Larry Wall quote:

"Always write code as though it will be maintained by a homicidal, axe-wielding maniac who knows where you live ..."

There are two ways to write poorly maintainable code. One is poorly organized spaghetti code and the other is overly architected code.

Larry Wall also once said, "a programmer can write in assembly using any language."

Never realised Larry Wall was a Young Ones fan.

The key I've found is not worrying about how beautiful or clever the code is, but how maintainable it is.

If the code is designed properly, the only requirement I have is that I can go back into my code and change it easily to what I now need it to do. It should be malleable like silly putty. If I can do that easily, without requiring massive rewrites, then this means that the code can change as my requirements change, and that to me is a good design, and "good enough" code. So don't worry about optimizing too early, but make sure the code can be optimized easily when you need it to be.

If making code changes requires massive rewrites when my requirements change a little, then it means I've coded myself into a corner, and I've done a poor job designing the code.

This has to be tempered, though, with some consideration of how likely it is that the requirements will change. Of course we might say "requirements always change", but there are definitely situations where you know with a high degree of certainty that you are writing something that will be used once or a few times and never again, or something small enough and well-defined enough that the requirements can't change much. If you worry too much about maintainability on code that will never be maintained, you're wasting time.

I agree completely; this is usually my driving principle when coding. I don't spend a lot of time on premature optimization in terms of making the code run as fast as possible. But if I'm faced with an issue where I realize I haven't thought something through in terms of maintainability, or ease of extending the code for similar purposes I know I will need sometime in the near future, then I have no problem spending a lot of time thinking through those problems. In my opinion, this is really what pushes one to become a better developer: the time spent thinking about this sort of issue pays off later, when a similar issue comes up again and the solution is already second nature.

Agreed with much of this!

However I disagree with "that's the nature of software engineering. You never stop learning and evolving."

The reason: burnout. My guess is that at least 10% of developers begin to lose enthusiasm for coding after a few years and then at some point either change jobs, become managers, or just have very little motivation to learn having seen the futility of it all. They may be forced to continue to learn, but may do so at a slow pace.

Why? You write code, and after a few years or less it can be thrown away or go unused without much of a thought. You see that many of those driving projects really don't have some sort of higher purpose; other than some perceived business need, most of it is just "wouldn't it be nice".

I feel that it is sick for a person to continue blindly learning new technology just for the sake of it. You need to have a reason. Jobs was not my favorite person in the world, but one thing he did right was to believe in what he was doing and why he was doing it. Without this, any evolution is worthless.

I agree with this for the most part. This is one reason I have fallen in love a bit recently with RAD frameworks; personally I have been using Spring Roo, which from what I understand is somewhat similar to Ruby on Rails, but for Java. Basically, when working on a project, I want to take the shortest route to accomplishing the end need. Maybe it won't be immediately scalable to thousands of users or fully optimized, but as long as you don't paint yourself into a corner, that can come later. I have noticed a lot of other developers are so caught up in the minutiae that they can't see the forest for the trees; they care more about endless iterations, writing test cases, etc. than about just delivering something that works.

Agreement; the more I code the more I want to find "quick iteration" solutions and minimize deliberate engineering of complexity.

I know a few coders who have spun their wheels for a decade or possibly longer because they're still idealizing wheel reinvention. The reinvention is the easy part - after all, someone already did it, so you're just learning what they did, "the hard way." What's hard is learning to leverage the ecosystem as much as possible while bringing in original ideas; as in entrepreneurship, there are no tutorials for that.

Yep, I found myself saying something like this the other day when talking with another engineer about the issues at my current company. It's my belief that modern web application development is primarily about leveraging existing frameworks and libraries to deliver the results; failure to take advantage of an existing resource can make the project much more complex to maintain and waste tons of effort programming aspects of the project that don't concentrate on the actual domain problem being solved.

A college teacher of mine quipped to his students that if the software is inefficient, upgrade the hardware. I have flip-flopped on this mentality over the years. Part of me longs to write code the way Charles Lindbergh knocked together the Spirit of St. Louis.


Throughout the design of his aircraft, Charles knew that his life hung on the details. For example, rather than use an all-metal design, much of the exterior was cotton fabric (to reduce weight). His design was pragmatic, practical, and brilliant.

For the most part, software developers do not create systems where the modularity, efficiency, and stability are paramount to the success of a business or the safety of the people who use the programs. Developers often create systems for data entry and data analysis. It is the data that allows a business to take flight, as it were.

You can replace system front-ends in a fortnight. Dirty data, however, can skew results and impart inflexibility in the system. Bad data can ground a business. These days I care about the software, as my mind reminisces about the minimalism and beautiful design imparted upon the Spirit of St. Louis. Yet I care much more about the quality of the database and the cleanliness of data.

I've often noticed how software is similar to hardware in one way.

Software and hardware are both destined to hit a limit, in any current configuration, no matter how it's built or put together.

Whatever computer we buy, it has a limit. The day will come that the capability we originally had will not be able to power what we need. We decide how much (and how far) to invest into the future to stay on a machine. This can be a benefit sometimes, other times, not. Sometimes we need a computer to be good enough to do a certain task, other times not.

Building software has a similar shelf life. All software, no matter how it is (or isn't) architected, will have its limits. This can be a benefit sometimes; other times, not. When those limits are hit, you'll have to deal with it: throw more horsepower at it, or refactor.

If there's code that isn't updated often and doesn't need to be super performant, good code will be the same as great code.

When starting a new project, I find myself more and more asking the questions:

- How long will I need this codebase to do what it does?
- Will the codebase grow?
- How soon/often will it grow?
- Will additions be trivial / non-trivial?

Most often I now just start with an ultra lightweight MVC framework to keep my coding semi-organized and primed to re-factor, but not much more. I have a set of scripts that will initialize an entire project how I like and I can quickly start hacking on a new project/idea in a few minutes.

The less I obsess over every small architectural detail, and the more I let my decent habits keep me reasonably kind to my future developer self, the more I find myself having fun while being responsible.

Would you mind expanding on the MVC framework and scripts that you use to initialize an entire project?

It sounds like that setup would really cut down on the mental overhead needed to start a new project, and also cut down on obvious errors (typos, forgetting to add something, etc.).

Sure, what language do you like to work in?

Any of these:

  * Python
  * Java (Android)
If a different language works better, feel free to use that instead. Thanks.

Write code, expecting to iterate. Organize it with large strokes and plenty of wiggle room.

Then if it's 'good enough', leave it at that.

If it needs work later, anybody can do it - you didn't make it too dense or compact or concise to easily reinterpret or diagnose.

It's not just a good idea, it's pretty much an obligation if you are paid to create it. Keep personality out of it, complete it on time and under budget, move on.

I still experience the drive to "over-architect" a solution every so often, especially when starting a new project. For me, it tends to be a result of thinking, "oh, I can add that additional feature with little impact to the timeline of the project." I have to force myself to remember the mantra: You're Not Gonna Need It. More often than not it ends up being right.


Nice read. I really appreciate his comments about over-engineering to the point that nothing useful actually gets accomplished. I've seen a lot of that.

And really, the over-abstraction is wasted time. I've seen things added because "we might need them some day". These things take days or weeks to add and then are never used by customers, even after many years have passed.

The over-abstraction seems to be a drug to some developers. They can't stop doing it.

Not to mention that code that is only there to provide future-proofiness can be a huge hurdle to understanding an existing codebase for any developer new to a project. The hours spent on trying to understand code that doesn't seem to make sense, only to be told that "we put that in just in case we needed xyz, but it's actually not used anywhere", can be incredibly unproductive and frustrating.

Good article. I experienced this transition myself. Small, easy projects unfold to super-huge constructs in your head and you lose passion and productivity.

"Perfect is the enemy of good enough; good enough is the enemy of all." [1]

There's no shame in code that's 'good enough', but I think there's a danger in this article of missing an important point: you can't classify your output if you don't know its context and goals. If you're coding something that definitely won't be used again, then make it 'good enough' for this use case; if you need it to be extended by others over the next few months, then make it 'good enough' for that use case; if you have a contractual obligation to get it out to the client 'now', then stop thinking and start doing!

But paralysis is costly. If you lack the required knowledge to take a sensible decision, then take a gamble: never hold up making a low-risk choice because you don't know enough. The opportune time to learn what the right choice was is once you've made it and can evaluate the outcome with data.

The other thing to note is that complexity on its own isn't bad, complicatedness is [2]. The knowledge you've gained shouldn't be making your code more complicated. You should spend your time on creating an arsenal of simple solutions to complex problems. That is perfection (and you will never reach it.)

Optimize your time by evaluating after instead of planning before; over time create simple solutions to complex problems and use these as shortcuts to act decisively.

[1] http://paulbuchheit.blogspot.com/2007/04/perfect-is-enemy-of...

[2] http://usabilityworks.org/2006/12/13/simplicity-complexity-a...

I was confused about your point until I clicked the first reference. You made a slight misquote that changes the meaning of the whole quote.

"Good enough" is the enemy of "at all."

Oh, oops. Thanks for that! I actually miswrote it ages ago when I first read it. When I went to riff off it today I knew that I had liked the quote but I couldn't quite understand why. I told myself, "Ah Seb, this makes sense, you're just too tired to understand it!" ;)


Herein lies the art of programming - writing clean, elegant code that isn't overwrought. It's hard, but the key is to keep in mind that complexity is the enemy and simplicity is the aim.

Easier said than done :)

I was working on a project with another, more senior developer, and I remember saying we should refactor some part of the code or design it differently somehow, and he said, "Why? It already works."

The only argument I could really come up with involved too many coding buzzwords to be taken seriously and we moved on to the next task.

That's kind of stuck with me. Whenever I'm thinking about a code change or "clever" design I just try and see if there's a justification beyond something involving words like "abstract" or "cohesion".

The trade-off between building quick functionality and building flexible, maintainable code is a business decision. Engineers at a startup cannot sit down and design a system for months so that it will work for 10 million users. However, they also cannot patch their code, have it create problems within a month, and then have to redesign it again.

Once your product vision is clear, you can try to see as much into the future as you can. You know where your product can/will go based on what problem you are trying to solve. Once you get into "what-if" territory, you know you have ventured too far.

Therefore, I truly believe that engineers should be aware of the business needs and the product roadmap/vision to make such decisions. Engineers can then decide where (and how much) flexibility should be added. Most future-proofing is done for scenarios that don't exist outside the imaginations of the engineers. They should know what can be possible and what cannot. No system can be designed to handle all scenarios without adding untold complexity.

There is absolutely shame in "good enough". Mainly that created by weaselly coworkers trying to get ahead and establish technical superiority in the eyes of the non-techie management. These people never write any code themselves but are the first to start WTF'ing really loudly when someone else completes something. Usually the volume and banality of the things programmers complain about identify their technical competence, with the bottom of the barrel being complaints about whitespace and formatting. "WTF! Bob put TWO spaces instead of ONE after method names all over the ENTIRE file! whaaah!" Yes, it becomes this petty with some people.

So a smart programmer will strive for 'quick perfection', establish respect in some other way to counter this, or if they really want to sink that low, fight back with similar tactics. Smart programmers can also create review traps if they can guess what colleagues will attack them on.

It's also good if you beat up another programmer on the first day so the others know not to mess with you ;).

Enjoyed reading your comment, but if one is drawing too many similarities between their work environment and prison, it could be time to find/create a new work environment.

This is the experienced developer's dilemma: "To engineer or not to engineer", engineering usually turning out to be over-engineering. A more experienced developer friend of mine would always tell me: "build for today's requirements". I try hard to fight the design/architect voices in my head that always want to imagine some made-up future where we will need X or else we cannot go live. These voices usually just stall real work, instill fear, and serve little to no purpose.

I once had an argument with a Wall Street Java developer who was made the "lead" of one of our team projects. He decreed that every single class have an interface so that we could be generic and not tightly couple any of the components to concrete classes. I agreed that interfaces made sense in some instances, i.e. where the same functionality can be provided by different classes with the same method signatures, but not every single class needs an interface (if that is the case, just go with a beautiful dynamically typed language like Python and avoid the code bloat).

He got management on his side, and we went off and built an over-engineered Straight Through Processing solution. It was a sheer nightmare to debug, and the code bloat made me scream one day when we had a serious production issue. Even our manager (who finally had to look at the code once, when most of us were out on vacation, in order to answer some user questions) was flabbergasted at the amount of code he had to read through to answer the most trivial of questions.

One extreme example was an interface for trade references. Our trade references were always strings with a date and some numeric value concatenated to them. The "engineer" decided that we needed an interface for this and added one interface and one concrete class for our trade references. I told him that all classes needing trade references could just have a String instance variable named tradeReference or something like that, and he went on to give me a design pattern lecture. We argued for nearly 20 minutes about this silly thing as he kept insisting that the future was unknown, so we had to future-proof the code against unforeseeable changes. When he said this, I asked him to remove the Crystal Ball plugin he had in Eclipse for predicting the future and get real. He got angry, and we had a team call to waste yet another hour of developer time discussing it.

In the call I mentioned that our trade reference scheme had not changed in 8 years and was unlikely to change... I lost the debate anyway. The ratio of interfaces to concrete classes across most of the code base was largely 1-to-1, which hardly justified the code-bloat approach.

Experienced developers (at least I think) seem to have these crystal balls in their heads or IDEs and usually try to be clairvoyant when it comes down to building a product. We need to get out of the business of over-engineering and just do as my friend said: "build for today's requirements". It is called software for a reason: it is soft. It can change (and most likely will), can be refactored, redesigned, and/or incrementally made better or more abstract to accommodate changes. I am in no way saying no design; just limit it and get to work. A successfully built product is more satisfying than the imaginings in your head and the "perfect" engineering/scaling solution that never materializes. Users will like you, you will like you, and the team will get an adrenaline boost with each and every release, keeping spirits high. Remove the Crystal Ball plugin from your head/IDE, stop trying to be clairvoyant, and be a developer.
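To make the trade-reference example concrete, here is roughly the contrast, sketched in Python rather than the original Java (all names are hypothetical, not from the actual codebase):

```python
from abc import ABC, abstractmethod

# The future-proofed version: an interface plus a concrete class for a
# value that has been a plain concatenated string for 8 years.
class TradeReference(ABC):
    @abstractmethod
    def value(self): ...

class StringTradeReference(TradeReference):
    def __init__(self, date, number):
        self._value = f"{date}-{number}"
    def value(self):
        return self._value

# The "build for today's requirements" version: it's just a string.
def make_trade_reference(date, number):
    return f"{date}-{number}"

# Both produce the same value; one needs a class hierarchy to do it.
assert StringTradeReference("20120101", 42).value() == \
       make_trade_reference("20120101", 42)
```

If the format ever does change, a one-line helper function is exactly as easy to update as the class hierarchy, without the indirection tax paid on every read of the code in the meantime.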

There's a reason this happens more often in languages like Java. Because of the verbosity of the language, it's just painful to rewrite anything, even if the current solution isn't that over-engineered. So there might be a greater tendency to over-engineer at the beginning just to avoid any rewriting, which in the end isn't possible anyway.

I'm a full-time C++ developer, which might be a bit better in this regard than Java, but not by much, and a hobby Haskell programmer, and one of the greatest things about Haskell is its brevity. It makes rewriting a lot less painful, so you don't avoid it as much.

I disagree with your point that the verbosity of Java is the reason for over-engineering. The refactoring capabilities of modern-day IDEs help immensely with reducing the amount of work one has to do to make syntactic changes over the whole codebase. So the argument that developers tend to over-engineer when coding in Java to avoid any pains that may arise because of its verbosity doesn't hold, IMO.

I think that the over-engineering happens simply because it's a pain to do serious refactoring when working on large enterprise software in general, never mind what language it's written in. The sad truth of our profession is that the customer requirements may change quickly and drastically, requiring us to rewrite large portions of our code, and very often we find ourselves thinking "If I only engineered it that way instead of this way, I wouldn't have so much trouble right now". This is why we strive to create the most robust, flexible solution that will be able to handle any future customer requirement. So we basically turn our code into a framework that, we hope, will allow us to respond to change quickly. Unfortunately, we can never predict everything that the users might want, so this whole approach falls down like a house of cards when a user requirement comes in and we need to change a large portion of the code. I believe this is true for a sufficiently large app written in any language, Haskell included.

"The refactoring capabilities of modern-day IDEs help immensely with reducing the amount of work one has to do to make syntactic changes over the whole codebase. So the argument that developers tend to over-engineer when coding in Java to avoid any pains that may arise because of its verbosity doesn't hold, IMO."

Refactoring tools might be nice and will help you here and there, but there's a difference in the abstraction abilities of a language like Java compared to a language like Haskell.

It's not only about the amount of code, but also about the complexity of the code, when building abstractions.

Yes, a refactoring tool might help you deal with the complexity, but the complexity is still there and makes things more difficult.

I never understood the point of using a less capable language and then using a tool to compensate for it, e.g. automatically generating code for it.

"I never understood the point of using a less capable language and then using a tool to compensate for it, e.g. automatically generating code for it."

I definitely agree with you on this one :). Sure, it's better to use a language that lets you have less complexity even as your codebase grows quite large. You mentioned Haskell. Since I don't have any experience with it, what do you think is the reason that it's not used very often for building large enterprise applications (or maybe it is, and I just don't know about them)?

"Since I don't have any experience with it, what do you think is the reason that it's not used very often for building large enterprise applications (or maybe it is, and I just don't know about them)?"

Well, I don't know if it's even clear why other languages are used for enterprise software.

I don't think that their technical (or whatever) superiority was the main reason. Sometimes it seems that all that is needed is to push a language into the mainstream with a lot of marketing and then just let it go.

At some point there are more libraries for a language, most people use that language, universities are teaching it, and that then becomes the main reason to use it.

Java might have been pushed into the mainstream at the right time with the right features: garbage collection and no raw memory pointers made it less complex and less error-prone to use than C/C++.

But perhaps there's something about "object orientation" as implemented in Java that makes it easier for people to grasp, judging by all the hate for those strange Scheme/Lisp courses in universities. Then again, perhaps those people are just too used to other languages.

On Haskell: at first it looks very strange, especially compared to languages like C/C++, Java, or C#, but I think most of that felt strangeness is a matter of habit, because the mainstream languages aren't that different from one another.

I don't think learning Haskell was much harder for me than learning to program in C++ or Java. People sometimes seem to forget the challenges they faced when they first learned to program.

It depends on whether you are engineering an enterprise system/product or building a smaller one-off project.

In the case of the enterprise solution that will be around for the next 10 years, I would agree with your tech lead.

Every time you make a code change for something that is out of dev budget, you face a budget overrun in the project that was interrupted.

If you choose to deliver something that just works and as soon as possible, your total costs tend to balloon over the lifetime of the system.

The question to ask is: how long would it take a new dev (someone who has never seen the code) to change the tradeReference naming convention to include the asset type, or some conditional tag, let's say to conform to reporting regulations or an expanded business mandate?

Interfaces do help here, because the new guy can make a localized change, write a small unit test, and commit the code to source control before you can say "rebuilding search index".

Keep in mind that you do not know whether something is well architected until AFTER it's been in production for at least a year, has had features built into it for another year or two, has added new members to the team, and has lost a few of the original team members in that time.

I believe that most people who are able to look back and claim that they have delivered at least three large projects (>500k LOC c++ or >200k LOC java) or product releases that meet the above criteria, would agree with your tech lead.

Good points; we were building an in-house solution for an Investment management wing in a bank.

To give some context on the trade reference I mentioned: it was an internal identifier within the STP system, used solely for tracking state and for communication between IT and the business. It was a simple date + numeric value that users could look up in our Struts web and C# front ends to check trade state through the STP flow and communicate with us if issues arose.

We did have lots of interfaces where they made sense, and we relied heavily on object composition to represent financial concepts more richly and for injectability via Spring and unit testing (mocking things out makes writing tests easier). Asset types, security identifiers, etc. were represented correctly from an OO perspective and were part of the xsd layer/interface between us and the trading systems. To us these were read-only values we just passed through for STP.

You are right about interfaces and unit testing, but this is one of many cases where I think the lead was going overboard. The internal trade reference stayed the same for 8 years and is still the same to this day (add the 3 years since I left for a total of 11 years). It never had more than one concrete class. On its own that is no big deal, but combined with the other interfaces that only have one concrete class, it just bloats the system for no good reason.
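As a sketch of the pattern being described (the `TradeReference` names and the `format` method are invented for illustration, not taken from the actual system), the single-implementation interface adds a file and a layer of indirection without buying anything until a second implementation actually exists:

```java
// Hypothetical sketch of an interface with exactly one concrete class,
// alongside the plain class it could have been. Names are invented.

// The over-engineered version: interface + sole implementation.
interface TradeReference {
    String format();
}

final class TradeReferenceImpl implements TradeReference {
    private final String date;  // e.g. "20120101"
    private final long sequence;

    TradeReferenceImpl(String date, long sequence) {
        this.date = date;
        this.sequence = sequence;
    }

    @Override
    public String format() {
        return date + "-" + sequence;
    }
}

// The "good enough" version: one final class with the same behavior.
// Extracting an interface later, when a second implementation actually
// shows up, is a mechanical refactoring in any modern IDE.
final class SimpleTradeReference {
    private final String date;
    private final long sequence;

    SimpleTradeReference(String date, long sequence) {
        this.date = date;
        this.sequence = sequence;
    }

    String format() {
        return date + "-" + sequence;
    }
}

public class Main {
    public static void main(String[] args) {
        TradeReference a = new TradeReferenceImpl("20120101", 42);
        SimpleTradeReference b = new SimpleTradeReference("20120101", 42);
        System.out.println(a.format()); // prints "20120101-42"
        System.out.println(b.format()); // prints "20120101-42"
    }
}
```

Multiply the first version by every value object in the system and the bloat the parent comment describes adds up fast.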

Design and architecture are good for the reasons you mention and more, but they can go overboard, as was the case here IMO. There were other instances of that in our code base, but it would take an entire blog post to cover some of the atrocities this engineer created because of his "forecasting ability."

This. A hundred times this. It is truly mind-boggling how many crimes against maintainability are committed in the name of future-proofing.

The rule of thumb: if an extension point is used by only a single extension, get rid of it. It won't fit the next extension anyway, so the work needed to maintain it in the meantime is a waste of time.

I think it all boils down to solving well-defined tasks with minimum possible effort and time spent, and keeping in mind that the end goal is the product, not the code.

For me, the understanding happened in similar order:

- First, as a beginner, I solve all problems with minimum effort possible. The goal is the product.

(It's not real programming, but working with CMS that involves writing code sometimes.)

- Then I see the way to make much cooler and more “custom” products—with a web framework. In order to be able to do that, however, I need to start doing real programming.

- Learning programming, I find that what I was writing before was pure crap. I also forget that the product is the end goal, and care instead about writing code.

- Many LOC but few finished projects later, I discover that code actually doesn't matter much. Other things do: speed, communication, measurement.

- Learning to make and deliver products with minimum possible effort—that's where I am now.

I think what this article is referring to is the importance of retaining perspective of the goals of the project. An experienced programmer might have the ability to program to a very high standard but he should reserve that for times when it is justified—not forgetting that practice or curiosity alone is occasionally adequate justification.

You can go further with this thinking and suggest that maybe it isn't even useful to ask "is this good enough?" Instead ask "is this sensible given xyz?" or "will this be worth doing?", and forget about what it means to be good enough or about assessing the quality of your work in absolute terms.

I would rather do "forgettable" development. It means that I don't have to worry about it afterward and can forget about it. I don't have to go back to fix tons of bugs. The software should run by itself as much as possible. It takes much less time to develop over the entire life cycle of the software, not just the initial sprint.

One corollary is to use simple and proven technology and libraries.

Another corollary is crash-friendly design, i.e. software that can crash at any time and recover at the next restart.

Third is to make software self-configuring, with no configuration needed. That makes operation and scaling very simple.
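The crash-friendly corollary above can be sketched with a simple write-temp-then-atomic-rename scheme (the file names and the "counter=..." state format are made up for illustration): a crash at any point leaves either the old complete state or the new one on disk, never a half-written file, so recovery at the next restart is just re-reading the file.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Minimal sketch of crash-friendly state handling: the rename is the
// single atomic commit point, so an interrupted save never corrupts
// the previously committed state.
public class CrashSafeState {

    static void save(Path stateFile, String state) throws IOException {
        Path tmp = stateFile.resolveSibling(stateFile.getFileName() + ".tmp");
        Files.writeString(tmp, state);      // may be interrupted: old state survives
        Files.move(tmp, stateFile,          // atomic commit
                StandardCopyOption.REPLACE_EXISTING,
                StandardCopyOption.ATOMIC_MOVE);
    }

    static String load(Path stateFile) throws IOException {
        // On restart, recover whatever complete state was last committed.
        return Files.exists(stateFile) ? Files.readString(stateFile) : "";
    }

    public static void main(String[] args) throws IOException {
        Path f = Files.createTempDirectory("demo").resolve("state.txt");
        save(f, "counter=41");
        save(f, "counter=42"); // a crash just before the rename would leave "counter=41"
        System.out.println(load(f)); // prints "counter=42"
    }
}
```

The same idea scales up to write-ahead logs and databases; the point is that recovery is a normal code path, not an exceptional one.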

For the longest time I felt guilty for using short INLINE CSS on certain elements because of the whole "markup and styling must be kept separate".

My reasoning for this was that maintainability was SOOO much easier. Usually I use it on tiny simple elements that have only a tiny amount of unique css (maybe a unique background for each) and repeat a lot on one page (but no other pages). So I really don't want to create 20+ unique IDs in the css, triple the code size, and all for what?

It's a nonstandard definition of optimization in many of these cases, but I still feel like this all falls under "premature optimization is the root of all evil."

Also, importantly, the full(er) quote is apparently "say about 97% of the time, premature...", because even Knuth knew that sometimes you should design for that herd of buses.

Please forgive this comment which is totally unrelated to the article, but does anyone else have trouble scrolling this page in Chrome on OSX? I often encounter pages that give Chrome fits, but I can't quite figure out what's causing it.

It must be the comments, they are lazy loaded a la disqus, so you start scrolling and it stutters as you scroll.

In fact be proud of code that is good enough.

My guideline is to make things happen in as few lines as possible.

You must love Perl.

One way to think about this is to see code as a "consumable" good instead of a "durable" one.

There is an essential difference between a paper cup and a glass.

Say you are throwing two outdoor parties a year, but otherwise will not serve more than 8 people. The best solution, if you start with nothing, is to buy a set of dishware for 8, and then throwaway plastic cups and plastic knives twice a year ad hoc.

Code can be seen similarly. There are one-off bad solutions that are no better than a paper cup. You can't iterate on them (like washing a plastic cup) because they start to fall apart.

Then there is glassware. It's more expensive, but durable.

So, one approach is to look at your resources and your immediate and expected future needs, realize that better code is more durable but more expensive, but that there is nothing wrong with "consumable" code that you can't wash more than once or twice before it starts being "a mess".

Because "code is forever," we tend to think of it as not being consumed once it has served its purpose; but given the nature of engineering, thinking of it in exactly those terms is, in my humble opinion, quite appropriate.

Once you realize this difference, you can make strategic investments into durable and consumable code. You usually can't fortify a paper cup, nor turn a plastic cup into a glass one, though, so often this is a decision to make several times over the lifetime of your "household"! :)

For personal use, if you have very little money (time/resources) there is nothing wrong with starting with paper, buying plastic and then glass or ceramic, spending, overall, three or four times as much money as if you had just bought a beautiful antique set of dishware for yourself to begin with. Often, though, that is not the real situation: realistically, you could "do without" for a while, and then buy a durable good you won't replace.

These are difficult investment decisions for households, individuals, and companies.

Don't discount renting, either! In this case, that could be analogous to licensing someone else's software.

> Say you are throwing two outdoor parties a year, but otherwise will not serve more than 8 people. The best solution, if you start with nothing, is to buy a set of dishware for 8, and then throwaway plastic cups and plastic knives twice a year ad hoc.

Not really. Only if you have no time to look for good dishware (surprise party?), can't afford it this instant (or fall into the common trap of not doing the long-term math), or absolutely do not have space for the "extras," which is fairly rare.

To extend the metaphor, this “disposable” code tends to end up in unexpected places and stick around polluting the ecosystem forever.

Or, like some people, using disposable stuff every day and probably having trash all over the house to show for it.

That said, disposable code is fine. The metaphor isn't exact, but does point to some things to be cautious of.

Plastic cups and plastic knives don't break like your fine china though. At parties, this is a good thing.

Right, but forget fine china: some people would prefer to give their guests real plates, and do so when they have 10 guests, but they don't have 100 plates, so when 100 guests show up on their lawn they use disposable ones. The reason doesn't matter. See my comment above about why it's hard to think of code as a consumable good AT ALL (for any reason).

My point wasn't really about tableware, it was about disposable versus durable goods.

That then points to a reasonable solution: use disposable stuff for rare events but have a good set for everyday use.

I guess I was being incredibly unclear then, because that was my whole point. :/

A lot of code is for rare, disposable events and doesn't need to be built like a ship; more like a paper airplane.

Right; it's a very inexact metaphor.

The metaphor is only tricky because code doesn't LITERALLY get consumed. It's not as though running a piece of code 1000 times requires 1000 copies, with one fewer each run, like a bag of chips that disappears when you eat it, batteries that get used up (after one full discharge for non-rechargeable ones, after a while for rechargeable ones), or the paper cups in my example. Literally, code is very durable: it becomes inappropriate, or stops working or being effective, because of its context, not because the bytes themselves deteriorate. It doesn't wear out or run out the way a consumable good does; you never need to buy another bag of it. It seems to be a bag of something that never empties, so it feels obvious that if you're going to fill such a bag, you might as well make its contents absolutely perfect, you know?

That leads engineers to build everything like a ship (a durable good) and to code (or sysop) nothing like a consumable, something they can't picture using forever.

Of course the metaphor is inexact:

any line of code you have on a reliable medium is in the same condition (literally the same characters, literally 0 difference) as when it was last written or updated.

Consumable goods get consumed or deteriorate, but code seems to "be forever," i.e. subject only to the license imposed on it, not to its physical quality. That quality leads to overengineering: given the mentality that it will be available forever, you might as well code it "for forever."

What I'm suggesting is that sometimes it's useful to look at code as though it were bags of manure (consumable) instead of a plot of land or an apartment/house you own in perpetuity.

This is just for resource allocation/investment decisions an engineer is making. A typical example is that it is VERY hard for an engineer to say, "I will write this in this one language/framework that I know really well, and is totally inappropriate, in 10 minutes, it will be like a paper cup. Then I'll direct resources elsewhere, and when I need a glass I will throw this cup away."

The tendency is to say "I can't fortify this paper cup, it doesn't scale, so 'why build sand castles'..."

I am saying that a consumable good (sandcastle code) is often very appropriate given resource restrictions. It is also often VERY inappropriate, as when people who own no dishwasher or tableware buy a bag of plastic cups and a bag of paper plates every week and throw them out every week. (People do live this way.)

It's a delicate balance; this is one tool among many for deciding where to allocate your resources. I should also specify that renting a durable good might be more akin to subscribing to a web-based service. Again, all of these are analogies to help you make good time and resource investment decisions and think of things in appropriate terms. It's easy to throw out a paper plate you bought once when hosting a rare party that temporarily overloaded your capacity. It's very hard for engineers to realize that sometimes it's time to write that same paper-plate code and throw it away without ever washing it (investing more coding into it, or trying to engineer a scalable architecture into it after the fact). These are things to know up front so you can make informed decisions and stick to them as your "household" evolves.

All too often the difficulty is convincing Management to stop treating plastic cups like glass ones.
