Yeah, shipping something real is important, so long as you refactor long-term. Technical debt piles up. A car payment is manageable; a car payment, medical bills, and an upside-down mortgage can cripple you.
Wisdom from comp.lang.perl.misc from the mid '90s, that's served me well my whole career: "_Always_ throw away the first version."
It provides two major benefits: 1) it lets you give yourself permission to take unwise shortcuts and just hack it together till it works, because 2) you've agreed with yourself that it's going to get a ground-up rewrite, which you'll approach with a much better understanding of the problem (and the solution, and the customers, and...)
Ahhh yes - we all start out with the best of intentions, but I still somehow occasionally end up with what _I_ thought was prototype/demo code out there in production.
(I'm much more careful these days about either writing properly secured "demo code", or at least hard-coding non-routable IP addresses into it - with comments pointing out the lack of security right where someone would need to be looking to change those hard-coded addresses…)
I don't think there's always a long term, though. For example, I am an engineering PhD student. I am currently writing a small application to automate some simulation tasks through the COM interface and output the results to a database. I don't really intend to share the code with anyone, and I only need it to be good enough that I can add small features without too much trouble.
I started with some really ugly C# code, learned some more about OO, and tried again. Now I just have some bad C# code, but it's doing what I need. I'm definitely a better programmer now than I was when I started, and maybe if I find myself a free week sometime later, I'll rewrite the whole thing again.
Note I said -refactor-, not -rewrite-. I am not a fan of rewriting apps: developers always underestimate the complexity involved, and then business rules change along the way, and it's just a mess.
Refactoring is when you rework a specific section of code, usually one you're about to work on or just worked on, to make it cleaner. It's a long-term approach, and it's how large applications stay maintainable over the long term. For your specific example, you'd leave your app alone. You wouldn't rewrite it, or touch it, until you find a bug or need to add a new feature, and then you'd rework whatever you need to in order to add the feature cleanly.
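As a concrete sketch of that "rework the section you're about to touch" move (all names and numbers here are invented for illustration): you extract the tangled rule into its own function just before adding the feature, so the feature has one obvious place to go:

```javascript
// Before: pricing logic is tangled into the report, so a new
// discount feature would have nowhere clean to land.
function reportBefore(items) {
  let total = 0;
  for (const item of items) {
    total += item.price * item.qty * (item.taxable ? 1.08 : 1.0);
  }
  return `Total: ${total.toFixed(2)}`;
}

// After: the pricing rule is extracted first, and only then is the
// feature (a discount) added, in the one place that owns the rule.
function lineTotal(item, discount = 0) {
  const taxRate = item.taxable ? 1.08 : 1.0;
  return item.price * item.qty * taxRate * (1 - discount);
}

function reportAfter(items, discount = 0) {
  const total = items.reduce((sum, item) => sum + lineTotal(item, discount), 0);
  return `Total: ${total.toFixed(2)}`;
}
```

The old behavior is unchanged with the default arguments; the refactor just creates a seam for the new feature.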
On a related note, Refactoring by Martin Fowler is a great book, and I'd recommend everybody pick up a copy. It's a beautiful hardcover and looks great on a shelf, too.
all very true, the one exception being API design. once it's published and used, it can quickly become a nightmare of legacy code that haunts you well beyond its expiration date...even if versioned. often this does, in fact, translate to needing proper REST routes from the get-go, etc. (which can just be mod-rewritten of course), but releasing a prototype API can be quite scary, especially if your service turns out to be extremely popular and is plagued by its own too-early success.
api iteration is something most of my projects go through as i use my own systems and refactor the internals to reduce the almost inevitable initial complexity as it begins to piss me off.
to a degree, i suppose the adage still holds true, but moreso as "bad api design on product A leads to better api design on product B (or v2)"
This is why using your API two or three ways before publishing it is so important. There's a quote that goes something like, "If you build a plugin system with one plugin, that's all it'll ever support. But if you build it with three, it'll support them all."
But then, this is really just doing what the author advocates, privately, within your team, before releasing the API.
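A minimal sketch of that "three plugins" idea (the plugin contract here is invented): building the host against three deliberately different plugins at once forces you to define a real interface, instead of mirroring the quirks of the one plugin you happen to have:

```javascript
const plugins = [];

function register(plugin) {
  // The contract every plugin must satisfy, however different they are.
  if (typeof plugin.name !== 'string' || typeof plugin.transform !== 'function') {
    throw new Error('plugin needs a name and a transform(text) function');
  }
  plugins.push(plugin);
}

function run(text) {
  // Feed the text through every registered plugin in order.
  return plugins.reduce((acc, p) => p.transform(acc), text);
}

// Three different plugins shake out what the interface really needs.
register({ name: 'trim',    transform: (s) => s.trim() });
register({ name: 'upcase',  transform: (s) => s.toUpperCase() });
register({ name: 'exclaim', transform: (s) => s + '!' });

run('  hello  '); // 'HELLO!'
```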
It's amazing how many people consider dogfooding, before going public, to be a waste of time. Not only do you discover a whole bunch of bugs and difficult uses you didn't anticipate, it ends up being stronger when the public or partners are eventually let loose on it.
The internet is rife with v1 APIs that don't have version designators and are deprecated in favor of a version two that was released to "clean things up" but which never really supplanted the version one.
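One hedge against that trap, sketched here with made-up routes and payloads: put a version designator in the path from day one, so a "cleanup" v2 can coexist with v1 instead of stranding v1's clients:

```javascript
// A toy route table keyed by "METHOD path". Both versions stay live;
// v1 keeps its legacy response shape, v2 gets the cleaned-up one.
const routes = {
  'GET /v1/users': () => ({ users: ['alice', 'bob'] }),
  'GET /v2/users': () => ({ data: [{ name: 'alice' }, { name: 'bob' }] }),
};

function dispatch(method, path) {
  const handler = routes[`${method} ${path}`];
  if (!handler) return { status: 404 };
  return { status: 200, body: handler() };
}
```

An unversioned `GET /users` would have locked the legacy shape in as the only contract; with the prefix, deprecating v1 becomes a policy decision rather than a breaking change.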
Having multiple modes of work is critical in the field. It's an unfortunate reality that the theoretical complexity of programming projects is limitless, which means you simply cannot have sufficient resources to do everything correctly. This is where the balance of not just approaches, but also development methodologies, comes into play.
Whenever I'm working on my personal project I do things that many modern developers would balk at. I'll design and redesign components left and right, I'll leave components in a half-completed state, and I'll write the bare minimum of tests to ensure the system works. This is the risk that comes from approaching programming as an art form. However, like any other artist, I will not go out there to show off my project until it's done, so I don't have to worry too much about pissing people off with my erratic style and questionable intermediate decisions.
On the other hand programming as a profession must be wholly different. If I am getting money for code the expectation is that people want to see whatever they're paying money for. In this case I don't have the opportunity to screw around and try to match the perfect set of ideas to the problem set. Instead I will give myself two or three iterations to do what I've been paid to do, and then I'll clean it up so it satisfies the technical requirements.
Fortunately the latter style of programming tends to lend itself a lot better to TDD, agile, responsiveness and all those other terms we love to throw around until our clients give us that blank stare. At the very least projects you get paid for should have some sort of spec, so you'll be able to say for certain whether you have or have not done what's expected of you.
Whenever I'm working on my personal project I do things that many modern developers would balk at. ... On the other hand programming as a profession must be wholly different.
Even when working professionally, different approaches fit projects of different scales. Processes and tools that keep you sane and productive if you’ve got multiple small teams to co-ordinate over a multi-year project could be absurdly over-managed and inefficient for a couple of developers at adjacent desks who are spending a month writing a bit of in-house automation software.
(Insert obligatory citation of The Mythical Man Month here. It’s about more than just why adding people can make late projects later.)
1. Make it right.
2. Make it work.
3. Make it fast.
Once code works, it's all too easy to move on and leave it the way it is. It also usually takes significantly more time to take code that works but is god-awful and clean it up than to take code that is well-factored and make it work correctly. The former usually involves rewriting large fractions of the code (frequently breaking it somehow in the process), while the latter usually just requires minor tweaks after automated testing.
"Make it right, then make it work" seems to me to take about twenty percent more effort than just "Make it work", but half the effort of "Make it work, then make it right." And dramatically less effort as the time between "making it work" and "making it right" grows.
Very true, but it can sometimes be hard to convince the boss that we need to go to step 2 when the system is working. I try to apply at least some parts of "right" the first time.
The other half of it is learning to engage more with other people's code and get inside their head.
The stereotype of the young coder, when given no oversight, is to skim over something, declare it crap, and start rewriting without even trying to maintain the original. Sometimes that is the correct move, but since they haven't engaged with long-term maintenance before, they can't tell the difference.
I look at a lot of my old code and think that, as I should. You should be learning from every piece of code you write, so that by the time you're done writing it, it's already obsolete relative to your new set of knowledge.
I look at this from the other angle. I don't think we're getting much better. I think we rarely reduce the amount of bad code we write without outside critique. New code looks like better code because it is more similar to your current style and preferences, and nearer to you in memory than that old code was.
To really analyze whether your code is good or not you must do what you did with that self-identified bad code: step away for months, forget everything about it, and re-visit it to modify it in some way. Or get someone else to do it for you now and tell you why it sucks.
Code written "in the zone" is more likely to be bad, because when you're "in the zone" even bad code makes sense to you.
True. I always feel happy that I'm writing good code until I leave it untouched for a month and come back to realize what a mess I made. The best way I've found to tackle this is to put comments everywhere, even in places that seem too obvious.
Part of this may be due to the fact that many developers today are using programming languages and platforms that are in many ways vastly inferior to those we were using a decade ago, or even two or three decades ago.
Take JavaScript, for example. It's a clear step backward in almost every respect from languages like C++, Java, C#, Python and Perl. JavaScript is full of inexcusable, inherent problems that just plain should not exist. It has basically no standard library, and what does exist is not good at all. Third-party libraries help slightly, but they're rife with problems caused by a lack of proper namespacing or other modularity-enabling language features, and the numerous different ways to fake badly-needed class-based OO functionality. The basic development tools (editors, debuggers, and so forth) are lacking in so many ways, and the various runtimes aren't much better.
The only thing it has going for it is that it's widely available in web browsers. That's it.
When using a language inferior to what we were using in the 1980s, 1990s and early 2000s, it wouldn't surprise me at all if inferior software systems are produced. It's just not feasible to build robust systems on such a shaky, rotten foundation.
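For readers who haven't fought these battles, the namespacing and class-faking complaints above refer to idioms like the following (illustrative pre-ES6 patterns; all names invented): an immediately-invoked function faking a namespace with private state, and a constructor/prototype pair standing in for a class:

```javascript
// "Namespace" object, guarded so multiple files can share it.
var MyApp = MyApp || {};

MyApp.geometry = (function () {
  // "Private" state lives only in this closure.
  var PI = 3.14159;

  // Constructor function standing in for a class.
  function Circle(r) {
    this.r = r;
  }
  // Methods go on the prototype so instances share them.
  Circle.prototype.area = function () {
    return PI * this.r * this.r;
  };

  // Only the "public" part escapes the closure.
  return { Circle: Circle };
})();

new MyApp.geometry.Circle(2).area(); // ~12.566
```

ES2015 modules and `class` syntax later absorbed both patterns into the language, but code written in this era had to hand-roll them, differently in every library.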
I've been programming for a number of years, and I still follow this mindset. I'll write out a function and then later realize I need to use a subset of it somewhere else, so I refactor it out into its own function. Or I'll realize I've now extended the same class twice, so I create a better abstraction. It's much easier to create well-designed code when you actually see a need for something in the source and do it, instead of starting first with some fancy design pattern you saw someone else use on another project. I mostly focus on getting the small, granular stuff right the first time and keep an eye out for any larger problems being created in the code.
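That "extended the same class twice" signal might look like this sketch (class names invented for illustration): once two subclasses exist and both repeat the same loop, the shared behavior gets pulled up, and each subclass keeps only what actually differs:

```javascript
// The abstraction extracted after the second subclass appeared:
// the iteration lives here once, instead of being copied twice.
class Exporter {
  export(rows) {
    return rows.map((r) => this.formatRow(r)).join('\n');
  }
  formatRow(row) {
    throw new Error('subclass must implement formatRow');
  }
}

// Each subclass now carries only its genuine difference.
class CsvExporter extends Exporter {
  formatRow(row) { return row.join(','); }
}

class TsvExporter extends Exporter {
  formatRow(row) { return row.join('\t'); }
}

new CsvExporter().export([[1, 2], [3, 4]]); // '1,2\n3,4'
```

The abstraction was earned by a concrete duplication, not guessed at up front, which is the point of the comment above.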
What a great point! This can happen to even us "seasoned" developer types. I remember learning TDD after years of writing code without it. I was paralysed by trying to make all of the right decisions. Should I write a test here? Am I going to have the right coverage? Is my test too broad, or not broad enough? Did I write my code correctly? I would always forget that refactoring, especially consistent, incremental refactoring, is actually the best way to go. We learn best by doing, by making mistakes, and by incrementally bettering our skills and our code. A lot of developers would be so much better if we would only remember that.
I have different modes of working, and switch between them depending on the context.
When I wish to learn something new, I choose a new technology stack. I read a lot about it, and try my best to code in an idiomatic and clean style for that tech. I might have a couple of iterations right from the start, where I just want to get something to work, and then change it to be clean.
Then there is the kata-mode, where I do something that is familiar to me, but just try to do it better than before. Just to improve my existing skill set when I don't have the energy to do something new.
Then there is the project mode, where I've got to get paid. This is where the shortcuts are made. When I have time, I will code a version that works, with some planning; and I'm talking at the micro level, since macro-level planning has been done already. Then I commit it. Then I refactor it, and if I'm happy, I amend the previous commit. But when the pressure gets higher and there is just no time, the second step is dropped. This is where the technical debt starts accumulating, but at least it works and the project progresses. The first moment I get more time, I try to do some refactoring, but at that point there never is time or reason to fix all those TODOs and broken windows.
It's a balancing act for me, and at work I have seen both extremes of this balance. Those who are the most experienced can switch between different modes depending on the deadline, and I see that as one of the most important skills to have in our craft.
This is something I relate to a lot. I've been paralyzed many times by being unsure if I'm coding the Best way. In the last 5 or so years I've written some awful code, but every piece of terrible code taught me how to write good code. The bad code worked, it did what it was supposed to do, but it ended up causing complexity and making it hard to add on to down the line, and that is what helped me learn to do things right.
It has nothing to do with bad code not working. I think it's when bad code works that you learn. You should have seen my pet project when it was started in 2011 (which I'll shamelessly plug: https://writeapp.me). It was a series of scripts all strung together. Then it was rewritten using a framework when the first version couldn't be extended. From there the models, views, and controllers have been refactored many times over, each time making the app more extensible, less buggy, and, from a code perspective, more "right".
It's a one step backward, two steps forward kind of situation. Each time you hit a wall you go back and not only fix one problem but move forward helping yourself avoid other common pitfalls.
And it's really bad when you look at a project after a one-month break and have no idea what's going on. If I know I'm writing bad code, I leave a fun treasure map of comments. Gibberish to others, but it takes me back to why I wrote a function like that.
Great post Nathan. Definitely something I am going through right now. I keep asking people "Is this the right style?" when really my bigger problem is getting it to work :)
But how often does one actually fully understand the problem on the first try? Doesn't one sometimes take the wrong approach one or more times, and learn some things the hard way, before truly understanding the actual problem to be solved?
Great post. I agree that technical debt only applies to those who understand its significance. When learning, the feedback loop needs to be tighter. Experienced programmers are disciplined enough to determine the appropriate balance.
This has bugged me too. But I've come to accept these as just over-sized tweets, which helps. I sort of see these as a thread-starter for a discussion here, rather than an article you'll google for 5 years later to show someone.
I would tend to agree with you on this. I was expecting something with more insight.
It's been my experience that for a lot of software, the bad code stays bad code, especially if it's being written by people who really don't care about the craft.
I guess we have a lot of perfectionists on HN who might salivate over every variable or function name (I say this in jest, as I used to be guilty of this).
The point isn't that people improve as they go. The point is that if you're like this author (and like many people, myself included at times), then when you start working on something, you obsess over the exact, perfect, super-over-engineered version of some inconsequential thing, and you'll rehash it endlessly when many prior versions would have been just fine.
Instead, when you've gotten to X ability level, go ahead and make something at X ability level, don't let learning about a new, better way of doing that something stop you from doing it.
There was a saying that I'm going to butcher here from PHP:
"While many people obsess over the little details that don't matter and quibble over language wars, the person that knows PHP just gets things done."
This article is in that similar vein where it's more important to ship than to quibble about the things that don't matter that much.
Instead, when you've gotten to X ability level, go ahead and make something at X ability level, don't let learning about a new, better way of doing that something stop you from doing it.
There is a time that this is good advice, and a time that it is bad advice. It is good when it helps you get over analysis paralysis. It is bad when it helps people justify their turning the opportunity to gain 15 years of experience into having gained one year of experience, 15 times.
There was a saying that I'm going to butcher here from PHP: "While many people obsess over the little details that don't matter and quibble over language wars, the person that knows PHP just gets things done."
Being snarky, PHP stands as an example of where you should take a step back, and learn better ways to do it.
It is good when it helps you get over analysis paralysis. It is bad when it helps people justify their turning the opportunity to gain 15 years of experience into having gained one year of experience, 15 times.
Absolutely. I agree 100%. It should definitely be a case-by-case basis.
So it is with software.