This is REALLY bad for me in Cocoa/Cocoa Touch apps. Objective-C is so verbose and writing classes is so "mechanically cumbersome" (having to write .h and .m files and duplicate stuff, for example) that I sometimes lose interest before I ever really get off the ground.
The trick I use to beat this tendency is to totally eschew good software engineering principles at first. Typically all of my code lives in main.m (and I usually start out in CodeRunner rather than Xcode). Interfaces are done sloppily in Interface Builder or entirely in code. This really helps me get started.
Before long, of course, I have this unwieldy spaghettified mess of a single source file. By that time, though, I have some stuff WORKING, so doing some refactoring into a good architecture for the code I have while keeping the working stuff working is kind of fun.
I will code very thin "frameworks" if it becomes obvious that it will save time over the long haul, but there is no "this might be useful someday" code. If it doesn't move the project forward, it doesn't get done.
If your most precious resource -- time -- is limited, it forces ruthless prioritization.
There are also things that I have that technically work, but are ugly, but are never getting fixed. I learned from them and it's fun to look back at how much better I got over time. I could probably go back and turn everything into nice classes and remove big blocks of commented-out code, but...it'd be pointless. I'd rather work on something new and do it better the next time. Tackling exciting new problems is what makes coding interesting for me, and I'd hate to lose that "just" to be proper.
I like to think of my lang dirs in my homedir (ruby, c, python, js, php, etc) as language-specific "coding sketchbooks" where I'm developing recipes that I might borrow from later. It's kind of similar to being a chef, I suppose -- you finely craft and refine dishes for your day job ("staging/production"), maybe you're lucky enough to get some paid time to work on your ideas ("dev"), and you dedicate one night a week to culinary experiments on your own time, maybe with friends or family ("passion").
I feel like a good programmer is like a good chef, and if you lose the passion, you mostly stop getting better.
So, for example, I may imagine that data sources for a page will come from several places: the DB, Google Maps, an API like Twitter, etc. And I will create stub classes to capture this concept, but the methods will just return hard-coded elements. Eventually, if the project goes anywhere, I can build out the underlying methods as necessary.
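A minimal sketch of what those stubs might look like, in Python for illustration. Every class and method name here (DbSource, MapsSource, TwitterSource, build_page) is invented for the example, not taken from any real project; the point is only that the page can be wired up end-to-end while each source returns hard-coded data.

```python
# Hypothetical stub data sources: each returns hard-coded data so the page
# can be assembled now, with real queries/API calls filled in later.

class DbSource:
    def listings(self):
        # Real version would query the database.
        return [{"id": 1, "title": "First listing"}]

class MapsSource:
    def coordinates(self, address):
        # Real version would call a geocoding API.
        return (40.7128, -74.0060)

class TwitterSource:
    def recent_posts(self, handle):
        # Real version would hit the Twitter API.
        return ["Hard-coded post one", "Hard-coded post two"]

def build_page():
    db, maps, tw = DbSource(), MapsSource(), TwitterSource()
    return {
        "listings": db.listings(),
        "coords": maps.coordinates("some address"),
        "posts": tw.recent_posts("someone"),
    }
```

Because the rest of the code only talks to these classes, swapping a stub body for a real implementation later doesn't disturb anything downstream.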
I find this helps me feel like I'm doing real engineering but not get too lost in the abstractions.
What I also like to note to myself, mentally anyway, is that it is far, far, far more rewarding to start and finish an app or project than it is to be "almost" finished with what you may consider perfect code.
The ship date for a chip he was managing was just a few weeks away when he got a call from a supplier informing him that a key component would be delayed. Worried, he went to one of the engineers who designed the chip and told him of the problem.
The engineer was not concerned; "I thought they might not deliver that component, so I left extra space around it where we can add these additional parts that do the same thing."
Joel was happy, but surprised, and he took a closer look at the chip. "But you didn't leave extra space for other components on the chip."
"Yeah," the engineer replied, "I just didn't think those would be a problem."
One of the harder jobs in engineering and design is anticipating problems. That engineer took the time to think carefully about the problems that might arise with each of the components on the chip, prioritized the risk, and only spent the time to really carefully architect the parts of the circuit that were most likely to be trouble.
It seems that software is the same way; the best coders will assess the risk of all the code they write, and only spend their time and energy to protect against the problems that are most likely to arise. That leads to code that is better than "good enough" but is not over-architected. Of course, that intuition is really hard to develop and impossible to perfect.
It comes every time a boss or client says "oh, and now we need it to do XYZ. And you can't rewrite or start over. We need it tomorrow. Build on what you have - reusable software is our goal." and so on.
No one is going to go to a bridge engineer after a two-lane bridge that took two years to build is up and running and say, "OK, add 4 more lanes by next week. Oh, and you have no budget." No one in their right mind would ever dream of doing that. But with software we face it all the time.
Basically, many of us are conditioned to have to extend on top of whatever our first iteration is, so we try to make the first iteration extendable. I'm not saying it's right or good, but I've fallen in to this trap too many times in my career and seen it happen to too many other people to think it's not a contributing factor.
Occasionally "immediately" really does mean "in the next 5 minutes". Oftentimes it actually means more on the order of days or a couple weeks. "Scaling" can mean different things, and anything beyond "scaling" web requests alone will likely mean a shift in business operations that can't/won't happen overnight either.
In most situations I've been in, people request something, but the rest of the business unit really isn't ready to handle the change. "We need the new data reports now," but when you make the change you realize the other company that pulls the data to process it is on vacation and won't be back for two weeks, and if you make the change now, it'll break everything.
However, this can be a terrible mistake if your users are not resilient to downtime or lost data (e.g. in the case of Facebook games). Even just a few hours of downtime or slowdown can cause a significant, permanent drop in users.
"Always write code as though it will be maintained by a homicidal, axe-wielding maniac who knows where you live ..."
There are two ways to write poorly maintainable code. One is poorly organized spaghetti code and the other is overly architected code.
Larry Wall also once said, "a programmer can write assembly in any language."
If the code is designed properly, the only requirement I have is that I can go back into my code and change it easily to what I now need it to do. It should be malleable like silly putty. If I can do that easily, without requiring massive rewrites, then this means that the code can change as my requirements change, and that to me is a good design, and "good enough" code. So don't worry about optimizing too early, but make sure the code can be optimized easily when you need it to be.
If I want to make code changes that requires massive rewrites when my requirements change a little, then it means I've coded myself into a corner, and I've done a poor job designing the code.
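One way to read "malleable like silly putty" in code terms: route callers through one small seam so the implementation behind it can be swapped without rewrites. This is a hypothetical Python sketch (the names `naive_search`, `find_users`, and the imagined `indexed_search` are all invented for illustration):

```python
# Good-enough first version: a linear scan. Fine until profiling says otherwise.
def naive_search(items, term):
    return [i for i in items if term in i]

# The seam: everything else calls search(), never naive_search() directly.
search = naive_search

def find_users(users, term):
    return search(users, term)

# Later, if requirements change, rebind the seam to a faster or smarter
# implementation; find_users() and all its callers stay untouched:
# search = indexed_search
```

The design costs almost nothing up front, yet leaves the "optimize later" door open, which is the property being argued for above.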
However I disagree with "that's the nature of software engineering. You never stop learning and evolving."
The reason: burnout. My guess is that at least 10% of developers begin to lose enthusiasm for coding after a few years and then at some point either change jobs, become managers, or just have very little motivation to learn having seen the futility of it all. They may be forced to continue to learn, but may do so at a slow pace.
Why? You write code, and after a few years (or less) it can be thrown away or go unused without much of a thought. You see that many of those driving projects really don't have any higher purpose; other than some perceived business need, most of it is just "wouldn't it be nice".
I feel that it is sick for a person to continue blindly learning new technology just for the sake of it. You need to have a reason. Jobs was not my favorite person in the world, but one thing he did right was to believe in what he was doing and why he was doing it. Without this, any evolution is worthless.
I know a few coders who have spun their wheels for a decade or possibly longer because they're still idealizing wheel reinvention. The reinvention is the easy part - after all, someone already did it, so you're just learning what they did, "the hard way." What's hard is learning to leverage the ecosystem as much as possible while bringing in original ideas; as in entrepreneurship, there are no tutorials for that.
Throughout the design of his aircraft, Charles knew that his life hung on the details. For example, rather than use an all-metal design, much of the exterior was cotton fabric (to reduce weight). His design was pragmatic, practical, and brilliant.
For the most part, software developers do not create systems where the modularity, efficiency, and stability are paramount to the success of a business or the safety of the people who use the programs. Developers often create systems for data entry and data analysis. It is the data that allows a business to take flight, as it were.
You can replace system front-ends in a fortnight. Dirty data, however, can skew results and impart inflexibility in the system. Bad data can ground a business. These days I care about the software, as my mind reminisces about the minimalism and beautiful design imparted upon the Spirit of St. Louis. Yet I care much more about the quality of the database and the cleanliness of data.
Software and hardware are both destined to hit a limit, in any current configuration, no matter how it's built or put together.
Whatever computer we buy, it has a limit. The day will come that the capability we originally had will not be able to power what we need. We decide how much (and how far) to invest into the future to stay on a machine. This can be a benefit sometimes, other times, not. Sometimes we need a computer to be good enough to do a certain task, other times not.
Building software has a similar shelf life. All software, no matter how it is (or isn't) architected, will have its limits. This can be a benefit sometimes, other times, not. When those limits are hit, you'll have to deal with it: throw more horsepower at it, or refactor.
If there's code that isn't updated often and doesn't need to be super performant, good code will be the same as great code.
When starting a new project, I find myself more and more asking the questions:
- How long will I need this codebase to do what it does?
- Will the codebase grow?
- How soon/often will it grow?
- Will additions be trivial / non-trivial?
Most often I now just start with an ultra lightweight MVC framework to keep my coding semi-organized and primed to re-factor, but not much more. I have a set of scripts that will initialize an entire project how I like and I can quickly start hacking on a new project/idea in a few minutes.
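A project-init script like the one described might look something like this. This is a hypothetical sketch, not the commenter's actual tooling; the directory layout (`models`, `views`, `controllers`, `app.py`) is invented to stand in for whatever ultra-lightweight MVC skeleton you prefer:

```python
# Hypothetical project scaffold: create a minimal MVC-ish skeleton so a new
# idea can be hacked on within minutes, semi-organized and primed to refactor.
import pathlib

def init_project(root):
    root = pathlib.Path(root)
    for d in ("models", "views", "controllers"):
        (root / d).mkdir(parents=True, exist_ok=True)
    (root / "app.py").write_text("# entry point\n")
    return sorted(p.name for p in root.iterdir())

# init_project("newidea") creates the skeleton and returns what it made.
```

The value isn't the particular layout; it's that the decision is made once, scripted, and never spends mental overhead again.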
The less I obsess over every small architectural detail, and the more I let my decent habit of being reasonably kind to my future developer self do the work, the more I find myself having fun while being responsible.
It sounds like that setup would really cut down on the mental overhead needed to start a new project, and also cut down on obvious errors (typos, forgetting to add something, etc.).
Then if it's 'good enough', leave it at that.
If it needs work later, anybody can do it - you didn't make it too dense, compact, or concise to easily reinterpret or diagnose.
It's not just a good idea, it's pretty much an obligation if you are paid to create it. Keep personality out of it, complete it on time and under budget, move on.
And really the over-abstraction is wasted time. I've seen things added because "we might need them some day". These things take days or weeks to add and then never are used by customers even after many years have passed.
The over-abstraction seems to be a drug to some developers. They can't stop doing it.
There's no shame in code that's 'good enough', but I think there's a danger in this article of missing an important point: you can't classify your output if you don't know its context and goals. If you're coding something that definitely won't be used again, then make it 'good enough' for this use case; if you need it to be extended by others over the next few months, then make it 'good enough' for that use case; and if you have a contractual obligation to get it out to the client 'now', then stop thinking and start doing!
But paralysis is costly. If you lack the required knowledge to take a sensible decision, then take a gamble: never hold up making a low-risk choice because you don't know enough. The opportune time to learn what the right choice was is once you've made it and can evaluate the outcome with data.
The other thing to note is that complexity on its own isn't bad; complicatedness is. The knowledge you've gained shouldn't be making your code more complicated. You should spend your time on creating an arsenal of simple solutions to complex problems. That is perfection (and you will never reach it).
Optimize your time by evaluating after instead of planning before; over time create simple solutions to complex problems and use these as shortcuts to act decisively.
Good enough is the enemy of "at all."
Easier said than done :)
The only argument I could really come up with involved too many coding buzzwords to be taken seriously and we moved on to the next task.
That's kind of stuck with me. Whenever I'm thinking about a code change or "clever" design I just try and see if there's a justification beyond something involving words like "abstract" or "cohesion".
Once your product vision is clear, you can try to see as much into the future as you can. You know where your product can/will go based on what problem you are trying to solve. Once you get into "what-if" territory, you know you have ventured too far.
Therefore, I truly believe that engineers should be aware of the business needs and the product roadmap/vision to make such decisions. Engineers can then decide where (and how much) flexibility should be added. Most future-proofing is done for scenarios that may never exist outside the imagination of the engineers. They should know what can be possible and what cannot. No system can be designed to handle all scenarios without adding untold complexity.
So a smart programmer will strive for 'quick perfection', establish respect in some other way to counter this, or if they really want to sink that low, fight back with similar tactics. Smart programmers can also create review traps if they can guess what colleagues will attack them on.
Its also good if you beat up another programmer on the first day so the others know not to mess with you ;).
I once had an argument with a Wall Street Java developer who was made the "lead" of one of our team projects. He decreed that every single class have an interface so that we could stay generic and not tightly couple any of the components to concrete classes. I agreed that interfaces made sense in some instances, i.e. where the same method signatures could be backed by different classes, but not every single class needs an interface (if that's the case, just go with a beautiful dynamically typed language like Python and avoid the code bloat). He got management on his side, and we went off and built an over-engineered Straight Through Processing solution.

It was a sheer nightmare to debug, and the code bloat made me scream one day when we had a serious production issue. Even our manager (who finally had to look at the code once, when most of us were out on vacation, to answer some user questions) was flabbergasted at the amount of code he had to read through in order to answer the most trivial of questions.

One extreme example was an interface for trade references. Our trade references were always strings: a date with some numeric value concatenated to it. The "engineer" decided that we needed an interface for this and added one interface and one concrete class for our trade references. I told him that all classes needing a trade reference could just have a String instance variable named tradeReference or something like that, and he went on to give me a design pattern lecture. We argued for nearly 20 minutes about this silly thing as he kept insisting that the future was unknown, so we had to future-proof the code against unforeseeable changes. When he said this, I asked him to remove the crystal ball plugin he had in Eclipse for predicting the future and get real. He got angry, and we had a team call to waste yet another hour of developer time discussing it.
In the call I mentioned that our trade reference scheme had not changed in 8 years and was unlikely to change... I lost the debate anyway. The interface-to-concrete-class ratio for most of the code base was largely 1:1, which did not justify this code-bloat approach.
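The dispute above can be sketched in a few lines. This is a hypothetical Python translation (the Java original isn't shown in the thread, and the names TradeReference, ConcreteTradeReference, and Trade are invented for illustration):

```python
from abc import ABC, abstractmethod

# The "future-proofed" version: an interface plus its single concrete class,
# wrapping what is, and always was, just a string.
class TradeReference(ABC):
    @abstractmethod
    def value(self): ...

class ConcreteTradeReference(TradeReference):
    def __init__(self, ref):
        self._ref = ref
    def value(self):
        return self._ref

# The good-enough version: a trade reference is a date plus a numeric
# value, concatenated; any class that needs one holds a plain string.
class Trade:
    def __init__(self, trade_date, seq):
        self.trade_reference = f"{trade_date}{seq}"
```

Both versions do the same job; the first just adds a layer of indirection per class that every reader must tunnel through during a production incident.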
Experienced developers (at least I think) seem to have these crystal balls in their heads or IDEs and usually try to be clairvoyant when it comes down to building a product. We need to get out of the business of over-engineering and just do as my friend said: "build for today's requirements". It is called software for a reason: it is soft. It can change (and most likely will), can be refactored, redesigned, and/or incrementally made better or more abstract to accommodate changes. I am in no way saying no design, just limit it and get to work. A successfully built product is more satisfying than the imaginations in your head and the "perfect" engineering/scaling solution that never materializes. Users will like you, you will like you, and the team will get an adrenaline boost with each and every release, keeping spirits high. Remove the crystal ball plugin from your head/IDE, stop trying to be clairvoyant, and be a developer.
I'm a full-time C++ developer, which might be a bit better in this regard than Java, but not that much, and a hobby Haskell programmer, and one of the greatest things about Haskell is its brevity. It makes rewriting a lot less painful, so you're not avoiding it that much.
I think that the over-engineering happens simply because it's a pain to do serious refactoring when working on large enterprise software in general, never mind what language it's written in. The sad truth of our profession is that the customer requirements may change quickly and drastically, requiring us to rewrite large portions of our code, and very often we find ourselves thinking "If I only engineered it that way instead of this way, I wouldn't have so much trouble right now". This is why we strive to create the most robust, flexible solution that will be able to handle any future customer requirement. So we basically turn our code into a framework that, we hope, will allow us to respond to change quickly. Unfortunately, we can never predict everything that the users might want, so this whole approach falls down like a house of cards when a user requirement comes in and we need to change a large portion of the code. I believe this is true for a sufficiently large app written in any language, Haskell included.
Refactoring tools might be nice and will help you here and there, but there's a difference in the abstraction abilities of a language like Java compared to a language like Haskell. It's not only about the amount of code, but also about the complexity of the code when building abstractions. Yes, a refactoring tool might help you deal with the complexity, but it's still there and makes things more difficult. I never understood the point of using a less capable language and then using a tool to compensate for it, e.g. automatically generating code for it.
I definitely agree with you on this one :). Sure, it's better to use a language that lets you have less complexity even as your codebase grows quite large. You mentioned Haskell. Since I don't have any experience with it, what do you think is the reason that it's not used very often for building large enterprise applications (or maybe it is, and I just don't know about them)?
Well, I don't know if it's even clear why other languages are used for enterprise software. I don't think their technical (or whatever) superiority was the main reason. Sometimes it seems that all that's needed is to push a language into the mainstream with a lot of marketing and then just let it go. At some point there are more libraries for that language, most people use it, and universities are teaching it, so that becomes the main reason to use it.

Java might have been there, pushed into the mainstream, at the right time, with the right features, which made it less complex and less error-prone (garbage collection, no memory pointers) to use compared to C/C++. But perhaps there's also something about "object orientation", as it's implemented in Java, that makes it easier for people to grasp, if I read all the hate about those strange Scheme/Lisp courses in universities; then again, perhaps those people are just already too used to other languages.

Haskell looks very strange at the beginning, especially compared to languages like C/C++, Java, or C#, but I think most of that felt strangeness is a matter of habit, because most of the mainstream languages aren't that different from each other. I don't think that learning Haskell was that much harder for me than learning to program in C++ or Java. Sometimes people seem to forget the challenges they had when they learned programming for the first time.
In the case of the enterprise solution that will be around for the next 10 years, I would agree with your tech lead.
Every time you make a code change for something that is out of dev budget, you face a budget overrun in the project that was interrupted.
If you choose to deliver something that just works, as soon as possible, your total costs tend to balloon over the lifetime of the system.
The question to ask is: how long would it take a new dev (someone who has never seen the code) to change the tradeReference naming convention to include the asset type, or some conditional tag, let's say to conform to reporting regulations or even an expanded business mandate?
Interfaces do help here, because the new guy can make a localized change, write a small unit test, and commit the code to source control before you can say "rebuilding search index".
Keep in mind that you do not know if something is well architected until AFTER it's been in production for at least a year, had features built into it for another year or two, added new members to the team, and lost a few of the original team members in that time.
I believe that most people who are able to look back and claim that they have delivered at least three large projects (>500k LOC c++ or >200k LOC java) or product releases that meet the above criteria, would agree with your tech lead.
To give some context on the trade reference I referred to, it was an internal tracking within the STP system used solely for tracking state and for communication between IT and the business. It was a simple date + numeric value used in our STP system for users to use in our Struts web and C# front end to check trade state through the STP flow and communicate with us if issues arose.
We did have lots of interfaces where it made sense and relied highly on object composition to represent financial concepts more richly and for inject-ability via Spring and Unit testing (makes writing tests easier when you mock things out). Asset types, security identifiers, etc, were represented correctly from an OO perspective and were a part of the xsd layer/interface between us and the trading systems. To us these were read-only values we just passed through for STP.
You are right about interfaces and unit testing, but this is one case of many where I think the lead was going overboard. The internal trade reference was the same for 8 years and is still the same to this day (which gives it another 3 years since I left, for a total of 11 years). It never had more than one concrete class. It is no big deal on its own, but combined with the other interfaces that only have one concrete class, it just bloats the system for no good reason.
Design and architecture are good for the reasons you have mentioned and more, but they can go overboard, as was the case here IMO. There were other instances of that in our code base, but it would take an entire blog post to cover some of the atrocities this engineer created because of his forecasting ability.
For me, the understanding happened in similar order:
- First, as a beginner, I solve all problems with minimum effort possible. The goal is the product.
(It's not real programming, but working with CMS that involves writing code sometimes.)
- Then I see the way to make much cooler and more “custom” products—with a web framework. In order to be able to do that, however, I need to start doing real programming.
- Learning programming, I find that what I was writing before was pure crap. I also forget that the product is the end goal, and care instead about writing code.
- Lots of LOC but few finished projects, until I discover that code actually doesn't matter much. Instead, other stuff does—like speed, communication, measurement.
- Learning to make and deliver products with minimum possible effort—that's where I am now.
You can go further with this thinking and suggest maybe it isn't even useful to ask "is this good enough?" and instead ask "is this sensible given xyz?" or "will this be worth doing?", and forget about what it means to be good enough or assessing the quality of your work in absolute terms.
One corollary is to use simple and proven technology and libraries.
Another corollary is to have crash-friendly design, i.e. software that can crash in any time and recover at the next restart.
Third is to make software self-configuring, with no configuration needed. It makes operation and scaling very simple.
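The crash-friendly corollary can be sketched in a few lines. This is a minimal, hypothetical illustration (the checkpoint file name and function names are invented): persist progress after each unit of work, atomically, so the process can die at any moment and simply resume on the next restart.

```python
# Crash-friendly sketch: checkpoint after every unit of work so a crash at
# any point loses at most the item in flight.
import json, os

CHECKPOINT = "progress.json"

def load_checkpoint():
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)["next_item"]
    return 0  # fresh start

def save_checkpoint(next_item):
    # Write-then-rename so a crash mid-write can't corrupt the checkpoint.
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "w") as f:
        json.dump({"next_item": next_item}, f)
    os.replace(tmp, CHECKPOINT)

def process_all(items):
    start = load_checkpoint()
    for i in range(start, len(items)):
        # ... do the real work on items[i] here ...
        save_checkpoint(i + 1)  # safe to crash anywhere after this line
```

With this shape there is nothing special to do on recovery: restart the process and it picks up where it left off, which is exactly the "crash any time, recover at next restart" property described.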
My reasoning for this was that maintainability was SOOO much easier. Usually I use it on tiny simple elements that have only a tiny amount of unique css (maybe a unique background for each) and repeat a lot on one page (but no other pages). So I really don't want to create 20+ unique IDs in the css, triple the code size, and all for what?
Also, importantly, the full(er) quote is apparently "say about 97% of the time, premature..." Because even Knuth knew that sometimes you should design for that herd of buses.
There is an essential difference between a paper cup and a glass.
Say you are throwing two outdoor parties a year, but otherwise will not serve more than 8 people. The best solution, if you start with nothing, is to buy a set of dishware for 8, and then throwaway plastic cups and plastic knives twice a year ad hoc.
Code could be seen as similar. There are one-off bad solutions that are no better than a paper cup. You can't iterate on them (like washing a plastic cup), because they start to fall apart.
Then there is glassware. It's more expensive, but durable.
So, one approach is to look at your resources and your immediate and expected future needs, realize that better code is more durable but more expensive, but that there is nothing wrong with "consumable" code that you can't wash more than once or twice before it starts being "a mess".
Because "code is forever", we tend to think of it as not being consumed after it serves its purpose, but given the nature of engineering, thinking of it in exactly those terms is in fact quite appropriate, in my humble opinion.
Once you realize this difference, you can make strategic investments into durable and consumable code. You usually can't fortify a paper cup, nor turn a plastic cup into a glass one, though, so often this is a decision to make several times over the lifetime of your "household"! :)
For personal use, if you have very little money (time/resources) there is nothing wrong with starting with paper, buying plastic and then glass or ceramic, spending, overall, three or four times as much money as if you had just bought a beautiful antique set of dishware for yourself to begin with. Often, though, that is not the real situation: realistically, you could "do without" for a while, and then buy a durable good you won't replace.
These are difficult investment decisions for households, individuals, and companies.
Don't discount renting, either! In this case, that could be analogous to licensing someone else's software.
Not really. Only if you've no time to look for good dishware (surprise party?), can't afford it this instant (or fall in the common pit of not doing the math for the long term) or you absolutely do not have space for the “extras”, which is fairly rare.
To extend the metaphor, this “disposable” code tends to end up in unexpected places and stick around polluting the ecosystem forever.
Or, like some people, using disposable stuff every day and probably having trash all over the house to show for it.
That said, disposable code is fine. The metaphor isn't exact, but does point to some things to be cautious of.
My point wasn't really about tableware, it was about disposable versus durable goods.
a lot of code is for rare disposable events and doesn't need to be built like a ship; more like a paper airplane.
It's only helpful because code doesn't LITERALLY get consumed: it's not as if running a piece of code 1000 times (ever) requires 1000 copies, with one used up every run, or one disappearing every hundred runs, like a bag of chips that disappears when you eat it, batteries that get used up (after one full discharge for non-rechargeable batteries, after a while for rechargeable ones), or paper cups as in my example. Literally, code is very durable; it just becomes inappropriate, or stops working or being effective, because of the context, not because the bytes themselves deteriorate. It doesn't deteriorate or disappear like a consumable good does; you don't need to buy another bag of it when you run out. It seems to be a bag of something that never runs out. So it seems obvious: if you're going to build a bag of something that never runs out, why not make that something absolutely perfect, you know?
It leads to engineers building everything like a ship (a durable good), and treating nothing they code (or sys-op) as consumable, because they can't see themselves ever not using it.
Of course the metaphor is inexact:
any line of code you have on a reliable medium is in the same condition (literally the same characters, literally 0 difference) as when it was last written or updated.
Consumable goods get consumed or deteriorate, but code seems to "be forever" i.e. subject only to the license imposed on it and not its physical quality; this quality leads to overengineering, given the mentality that since it will be available forever, you might as well code it "for forever".
What I'm suggesting is that sometimes it's useful to look at code as though it were bags of manure (consumable) instead of a lot of land you or apartment/house you own in perpetuity.
This is just for resource allocation/investment decisions an engineer is making. A typical example is that it is VERY hard for an engineer to say, "I will write this in this one language/framework that I know really well, and is totally inappropriate, in 10 minutes, it will be like a paper cup. Then I'll direct resources elsewhere, and when I need a glass I will throw this cup away."
The tendency is to say "I can't fortify this paper cup, it doesn't scale, so 'why build sand castles'..."
I am saying that a consumable good (sand castle code) is often very appropriate given resource restrictions. It is also often VERY inappropriate, as when people who do not own a dishwasher and tableware buy a bag of plastic cups and a bag of paper plates every week and throw them out every week. (People do live this way.)
It's a delicate balance; this is one tool among many in your arsenal for deciding where to allocate your resources. I should also specify that renting a durable good might be more akin to subscribing to a web-based service or whatever. Again, all these are analogies to help you make good time and resource investment decisions, and to think of something in terms that are appropriate. It's easy to throw out a paper plate you bought once for a very rare party that temporarily overloaded your capacity. It's very hard for engineers to realize that sometimes it's time to write that same paper-plate code and throw it away without ever washing it (investing more coding into it or trying to engineer a scalable architecture into it after the fact). These are things to know about what you're doing up front, so you can make informed decisions and stick to them as your "household" evolves.