It takes over 800 days for the more efficient solution to pull ahead of the less efficient one. Think about that: 800 days in which the less efficient solution is cheaper. That is two years and two months.
Let us plug in more realistic numbers for the cost of labor, shall we? Say the labor cost is $80 per hour (which is still cheap). At that rate, it would take close to 5 years.
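For concreteness, here is a minimal sketch of that break-even arithmetic in Python. Every input is an assumption of mine (the article's actual dev hours and hosting savings aren't restated here); the numbers are just chosen so the cheap-labor case lands near 800 days:

    # Break-even point for an up-front efficiency investment.
    # All inputs are assumed for illustration, not taken from the article.
    def break_even_days(dev_hours, hourly_rate, monthly_hosting_savings):
        """Days until the one-off development cost is recouped by the
        efficient solution's lower hosting bill."""
        dev_cost = dev_hours * hourly_rate
        daily_savings = monthly_hosting_savings / 30.0
        return dev_cost / daily_savings

    print(break_even_days(160, 35, 210))   # 800 days at assumed cheap labor
    print(break_even_days(160, 80, 210))   # ~1829 days (~5 years) at $80/hour

The break-even point scales linearly with the labor rate, which is why roughly doubling the rate pushes 800 days out to about five years.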
This piece makes some excellent points that any startup needs to stop and take a minute to ponder.
I think he is assuming way too many things to remain constant, and then using that to justify his thesis of addressing efficiency concerns only just-in-time.
The biggest concern I have is that he nowhere accounts for growth of that new feature, and the increasing hosting costs that come with it. Half of efficiency is how fast it is now; the other half is how its performance will change under increased load.
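To make that concrete, here is a rough sketch (again with invented numbers) of how even modest load growth shrinks the break-even horizon, since the efficient solution's hosting savings compound as the feature grows:

    # Same break-even question, but the daily hosting savings grow as the
    # feature's load grows. Growth rate and savings are assumed numbers.
    def break_even_days_with_growth(dev_cost, daily_savings, monthly_growth):
        recovered, day = 0.0, 0
        while recovered < dev_cost:
            recovered += daily_savings
            day += 1
            if day % 30 == 0:              # savings compound once a month
                daily_savings *= 1 + monthly_growth
        return day

    dev_cost = 160 * 80                    # the same assumed job at $80/hour
    print(break_even_days_with_growth(dev_cost, 7.0, 0.00))  # flat: 1829 days
    print(break_even_days_with_growth(dev_cost, 7.0, 0.10))  # 617 days at 10%/month

Under these assumptions, 10% monthly load growth cuts the payback period from roughly five years to under two.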
Part of the risk of putting out a new feature is that you don't entirely know what the uptake will be. It could fall flat, or it could grow exponentially within the first week. Or, as this author assumes, it could enjoy a rapid uptake to a certain level and then plateau just as quickly. I'd argue this last possibility is almost never the observed pattern.
The other part of the risk is that, if you go the inefficient route, you may not have the time later on even to build bad hacks into the inefficient code to make it speedier. That is a bad situation to be in: you get screwed by your own unexpected success, and users get pissed because now your site is unreliable.
Yes, spending insane amounts of time on efficiency probably isn't worth it. Yes, you need to do a cost-benefit analysis to decide what amount of time is. My point is that there is a lot more to consider than the author gives credit for, and I don't think you are as likely as he believes to reach the same conclusion he did. Cash flow is important, but it is also important to remember that things don't stay flat or grow linearly just because you assert that they do.
I think it is worth pondering, especially for anyone in danger of being so perfectionistic that they look like they might never ship. I got a link to the Duke Nukem Forever story from one of the comments. I am currently feeling like that about some of my goals -- like I piddle and fiddle and tweak and never accomplish anything. Maybe I am just having a bad day (heck, that's very likely -- I am treating a sinus infection), but this article meant something to me personally. Yes, you need to not fuck it up royally. But, like with Duke Nukem Forever, you really can err terribly in the other direction as well.
Thanks for your very well-thought-out and meaty comments.
There are many choices like this - put time in now to save time later.
Recently I decided to take the time to make our system configuration for a new user really easy (I had been doing it by hand, slowly) because I know that once we have many users, I won't have the time to fix the admin interface while also meeting the demands of doing it by hand.
There are lots of decisions where you don't know the cost of an inefficient solution up front. Perhaps the feature is underused and it doesn't matter: you've wasted time making it fast. Perhaps you become a success overnight because of the feature, but now your server is fried. It is a tough call. Either way, you should spend some time thinking about how you would throw more hardware at the problem should it arise.
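One way to frame that tough call is as an expected-value problem. A quick sketch, with every probability and dollar figure invented purely for illustration:

    # Which option is cheaper depends on how likely each uptake scenario is.
    # All probabilities and costs below are made up for illustration.
    scenarios = {
        # name:   (probability, cost_if_inefficient, cost_if_efficient)
        "flat":   (0.50,  1_000, 14_000),   # feature flops; optimization wasted
        "steady": (0.35,  8_000, 15_000),
        "viral":  (0.15, 60_000, 18_000),   # fried server, emergency rework
    }

    for label, idx in (("inefficient", 1), ("efficient", 2)):
        expected = sum(s[0] * s[idx] for s in scenarios.values())
        print(f"{label}: expected cost ${expected:,.0f}")

With these made-up numbers the inefficient route wins on expected cost ($12,300 vs $14,950), but nearly all of its risk is concentrated in the viral scenario, which is exactly what makes it a tough call.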
Gridspy is still pretty tiny load-wise, but I have put a lot of thought into scaling plans, keeping my options open.
I want to echo the "linear inefficiency, lol" meme.
But more importantly, the cost of inefficiency isn't clock cycles; it's the opportunity cost.
It depends on the situation, obviously, but inefficiencies can have a very real cost in terms of lost sales or customer dissatisfaction. The ROI will vary with a multitude of variables, but that's my point: these graphs simply don't capture the complexity of the costing models that really come into play.
Growth in the modern age is rarely linear. The real risk in creating an "inefficient" solution comes when you've done so at the cost of extensive technical debt, not mere inefficiency, and that technical debt blocks you not only from switching to an efficient solution but from stretching your inefficient one just enough to keep the business running through periods of growth. Failing to take advantage of an order-of-magnitude (or more) growth opportunity can leave your company at a competitive disadvantage or even kill it outright.
So build an inefficient system if you think it's the better solution in the short term. But be careful that you know the difference between mere inefficiency and a horrible, grotty, dailywtf-level hack job that may seriously limit your company's future.
The given graphs don't show linear growth; they show the cumulative cost over a certain period of time, assuming a single EC2 Windows high-memory instance running for all of that time. The latter graphs add in one-off development costs.
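For anyone who hasn't seen the article, the shape of those graphs is easy to reproduce. A sketch with placeholder rates (the article's actual EC2 pricing and development costs aren't reproduced here):

    # Cumulative cost of one always-on instance, plus a one-off dev cost.
    # Hourly rates and dev costs are placeholders, not the article's figures;
    # they are chosen so the two curves cross near day 800.
    def cumulative_cost(days, hourly_rate, dev_cost=0.0):
        return dev_cost + hourly_rate * 24 * days

    for days in (0, 200, 400, 800, 1000):
        quick = cumulative_cost(days, hourly_rate=1.00, dev_cost=1_000)
        lean  = cumulative_cost(days, hourly_rate=0.80, dev_cost=4_840)
        print(days, round(quick), round(lean))

The curves are straight lines because the instance cost accrues at a constant rate; the one-off development cost just shifts the efficient curve upward, which is all the latter graphs add.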
That's my point. I think most developers recognize this industry's long-standing disease of wishfully imagining that a hacky rush job is no different from appropriate engineering that is merely inefficient compared to a solution that isn't currently necessary.
It's the difference between, on the one hand, choosing not to use a steel beam for support and using a wooden beam instead, and on the other, using a step ladder sitting on top of a stack of half a dozen phone books. The first is still sound engineering, just not suitable for higher loads in the future; the second is a mistake waiting to come back to bite you, and isn't even suitable for the task at hand.