> Lots of people do this, which is fascinating to me, to handle the negative n case
It makes sense, because performance differences between that and the "optimal" solutions are usually negligible, and the "optimal" solutions aren't usually optimal when you take into account readability and maintainability.
Embedded development is significantly different from developing for the web. Buying more servers is usually more efficient than taking twice as long to develop and update code. Interviewing with people who refuse to acknowledge this is extremely frustrating.
> Interviewing with people who refuse to acknowledge this is extremely frustrating.
I agree. Let me be clear, I'm just having a bit of fun posing this challenge on my blog. In no way do I count minor inefficiencies in implementations against candidates. Now, if they come up with an n^2 solution I might be concerned...
I know how web development works, I understand throwing more servers at it usually is the easiest solution.
But now that mobile is becoming hot, mobile development is more like embedded systems than web development, so when I'm interviewing mobile candidates we definitely discuss runtime and memory tradeoffs.
Coming from an embedded background gives you a very different perspective on things. I too have a strong embedded programming background. This, among other things, makes you keenly aware of the CPU and resource cost of your code.
If you've ever spent a day hunting for clock cycles to trim, you know how dramatically a little attention can affect run time. After a while this sort of thing becomes second nature.
It pre-computes everything that can be pre-computed outside the loop and then, once inside the loop, uses a calculated index to address the array. The index only requires a comparison and, if going in reverse order, a subtraction, both of which are computationally cheap. If absolute performance were the goal --at the expense of a few more bytes of code-- this could be tightened further.
One of the problems with programmers who don't have a low-level background, and, in particular, those who grew up, if you will, with object-oriented programming, is that they generally don't have a connection with the cost of their code.
Object-oriented programming is great for the programmer, but the computer doesn't need it. It exists purely for the programmer's benefit. The computer would execute functionally equivalent non-OO code just fine and much, much faster. Case in point: Objective-C classes. I forget where the stats are, but I remember that in some cases things run 400 times slower if you use certain "NS" classes. Four hundred times slower is a pretty serious price to pay for human convenience.
I wrote a genetic evolver for an iOS app I worked on. The initial code was done in full Objective-C on purpose. I wanted to see just how efficient the code could be made "using the Force". I then re-coded it in nice, clean C. Wow! I don't have the data in front of me, but I remember that it went from having to wait for a solution to executing hundreds of generations (and arriving at a solution) almost as quickly as you retracted your finger after pushing the "Go" button.
If we apply this to web programming it means that one could save a tremendous amount of money in server costs and provide a far more responsive system at the same time. Server costs go beyond the mere monthly cost of the hardware. Calculations should include MTBF, maintenance, power, cooling, etc. If you are using AWS these costs are all part of the package, of course.
So, no, in my case, I don't take it lightly. I don't like it when people suggest that we "throw another server at it". First I want to see that the code is as tight and fast as can be. Then we look at throwing hardware at it. Of course, there is a point where hunting down clock cycles becomes more expensive than it is worth. The location of this point on the timeline is different for each business and project.
> I don't like it when people suggest that we "throw another server at it". First I want to see that the code is as tight and fast as can be.
I understand that you don't like it, but that doesn't mean it's actually more efficient. Except in very rare cases, it usually saves money in the long run to add additional servers. The correct way isn't always the best way.
Performance differences are anything but negligible. In the "billion elements" case, you are now spending the majority of the time reversing the array twice.
As a programmer you should always know what kind of cost you are incurring, both asymptotically and down to cycles.
Trying to condense all the cases into one (using reverse, or local state like 'step' from the OP's version) is what hurts readability and maintainability.
Also, the "buy more metal" argument is hardly applicable to implementing this one way or another. It's an argument for using whatever platform you are most productive in.
> As a programmer you should always know what kind of cost you are incurring, both asymptotically and down to cycles.
No, as a programmer, you should be able to determine what kind of cost you are incurring when it matters, and much more importantly, you should be able to determine when it matters and when it doesn't.
And even in the case of a billion elements, the difference in time is still negligible. Developer time is almost always better to optimize for. There's a reason Facebook and Google weren't coded entirely in C.