Similarly, if your software runs in 1% of CPU on a typical customer's machine, spending 10x the time and resources to make it run in 0.1% is not laudable. Context is everything.
No, but spending 1.2x the time and resources might be. That's not an unreasonable proposition; something written in Go, C# or Scala can run 10 to 100 times faster than the equivalent code in Ruby, but it certainly doesn't take 10 to 100 times longer to write the code.
It also conserves resources; you're reducing the amount of the world's non-renewable energy that your app consumes by up to 90%. You're also freeing up your users' machines to do more things simultaneously while running your program, potentially increasing the user's enjoyment of the system in the case of a desktop app.
Part of this applies to the craft of software too. One can be sloppy and churn out functional websites by the dozen. At a superficial level, the goal of extracting the most from a 500MHz Pentium-III might seem brain-dead, with little or no payoff. But it pays back by instilling an attitude of learning your craft deeply, and that learning does pay back even though that specific well-tuned web server might not. You don't have to build all your servers that way, but if it doesn't hurt you a little when you know exactly what you could have done to extract some more juice out of it, you will not reach that level of understanding. If you are impatient or sloppy, you will build impatience into your product, and it will show.
Besides, it is always easy to overvalue one's time; that sometimes correlates with conceit.
@sitkack, what makes you think I was talking about the 50s? Going by the downvote, I seem to have touched a nerve. You are right that Japanese products around that time were synonymous with bad quality, and not just in the US. Much water has flowed under the bridge since then.
There are actually two books written on this very topic by the same author, Ed Yourdon: "Decline and Fall of the American Programmer", which essentially made your argument, that declining American quality meant we would get eclipsed, and "Rise and Resurrection of the American Programmer", which said "oops, I was wrong", because "good enough" is the winning strategy, not quality at all costs.
Note that at the time of those writings, the 'sloppy' guys were Microsoft, which even then had far more process than your typical RoR shop today. In the days of shipping new software to websites three times a day, pragmatism rules all.
As the saying goes: "Premature optimization is the root of all evil."
By your measure, the Zero was a low-quality plane, but it served the MBAs in the Japanese military well for meeting its objectives. The AK-47, one of the most successful machines in the world, has extremely sloppy tolerances, and it is exactly those lenient tolerances that enabled its success.
We always have to be cognizant of economics; over-building, over-polishing, or over-engineering a system is a waste. Calling it a craft doesn't make it any more acceptable. Going pro and being good at what you do doesn't mean you need to put burnished wood knobs on your software.
I think you think "tolerance" means fits that rattle; that's not what it means in engineering. A loose tolerance may just as easily result in an interference fit.
Forget the AK-47; there are more egregious examples. Consider the SR-71 Blackbird, one of the fastest military aircraft ever to have graced the skies and a marvel of engineering: it had wide gaps between the panels of its metallic skin at the joints. Going by your logic, one might think the SR-71 became successful because of loose tolerances and "good enough". It was quite the opposite: the clearances were deliberate and necessary for its success; they were there to account for thermal expansion. For the AK, the loose fit was necessary for rough use and little-to-no maintenance.
In fact, not having those clearances in the SR-71 or the AK would have constituted intellectual sloppiness: going with cookie-cutter decisions and not optimizing the product for the use case. What looks sloppy to an amateur may actually be the result of perfection by an expert.
May I recommend "Zen and the Art of Motorcycle Maintenance" to you.
Further, I think you are being obtuse and defensive and railing against something I have not said; I am not quite sure why. I have never claimed all products need to be finished to the point of "burnished wood knobs". My commentary was on personal growth as an engineer. If it doesn't bother you somewhere to turn in a product that you know you could improve with little effort, you are not going to become a quality engineer, and you will not be able to produce a quality product when one is desired. I am categorically not saying that each item you deliver has to be the epitome of some arbitrary quality standard.
The cultural tradition I was talking about was not about adding cost to the product through unnecessary finish. Often people do not finish the product even when it would not have taken much effort. This is rationalized with the logic that the finish would have little immediate value, because the product is already good enough for the job, even if far from "good". The other reason is that sometimes the craftsman simply does not have the skill, and "good enough" is a convenient argument to take cover under.
Quality comes and goes.
All of these lazy, low-tolerance sweeping generalizations need to go!
An example might be working nine hours per day rather than eight to produce something that only uses 10% of the resources (and that extra hour is unpaid). For some people, that extra hour would be a waste, but for others, it would bring them greater satisfaction knowing they'd delivered a more efficient product, even if it brought them no extra revenue.
I mean, don't get me wrong, I'm working on a project right now that I'd love to do 'right'. Spend weeks doing a literature study to catch up on and really understand the current state of the art, then implement three or four different approaches to really get a feel for how they work under real-world conditions. After that I would pick the best aspects of those algorithms and write some really smart code that analyses the input and chooses the best algorithm based on the data. Then I would tune that code until it is as fast, stable and memory-efficient as possible, and finally put a really slick UI on top of it. Trust me, nothing would give me greater satisfaction.
Unfortunately I got the project late last week, the deadline is in less than two weeks, and I have two other projects I'm working on in parallel. So I'm going to end up grabbing some off-the-shelf solution, tweaking it until it works well enough, wrapping it up in some hacky shell script, and making up for inefficiency by throwing more hardware at the problem. It's not optimal, but deadlines are deadlines, and I'll have to content myself with the satisfaction of getting a job done well enough on time, rather than a job done perfectly.
Maybe we're thinking of different things here. What I had in mind was more along the lines of for instance choosing Go or Scala instead of Rails or *.js and getting a 10x speedup at the cost of simply adding a few type declarations. There are situations like yours where deadlines leave little choice, but there are also situations where a bit more leeway is available.
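To make the "a few type declarations" point concrete, here's a toy sketch (my own illustrative example, not a benchmark from this thread): a trivial word-count routine in Go differs from its dynamic-language equivalent mainly in the declared parameter and return types.

```go
package main

import (
	"fmt"
	"strings"
)

// wordCounts tallies how often each word appears in a string.
// In Ruby this might be roughly: text.split.tally
// The Go version's main extra ceremony is the declared
// parameter type (string) and return type (map[string]int).
func wordCounts(text string) map[string]int {
	counts := make(map[string]int)
	for _, w := range strings.Fields(text) {
		counts[w]++
	}
	return counts
}

func main() {
	fmt.Println(wordCounts("the quick the lazy the")["the"]) // prints 3
}
```

The few extra tokens buy compile-time checking and compiled-code performance; whether that trade is worth it depends, as above, on how much leeway the project has.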
If you're founding a startup in an emerging consumer market, you'd be making a mistake to use something like Go or Scala rather than Rails or Node.js, because your whole success is dependent upon finding out what particular combination of design, features, and experience emotionally resonates with fickle consumers' minds. That takes a lot of trial and error; anything that slows down that experimentation process is going to cost you the market. Once you've found the market you can get VC and hire experienced technologists to curse out your technology choices and rewrite it in Go or Scala or Java or C++ or whatever.
If you're CloudFlare and billing yourself as "the web performance and security company", however, building out your architecture in a performant and reliable language makes a lot of sense. You know your value proposition: you want to use whichever technology stack lets you execute against that value proposition most effectively.
Twitter got a lot of flak for being built on a "dumb" Rails architecture, but I'm also certain they would not have succeeded had they done anything else. Remember the context of their founding: Twitter grew out of an idea lab that grew out of Odeo, and at the time it was an idea so marginal that nobody would have bothered with it had it taken more than a weekend or so to prototype. When it turned out that people liked it, then they could afford to hire people to rewrite the software into something scalable - but those people would not have jobs had the initial concept not been proven out first.
I work with both a RoR code base and a Scala code base, and there is no difference in productivity that I can tell between the two. I think it is an outdated assumption that dynamic languages like Ruby or Python are more productive than modern statically typed languages. The only cost is one of learning; many more people know Ruby than Scala. That is an artifact of history (Ruby is nearly a decade older than Scala; crappy CS educations only teach Java).