The 'premature optimization is evil' myth (2010) | 94 points by jerf on Mar 14, 2016 | 109 comments

 The longer Knuth quote is "We should forget about small efficiencies, say about 97% of the time; premature optimization is the root of all evil". In the inevitable meme transfer of the telephone game[1] and shortening of memes into smaller soundbites, the "small efficiencies" part is left out.

To me, "small efficiencies" meant trying to "optimize" your old C code from `x = x + 1;` to `x++;` to `++x;` because you read/saw that a C compiler created tiny differences in assembly code based on post- vs pre-increment operators, resulting in a 0.00001% runtime difference.

Knuth isn't talking about being ignorant or careless in choosing bubble sort O(n^2) vs quicksort O(log(n)). Or not placing an index on a lookup key of a 1-terabyte table (that's a 1-hour full table scan vs a millisecond b-tree lookup). Those are not "small efficiencies".

If one leaves out the "small efficiencies" conditional, regurgitating "premature optimization" is a cop out for not thinking.
 > Is a cop out for not thinking

Commonly phrased in startup world as "It's OK, we're just building an MVP."
 Which is wrong (and fails) mostly because the startup hasn't spent enough time verifying that a solution really solves a problem, not because their MVP really required the extra performance.
 > quicksort O(log(n))

Quicksort is O(n log n) average case and O(n^2) worst case.
 Yes, I saw that error after the edit window closed so I couldn't fix the typo. There has to be an extra "n" because quicksort has to touch every element at least once, so a baseline complexity of O(n) is unavoidable.

Hopefully it didn't detract from the point that Knuth was talking about premature micro-optimizations, not design/architecture/algorithm optimization. Some inexperienced people repeat "premature optimization" to try and win internet arguments instead of using it as nuanced advice to avoid wasting time.
 > to try and win internet arguments

is pretty much the antithesis of

> to avoid wasting time.
 It depends on how you implement median selection. You can get O(n log n) worst case if you want.
 True, though it's usually not worth the hassle. Most production implementations of quicksort just drop down to heapsort for the current sub-array if the stack gets too deep.
 > "premature optimization is the root of all evil"

I've found this super useful in projects. It doesn't mean you don't spend time on design or architecture, but many engineers have a strong tendency to jump on optimization opportunities too quickly (the more junior, the stronger the tendency), and this causes bad architecture choices that come back to bite you hard later on. Keeping it simple and un-optimized is often better than optimizing early, not just because you save time and it's not worth it (hardware is cheap), but also because you keep your architecture elegant, and the real bottlenecks will be different from what you thought they were and will come up later.

It's absolutely valid, and wisdom that's often hard earned.
 Precisely - I think of this quote as being about how profiling and micro-optimizing your code should come last - but basic stuff like choosing the right data structure for the job is something any programmer should jump at.
 I had a friend once tell me he worried about double quotes vs single quotes because of string interpolation checking. This would be the 97%.
 Well, worrying about that is IMHO quite reasonable. Once. "Let's see how big an improvement that is...aha, near-nonexistent. Okay, moving on; perhaps use single quotes for new code if that still worries you."
 In case anybody was wondering, your C compiler almost certainly will not generate different assembly code for ++x and x++. (This is easy enough to test if you have a compiler handy.)
 Pretty verbose way of (yet again) reiterating that Knuth was essentially correct, but that many people misunderstand or misapply what he was saying. Joe says as much again in the conclusion.

To me that makes the "myth" part of the title more than a little click-baity, which is unfortunate.

Knuth is right: premature optimization is a bad idea, full stop. That doesn't mean that there aren't performance-related activities you should be undertaking at various stages of implementation, that either aren't optimization, or aren't premature, or both.
 > Knuth is right: premature optimization is a bad idea, full stop.

A bit of a "no true Scotsman" though, isn't it? Any optimization that is a good idea to do now is "not truly premature", whereas everything else is actually premature.
 But that's basically every rule or guideline in the software world. You should do X always, unless it does not make sense. How do you know the difference? With enough experience or enough ignorance. Knowing which category you are in for a specific topic is the tricky bit.
 I'm going to have to agree with ska's other comment[0] and say that it's knowing the difference between good design and optimization. Being able to design a performant system means choosing designs which are inherently fast. Squeezing the last few percent out of bubble sort makes no sense when you should have gone with, say, insertion sort in the first place.Once you have the right algorithms, data structures, and system architecture in place and working, it's going to be fast enough and you can choose to spend time optimizing only where absolutely necessary. Even then, you should default to getting order of magnitude better performance via a better design rather than tweaking inefficiencies.
 I think Joe was commenting that many developers and tech leads tend to overestimate what optimization is premature and disregard appropriate forethought about performance.Saying "this bad thing is bad" is a bit of a tautology, but there is some meaning to be gleaned from the statement.
 In my experience, many developers (and tech leads) can't separate the idea of optimization from design, which I think is the core problem. But it is equally true that many of them waste inordinate amounts of time on premature or just plain misguided optimization.
 > many developers (and tech leads) can't separate [o]ptimization from design

Brilliant - a few words that belong somewhere more conspicuous.

'Make it work' then 'make it fast' makes sense of course, but who wants to optimize code with quadratic time complexity that could have originally been written with linear time complexity?
 How do you distinguish optimization from design?
 You can't optimize a system that does not exist yet. You can't effectively optimize a system you can't measure.

Design activities allow you to think about how a system may and should behave. Optimization activities involve analysis of how a system actually behaves, and making changes typically involving performance trade-offs, then analyzing those changes. These activities can (and typically should) be iterative over a complicated project's development, and they do feed into each other somewhat.

What is true, as a counterpoint to Knuth's maxim, is that if you do not think about the implications of your design early enough, you can easily end up painting yourself into a performance corner that it may be difficult or impossible to optimize your way out of. If you've picked a fundamentally inappropriate data structure or algorithm, you may be in trouble well before you realize it. This is also why software developers would often benefit from more targeted design prototypes earlier on. And yes, for some of this there is no easy replacement for experience.

Another thing to think about: optimization almost always costs you something - at the very least time, but often code maintainability, portability, generality, etc. It is usually a trade-off (but not always). Good design work will often improve your system in many respects at once.
 Often by abstracting the algorithmic implementation details from the system organization and logic. Then the individual algorithms can be interchanged or modified during optimization.There is of course some scale dependence to the use of these terms; the 'design' is of a larger system than the individual algorithms that compose it and can be abstracted and optimized.In cases where the scale is the same, i.e. variations of the design itself will have substantial impact on its performance characteristics, then optimization and design can't be readily distinguished and Knuth's aphorism is less clearly relevant.
 He is refuting a version of "premature optimization is the root of all evil" that I have never heard in practice:

> Mostly this quip is used to defend sloppy decision-making, or to justify the indefinite deferral of decision-making.

I have never heard it used in this context. Sometimes I've heard it used as a gentle way to suggest to someone that they are going off into the weeds and need to refocus on what they should be focused on, but usually I've just heard it used as it was originally intended by Knuth.

Optimization often involves making code less clear, more brittle, or with a more pasta-like organization. Frequently optimization requires writing code that, looked at out of context, doesn't make sense or might even look wrong. When these sorts of optimizations need to be made, they should be made only as needed (and documented). It shouldn't be done without knowing whether a particular code path is even a bottleneck in the first place, and it shouldn't be done if speeding up that particular bottleneck wouldn't make the software better in any tangible way. That's all the phrase means.
 In a lot of circles, especially where web developers are involved, you'll get called out for premature optimization for spending any mental energy worrying about memory usage or bandwidth. The idea is that computers are fast, so we can just do whatever we want, and worry about it if it becomes a problem. The result is that it becomes a problem, then gets patched up to meet whatever bare minimum performance standards the company has (or the deadline arrives and it's released unoptimized) and we end up with the absurdly heavy and resource-greedy software we see today.
 From a business perspective, this is probably the right decision. Businesses are interested in marginal gains, i.e. they want performance improvements only if those will lead to more customers or customers that pay more money. Any extra effort is better spent on things that will get more customers or more money per customer.

From an overall social welfare perspective, there is something to be said for going above and beyond the customer's minimum standard. Indeed, most of the work that goes on at big companies is of this type (it's no coincidence that the original author works on the compiler team at Microsoft). Just understand the context in which you work - if the company is going to go under with its current customer base, it's irresponsible to focus on things that are not going to get more customers - and that it can be hard to measure the effectiveness of work that doesn't directly lead to new sales.
 > From an overall social welfare perspective, there is something to be said for going above and beyond the customer's minimum standard. Indeed, most of the work that goes on at big companies is of this type

So do you want to say that MSFT should invest less energy in speeding up the code they produce, because it won't get them more customers or more money?
 From which perspective do you want an answer? A user of Microsoft products, a developer at Microsoft, a shareholder of Microsoft, or an executive of Microsoft?

As a user, of course I want Microsoft to invest more in speeding up their products. I also want them to be easier to use, and have all the features that I want, and cost less, and release more frequently.

As an engineer, yes, I do want them to invest more in speeding up products. Performance tuning is fun, it's an extra skill that can go on my resume, and it helps me take pride in my work.

As a shareholder, hell no are you going to indulge those prima donna engineers and their perfectionist tendencies. I invested in this company to get a return on my investment, and that means more revenue. It's clear that shaving CPU cycles isn't going to get more customers; Windows has been dog-slow compared to competitors ever since the Amiga came out, and it hasn't hurt us so far. That effort would be better invested in getting into new lines of business and building more solutions to capture the enterprise market.

As an executive - it's a complicated question. I want our products to be faster, but it's also clear that our customers want them to be easier to use, and have a lot more features, and cost less, and release more frequently. All of those are in conflict with writing performant code with a fixed number of highly-trained engineers. And if I hire more engineers, the code often gets slower, as global optimization opportunities get lost in the communication gaps between workers. OTOH, performance often has intangible benefits in brand loyalty, in increased usage, in word-of-mouth, and in PR. I'd love to be able to quantify these benefits and trade them off against each other, but the point of intangibles is that they are intangible. 
I'm responsible to shareholders, however, and my gut feeling is that increased performance will not be the deciding factor for most customers.(As just plain old me, the question is completely academic: I am neither a user, employee, executive, or shareholder of Microsoft, so I don't really care what they do.)
 Thanks. I like your context-dependent approach.

And tangentially, I still wonder why MSFT thinks that forcing Windows 10 down the throats of existing users can be considered something that will "get them more customers", if that should be the ultimate goal of the company.
 Forcing Windows 10 down your throat is a reaction to the phenomenon of Windows XP, where they just couldn't make it die. I'm sure they decided to never let that happen again. It reduces the cost of supporting old customers, while simultaneously ensuring they have a solid base for new products that depend on new infrastructure.
 Thanks, that sounds plausible, that it's the result of their fear of Windows XP.
 Yes, this is where, in practice, I've seen the adage used incorrectly. Even when Knuth isn't quoted directly, the idea that "premature optimization" is inherently a bad thing has led many a web developer down the path of terrible architecture decisions. So I understand Duffy's point exactly.It's not that all issues are preventable. It's that you can prevent a lot of them with just a little extra work up front.It's a misuse of Knuth's original point, which has much more to do with how brittle and incomprehensible "optimized" code can become.
 I prefer the development order of make it work, make it correct, then make it fast. Expresses a similar idea, but in terms of priorities instead of "don't do that at all".
 > Expresses a similar idea, but in terms of priorities instead of "don't do that at all".

The programming language affects the criteria a lot. Some are known to be "correct" (like Haskell), others "fast" (like C)...

Given an infinite amount of time, I suppose all three can be reached in any language.
 I get into discussions like this all the time. With a little thinking ahead you can avoid getting yourself caught in a difficult situation later on. It doesn't mean you should implement the optimization but at least you should allow for it. That could mean a simple API wrapper that can later on be optimized or to do an analysis what would happen if the traffic increased 10x.Unfortunately looking further than the end of the next sprint is disallowed in a lot of people's minds. It's pretty sad that a lot of software "engineers" actively avoid any kind of engineering.
 It's not sad; that kind of "thinking ahead" leads to oodles of junk in the code for "someday" features that never get implemented. If your programming style requires thinking ahead to make a feature possible later, then you're better off fixing your style and learning to program better, so it costs the same to make the change when it's needed as it does now; then you can put off the change until it's actually needed.

Premature architecture is a code smell. Putting in scaffolding for later is a code smell.
 I am not talking about actually producing code for "someday" features. I am talking about working towards a goal. Most projects know pretty well where they will be in one or two years (not everyone is Instagram who goes from 0-100 in a year). Based on that knowledge you can make reasonable decisions and trade-offs now.However, I admire your ability to write code without any forethought now that can be used perfectly in whatever form it will be needed later.
 > the cost of change should be the same later as nowThat's patently false in any code I've seen; and saying "well make it so, can't be so hard" is just proof by handwaving.
 And claiming you must stub it out now because it might be needed later is straight up pulling it out of your ass guessing. Beyond that, I was very clear, if the cost of change is more later, you're doing something terribly wrong and trying to cover it up by stubbing out features before they're needed to avoid the real problem that you write code badly in a way that makes stubbing out stuff ahead of time necessary.It's new code, so you can absolutely write it without extra scaffolding for "shit you might need later". If you don't need it now, you don't need it yet.
 "We aren't talking about making decisions"

I was talking about decisions and not writing code and was pretty clear about that.
 No, you weren't, and I quote:

> That could mean a simple API wrapper that can later on be optimized

That's code. That's what I'm responding to. You insulted those who believe in not writing code before it's necessary, called them sad, and now you want to deny the very thing being replied to? Really? Forget it, you don't know how to discuss something.
 You know the meaning of "simple", angry man?
 More than you know the meaning of angry apparently.
 Well the salient word in the phrase is "premature". As opposed to optimization after careful observation and measurement, which everybody agrees can and should be done.
 But at the point where you're ready for careful observation and measurement, you've already designed and written your solution, and the amount of room you have for optimization is constrained by the thing you just built.If you want performance, you have to design for performance, not just build the first thing that comes to mind and try to optimize it later.
 It's a cost optimization. How much engineer time does it take to shave 0.2 seconds off of an action that's got a 0.3s animated transition anyway? How much engineering does it take to care about the memory footprint of a website users are going to close in 5 minutes anyway?Most of the time, the answer is "Too much, not worth it". Some of the time the answer is "Let's do it". Knowing which situation you are in is key.Ideally, I should write code for readability and maintainability and let the compiler and runtime worry about optimizations.
 No opinion about the rest of the argument, but a 300ms animated transition is looooooong. It will be noticed by basically everybody and annoy a good number of them.The only good reason I can think of is that you're somehow stuck with 300ms+ delay anyway, so you provide an animation so that the users don't think "WTF? I just clicked on it and why is nothing happening?" But if you can shave off 0.2 seconds then you can probably get rid of the animation altogether!
 You'd be surprised by how many people think those 300ms animated transitions are a good thing.I think they're terrible.You'd also be surprised by how many people will completely misunderstand your UI and get confused by things popping around magically, if the transitions are too fast or inexistent. You have to build a UI/UX that your typical user will enjoy and be able to use, not a UI/UX that your nerdy friends are going to love. (unless they're your target users)
 The only good UI animations I've seen were in Metacity (window manager for Gnome 2). It would move windows instantly, but also provide a transparent trail showing the path they would have taken if they had been animated traditionally. It let you continue working without delay if you knew what you were doing while still helping beginners.
 And that’s when people start using script blockers and block many CSS features just to be able to load + display webpages in ping + 16ms.
 > In a lot of circles, especially where web developers are involved, you'll get called out for premature optimization for spending any mental energy worrying about memory usage or bandwidth. The idea is that computers are fast, so we can just do whatever we want, and worry about it if it becomes a problem. The result is that it becomes a problem, then gets patched up to meet whatever bare minimum performance standards the company has (or the deadline arrives and it's released unoptimized) and we end up with the absurdly heavy and resource-greedy software we see today.

IMHO, this is exactly the kind of thing Donald Knuth was approving of. Given how cheap CPU cycles are, how expensive developers are, and that faster code often means more 'unsafe' code, 97% of the time it's more economical to just have the resource-greedy software.

Plus, I've seen more than my fair share of premature optimizations that ended up actually causing performance problems and stupid bugs.
 > Given how cheap CPU cycles are, how expensive developers are and that faster code often means more 'unsafe' code, 97% of the time it's more economic to just have the resource-greedy software.

I agree. I run sloccount in my build system. It gives a cost for manpower. For example:

``````
Total Physical (SLOC)                  = 6,944
Development (Person-Months)            = 18.36
Schedule Estimate (Months)             = 7.55
Estimated Average Number of Developers = 2.43
Estimated Cost to Develop              = $ 206,696
``````

http://www.dwheeler.com/sloccount/

http://vmorgulys.github.io/stackcity/sloccount.html

We should also count the energy spent by inefficient programs (multiplied by the number of devices). It could balance that equation as Moore's Law is ending. I guess Debian (or another distro) is more energy-efficient than Windows (or Android).

Edit: beautified sloccount output
 Interesting. I always interpreted it as, "Don't sweat the details yet- you don't even know if anybody wants what you are building. Find that out first."
 The mindset of "fail fast" wasn't a thing when Knuth wrote that statement.
 This isn't "fail" so much as it is acknowledging that neither you nor your customers will know what they like until they have something to play with.
 My point was that Knuth wasn't considering that possibility when he wrote the quote. In those times it was assumed that if you were writing software, there was a darn good reason for it. The idea that you might write software that doesn't fulfill a compelling need is a rather modern invention.
 Except that optimizing cold code is writing software that doesn't fulfill a compelling need.
 > web developersThat's the problem.
 > Optimization often involves making code less clear, more brittle, or with a more pasta-like organization. Frequently optimization requires writing code that if looked at out of context, doesn't make sense or might even look wrong.

I agree so much with this. It's better to write a sub-optimal implementation for a component which is clear and can be replaced with a more performant version later, when the bottlenecks have been identified, the interface is well-defined and -tested, and the team can accept the maintenance burden of non-obvious code.

This does mean you can't skimp on good design, making your project a collection of modular, replaceable components.
 Further, very often we write code that isn't the code we need to write. I have wasted countless hours on needlessly efficient implementations of the wrong interface, or, on the other end of the spectrum, coming up with the perfect interface for code that will never be used again. Because I like puzzles and optimizing is fun and wiring up business processes is not very fun at all.But that doesn't mean it's a good way to write software. Accept that most of what devs do is fairly mundane, and focus your mental effort where it's actually needed and you'll be a better developer than anyone who obsesses over the performance/elegance/extensibility of every line they write. Obviously it's a spectrum, and it takes balance, but I know which side I'm currently on!
 > I have wasted countless hours on needlessly efficient implementations of the wrong interface, or, on the other end of the spectrum, coming up with the perfect interface for code that will never be used again.

SAME HERE
 Examples on either side abound.I know I've certainly done enough optimizations that left the code clearer and shorter (despite often being more verbose with naming), while being more performant due to shedding whatever messy, tortured approach was in use before. Enough small gains like this have come out of code where I was the original author that it isn't a case of conceit.Clear and simple code can frequently beat clever code, regardless of which metric you choose to apply from the "performance or understandability" bag.
 Simple code is hard tho
 I've heard it in both contexts. When I first started getting into Rails I had to take over a project from a contract firm who very blatantly coded to "make it work" with no regard for the stress that certain things caused on the database. It was code that should have been a giant red flag to just about anybody, but was defended as "it would have been premature optimization". This was a piece of code that would, in one BEAUTIFUL line of Rails code... execute 50,000 queries on a single page.

The problem is that when people hear that quote, without knowing its original intended usage, they are able to use it as a "just get it done" excuse. This often happens when people are too shielded from what's going on in the database by an ORM layer like ActiveRecord, for example.

You don't need premature optimization... but you do need competent optimization. If you can see a blatant red flag that you're going to avoid by taking a little more time to do something a different way... do that.
 > one BEAUTIFUL line of Rails code...execute 50,000 queries

Something is off about that.
 It was using an inline loop to execute operations on related data 3 associations over...before paginating.
 Still calling bullshit. You have to purposefully intend to get performance that bad in order to make ActiveRecord do the wrong thing that blatantly. The actual query execution is extremely lazy; that is, it doesn't execute the query until the final chained method has been added to the planner.
 Oh man, I wish I still had that code.

These guys were militant "all logic in the objects" types, so when they had to create a dashboard page, instead of just doing a scope with a couple of joins and the proper criteria, they went off of the base object, got the first set of associations, checked to see if it met the criteria by looping through the results and calling the object methods (which made associated calls to evaluate their comparisons under the hood), before finally converting the entire result set of about 20,000 objects into an array so that it could be sorted and then trimmed to the exact number of records that were supposed to be displayed on that particular page.

It was brutal. You would have thought it would require real, serious effort to pull off that level of scary.

I don't have the code to offer, but I can cite a couple of blog posts that I wrote about it a while back. It was so bad that I not only started blogging more because of it but I also taught a class to try to teach people both Rails AND PostgreSQL so they couldn't get into a situation of learning one without the other.

Here's the posts...

The Drawback to Web Frameworks (2013) - http://www.brightball.com/ruby/the-drawback-to-web-framework...

"In order to look up what the status was on a particular object related to a user, they used some beautiful looking Rails code to find, filter, and combine results (and THEN paginate). The problem is that after Rails goes one level deep from a single record it starts performing single queries for each record in each relationship. That meant, for example, that in order to retrieve the most recent objects for a user who had over 18,000 in his account history, upwards of 50,000 queries were executed. The results of those 50,000 queries were then loaded into the web server's RAM (and SWAP), processed/sorted/filtered, and THEN paginated just to show the first 100 results. It was appalling and that is only one example."

And here's the class that I taught to try to stop it from happening ever again. :-)
 Then you should spend some more time on HN or reddit and you will definitely hear this. It's bound to pop up sooner or later in topics where programming languages are discussed. Here's a comment where someone says that performance doesn't matter at all if you only have a certain number of users, and then they backtrack and qualify their statements: https://news.ycombinator.com/item?id=11245700

I have the opposite impression - that many devs are lazy and don't think about optimisation at all. They write slow code by default and hide behind a misquoted Knuth.

Your definition is off, by the way; writing fast code and doing optimisation doesn't necessarily mean that the code will be less understandable or become brittle. E.g. using range vs xrange in Python 2.x when iterating over large ranges - that's a difference of literally one letter. There are of course other examples.
 > using range vs xrange in Python 2.x when iterating over large ranges - that's a difference of literally one letter.

Which literally makes no noticeable difference 97% of the time. I wouldn't necessarily complain if you did that to my code as part of a broader refactor, but if the range is small you're not doing anybody any favors.
 > Your definition is off by the way, writing fast code and doing optimisation doesn't necessarily mean that the code will be less understandable or become brittle.

OP said:

> Optimization often involves making code less clear, more brittle

Seems you're arguing against a straw man here.
 Not quite.

* OP is arguing that optimisation is usually problematic from multiple perspectives. Here's the full quote:

> Optimization often involves making code less clear, more brittle, or with a more pasta-like organization. Frequently optimization requires writing code that if looked at out of context, doesn't make sense or might even look wrong.

* I am arguing that these assumptions cannot be made so easily.

Yes, they qualified the statements with "often" and "frequently", but the tone is clearly negative. It shouldn't be, and we shouldn't shun writing fast code out of the belief that it's at odds with readability or robustness.
 > Optimization often involves making code less clear, more brittle, or with a more pasta-like organization. Frequently optimization requires writing code that if looked at out of context, doesn't make sense or might even look wrong.I don't agree with that. It may be true for very low level micro-optimizations, but isn't usually the case for higher level optimizations that give the best performance improvements.A lot of times code can be sped up significantly just by using a different data structure or caching a value that's already computed somewhere else. For example, I've seen production C++ where the code was using a std::vector to keep a list of items in sorted order and remove duplicates. It was trivially converted to a std::set and saved several seconds of run time.
 I think it probably also matters a lot whether or not there's a clear solution. If there's a better-than-average chance that the optimization strategy is known and will work, then it's probably fine to hold off on it. If the way you're writing the program doesn't lend itself to clear solutions for the performance bottlenecks, that's an issue that should be dealt with right away, or you risk throwing out a whole lot of work later on.
 Avoiding premature optimization most definitely is not an excuse to be sloppy or dumb.
 I posted this article less for the negative "countering the myth" that the comments here seem to be responding to, and more for the positive description of how exactly you write code in a thoughtful manner without overdoing it into "performance uber alles". I tend to think of it more as not painting myself into a corner than necessarily getting it perfect the first time. It's amazing what some thought, maybe a day in the profiler per couple of months of dev work to catch the big mistakes (and as near as I can see, nobody ever gets quite good enough to never make such mistakes), and some basic double-checking (like "are any of my queries doing table scans?") can do for performance, long before you pull out the "big guns".
 If it were positioned as an article about writing thoughtful code, then I doubt the comments would be as focused on the claim in the headline. Knuth's point was that even thoughtful programmers could get caught up in pursuing performance in areas where it ultimately didn't matter, and even more critically, that even thoughtful programmers could be guilty of discarding a clear, comprehensible piece of code in favor of something terser and less accessible because of a perceived performance benefit.
 "If it were positioned as an article about writing thoughtful code then I doubt the comments would be as focused on the claim in the headline."Sorry, I did not mean to delegitimize those points. I understand where they are coming from.
 I found this deeply ironic given the article's premise:

> "are any of my queries doing table scans?"

It makes me grind my teeth when developers apply brute-force thinking like this. There are many times when full table scans are fine - e.g. on a small table, or when a query returns more than a few percent of the rows in a table. IMO, blindly hunting down full table scans is a textbook case of premature optimization.
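For anyone who hasn't looked at query plans before, here is a small sketch using SQLite's EXPLAIN QUERY PLAN (table and column names are invented; the exact plan strings vary by SQLite version). The same query flips from a full scan to an index search once an index exists — and on a tiny table, the scan is perfectly fine:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER)")

def plan(sql):
    # EXPLAIN QUERY PLAN rows are (id, parent, notused, detail);
    # the detail column describes the access path.
    return conn.execute("EXPLAIN QUERY PLAN " + sql).fetchone()[3]

query = "SELECT * FROM orders WHERE customer_id = 42"
before = plan(query)
print(before)  # e.g. "SCAN orders" -- a full table scan

conn.execute("CREATE INDEX idx_customer ON orders (customer_id)")
after = plan(query)
print(after)   # e.g. "SEARCH orders USING INDEX idx_customer (customer_id=?)"
```

Whether the SCAN is a problem depends entirely on the table size and selectivity — which is the parent's point.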
 Has it occurred to you that, in context, there's no way I meant blindly checking?Besides, this may "grind your teeth" but I see the opposite at least a full order of magnitude more often, if not two. I'm far more likely to get someone asking me "What's EXPLAIN?" (when working with MySQL) than to see someone going nuts making sure they have 0 table scans.
 I think the "premature optimization is evil" heuristic exists not to discourage doing efficient things, but to avoid prioritizing optimization over design. Yes, you want linear or logarithmic runtime complexity and never quadratic, but you won't reach for mutable data structures in Scala until you know there's a space-complexity issue, for instance. Then, and only then, do you optimize to reduce memory usage, since it hurts your design quality. I think the title is a bit misleading, because it's a good heuristic and you agree with that too.
 Reminds me of:

1. novice - follows rules because he is told to
2. master - follows rules because he understands them
3. guru - transcends the rules because he understands that rules are over-simplifications of reality
 Except this is railing against a bastardized version of a rule. Leaving out the "small efficiencies" allows the rule to be applied in contexts where it clearly was not intended.
 I can't agree more with Joe Duffy's viewpoint. In case you're interested in a graphical representation [1] of some common latency costs, someone at UC Berkeley put together an interactive chart with the original Numbers Every Programmer Should Know from Jeff Dean's (Google) large-scale systems presentation.
 It's good to invest time in making decisions and coding them when it actually ends up making a positive difference to your work. Otherwise, by definition, it is premature optimization (of performance, design, or anything else). For instance, you often figure out you don't need a piece of code only after you've written and tested it, or after your thinking about the design has evolved. When you delete that code, it doesn't help anyone that, a couple of hours earlier, you invested five minutes in picking the "right" data structure for the implementation. The right data structure for unstable code is the one that lets you work with it and takes up the least of your time. As your code becomes more stable, it can then make sense to invest time in picking and coding a better data structure; it's less efficient to do so prematurely.
 > First and foremost, you really ought to understand what order of magnitude matters for each line of code you write.

And that is: the amount of time that will ever be spent in it across all deployments and execution instances, versus how long it takes to develop, taking into account the cost of that CPU time and of that development time. You could spend, say, \$100 of development time such that the total CPU time saved over the entire installed base of the code over its lifetime is worth \$5.

Secondly, even if the saving is greater than \$100, that means nothing if it's not recouped! That is to say, suppose you spend \$100 to optimize something, and the entire user base saves \$200 worth of CPU time over the next 25 years, when the last installation of the program is shelved. Only, oops, the users never paid a single penny more for the improvement. Moreover, suppose the improvement was only marginal and in some relatively obscure function, so it didn't help sell more copies of the program. In the end, you're just out \$100.

> Mostly this quip is used to defend sloppy decision-making, or to justify the indefinite deferral of decision-making.

Here's the thing: a program optimized for performance is "bad" because it's hard to change its organization later (for instance, when it needs further optimization). It's harder to debug, too. We consciously avoid optimizing code in order to keep the code in a state that is easier to work with. But we must ensure that we actually achieve this.
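The break-even argument above can be made concrete with a toy calculation (all figures invented for illustration — the point is the shape of the arithmetic, not the numbers):

```python
def optimization_payoff(dev_cost, cpu_seconds_saved_per_run,
                        runs_over_lifetime, cpu_cost_per_second):
    """Net value of an optimization: CPU savings recouped minus dev cost."""
    savings = cpu_seconds_saved_per_run * runs_over_lifetime * cpu_cost_per_second
    return savings - dev_cost

# $100 of dev time to shave 1 ms off a function called 5 million times
# over the program's life, at a hypothetical $0.00001 per CPU-second:
# a clear loss (roughly -$99.95).
print(optimization_payoff(100, 0.001, 5_000_000, 0.00001))

# The same effort on a hot path executed 50 billion times pays off
# (roughly +$400) -- and that's before asking whether anyone actually
# pays you for the saving.
print(optimization_payoff(100, 0.001, 50_000_000_000, 0.00001))
```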
In effect, we should be actively optimizing for good program organization, rather than just pursuing a negative: not optimizing for performance. The failure mode is optimizing for something that is neither performance nor good program organization. A really bad approach is, for example, "optimizing for the minimum amount of time I ever have to spend learning effective use of my programming language, libraries, and existing frameworks in my project". You get code that isn't performance-optimized, avoiding the "root of all evil", but is garbage in other ways.
 I think his example using LINQ vs loops is not realistic - if you're using arrays like he is, who's going to use LINQ with that? The only reason I would specify a concrete type like that is if I cared about performance - otherwise you'd just specify IEnumerable/IList/IReadOnlyList or whatever and then use LINQ because it's cleaner. Use an abstract interface when you don't care about performance at all - and IMO over 80% of code is like that: initialization code, edge cases, stuff that accounts for less than 0.01% of execution time, where spending the time to optimize is simply not worth it. He starts the article by judging laziness - after spending a lot of time on stuff that ended up being irrelevant in retrospect, I wish I had been more lazy about this stuff.
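A rough Python analog of the LINQ-vs-loop trade-off (with `Iterable` playing the role of IEnumerable; function and variable names are illustrative): the declarative pipeline is what you reach for in the 80% of code where performance doesn't matter, the hand-rolled loop only on a measured hot path.

```python
from typing import Iterable

def totals_declarative(orders: Iterable[float]) -> float:
    # Reads like LINQ: declarative, and works on any iterable,
    # not just a concrete list.
    return sum(o * 1.1 for o in orders if o > 0)

def totals_loop(orders: list[float]) -> float:
    # Hand-rolled loop over a concrete type: more noise,
    # only worth it where profiling says so.
    total = 0.0
    for o in orders:
        if o > 0:
            total += o * 1.1
    return total

data = [5.0, -2.0, 10.0]
assert abs(totals_declarative(data) - totals_loop(data)) < 1e-9
```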
 > First and foremost, you really ought to understand what order of magnitude matters for each line of code you write.Isn't that exactly what the phrase means? Understanding where it is important and where it isn't? At least that is what I always thought.
 TL;DR: be careful with the word "premature". Knuth's quote is still correct.
 There are things you can do to scale well, that you tend to have to learn from longstanding error, that don't take a lot of time up front. You don't spend much time on them, and these efforts bear fruit later. Then there are things that do take a lot of time that are unwarranted. You have to avoid these.Knowing the difference is key, and this is why senior engineers should be in charge of making architectural and design choices up front, and on an ongoing basis. Of course, most businesses can't attract such people, as scalability is not common knowledge outside major internet cities :(
 Given that performance is not such a huge issue as it used to be, I believe that nowadays premature flexibilization is the real root of all evil: http://product.hubspot.com/blog/bid/7271/premature-flexibili...
 Designing for big-O performance is a good thing to do while writing code. Optimizations beyond that are typically an anti-pattern.
 As the author emphasizes, that depends on the speed requirements of your software. There are places where nanoseconds matter, just as there are places where tens or hundreds of milliseconds don't.
 A lot of confusion could be saved by reframing the discussion. Instead, talk about whether the performance characteristics of a particular choice are understood or not. If not, then don't optimize until it is either understood to be a problem through measurement or some other form of discovery.
 Rewritten: Keeping performance in mind when considering design alternatives is never premature.
 In the case of MVP & prototype development, and maybe even in the long run: clever architecture will always beat clever coding. In the early stages, premature optimization can pull you into too much clever coding and architecture. There's no shortage of time spent building and optimizing a stack that largely introduces overhead to quickly iterating on and solving a problem. This perspective also keeps in mind that you should likely throw away the first version of whatever you build, because it uncovers how the architecture should be, and where, if anywhere, the clever coding and optimization should go. It's not that optimization isn't worth thinking about; it's just not worth obsessing over from a scale perspective, and experienced developers build up clever architectural approaches and habits that buy their designs breathing room as they grow. The fundamental issue here is that every piece of software is meant to break at a certain capacity, just like hardware. As the author very eloquently mentioned, some areas you may come back to revisit and develop often, while others you may never touch again, and each may be worth a different type of design thought. The mentors I have worked with have balanced being kind to your future developer self in the present, and that can mean neither under- nor over-engineering a solution. Quite often the architectural design needs to be proven and verified before building a lot around it. Spending more time on the schema and architecture to ensure this is where I've found massive gains in baking optimization into the bread, with little development overhead other than planning and thinking a bit more. Quite often, if I want to dive in and build a throwaway prototype, I'll stop myself and think of a plan.
When I'm hesitant to build without a plan, I often let myself prototype lightly to aid development of a plan. Developing for the simplest common denominator in the early stages, to allow as many people as possible to participate in the learning and direction of the solution, is extremely critical as well. When problems reach the 10-100 million row level, there will be a lot more to figure out than just optimization. Quite often technologists get caught up in optimizing technical design and code, and not users, problems, or solving them. Maybe users need to be the focus for technical developers, and technical understanding something to focus on for non-technical developers who trivialize technical matters.
 "Premature optimization is the root of all evil" is like "don't ever roll your own crypto." It's "talking down" advice intended for programmers considered less knowledgeable than the advisor.

Personally I think "talking down" advice is harmful and goes very much against the pro-learning, pro-self-education mindset of our industry. People either ignore it, in which case it accomplishes nothing, or they obey it and it stops them from learning or trying new things. It's also subject to a lot of misinterpretation. The "premature optimization" quote is often misinterpreted in practice to mean "never optimize or think about performance at all."

A better version of the premature optimization quote is: "Don't sacrifice correctness, capability, good design, versatility, or maintainability to optimization until you already have something that works and you know what you need to optimize."

Another nuance on optimization is: "optimize through better algorithms before you micro-optimize." Micro-optimization means tweaking out a for() loop or implementing something with SSE, while picking a better algorithm means picking something O(N) over something O(N^2). Picking a better algorithm is often something you do "prematurely" during the design phase, while micro-optimization is best left until the end.

A better version of the crypto quote is: "Don't attempt to implement any kind of production crypto code until you know enough about crypto to know how to break crypto at the level you are implementing, and label any crypto experiments as experimental and don't try to pass them off as production-ready or trustworthy. Also make sure you are up on the state of the art and can name, e.g., the last few major attacks against a major crypto implementation and describe how they work."

If you can't meet those criteria then no, you should not be implementing production crypto (though you are free to play around). 
But that advice also tells you what paths you need to go down if you want to learn enough to attempt crypto and how to recognize when you might know enough to attempt crypto. Can you explain exactly how BEAST, CRIME, POODLE, and DROWN work? Can you tell me why crypto must be authenticated and why you should encrypt-then-MAC instead of MAC-then-encrypt? If so, then maybe you're ready to swim in that pool. Otherwise, learn.
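The "better algorithms before micro-optimization" point in a small sketch: the same pair-finding problem solved first in O(N^2), then in O(N) with a set. No amount of loop tweaking on the first version catches up with the second (the problem and names are illustrative):

```python
def has_pair_quadratic(nums, target):
    # O(N^2): checks every pair. Micro-optimizing this loop is
    # effort spent polishing the wrong thing.
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] + nums[j] == target:
                return True
    return False

def has_pair_linear(nums, target):
    # O(N): one pass with a set. The algorithmic change, not loop
    # tweaking, is what makes this fast.
    seen = set()
    for n in nums:
        if target - n in seen:
            return True
        seen.add(n)
    return False

assert has_pair_quadratic([3, 8, 1], 9) == has_pair_linear([3, 8, 1], 9) == True
assert has_pair_linear([2, 4, 6], 5) is False
```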
 The original premature optimization quote is not at all talking down. 3% is pretty close to the fraction of my code that benefits from micro-optimizations, and the quote is about "small efficiencies." It is useful advice for a novice and does not become less true as one gains in the art. "Don't optimize" would be the talking-down version. "Don't optimize prematurely" is naturally tautological: "it is wrong to do X prematurely" is true regardless of X; if it isn't wrong to do X at this point in time, then doing X now isn't premature. That's closer to the pop-culture version of the advice, and like any tautological advice it can always be wielded against someone. It's worse than "talking down," which can at least reduce the mental load on a novice; it's actually not useful at any stage, as it provides no guidance on when optimization is premature and when it isn't. The pithy version of Knuth's quote might be "Don't micro-optimize until you can tell the difference between the 97% of code that doesn't need it and the 3% that does," which is in line with pretty much the entirety of your comment.
 The important thing is to use the right algorithm for the right task.
 > I am personally used to writing code where 100 CPU cycles matters.

Not me, bucko. I'm used to writing code that, if it needs to go fast, I buy more CPU time and run it in parallel.
 And that’s when you discover that (a) electricity isn’t unlimited, (b) resources aren’t unlimited, (c) money isn’t unlimited, and (d) maybe you should just save for the sake of efficiency.
 (a) and (b) can be so cheap relative to the total cost of the application and/or the value that application produces that they might as well be unlimited. Essentially, if you are running into electricity or resource constraints on, say, an e-commerce website, then unless your design choices were absolutely hideous, you are having a Very Good Problem. Many programmers can spend their entire careers building and maintaining such apps. The infrastructure costs are outclassed by their salary by several orders of magnitude. A decent website costs millions to develop in total and hundreds monthly to host. It can bring in several million in revenue every year. All this and it's still a sideshow to the main business. A site I maintain does \$3 million in business every year, whereas our retail partners do 7.
 Electricity prices aren’t globally the same, in some regions in Europe they’re over \$0.40/kWh.And renting servers from AWS can end up being more expensive than paying another dev and using dedicated systems.
 I'm pointing out that a compiler & language designer probably has a different concept of inefficiencies than a webapp dev.
 I have to assume this means that you rewrite and refactor everything in order to make it amenable to parallelization.
 yes, and that's not free
