
Measure Performance Before Optimizing - coliveira
http://coliveira.net/software/day-16-measure-performance-before-optimizing/
======
gfodor
Does anyone actually not do this? Do people really just blindly optimize code
without first measuring what parts of the code need optimizing?

Also, the article makes the claim that most of the time the bottleneck in web
applications is data access time. Yes, that's usually true when you start
optimizing. However, the interesting thing about optimization is that it
causes bottlenecks to _shift_ , so you may start by optimizing the data access
code and then see the hotspot shift into the very string manipulation code
that this author claims is rarely the culprit.

I just had this experience recently, actually, optimizing an algorithm in PHP
that requires a lot of data to be analyzed quickly. The first pass of
optimizations focused on parallelizing the data access from memcache, getting
it so the minimal number of queries were sent to memcache and the database,
etc. But then the bottleneck shifted into the PHP code to parse and analyze
the data itself. Optimization is a process of chasing these bottlenecks across
the stack until you're either satisfied with the result or decide that an
altogether new approach is necessary to remove algorithmic complexity that is
inherent to your design.

~~~
mattmanser
You've never heard two programmers arguing over which method of doing x is
quicker?

I've mainly worked commercially in .Net, and the worst and most pointless one
has to be StringBuilder versus simple concatenation.

I've also seen people get crazy with passing around things like SqlConnection
objects as they didn't realize .Net handles all that in the background.

Or people agonizing over short names for CSS classes while the ASP.NET
viewstate is the same size as the whole of the HTML. Ugh.

In my experience, the bottleneck shifting into the code as you're describing
has been very rare; usually the problem with the DB is poor indexes, missing
FKs, or escalating row/page/table locks, which don't translate into a
bottleneck in the actual data processing code.

~~~
btilly
I've never worked in .Net, but I know that in Java there is a _huge_
performance difference when building a long string incrementally. If there are
n pieces, using StringBuilder is O(n) while repeated concatenation is O(n^2).

When you're building a large web page, the difference is worth paying
attention to.
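A minimal sketch of the difference (my own example, not from the comment; the
piece count is arbitrary). Repeated `+` copies the whole intermediate string on
every iteration, while `StringBuilder` appends in amortized constant time:

```java
public class ConcatDemo {
    // O(n^2): each concatenation allocates a new string and copies
    // everything built so far.
    static String viaConcat(int n) {
        String s = "";
        for (int i = 0; i < n; i++) {
            s = s + "x"; // copies all of s on every iteration
        }
        return s;
    }

    // O(n): StringBuilder grows its internal buffer geometrically,
    // so each append is amortized O(1).
    static String viaBuilder(int n) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < n; i++) {
            sb.append("x");
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        int n = 20_000; // arbitrary size, large enough to show the gap
        long t0 = System.nanoTime();
        viaConcat(n);
        long t1 = System.nanoTime();
        viaBuilder(n);
        long t2 = System.nanoTime();
        System.out.println("concat  ms: " + (t1 - t0) / 1_000_000);
        System.out.println("builder ms: " + (t2 - t1) / 1_000_000);
    }
}
```

Both functions produce the same string; only the amount of copying differs.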

------
cheald
This seems obvious to me, but I've seen people, time and time again,
"optimize" pieces of an application that don't actually have any significant
impact on its performance, while substantially increasing the complexity of
maintenance.

If you can't tell me how much of an optimization a given change is, with
empirical numbers (4.6% faster view code, which is 64% of the pageview, which
is about a 3% overall performance increase), you don't have any business
making that change.

Even when you have good theoretical numbers (current approach is O(n^2), new
approach is O(n)), measure it. "Bad" algorithmic complexities don't always
mean "slow", and if I'm going to get a 3-microsecond runtime improvement for
six hours of optimization work, and you're not working on a game engine where
micro-optimizations make or break you, _the change isn't worth it_.
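A minimal sketch of why "bad" big-O doesn't always mean slow (a hypothetical
example of mine, not from the comment): for tiny inputs, a linear scan with
O(n) lookups can measure faster than a HashSet's O(1) lookups, because it
avoids hashing and Integer boxing. Only a measurement tells you which wins at
your actual sizes:

```java
import java.util.HashSet;
import java.util.Set;

public class MeasureFirst {
    // O(n) per lookup, but no hashing, no boxing, cache-friendly.
    static boolean scanContains(int[] a, int key) {
        for (int v : a) {
            if (v == key) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        int[] small = {3, 1, 4, 1, 5, 9, 2, 6}; // tiny, arbitrary data
        Set<Integer> set = new HashSet<>();
        for (int v : small) set.add(v);

        // Crude timing; a real comparison would use a proper
        // benchmark harness to control for JIT warmup.
        long t0 = System.nanoTime();
        for (int i = 0; i < 1_000_000; i++) scanContains(small, 9);
        long t1 = System.nanoTime();
        for (int i = 0; i < 1_000_000; i++) set.contains(9);
        long t2 = System.nanoTime();

        System.out.println("scan ns/op: " + (t1 - t0) / 1_000_000.0);
        System.out.println("set  ns/op: " + (t2 - t1) / 1_000_000.0);
    }
}
```

Which one wins depends on the data size and the JVM; that's exactly the point
of measuring before committing to the "better" complexity.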

------
ifesdjeen
The article delivers a sensible thought but offers no facts or real advice.
You can always fall back on common-sense rules such as "don't optimize until
you know what to optimize" and "don't bring too much complexity into your
system", although both of these apply differently when developing for
different systems.

The article is extremely abstract; the same thought could have been delivered
in a 140-character Twitter message.

More facts and details, please.

