This article, though, was a nice walk-through that reminded me that this stuff is probably quite straightforward if I would just sit down and give it a shot. It got me to read the Wikipedia page on the Master theorem, which seems easy to apply and genuinely useful for figuring out what's going on in complex recursive code. I think I'll finally click "buy" on Cormen's book, which has been sitting in my Amazon shopping cart for years.
The alternative? Learn to love recursion trees. They address both issues: they're intuitive and they're general. As a side effect, the Master theorem becomes an almost obvious fact; the three cases correspond to geometric sums with increasing, decreasing, and constant terms, with the caveat that since we are working asymptotically, increasing/decreasing/constant must be interpreted in the looser asymptotic sense. A geometric series with increasing (decreasing) terms is dominated by its last (first) term, which corresponds to the last (first) level of the recursion tree. That makes it trivial to estimate the sum: just estimate the cost at the appropriate level and you're done. Only in the constant-term case do all the terms play a role, but then you get your estimate by multiplying the number of terms (the tree depth) by the constant value the terms share.
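To make that concrete, here's a quick Python sketch (my own illustration; the helper level_costs and the example recurrences are just assumptions picked to land in each of the three cases). It lists the per-level costs of the recursion tree for T(n) = a*T(n/b) + f(n), so you can see the flat, decreasing, and increasing geometric series directly:

    # Per-level cost of the recursion tree for T(n) = a*T(n/b) + f(n):
    # level i has a^i subproblems of size n/b^i, so it costs a^i * f(n/b^i).
    def level_costs(a, b, f, n):
        costs, size, count = [], n, 1
        while size >= 1:
            costs.append(count * f(size))
            size, count = size / b, count * a
        return costs

    n = 1024
    # Constant terms:   T(n) = 2T(n/2) + n   -> every level costs n; answer is depth * n.
    print(level_costs(2, 2, lambda m: m, n))
    # Decreasing terms: T(n) = 2T(n/2) + n^2 -> the first level (the root) dominates.
    print(level_costs(2, 2, lambda m: m * m, n))
    # Increasing terms: T(n) = 4T(n/2) + n   -> the last level (the leaves) dominates.
    print(level_costs(4, 2, lambda m: m, n))

Summing each list gives the usual Master-theorem answers for those three recurrences: Theta(n * log(n)), Theta(n^2), and Theta(n^2) respectively.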
Each comparison of objects provides one bit of information.
So k comparisons provide k bits of information, enough to distinguish between at most 2^k possibilities.
Therefore to distinguish between n options, we need at least log_2(n) comparisons.
There are n! ways of ordering a list of length n.
log(n!) ~ n * log(n)
So to sort a list of length n by comparisons, we need at least on the order of n * log(n) of them, i.e. the lower bound is Omega(n * log(n)).
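A quick numerical sanity check of that bound (my own Python sketch, not part of the argument above): log_2(n!) is the number of bits, hence comparisons, needed to single out one of the n! orderings, and it grows at the same rate as n * log_2(n).

    import math

    # Minimum comparisons = bits needed to single out one of the n! orderings.
    for n in (10, 100, 1000):
        min_comparisons = sum(math.log2(k) for k in range(2, n + 1))  # log2(n!)
        print(n, round(min_comparisons), round(n * math.log2(n)))

The two columns stay within a small constant factor of each other, which is exactly what the n * log(n) lower bound says.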
> log(n!) ~ n * log(n)
Random aside: while this is usually proven with Stirling's approximation, there is a much simpler proof. For the upper bound, n! <= n^n because each of the factors 1, 2, ..., n is at most n. For the lower bound, n! >= (n/2)^(n/2) because you can drop the first n/2 factors without increasing the product, and each of the remaining n/2 factors is at least n/2. That gives the double inequality (n/2)^(n/2) <= n! <= n^n. Take logs and you get (n/2) * log(n/2) <= log(n!) <= n * log(n). What makes this proof tick is that while (n/2)^(n/2) is a really shitty lower bound on n!, the shittiness evaporates once you take logs.
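To spell out why the weak-looking bound still nails the order of growth (my addition): (n/2) * log(n/2) = (n/2) * (log(n) - log(2)) >= (n/4) * log(n) once n >= 4, so log(n!) is squeezed between (n/4) * log(n) and n * log(n) and is therefore Theta(n * log(n)). A small Python check of the sandwich, if you want to see it with numbers:

    import math

    # Check (n/2)*log(n/2) <= log(n!) <= n*log(n) for a few sizes.
    for n in (4, 16, 256, 4096):
        log_factorial = sum(math.log(k) for k in range(2, n + 1))  # log(n!)
        lower = (n / 2) * math.log(n / 2)
        upper = n * math.log(n)
        print(n, lower <= log_factorial <= upper, round(lower), round(log_factorial), round(upper))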