

A Beginner’s Guide to Big O Notation - cdl
http://rob-bell.net/2009/06/a-beginners-guide-to-big-o-notation/

======
tsahyt
Something I just _have_ to point out after reading through the article and the
comments there:

O(log_2 n) is the very same as O(log_a n) for any arbitrary base a. That is
because log_a n = c * log_b n, where c = 1 / log_b a is just a constant factor
for any fixed a and b, and therefore c gets swallowed by Big O notation. I'll
leave out a proof here but it's quite simple to deduce anyway. The
implications of this are interesting: it basically means that it doesn't
really matter how many subproblems a divide and conquer algorithm creates to
recurse on, it always _scales_ equally well, disregarding the constant factor.
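A quick numerical sketch of the change-of-base point: the ratio between logs
of two bases is a constant, independent of n (values here are just the
standard library's math.log, nothing assumed beyond that).

```python
import math

# Change of base: log_a(n) = log_b(n) / log_b(a), so log_2(n) / log_10(n)
# is the constant log_2(10) ~ 3.3219 no matter how large n gets.
for n in (10, 1000, 10**6, 10**9):
    ratio = math.log(n, 2) / math.log(n, 10)
    print(ratio)
```

Every printed ratio is the same constant, which is exactly the factor Big O
discards.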

However, the most important thing here for a programmer is that Big O
notation does not say anything about actual performance, since it swallows
all constant factors. And those factors _are_ important. That is exactly why
we try to optimize the innermost loop in, say, an O(n^3) algorithm.
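To illustrate with a sketch (my own example, not from the article): both
functions below are O(n^3) matrix multiplications, but the second hoists
repeated lookups out of the innermost loop, shrinking the constant factor
without changing the asymptotic class.

```python
def matmul_naive(A, B):
    """Textbook O(n^3) multiply of square matrices given as lists of lists."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

def matmul_hoisted(A, B):
    """Same O(n^3) algorithm, but row references and A[i][k] are hoisted
    out of the innermost loop, so that loop does less work per iteration."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        Ai, Ci = A[i], C[i]
        for k in range(n):
            aik = Ai[k]
            Bk = B[k]
            for j in range(n):
                Ci[j] += aik * Bk[j]
    return C
```

Big O calls these identical; a profiler does not.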

------
hypnocode
Besides brute force decryption, does anyone have some good examples of O(2^N)
algorithms?
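One classic example (my own, offered as a sketch): brute-force subset sum,
which tries every one of the 2^n subsets of the input.

```python
from itertools import combinations

def subset_sum_bruteforce(nums, target):
    """Check whether any subset of nums sums to target.

    Enumerates all 2^n subsets, so the running time is O(2^n) in the
    number of elements (times the cost of summing each subset)."""
    n = len(nums)
    for r in range(n + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return True
    return False
```

Each extra element doubles the number of subsets to check, which is the
hallmark of O(2^n) growth.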

Also: <http://www.kurzweilai.net/software-progress-beats-moores-law>
Algorithmic improvements account for more of the acceleration in computing
power than physical factors, such as transistor density.

~~~
tsahyt
SAT, for instance, has so far only been solved with exponential-time, O(2^n)
algorithms. The tricky thing here is that we don't quite know whether it
could be done any quicker (google P = NP for a more detailed description).
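A minimal sketch of what that O(2^n) looks like for SAT (my own toy decider,
not a real solver): try all 2^n truth assignments for n variables, using the
DIMACS convention that literal k means "variable k is true" and -k means
"variable k is false".

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Decide satisfiability of a CNF formula by exhaustive search.

    clauses: list of clauses, each a list of nonzero ints (DIMACS-style
    literals). Tries all 2^n_vars assignments, hence O(2^n) running time."""
    for bits in product([False, True], repeat=n_vars):
        def lit_true(lit):
            val = bits[abs(lit) - 1]
            return val if lit > 0 else not val
        # A CNF formula is satisfied when every clause has a true literal.
        if all(any(lit_true(l) for l in clause) for clause in clauses):
            return True
    return False
```

Real SAT solvers (DPLL, CDCL) prune this search aggressively and do very
well in practice, but their worst case is still exponential.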

