

A concrete illustration of practical running time vs big-O notation - gus_massa
http://blogs.msdn.com/oldnewthing/archive/2009/06/12/9728292.aspx

======
scscsc
For a long time now, people have complained about Big Oh notation. The problem
is not the notation; it's the people. The notation is absolutely brilliant.
People don't understand how to use it, and then complain that it "breaks in
practice."

Big Oh is simply a mathematical tool for abstracting away uninteresting
details. But what people don't realize is that in using this tool, they
perform an abstraction of their own -- it can be used to count anything, so
the most important thing to establish is what you are counting. Usually you
want to count the time and space used by an algorithm. But you don't always
want to count the time used by an algorithm as the number of instructions
performed. For example, we compare comparison-based sorting procedures by the
number of comparisons they perform. Why? Because we are smart enough to
realize that the number of comparisons is a good way to compare such
algorithms.
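
To make that concrete, here is a minimal sketch in Python (illustrative, not
from the linked article) that counts only comparisons and treats every other
operation as free:

    import random

    def insertion_sort_count(a):
        """Sort a in place; return the number of key comparisons made."""
        comparisons = 0
        for i in range(1, len(a)):
            key = a[i]
            j = i - 1
            while j >= 0:
                comparisons += 1      # the only operation we charge for
                if a[j] <= key:
                    break
                a[j + 1] = a[j]       # shifts are "free" in this cost model
                j -= 1
            a[j + 1] = key
        return comparisons

    data = [random.random() for _ in range(1000)]
    print(insertion_sort_count(data))  # ~n^2/4 on random input: O(n^2)

The point is that the cost model is a choice: here shifts and index
arithmetic cost nothing, and only comparisons count.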

For some algorithms, you might want to count the number of memory accesses you
make. Or the number of level-i memory accesses in a memory hierarchy. Or the
number of network packets. Get it?
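
Same idea under a different cost model -- a rough sketch that charges one
unit per element read and ignores everything else (a real cache model would
be far more involved):

    class CountingList:
        """Wraps a list and charges one unit per element read."""
        def __init__(self, items):
            self._items = list(items)
            self.accesses = 0
        def __getitem__(self, i):
            self.accesses += 1
            return self._items[i]
        def __len__(self):
            return len(self._items)

    def binary_search(a, target):
        lo, hi = 0, len(a) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            v = a[mid]                # one counted access per probe
            if v == target:
                return mid
            if v < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1

    a = CountingList(range(1_000_000))
    binary_search(a, 999_999)
    print(a.accesses)                 # O(log n): about 20 probes here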

~~~
michael_dorfman
I think the main stumbling point for many people is that Big-O notation is
asymptotic; in other words, it only describes the performance of an algorithm
(for example) when you have a _bazillion_ items to process. If you're dealing
with real-world (limited) cases, you need to pay attention to the constants--
those "uninteresting details" get interesting very quickly.
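
A quick sketch of what I mean (pure Python on both sides, so the constants
are comparable): at n = 16, a simple O(n^2) insertion sort typically beats an
O(n log n) merge sort, because the merge sort's recursion and list allocation
dominate at small sizes:

    import random, timeit

    def insertion_sort(a):
        for i in range(1, len(a)):
            key, j = a[i], i - 1
            while j >= 0 and a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = key
        return a

    def merge_sort(a):
        if len(a) <= 1:
            return a
        mid = len(a) // 2
        left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
        out, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                out.append(left[i]); i += 1
            else:
                out.append(right[j]); j += 1
        return out + left[i:] + right[j:]

    data = [random.random() for _ in range(16)]
    for f in (insertion_sort, merge_sort):
        t = timeit.timeit(lambda: f(list(data)), number=10_000)
        print(f.__name__, round(t, 4))

Exact numbers vary by machine, but the asymptotically "worse" algorithm
usually wins at this size.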

~~~
scscsc
You get to choose which details to ignore. Making the wrong choice will of
course lead to invalid results.

No model is perfect, but some models are useful.

