There are also some Python operations with a large constant factor: operations that are still O(1) in Python, but much slower than in lower-level languages. In those cases, choosing an algorithm that reduces the number of operations of that type can be a bigger win than a simple big-O analysis would suggest, and it lets pure Python code compete better with code written in lower-level languages.
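A minimal sketch of that effect, using only the standard library (the function names and sizes are invented for illustration): both versions below are O(n), but the first pays CPython's per-call overhead a million times while the second runs its loop in C.

```python
import timeit

data = list(range(1_000_000))

def add(a, b):
    return a + b

def slow_sum(xs):
    # One Python-level function call per element: each call is O(1),
    # but the constant factor is large in CPython.
    total = 0
    for x in xs:
        total = add(total, x)
    return total

def fast_sum(xs):
    # Same O(n) algorithm, but the loop and the additions happen in C.
    return sum(xs)

print("Python-level calls:", timeit.timeit(lambda: slow_sum(data), number=5))
print("C-level loop:      ", timeit.timeit(lambda: fast_sum(data), number=5))
```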
Better algorithms are the right answer for making programs fast more often than faster code is. There is room for both, but the claim that "if the code isn't fast you may as well not bother with a good algorithm" is generally false.
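To make that concrete, here is a small sketch (the data and sizes are invented): the same one-line loop drops from O(n·m) to O(m) on average just by swapping the data structure, a bigger win than any amount of hand-tuning the list version.

```python
import timeit

haystack_list = list(range(100_000))
haystack_set = set(haystack_list)
needles = range(99_000, 100_000)  # near-worst case for the linear scan

def count_hits(haystack):
    # The code is identical either way; only the data structure differs.
    return sum(1 for n in needles if n in haystack)

# O(n) scan per membership test: no line-level tuning fixes this.
print("list:", timeit.timeit(lambda: count_hits(haystack_list), number=3))
# O(1) average hash lookup: the algorithmic change dwarfs any tuning.
print("set: ", timeit.timeit(lambda: count_hits(haystack_set), number=3))
```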
Sure, but why not do both?
Take the most computationally intensive parts, move them to a different language, and use an efficient data structure in that language. That might seem like a lot of work, but fortunately there are handy-dandy libraries readily available that let you do this with even less effort than a simple language extension would take: just toss your data into SQLite or Kyoto Cabinet and let them deal with it. As is standard practice in, uh, many, many applications?
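For example, that pattern takes only a few lines with the standard-library sqlite3 module (the table and data here are invented for illustration):

```python
import sqlite3

# An in-memory database; pass a filename to persist to disk instead.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE words (word TEXT PRIMARY KEY, count INTEGER)")

# Toss the data in and let SQLite maintain the index for you.
rows = [("spam", 42), ("eggs", 7), ("ham", 13)]
con.executemany("INSERT INTO words VALUES (?, ?)", rows)
con.commit()

# Lookups and range scans now run in SQLite's C code, not in Python.
for word, count in con.execute(
    "SELECT word, count FROM words WHERE count > ? ORDER BY count DESC", (10,)
):
    print(word, count)
```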
Your point about speeding up the program by improving the algorithm is valid - but it's only valid when the initial algorithm is doing something more or less unique. If the program is doing what a database does, you should use a database like SQLite, or a data store like Redis or Kyoto Cabinet.
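As a sketch of the Redis route, using the third-party redis-py client (this assumes a Redis server running on localhost, and the key names are invented):

```python
import redis  # third-party redis-py package

r = redis.Redis(host="localhost", port=6379)

# Keep per-user counters in a Redis hash instead of a pure-Python dict
# you would otherwise have to index, persist, and share across processes.
r.hset("page_views", mapping={"alice": 12, "bob": 3})
r.hincrby("page_views", "alice", 1)

print(r.hget("page_views", "alice"))  # b'13'
```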