Don't know if this was mentioned in the answers (haven't read them all):
Don't assume that once you make your query run perfectly it will stay that way forever. As the row count grows, your query might start to perform very poorly. It happened to me just recently: a query that was running in the 0.0001s range crawled to a halt overnight and took 1 minute(!) to execute. Always pay attention to your queries and monitor whether any of them are starting to slow down.
Your second point (the overnight performance hit) often needs to be explained to managers on projects that didn't spend the money on tooling or proper performance testing.
Don't expect some gradual curve of performance loss. Once a table exceeds the cache size, an OK-performing query can go very bad with no warning (say, 100x longer). This might mean you really need to do a production patch of some kind. Plan for this before it happens.
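One way to see the cliff coming is to keep an eye on how your biggest tables compare to the cache. A minimal sketch, assuming the InnoDB buffer pool is the relevant cache and using mysql-connector-python (credentials are placeholders):

```python
import mysql.connector  # assumption: using mysql-connector-python

# Placeholder credentials; point this at the server you want to check.
conn = mysql.connector.connect(host="localhost", user="root", password="secret")
cur = conn.cursor()

# Data + index size per table, biggest first, in MB.
cur.execute(
    "SELECT table_schema, table_name,"
    "       ROUND((data_length + index_length) / 1024 / 1024) AS size_mb"
    "  FROM information_schema.tables"
    " ORDER BY data_length + index_length DESC LIMIT 10"
)
for schema, table, size_mb in cur.fetchall():
    print(f"{schema}.{table}: {size_mb} MB")

# The cache to compare against: innodb_buffer_pool_size for InnoDB
# (for MyISAM it would be key_buffer_size plus the OS file cache).
cur.execute("SHOW VARIABLES LIKE 'innodb_buffer_pool_size'")
name, value = cur.fetchone()
print(f"{name}: {int(value) / 1024 / 1024:.0f} MB")

conn.close()
```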
The MySQL slow query log could be of use. Of course you don't get a stack trace this way, but it's essentially free (as in already built in, and not a huge performance hit).
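In case it helps anyone, the log can also be switched on at runtime without a server restart. A minimal sketch, assuming mysql-connector-python and an account allowed to set global variables (SUPER, or SYSTEM_VARIABLES_ADMIN on 8.0); the same SET GLOBAL statements work from the mysql client, or the settings can go in my.cnf:

```python
import mysql.connector  # assumption: using mysql-connector-python

# Placeholder credentials; the account needs privileges to change global settings.
conn = mysql.connector.connect(host="localhost", user="root", password="secret")
cur = conn.cursor()

# Turn the slow query log on without restarting the server.
cur.execute("SET GLOBAL slow_query_log = 'ON'")

# Log anything over half a second; the default of 10 seconds hides a lot.
# Note: the new value only applies to connections opened after this change.
cur.execute("SET GLOBAL long_query_time = 0.5")

# Optional: put the log somewhere predictable.
cur.execute("SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log'")

conn.close()
```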
The problem I have with this log is that legitimately slow queries, in my case usually ones where I've concatenated hundreds or thousands of inserts into one statement, make it hard to use in a meaningful way.
It's possible I'm using it wrong; I didn't get much past trying it out. It's fairly easy to roll my own rough timer in a DB wrapper, though I haven't got around to finishing that yet.
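For what it's worth, the rough timer I have in mind doesn't need to be much. A minimal sketch in Python; the connector, threshold, and logging choices are just placeholders:

```python
import logging
import time

import mysql.connector  # assumption: any DB-API style driver would do

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("slow-sql")

SLOW_THRESHOLD = 0.5  # seconds; whatever counts as "slow" for the app


class TimedConnection:
    """Thin wrapper that logs any statement slower than the threshold."""

    def __init__(self, **connect_kwargs):
        self._conn = mysql.connector.connect(**connect_kwargs)

    def execute(self, sql, params=None):
        cur = self._conn.cursor()
        start = time.perf_counter()
        cur.execute(sql, params)
        elapsed = time.perf_counter() - start
        if elapsed > SLOW_THRESHOLD:
            # Truncate huge concatenated INSERTs so the log stays readable.
            log.warning("%.3fs  %s", elapsed, sql[:200])
        return cur

    def close(self):
        self._conn.close()


# Usage (placeholder credentials and table):
# db = TimedConnection(host="localhost", user="app", password="secret", database="mydb")
# rows = db.execute("SELECT * FROM orders WHERE id = %s", (42,)).fetchall()
```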