Don't assume that once you get your query running perfectly it will stay that way forever. As row counts grow, your query may start to perform very poorly. This happened to me just recently: a query that had been running in the 0.0001s range slowed overnight to a full minute(!) per execution. Always pay attention to your queries and monitor whether any of them are starting to slow down.
Don't expect a gradual curve of performance loss, either. Once a table outgrows the cache size, an OK-performing query can go very bad with no warning (say, 100x longer). That may force a production patch of some kind, so plan for it before it happens.
It's possible I'm using it wrong; I didn't get much past trying it out. It's fairly easy to roll my own rough timer in a DB wrapper, though I haven't gotten around to finishing that yet.
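A minimal sketch of that "rough timer in a DB wrapper" idea: time each query around execution and flag anything that crosses a threshold. This uses sqlite3 purely as a runnable stand-in driver; the same pattern fits any DB-API connector. The names (TimedCursor, SLOW_THRESHOLD) are hypothetical, not from any particular library.

```python
import sqlite3
import time

SLOW_THRESHOLD = 0.5  # seconds; tune for your application

class TimedCursor:
    """Wraps a DB-API cursor and logs queries that exceed SLOW_THRESHOLD."""

    def __init__(self, cursor):
        self._cursor = cursor

    def execute(self, sql, params=()):
        start = time.perf_counter()
        result = self._cursor.execute(sql, params)
        elapsed = time.perf_counter() - start
        if elapsed > SLOW_THRESHOLD:
            # In production this would go to a log, not stdout.
            print(f"SLOW ({elapsed:.3f}s): {sql}")
        return result

    def __getattr__(self, name):
        # Delegate everything else (fetchall, close, ...) to the real cursor.
        return getattr(self._cursor, name)

conn = sqlite3.connect(":memory:")
cur = TimedCursor(conn.cursor())
cur.execute("CREATE TABLE t (id INTEGER)")
cur.execute("INSERT INTO t VALUES (1)")
cur.execute("SELECT id FROM t")
print(cur.fetchall())
```

The point is that the wrapper gives you one choke point where every query's duration is visible, so a query that suddenly goes from microseconds to seconds shows up immediately instead of when users complain.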
There is likely some overhead to this feature, since it has to make additional system calls (getrusage).
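For a quick look at what getrusage(2) reports, Python's stdlib resource module exposes the same call (Unix only). Each sample is a system call, which is roughly the per-query cost a profiling feature pays when it collects these counters.

```python
import resource

# Resource usage for the current process (RUSAGE_SELF); each call to
# getrusage() is one system call.
usage = resource.getrusage(resource.RUSAGE_SELF)
print(f"user CPU:   {usage.ru_utime:.4f}s")
print(f"system CPU: {usage.ru_stime:.4f}s")
print(f"max RSS:    {usage.ru_maxrss}")  # kilobytes on Linux, bytes on macOS
```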