I imagine it's applicable to other realms of programming as well, like web programming. We should question whether we really need something on the page, or whether there's some way to fake it or get away with doing less. The same goes for scheduling notifications to be delivered: does it really need to be real-time, or can we cheat a little, since the tolerance is much higher?
Sure, we call this caching. Imagine you have a blog with comments. You can generate the page for each user when they request it, so it's a live view onto the data you have. Or you can generate it once a second and just serve the stale version -- not much can happen in one second. Making that compromise is the difference between handling 10 requests a second and 10,000 requests a second. Almost always a good tradeoff.
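The "regenerate once a second" trick is only a few lines. Here's a rough sketch in TypeScript; renderCommentsPage() is a made-up stand-in for whatever expensive work (database queries, templating) actually builds the page:

```typescript
// Serve a cached copy of the rendered page, rebuilding it at most once a second.
let cachedHtml: string | null = null;
let renderedAt = 0;
const MAX_AGE_MS = 1000; // never serve anything more than one second stale

// Hypothetical expensive render: imagine this hits the database and
// templates the full comments page.
async function renderCommentsPage(): Promise<string> {
  return `<html><body><!-- comments rendered at ${new Date().toISOString()} --></body></html>`;
}

export async function getCommentsPage(): Promise<string> {
  const now = Date.now();
  if (cachedHtml === null || now - renderedAt > MAX_AGE_MS) {
    cachedHtml = await renderCommentsPage(); // pay the cost once per second
    renderedAt = now;
  }
  return cachedHtml; // every other request in that second gets the stale copy
}
```

Every request after the first within that window is just a string return, which is why the throughput difference is so dramatic.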
(Doesn't NYTimes do this?)
This can be used to add fancy animations, swap a real embedded video in over an image of an empty video player, or append comments to the end of a blog post via AJAX. Most people won't even realize you're giving them static content initially.
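As a rough sketch of the comments case: serve the static page, then fetch the live data once the page has loaded. The /api/comments endpoint and the #comments container here are assumptions for illustration, not any particular site's API:

```typescript
// Progressive enhancement: the static page is already on screen;
// only afterwards do we ask the server for the dynamic part.
interface BlogComment {
  author: string;
  body: string;
}

window.addEventListener("load", async () => {
  const container = document.querySelector("#comments");
  if (!container) return; // nothing to enhance on this page

  // Hypothetical endpoint returning JSON comments for the current post.
  const response = await fetch("/api/comments");
  const comments: BlogComment[] = await response.json();

  // Append each comment to the end of the (already rendered) post.
  for (const comment of comments) {
    const p = document.createElement("p");
    p.textContent = `${comment.author}: ${comment.body}`;
    container.appendChild(p);
  }
});
```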
Not taking this into account causes unusual UI lag, frequent dropped initial clicks, fast responses from the server but slow page load times, or visual "chunking."