Only if you have distinct keys. Most Redis use I’ve seen “at scale” has involved putting things in hashes/sets/lists.
(I’ve wondered for a while now whether it’d be possible to implement expiry for sub-key data in Redis without introducing too much overhead to data structures that don’t need expiry. I’d think it’d be “just” a matter of having an alternative data representation that holds an expiry for each element, and re-encoding the data structure into that representation once at least one element has an expiry, because if one does, it’s likely that many will.)
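To sketch what I mean in Python (the class names, the lazy drop-on-read behaviour, and the hexpire helper are all my own invention for illustration, not how Redis is actually implemented):

```python
import time

# Hypothetical sketch of the two representations; none of this is real Redis code.

class PlainHash:
    """Normal representation: no per-field expiry, no extra overhead."""
    def __init__(self):
        self.fields = {}                      # field -> value

    def hset(self, field, value):
        self.fields[field] = value

    def hget(self, field):
        return self.fields.get(field)


class ExpiringHash:
    """Alternative representation: every field carries an (optional) expiry."""
    def __init__(self, plain):
        # Re-encode the plain hash; None means "never expires".
        self.fields = {f: (v, None) for f, v in plain.fields.items()}

    def hset(self, field, value, expire_at=None):
        self.fields[field] = (value, expire_at)

    def hget(self, field):
        value, expire_at = self.fields.get(field, (None, None))
        if expire_at is not None and time.time() >= expire_at:
            del self.fields[field]            # drop lazily on read
            return None
        return value


def hexpire(h, field, ttl):
    """Setting the first per-field TTL is what triggers the one-time re-encoding."""
    if isinstance(h, PlainHash):
        h = ExpiringHash(h)
    value = h.hget(field)
    h.hset(field, value, expire_at=time.time() + ttl)
    return h


h = PlainHash()
h.hset("session:42", "payload")
h = hexpire(h, "session:42", ttl=30)          # now the whole hash is re-encoded
```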
And Apache and other web servers support it out of the box.
If your use case allows it, consider ditching Redis and instead statically caching your generated HTML/JSON files on the file system, letting Apache serve from there as long as those cached files exist. You get all these features for free.
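Roughly like this (the cache path and the write-then-rename dance are my assumptions about one way to do it; the point is just that the app writes files and Apache serves them as plain static content):

```python
import os, tempfile

CACHE_ROOT = "/var/www/cache"   # assumption: Apache's DocumentRoot (or an Alias) points here

def write_cached(path, body):
    """Write a rendered response where Apache can serve it as a plain static file."""
    target = os.path.join(CACHE_ROOT, path.lstrip("/"))
    os.makedirs(os.path.dirname(target), exist_ok=True)
    # Write to a temp file and rename, so readers never see a half-written file.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(target))
    with os.fdopen(fd, "w") as f:
        f.write(body)
    os.replace(tmp, target)     # atomic on POSIX; invalidation is just os.remove(target)

write_cached("reports/daily.html", "<html><body>rendered report</body></html>")
```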
As an extension, for use cases where this is not feasible (e.g. you have clustering, non-trivial permission checks, multi-entity payloads, etc.), it is quite convenient to have nginx with OpenResty talk to Redis.
I would note that while nginx itself doesn’t natively support Redis (and so you have to script things yourself in that case), it does have full native support for memcached (or rather, the memcached wire protocol, spoken by several backends, though sadly Redis isn’t one of those).
Basically, you can hand off to memcached as if handing off to an upstream, and then if memcached doesn’t have the key, continue on trying to resolve the response through your actual app-server upstream. (This is essentially the same thing you’re suggesting doing through scripting, to be clear; it’s just easier to implement for the memcached case.)
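Something like this, roughly (the key scheme and the @app location name are placeholders of mine; memcached_pass, $memcached_key and the error_page fallback are the stock nginx memcached-module pieces):

```nginx
upstream app_servers {
    server 127.0.0.1:8000;
}

server {
    listen 80;

    location / {
        set $memcached_key "$uri";      # whatever key scheme your app writes under
        memcached_pass 127.0.0.1:11211;
        default_type text/html;         # memcached stores opaque bytes, so declare a type
        # On a miss (404 from memcached) or a dead memcached, fall through to the app.
        error_page 404 502 504 = @app;
    }

    location @app {
        proxy_pass http://app_servers;
    }
}
```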
This type of middleware-level “read-aside” caching is really neat, because it enables push-based response caching. With a read-through cache that expires items out of itself, an expired item potentially leads to a thundering herd of clients all trying to grab it at once, each generating an independent request to your backend (requests that can hopefully be coalesced into a single recomputation, but that’s unlikely in a distributed environment). Instead, you can just keep the cache always populated: every once in a while, perhaps on a schedule, perhaps in response to discovering new primary-source data, you recompute your denormalized/report resource and atomically overwrite what was in the cache with the new data.
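A sketch of the push side, here using pymemcache against the same memcached that nginx reads from (the key and report body are made up; the only real requirement is that the writer uses the same keys the front end looks up):

```python
from pymemcache.client.base import Client

cache = Client(("127.0.0.1", 11211))        # the same memcached nginx reads from

def publish_report():
    # Recompute the denormalized/report resource from the primary data source...
    body = b"<html><body>daily report, regenerated at push time</body></html>"
    # ...then atomically replace the cached copy. The old value stays servable
    # right up until the new one lands, so there is nothing to herd around.
    cache.set("/reports/daily", body)       # expire=0 (the default) means never expire

# Run from cron/a scheduler, or from whatever notices new primary-source data.
if __name__ == "__main__":
    publish_report()
```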