Starting with a "slower, but flexible" datastore like a traditional relational database, monitoring which access patterns need a boost, and then optimizing or introducing a new datastore is almost always a solid plan of attack.
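To make "monitoring which access patterns need a boost" concrete, here's a minimal sketch of that measure-first loop using SQLite's EXPLAIN QUERY PLAN (table and column names are illustrative; the exact plan strings vary by SQLite version):

```python
# Sketch: find the slow access pattern first, then optimize it with an index.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (user_id INTEGER, kind TEXT, ts INTEGER)")
con.executemany("INSERT INTO events VALUES (?, ?, ?)",
                [(i % 100, "click", i) for i in range(1000)])

query = "SELECT COUNT(*) FROM events WHERE user_id = ?"

# Before optimizing: the plan reveals a full table scan.
plan_before = con.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall()[0][3]
print(plan_before)   # e.g. "SCAN events" (older versions: "SCAN TABLE events")

# The access pattern that needed a boost gets an index.
con.execute("CREATE INDEX idx_events_user ON events (user_id)")
plan_after = con.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall()[0][3]
print(plan_after)    # e.g. "SEARCH events USING ... INDEX idx_events_user ..."
```

Same idea applies to a real RDBMS via its slow-query log and EXPLAIN output; only optimize (or reach for a new datastore) once the measurements point somewhere.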
1) store the data in an appropriate write-friendly, data-mining-friendly format
2) ASAP, for each major view, write out a Redis-style O(1)-to-read data structure
3) think carefully about backup and replay strategies
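The three steps above can be sketched in a few lines; a plain dict stands in for Redis here, and the event/view names are made up for illustration:

```python
# Sketch of the pattern: append-only write log + O(1)-to-read view structures.
import json
import time
from collections import defaultdict

# Step 1: a write-friendly, data-mining-friendly event log (the write side).
event_log = []

# Step 2: a per-view read model, O(1) to read. In production this would be a
# Redis hash or sorted set; a dict stands in here.
view_post_counts = defaultdict(int)

def record_post(user, text):
    event = {"type": "post", "user": user, "text": text, "ts": time.time()}
    event_log.append(json.dumps(event))   # durable, replayable format
    view_post_counts[user] += 1           # read model updated ASAP

def replay(events):
    # Step 3: after a failure, read models are rebuilt by replaying the log.
    counts = defaultdict(int)
    for raw in events:
        event = json.loads(raw)
        if event["type"] == "post":
            counts[event["user"]] += 1
    return counts

record_post("alice", "hello")
record_post("alice", "again")
record_post("bob", "hi")
print(view_post_counts["alice"])          # 2
assert replay(event_log) == view_post_counts
```

The point of step 3 is exactly that last assertion: the read-side structures are disposable, because the log is the source of truth.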
The best reference I have found for this pattern, and it isn't great (too big-SQL-centric), is "command query responsibility segregation" (CQRS):
In what way is it too expensive for side projects? It's the easiest data store to compile and run that I've used.
Hosting any DB offsite comes at a cost, and no one database platform appears to have an advantage over another when it comes to service providers.
That's quite a difference.
The large instance costs $125 ($90) and offers 1.7GB of storage.
So anyway if your data needs to be composed to be served, you are not in good waters. -- antirez
The issue being that your methods of collection and retrieval will change over time, and your data model needs to support that while still making sense for existing data.
Remember all those stories about DB denormalisation, and hordes of memcached or Redis farms to cache stuff, and things like that? The reality is that fancy queries are an awesome SQL capability (so incredible that it was hard for all of us to escape this warm and comfortable paradigm), but not at scale.
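That trade-off can be made concrete with a hypothetical example: the same answer served by a GROUP BY on every read versus a denormalised counter maintained on the write path (SQLite and a Counter stand in for the SQL database and the cache farm):

```python
# Sketch: "fancy query" at read time vs. a denormalised, precomputed counter.
import sqlite3
from collections import Counter

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE votes (item TEXT)")

# Denormalised counter, kept up to date on write --
# the kind of structure the memcached/Redis farms hold.
tally = Counter()

def vote(item):
    con.execute("INSERT INTO votes VALUES (?)", (item,))
    tally[item] += 1

for item in ["a", "a", "b"]:
    vote(item)

# The fancy query: correct, but re-aggregates every row on every read.
rows = con.execute("SELECT item, COUNT(*) FROM votes GROUP BY item").fetchall()
print(sorted(rows))   # [('a', 2), ('b', 1)]

# The denormalised read: O(1), no aggregation at read time.
print(tally["a"])     # 2
```

At small scale the query wins on simplicity; at scale the aggregation cost on every read is what pushes people to the precomputed structure.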