

60,000% growth in 7 months using Clojure and AWS - lkrubner
http://www.colinsteele.org/post/27929539434/60-000-growth-in-7-months-using-clojure-and-aws

======
lkrubner
To me, this was the big surprise, and the big revelation:

"This led to Decision Two. Because the data set is small, we can “bake in” the
entire content database into a version of our software. Yep, you read that
right. We build our software with an embedded instance of Solr and we take the
normalized, cleansed, non-relational database of hotel inventory, and jam that
in as well, when we package up the application for deployment. Egads, Colin!
That’s wrong! Data is data and code is code! We earn several benefits from
this unorthodox choice. First, we eliminate a significant point of failure - a
mismatch between code and data. Any version of software is absolutely,
positively known to work, even fetched off of disk years later, regardless of
what godawful changes have been made to our content database in the meantime.
Deployment and configuration management for differing environments becomes
trivial. Second, we achieve horizontal shared-nothing scalability in our user-
facing layer. That’s kinda huge. Really huge."
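The quoted approach boils down to: freeze a snapshot of the content database at build time, ship it inside the artifact, and have the running app read only that frozen copy. Their stack is Clojure with an embedded Solr; as a minimal, language-neutral sketch of the same idea (names and the JSON format here are illustrative, not from the post):

```python
# Sketch of "baking in" the dataset: the build step serializes the
# cleansed inventory next to the code, and the run step reads only that
# frozen snapshot -- never a live database. Any artifact is therefore a
# self-consistent code+data unit.
import json
import pathlib
import tempfile

def bake(inventory, build_dir):
    """Build step: freeze the dataset alongside the code."""
    snapshot = pathlib.Path(build_dir) / "inventory.json"
    snapshot.write_text(json.dumps(inventory))
    return snapshot

def serve(build_dir):
    """Run step: the deployed app only ever sees the baked snapshot."""
    snapshot = pathlib.Path(build_dir) / "inventory.json"
    return json.loads(snapshot.read_text())

build_dir = tempfile.mkdtemp()
bake([{"hotel": "Example Inn", "city": "Lisbon"}], build_dir)
inventory = serve(build_dir)
```

Because the snapshot travels with the code, an old build fetched off disk years later still runs against exactly the data it was tested with, which is the "no code/data mismatch" benefit the post claims.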

~~~
heretohelp
I work for a food startup (Nutrivise) and we do something similar-ish. Our
dish database is baked in and deployed as a fixture. We _should_ consider
distributing it to the frontend web servers instead though.

There are some other fun hacks, like local redis instances for caching search
space stuff.

