As described in the blog post, large parts of our database schema have grown organically. These virtual partitions let us prepare our database access for splitting groups of tables out into separate clusters in the medium term, with sharding those clusters as the next step and the long-term solution for our database growth.
We could enable the linter in production to silently log problematic queries without actually affecting their execution.
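To make the log-only idea concrete, here's a minimal sketch of such a linter in Ruby. The partition map, the `violations_for` helper, and the naive regex for extracting table names are all hypothetical illustrations, not the actual implementation from the post:

```ruby
# Hypothetical mapping of virtual partitions to the tables they may touch.
ALLOWED_TABLES = {
  "payments" => %w[payments invoices],
}

# Return the tables a query touches that lie outside the given partition.
# The regex is deliberately naive; a real linter would parse the SQL.
def violations_for(partition, sql)
  tables = sql.scan(/\b(?:FROM|JOIN|INTO|UPDATE)\s+`?(\w+)`?/i).flatten.uniq
  tables - ALLOWED_TABLES.fetch(partition, [])
end

# In a Rails app you could then subscribe to query events and only log,
# never block, e.g. (sketch, not runnable standalone):
#   ActiveSupport::Notifications.subscribe("sql.active_record") do |*, payload|
#     bad = violations_for(current_partition, payload[:sql])
#     Rails.logger.warn("cross-partition query touches #{bad}") if bad.any?
#   end
```

Since the subscriber only logs, a missed query keeps working in production while still showing up in the logs for cleanup.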
If we used separate db users as you're suggesting, any query that we didn't catch beforehand (e.g. via our CI builds) would cause noticeable problems for our users, which is something that we want to avoid.
Additionally, switching to a separate user account would require holding open twice the number of connections to each database server (old db user plus new db user), which would probably be fine but is still a lot of additional connections at our scale.
Would it double the number of connections? If you are doing the same volume of total work and not looking to increase concurrency, wouldn't you end up with something closer to two connection pools with ~1/2 the size compared to one connection pool with your old size?
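A quick back-of-envelope illustration of this point, with entirely hypothetical numbers: if total query volume stays constant and the pool is split by traffic share, the combined connection count stays roughly flat rather than doubling.

```ruby
# Hypothetical sizing: split one pool by the fraction of queries that
# move to the new db user. Rounding up adds a little headroom, but the
# total stays near the old size rather than doubling.
old_pool_size  = 20
new_user_share = 0.5 # fraction of queries moving to the new db user

pool_old_user = (old_pool_size * (1 - new_user_share)).ceil
pool_new_user = (old_pool_size * new_user_share).ceil

total = pool_old_user + pool_new_user # => 20, not 40
```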
I think his suggestion was to use service-specific users with limited grants only in the development environment to catch queries that access tables of other services. The production environment would keep using users with grants on all services' tables.
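For illustration, a small sketch of what generating those dev-only restricted grants might look like (MySQL-style `GRANT` syntax; the service/user/database names are hypothetical):

```ruby
# Hypothetical helper: per-service GRANT statements for a dev-only
# restricted user. Production keeps a broad-grant user, so a missed
# cross-service query fails loudly in dev/CI instead of in production.
def dev_grants(service, tables, db: "app_development")
  tables.map do |t|
    "GRANT SELECT, INSERT, UPDATE, DELETE ON #{db}.#{t} TO '#{service}_dev'@'%';"
  end
end
```

Running these against the dev database means any query that reaches into another service's tables raises a permission error during tests, exactly where you want to catch it.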
I see a lot of value in using separate users in both dev/test and production environments. That gives you an "easy" way to physically separate your database/schema into multiple databases, with minimal application changes beyond pointing your connection pools/config at the new database endpoint. We do this often, splitting the task of breaking up our monolith databases into two phases: logical, then physical.
That's _exactly_ what XSS is about. One possible way to exploit something like this: I send you a link to a website that embeds the target page in an iframe with the JavaScript output injected. The JS could then steal your cookies/session or worse.
This is similar to how the Rubinius project has been managed for a long time: After your first pull request gets merged, you'll be added to the repository as a committer.
I'd even say that the biggest contributor to this difference is the non-blocking nature of Node. Using e.g. EventMachine would probably close the gap between Node and Ruby even further.
While the Rails benchmark tries to compare different template engines within the same language and framework, with blocking IO, you're comparing those results to rendering under "pure" Node.js (no framework slowing you down) with non-blocking IO.
Don't take this the wrong way, though; I'm absolutely blown away by your numbers!
Running the same benchmark against DocPad, a Node.js framework, I'm getting good speeds too (1147 req/s). Benchmark: https://gist.github.com/3906050
Worth noting the tests are being run against a local copy of http://docpad-kitchensink.herokuapp.com/, which does a lot more than the basic Haml rendering.
Yeah, I'm pretty sure the Rails benchmark is IO-bound. If that's the case, it's surprising there's any noticeable difference at all between the three Ruby template engines.