
I don't want to be that guy making the pointlessly critical comment, but what is the use case for an in-memory database on the other side of an ethernet link? Even if you're in the same datacenter it's going to be slow, no?

I get that startups need to move fast in order to validate ideas, but redis is hardly a chore to set up.

Either way, congrats on the redundancy.




Hi Jonnie,

This is a very valid comment; we always recommend having your Redis server as close as possible to your application server. We're following the lead from Heroku, which offered this alpha feature for Postgres in US-WEST: https://status.heroku.com/incidents/460 .

We've been offering custom HA (master/slave) setups for a while. This is the first step towards giving our customers more choice for their availability zones.

Redis 2.6 Sentinel is an exciting development, and something we're experimenting with now that it's in the latest Redis release.
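
For anyone curious what that looks like from the application side, here's a rough sketch using redis-py's Sentinel support. The hostnames, ports, and the 'mymaster' service name are placeholders, not our actual setup:

    from redis.sentinel import Sentinel

    # Sentinel processes that monitor the master/slave group (placeholder addresses).
    sentinel = Sentinel([('sentinel-1.example.com', 26379),
                         ('sentinel-2.example.com', 26379)],
                        socket_timeout=0.5)

    # Ask the sentinels for the current master of the 'mymaster' group and
    # use it like a normal Redis connection; writes always go to the master.
    master = sentinel.master_for('mymaster', socket_timeout=0.5)
    master.set('greeting', 'hello')

    # Reads can be spread across slaves discovered by the sentinels.
    slave = sentinel.slave_for('mymaster', socket_timeout=0.5)
    print(slave.get('greeting'))

If the master fails, the sentinels promote a slave and the client picks up the new master on its next connection, which is the main appeal over a hand-rolled master/slave setup.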

Feel free to send any questions to me at ben@redistogo.com

-----


One argument I can see is if you're already on, say, a small VPS and going up to the next tier is more expensive than moving your Redis instance off somewhere else. Or, not specific to this offering but to the idea in general: moving a Resque queue and its workers into EC2 to handle widely varying traffic, with faster scaling than ordering more machines at a datacenter. Then Redis is just a nice atomic list manager that everything already integrates with.
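
To make the "atomic list manager" point concrete, here's a minimal sketch of that pattern with redis-py. The queue name and job payload are made up, and Resque's actual job format differs, but the underlying list operations are the same:

    import json
    import redis

    r = redis.StrictRedis(host='localhost', port=6379)

    # Producer: atomically push a job onto the tail of a list.
    r.rpush('queue:thumbnails', json.dumps({'image_id': 42}))

    # Worker: block until a job is available, then pop it from the head.
    # BLPOP is atomic, so two workers never receive the same job.
    _, raw = r.blpop('queue:thumbnails')
    job = json.loads(raw)
    print('processing image', job['image_id'])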

-----


A disk seek (e.g., if you have more data than RAM and are using virtual memory) might take ~10ms. A packet round trip within the same AZ in EC2 is <0.5ms.
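
If you want to check that against your own setup, a quick way is to time a burst of PINGs to the Redis server (assumes redis-py and a reachable instance at a placeholder hostname; results vary with the network):

    import time
    import redis

    r = redis.StrictRedis(host='my-redis.example.com', port=6379)

    # Time repeated PINGs to estimate network + server round-trip latency.
    samples = []
    for _ in range(1000):
        start = time.time()
        r.ping()
        samples.append(time.time() - start)

    samples.sort()
    print('median round trip: %.3f ms' % (samples[len(samples) // 2] * 1000))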

-----



