Reads-wise, I've done tests with up to 48 cores... it scales just about linearly. Writes don't scale much past a single core, but really nobody asks for that, so I haven't worked on it.

For fun I did a test benchmarking misses and got 60 million rps on a single machine :) For my high-throughput tests the network overhead is so high that, to find the server's limit, the benchmark client has to be run over localhost. Not terribly useful; most people's networks will peg well before the server software does. That's especially true if your objects aren't tiny or if you batch requests at all.
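As a rough illustration of the shape of that kind of test (not the actual tooling; the connection count, key names, and duration are made-up numbers for the example), a miss loop can just speak the memcached text protocol over localhost:

    // Rough sketch of a miss benchmark over localhost, speaking the memcached
    // text protocol directly so no client library is needed. Connection count,
    // key names, and the 10s duration are arbitrary assumptions.
    package main

    import (
        "bufio"
        "fmt"
        "net"
        "sync/atomic"
        "time"
    )

    func main() {
        const addr = "127.0.0.1:11211" // default memcached port, over localhost
        const conns = 8                // assumed number of parallel connections
        var total uint64

        for i := 0; i < conns; i++ {
            go func(id int) {
                c, err := net.Dial("tcp", addr)
                if err != nil {
                    panic(err)
                }
                r := bufio.NewReader(c)
                key := fmt.Sprintf("never-set-%d", id) // never stored, so every get misses
                for {
                    fmt.Fprintf(c, "get %s\r\n", key)
                    line, err := r.ReadString('\n') // a miss returns just "END\r\n"
                    if err != nil || line != "END\r\n" {
                        panic("unexpected response: " + line)
                    }
                    atomic.AddUint64(&total, 1)
                }
            }(i)
        }

        time.Sleep(10 * time.Second)
        n := atomic.LoadUint64(&total)
        fmt.Printf("%d misses in 10s (~%.0f rps)\n", n, float64(n)/10)
    }

A serious load generator would pipeline or batch many gets per round trip instead of one request at a time like this; done naively, the client and the network tend to peg well before the server does.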

I've yet to see anyone who really needs anything higher than a million rps. The extra idle threads and general scalability help keep the latency really low, so they're still useful even if you aren't maxing out rps.

You can see tests here too: https://memcached.org/blog/persistent-memory/ - these folks might dismiss this testing as "not a cache trace", but I don't feel that's very productive.

Specifically to the cache traces, though: that's just not how I test. I never get traces from users, but I still have to design software that /will/ typically work. Instead I test each subsystem to failure and ensure a non-pathological drop-off. I.e., if you write so fast that the LRU would slow you down, the algorithm degrades the quality of the LRU instead of losing performance, which is fine since in most of those cases it's a bulk load, a peak traffic period, a load spike, etc.
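To make the "degrade the LRU instead of losing performance" idea concrete, here's a toy sketch in Go (1.18+ for TryLock). It illustrates the general technique only; it's not memcached's actual LRU code, and all names are made up for the example. Reads only bump recency when the LRU lock happens to be free, so a flood of writes or evictions makes the LRU a little less accurate rather than making reads slower:

    // Toy sketch: reads always answer from the hash table, but only bump
    // recency if the LRU lock is free right now. Under heavy write/eviction
    // pressure the bump is skipped, so reads never stall on LRU maintenance.
    package main

    import (
        "container/list"
        "sync"
    )

    type Cache struct {
        mapLock sync.RWMutex             // guards items
        items   map[string]*list.Element // key -> element; element.Value is the key

        lruLock sync.Mutex // guards lru ordering; writers/evictors hold it exclusively
        lru     *list.List // front = most recently used
    }

    func New() *Cache {
        return &Cache{items: make(map[string]*list.Element), lru: list.New()}
    }

    // Get reports whether key is cached and opportunistically bumps its recency.
    func (c *Cache) Get(key string) bool {
        c.mapLock.RLock()
        el, ok := c.items[key]
        c.mapLock.RUnlock()
        if !ok {
            return false
        }
        // Best-effort bump: trade a little LRU accuracy for never blocking reads.
        if c.lruLock.TryLock() {
            c.lru.MoveToFront(el) // no-op if the element was evicted meanwhile
            c.lruLock.Unlock()
        }
        return true
    }

    // Set inserts a key (assumed new, for brevity), evicting the oldest entry
    // once over capacity.
    func (c *Cache) Set(key string, capacity int) {
        c.lruLock.Lock()
        el := c.lru.PushFront(key)
        var evicted *list.Element
        if c.lru.Len() > capacity {
            evicted = c.lru.Back()
            c.lru.Remove(evicted)
        }
        c.lruLock.Unlock()

        c.mapLock.Lock()
        c.items[key] = el
        if evicted != nil {
            delete(c.items, evicted.Value.(string))
        }
        c.mapLock.Unlock()
    }

The point of the sketch is the TryLock: the degraded path answers the read without updating recency, so the worst case under a load spike is a slightly worse eviction choice, not a throughput cliff.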

I've seen plenty of systems overfit to production testing, where shifts in traffic (new app deploy, new use case, etc.) will cause the system to grind to a halt. I try not to ship software like that.

All said, I will probably try the trace at some point. It looks like they did perfectly good work; I would mostly be hesitant to say it's a generic improvement. I also need to do up a full blog post on the way I test memcached. Too many people are born into BigCo culture and have never had to test software without just throwing it into production, a traffic shadow, or a trace. I'm a little tired of being hand-waved off when their software runs in one use case and mine runs in many thousands.
