
I was confused by the memcached problem they described after moving to the cloud. I understand why network latency may have gone from sub-millisecond to milliseconds, but how could batching requests improve latency? Shouldn't batching improve efficiency rather than latency, possibly at the expense of latency (since some requests will wait on the client while the batch fills)? And while efficiency may be valuable, why would that be an improvement for a problem they didn't have before?



Sorry, that wasn't clear. The per-call latency didn't get better; what happened is that instead of having to make a lot of calls to memcache, it was just one (well, just a few). That one call took longer, but the total time was much less.
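
To make that concrete, here's a rough sketch using python-memcached's get_multi (the key names and counts are made up for illustration, not taken from the talk):

    import memcache

    mc = memcache.Client(['127.0.0.1:11211'])
    keys = ['user:%d' % i for i in range(100)]

    # 100 separate round trips: total wait ~ 100 * network RTT
    values = {k: mc.get(k) for k in keys}

    # One round trip: a single, slightly larger request and reply
    values = mc.get_multi(keys)

Each individual get pays a full network round trip, so at ~1 ms per trip the loop costs ~100 ms, while the single get_multi costs one round trip even if that one trip is a bit slower.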


I actually did some (simplistic) examples of this in a small presentation to illustrate the performance improvements of batching memcached requests, if anyone's interested: https://speakerdeck.com/robotmay/a-simple-introduction-to-ef... (slides 11 to 14)


That's a better explanation than mine. :) Thanks for the link.


As long as nothing's blocked, latency could go up 'a lot' (sub-ms -> ms, maybe 1ms->2ms with batching) without meaningfully impacting overall throughput.

I can definitely see millions of networked memcache calls becoming a bottleneck, and if the batching adds another millisecond per request on average but removes that bottleneck, then they can serve a lot more users at a cost of ~1 ms per request.
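
Back-of-envelope version (all numbers invented, just to show the shape of the trade-off):

    # Hypothetical numbers, not from the talk:
    rtt_ms = 1.0           # one network round trip to memcached in the cloud
    keys_per_request = 50  # cache lookups needed to serve one request

    sequential_ms = keys_per_request * rtt_ms  # 50 ms of waiting per request
    batched_ms = rtt_ms + 1.0                  # ~2 ms: one RTT plus batching overhead

    print(sequential_ms, batched_ms)

Even though the batched call is "slower" than any single get, each request spends ~2 ms waiting on the cache instead of ~50 ms.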

Is there anything in TFA that would support my theory? I don't know. I don't care enough to endure InfoQ. (I did for a Rich Hickey talk once, lo these various months, and yea it were a minor inconvenience).

Edit: whoa jedbergo!


Though it might not be the situation in the video, introducing batching can decrease system latency. I made this little graph to show how this can work:

http://imgur.com/Vr6BqDq
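
One way to see it (my own toy model, not necessarily what the graph shows): under load, queueing delay dominates, and batching amortizes per-call overhead, which lowers server utilization and therefore latency. A rough M/M/1 sketch, with invented numbers:

    # Toy M/M/1 queue: average time in system W = 1 / (mu - lam)
    lam = 900.0  # requests arriving per second (invented)

    # Unbatched: 1 ms of server work per call -> mu = 1000/s
    w_unbatched = 1.0 / (1000.0 - lam)  # 0.010 s = 10 ms average latency

    # Batched 10x: per-call overhead amortized to 0.2 ms -> mu = 5000/s
    w_batched = 1.0 / (5000.0 - lam)    # ~0.00024 s = ~0.24 ms

    print(w_unbatched, w_batched)

Once the server is near saturation, cutting per-call overhead does far more for latency than the extra wait-to-batch costs.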



