

Amazon AWS Throttling Non-Elasticache Memcache Traffic - 23david

We've been trying to set up memcache/Couchbase boxes on Joyent East and Serverbeach Virginia today and noticed that we get a max of 300 reqs/sec per virtual instance when we send memcache requests from instances on AWS East.

It doesn't matter if we change the security group on AWS or if we change the port of the memcache traffic. The traffic itself seems to be inspected and throttled.

This is a MAJOR problem. Basically, all AWS customers will need to start testing to see if their specific type of traffic is being throttled.

Within the AWS cloud we see 3000+ reqs/sec between boxes, but once the traffic leaves Amazon it gets really slow. From Amazon West to Joyent East and Serverbeach Virginia we saw a further drop to a total of between 6-12 reqs/sec using the memcache protocol.

Sending data over ssh tunnels still works fine. An scp file transfer from AWS East to Joyent East ran at over 22MB/sec.

To replicate this: set up a box outside of AWS running memcached (default settings are fine), and on Amazon set up a box with PHP and the memcache PECL extension installed.

Then run this PHP code, replacing 111.11.11.1111 with the IP of your memcache box:

    <?php
    $memcache = new Memcache;
    $memcache->addServer("111.11.11.1111", 11211);
    $time_start = microtime(true);
    for ($i = 0; $i <= 1000000; $i++) {
        $memcache->get("d.{$i}aaaaaaa");
        if ($i % 1000 == 0) {
            $time = microtime(true) - $time_start;
            $items = $i;
            $rate = $items / $time;
            echo number_format($rate, 2) . "\n";
        }
    }
    ?>
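If you'd rather replicate this without PHP, here's a rough equivalent sketch in Python using a raw socket and the memcached ASCII text protocol. The host, port, and key pattern mirror the PHP above; the loop size and report interval are just illustrative choices, and it assumes every key is a miss (so each response is a bare END line):

```python
import socket
import time

def build_get(key):
    # memcached ASCII protocol: a GET is just "get <key>\r\n";
    # a cache miss comes back as a bare "END\r\n" line
    return "get {}\r\n".format(key).encode()

def bench(host, port, n=100000, report_every=1000):
    # Sequential, blocking requests, mirroring the PHP loop: each
    # iteration waits for a full round trip, so reqs/sec ~= 1 / RTT.
    sock = socket.create_connection((host, port))
    start = time.monotonic()
    for i in range(1, n + 1):
        sock.sendall(build_get("d.{}aaaaaaa".format(i)))
        buf = b""
        while not buf.endswith(b"END\r\n"):
            buf += sock.recv(4096)
        if i % report_every == 0:
            print("{:.2f}".format(i / (time.monotonic() - start)))
    sock.close()

# bench("111.11.11.1111", 11211)  # replace with your memcache box's IP
```

Because it reuses one TCP connection for the whole run, this also rules out per-connection setup cost as the bottleneck.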
======
RyanGWU82
I run dozens of memcache servers and hundreds of app servers in EC2, and we
haven't seen any evidence of connection throttling. Isn't this just network latency?

If your PHP code can make 300 requests per second, then each request takes
about 3.33ms. That's actually quite good if it's going over the public
Internet to another data center. If you're getting ~10 requests/sec across
the country, that's about 100ms per request, which again is typical of cross-
country latency.

If you need response times faster than that, you need a direct network
connection between your app server and your memcache server, and you
definitely need to avoid transcontinental network hops.
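The arithmetic here is easy to check: with one blocking request in flight at a time, throughput is just the reciprocal of the per-request round-trip time. A quick sketch using the numbers from this thread:

```python
def implied_rtt_ms(reqs_per_sec):
    # sequential blocking requests: reqs/sec = 1000 ms / RTT_ms,
    # so the observed rate implies the round-trip time directly
    return 1000.0 / reqs_per_sec

print(implied_rtt_ms(300))  # ~3.33 ms, plausible between nearby data centers
print(implied_rtt_ms(10))   # 100 ms, typical coast-to-coast round trip
```

The scp result isn't a counterexample: bulk transfer over one connection is bandwidth-bound, not round-trip-bound, so it can be fast even when each individual round trip is slow.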

------
andrewgross
Switch memcache to UDP for testing and see what it looks like; I'm curious
whether TCP connection setup is causing the issues.
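One thing to know before trying this: memcached's UDP mode wraps each datagram in an 8-byte frame header (request ID, sequence number, datagram count, reserved field) ahead of the usual ASCII command. A sketch of building such a frame, assuming the whole request fits in one datagram:

```python
import struct

def udp_frame(request_id, payload):
    # memcached UDP frame header: request id, sequence number,
    # total datagrams in this message, reserved (always 0) --
    # four 16-bit big-endian fields, then the usual ASCII command
    header = struct.pack("!HHHH", request_id, 0, 1, 0)
    return header + payload

# datagram = udp_frame(1, b"get somekey\r\n")
# then sock.sendto(datagram, (host, 11211)) over a SOCK_DGRAM socket
```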

