
> There is no reason - beside manually managed infrastructure - to not decrease the instance size (number of machines or c3.xlarge to c3.large) if you realize that your EC2 instances are underutilized.

CPU/memory aren't the only measures of underutilization. If you need high instantaneous network throughput, the bandwidth available to your instance roughly increases with instance size. This applies to both EBS traffic and regular network traffic.

Table with Low/Medium/High: https://aws.amazon.com/ec2/instance-types/#instance-type-mat...

Example benchmark with c3 instances: http://blog.flux7.com/blogs/benchmarks/benchmarking-network-...

If you're more concerned with just EBS network throughput, check out the table on this page instead: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-ec2-...
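If you'd rather query this than eyeball the matrix, here's a minimal sketch using boto3's describe_instance_types call (a newer API than that Low/Medium/High table; assumes your AWS credentials and region are already configured, and that the older c3 sizes are still returned by the API):

    import boto3

    # Compare the advertised network and EBS performance of two instance sizes.
    ec2 = boto3.client("ec2", region_name="us-east-1")
    resp = ec2.describe_instance_types(InstanceTypes=["c3.large", "c3.xlarge"])

    for it in resp["InstanceTypes"]:
        net = it["NetworkInfo"]["NetworkPerformance"]  # e.g. "Moderate", "Up to 10 Gigabit"
        ebs = it.get("EbsInfo", {}).get("EbsOptimizedInfo", {})  # may be absent on older types
        print(it["InstanceType"], "| network:", net,
              "| EBS baseline MB/s:", ebs.get("BaselineThroughputInMBps", "n/a"))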




This is super important - I've run a number of 'oversized' instances just to get better networking. AFAIK none of the cloud providers offers a way to create a 'network'-optimized instance, though containerized deployments should help put the otherwise underutilized rest of the instance to work.


This was a big issue for me too. Google Compute Engine doesn't seem to throttle the way AWS does, or at least the cap is much higher. On an n1-standard-1 I routinely get 250+ MB/s, and near 1 GB/s for n1-standard-4 and larger. The only high-bandwidth AWS instances I've found are the few expensive ones that spec 10 Gbit.
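If anyone wants to reproduce that kind of number, a rough sketch: run iperf3 in server mode on one instance and point a client at its internal IP from another (the IP below is a placeholder):

    # On instance A:  iperf3 -s
    # On instance B, something like:
    import json, subprocess

    SERVER_IP = "10.0.0.42"  # placeholder: internal IP of the iperf3 server
    out = subprocess.run(
        ["iperf3", "-c", SERVER_IP, "-P", "4", "-t", "10", "-J"],
        capture_output=True, text=True, check=True).stdout

    gbits = json.loads(out)["end"]["sum_received"]["bits_per_second"] / 1e9
    print("measured throughput: %.2f Gbit/s" % gbits)

The 4 parallel streams matter because a single TCP stream often can't saturate a 10 Gbit link on its own.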


According to this page, you can get ~14 Gbit networking on Google Cloud's n1-standard-8:

http://googlecloudplatform.blogspot.com/2015/11/bringing-you...



