
Low Cost EC2 Instances With Burstable Performance - jeffbarr
http://aws.amazon.com/blogs/aws/low-cost-burstable-ec2-instances/
======
growt
Maybe I did something wrong with the setup, but the disks (type gp2) are really
slow compared with Linode and DigitalOcean:

    
    
      ubuntu@aws:~$ dd bs=1M count=1024 if=/dev/zero of=test   conv=fdatasync
      1024+0 records in
      1024+0 records out
      1073741824 bytes (1.1 GB) copied, 23.1199 s, 46.4 MB/s
      ubuntu@aws:~$ sudo hdparm -tT /dev/disk/by-label/cloudimg-rootfs 
    
      /dev/disk/by-label/cloudimg-rootfs:
       Timing cached reads:   23292 MB in  1.99 seconds = 11704.62 MB/sec
       Timing buffered disk reads: 232 MB in  3.02 seconds =  76.70 MB/sec

~~~
api
EC2 is generally very expensive for CPU. RAM and storage are okay but CPU is
crazy.

Does anyone know what recommends EC2 over Digital Ocean, Vultr, Linode, etc.? Are
they more reliable? Enterprise features? Network bandwidth? Because right now
they look hugely overpriced.

I've hosted on Digital Ocean and Vultr for some time and my uptime is great on
both. I run constant ping testing and I do see little glitches from time to
time between data centers, but that could be network weather on the global
backbone. (I have a geo-distributed architecture so there's stuff running at
five different locations.)

~~~
personZ
_EC2 is generally very expensive for CPU. RAM and storage are okay but CPU is
crazy._

I'm leaning towards becoming an EC2 apologist on here, but I just ran a quick
benchmark on a t2.micro versus both the $5 and $10 Droplets:

    sysbench --test=cpu --cpu-max-prime=40000 run

$5 Droplet ("2.0GHz", bogomips 4000) - 99.4981s

$10 Droplet ("2.4GHz", bogomips 4800) - 88.3740s

(I can't find any actual documentation detailing the $10 option being faster,
so perhaps this is just random luck on instantiation.)

t2.micro ("2.5GHz", bogomips 5000) - 69.5248s

Now of course the t2.micro won't let you run that around the clock, which for
many workloads is entirely fine: for a standard blog host and the like, and for
the overwhelming majority of server deployments, bursty CPU is exactly what the
natural workload looks like.
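
The "can't run that around the clock" ceiling can be sketched with the credit figures from the launch post (a t2.micro earns 6 CPU credits per hour, one credit buys one minute of a full core, and the balance caps at 24 hours of accrual); treat these constants as quoted figures, not measurements:

```python
# CPU-credit arithmetic for a t2.micro, using the launch post's figures.
EARN_PER_MIN = 6 / 60      # 0.1 credit earned per minute
SPEND_PER_MIN = 1.0        # 1 credit spent per minute at 100% CPU
MAX_BALANCE = 24 * 6       # balance caps at 24h of accrual = 144 credits

# Bursting flat out drains the balance at spend-minus-earn:
drain = SPEND_PER_MIN - EARN_PER_MIN      # 0.9 credits/minute
full_burst_minutes = MAX_BALANCE / drain  # ~160 minutes from a full tank

# Long-run sustainable CPU is just the earn/spend ratio:
baseline = EARN_PER_MIN / SPEND_PER_MIN   # 0.10, i.e. the advertised 10%

print(round(full_burst_minutes), f"{baseline:.0%}")
```

So a full balance buys roughly two and a half hours flat out, after which the instance settles to its 10% baseline.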

Adding a comparison of the cpuinfo for each:

Both Droplets (identical cpuinfo flags):

    flags : fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pse36
    clflush mmx fxsr sse sse2 syscall nx lm rep_good nopl pni vmx cx16 popcnt
    hypervisor lahf_lm

Amazon t2.micro:

    flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat
    pse36 clflush mmx fxsr sse sse2 ht syscall nx rdtscp lm constant_tsc rep_good
    nopl xtopology eagerfpu pni pclmulqdq ssse3 cx16 pcid sse4_1 sse4_2 x2apic
    popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm
    xsaveopt fsgsbase smep erms

Of particular relevance is that the Amazon instance (E5-2670) exposes SSE4 and
AVX to your VM, which for many workloads could dramatically increase its
advantage.

I guess the whole point of this is that the vague CPU terminology the various
cloud vendors use is seldom really comparable. However, to your core question:
Amazon becomes a value proposition when you are using all of the parts -- S3,
load balancers, elastic IPs, shared volumes, availability zones, security
groups, VPCs, private networks... each piece multiplies the value of the
platform.

~~~
personZ
As a quick addition to this, the m3.medium -- running on the same processor
but governed differently -- takes 160 seconds to run the same benchmark (after
repeated runs).

Amazon used to promote their instances via the somewhat comparable ECU metric.
Now, however, unless I'm missing something, you have to work it out from the
narrative descriptions, because 1 vCPU on one instance type is very much not
equal to 1 vCPU on another.

~~~
justizin
They still use ECU, but I'm not sure it's comparable across generations of
instances, e.g. an m3 with more ECU than an older m1 instance of similar size
seems at times to be slower.

------
jasonkester
Any thoughts on how this compares to simply spinning up a standard instance
for a few hours then turning it off when you don't need it?

I run a service that needs about 72 hours' worth of processing each day, and it
all needs to happen during a 3-hour window. That's a natural fit for spinning
up a couple dozen instances and killing them when they finish.

I'd love to see a comparison of what would happen if I kept the same amount of
compute power on standby 24/7 using this new instance type.
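
One way to frame that comparison is to cost out a day of the workload each way. All rates below are assumptions (roughly 2014-era list prices: ~$0.070/hr for an on-demand m3.medium, ~$0.052/hr for a t2.medium with a ~40% CPU baseline), not quotes from the post:

```python
# Back-of-envelope: 72 full-speed instance-hours/day in a 3-hour window,
# done with spiky on-demand instances vs. always-on t2 capacity.
ONDEMAND_RATE = 0.070   # $/hr, m3.medium (assumed)
T2_RATE = 0.052         # $/hr, t2.medium (assumed)
T2_BASELINE = 0.40      # fraction of a core a t2.medium can sustain (assumed)

work_hours = 72.0       # full-speed instance-hours needed per day

# Spin up a couple dozen instances, run the 3-hour window, kill them:
spiky_cost = work_hours * ONDEMAND_RATE       # ~$5.04/day

# Keep enough t2s running 24/7 to grind out the same work at baseline:
t2_instances = work_hours / T2_BASELINE / 24  # 7.5 instances around the clock
always_on_cost = t2_instances * 24 * T2_RATE  # ~$9.36/day

print(round(spiky_cost, 2), round(always_on_cost, 2))
```

Under these assumptions the spin-up-and-kill pattern stays cheaper for a dense 3-hour burst; the t2 pricing pays off when the same hours are thinly spread across the day rather than concentrated.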

~~~
6cxs2hd6
It seems like this fits two needs, for smaller companies and/or people just
getting started with EC2.

1\. Laziness. Which I don't necessarily mean in a pejorative sense. Maybe
someone just doesn't have time, yet, to learn/configure/maintain spinning up
an instance for limited times.

2\. Single instance. To spin up an instance, you need _another_ computer. If
you want that "manager" computer to be an instance at EC2, too, now you need
two instances. With this approach, you can set up just one instance and get
much of the same economic benefit.

EDIT: Also...

3\. Predictable cost. If your manually spun-up instance turns out to need to run
for 4 hours instead of 2, you get a bigger bill. With the t2 instances, you'll
get slower compute (if you run out of "credits") but not a bigger bill.

Again, this probably appeals most to small/new customers?

~~~
IanCal
> Single instance. To spin up an instance, you need another computer.

I think you can do this with CloudFormation, having it respond to the size of
a work queue. However:

> Maybe someone just doesn't have time, yet, to learn/configure/maintain
> spinning up an instance for limited times.

This is why I can't answer the question above for certain; I got about that
far into the documentation and went off to find a simpler solution (for me,
tutum: [https://www.tutum.co/](https://www.tutum.co/) )

------
zrail
Super interesting. If I did the math right, a 3-year heavy reserved t2.micro
instance comes out to $4.48/mo, which is cost-competitive with Digital Ocean.
The proof will come in the benchmarks, but this may become my preferred
hosting solution.

~~~
keytomouse
It's $77 for 1 year reserved if I'm reading it correctly. That's $6.44 per
month for an instance with double the RAM of the DO $5 instance. The specs
look like the size that DO is charging $10 for currently. For a 3 year
reserved instance it's $4.48 a month for double the size of the DO $5
instance. There's also a free tier, so the first year is free to try it out.

DO was competitive with EC2 on price but not on features (and certainly not on
security), now with the price advantage gone...

EDIT: corrected calculation
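
Running the per-month arithmetic on the thread's numbers (the $77 one-year total and the $4.48/mo three-year figure are taken from the comments above, not verified against current pricing):

```python
one_year_total = 77.00     # quoted 1-year reserved total for a t2.micro
three_year_monthly = 4.48  # quoted effective monthly cost, 3-year reserved

print(round(one_year_total / 12, 2))      # per-month cost of the 1-year option
print(round(three_year_monthly * 36, 2))  # total outlay over the 3-year term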

~~~
adamors
The price of an EC2 instance doesn't include data transfer though. For the $5
DO instance you get 1TB of traffic for free.

The price advantage is definitely not gone.

~~~
keytomouse
How many customers use more than 1GB of outbound traffic per month for a $5
server? Data transfer in is free on EC2 and the first 1GB outbound is free too
according to the pricing page.

~~~
jlawer
Isn't the first GB of your account traffic free with AWS vs DO giving you 1 TB
per droplet?

While 1 EC2 instance may not use more then 1 GB (which is a very low quota
unless your CDNing everything), if you have a couple of instances your almost
certainly going over that.
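
For scale, matching DO's bundled 1 TB on EC2 at an assumed egress rate of $0.12/GB (roughly the first-tier price at the time; treat the rate as an assumption) looks like this:

```python
EGRESS_RATE = 0.12  # $/GB out to the internet, assumed first-tier EC2 price
FREE_GB = 1         # first GB out per month is free, per the pricing page

def egress_cost(gb_out):
    """Monthly egress bill for a given amount of outbound traffic."""
    return max(gb_out - FREE_GB, 0) * EGRESS_RATE

print(round(egress_cost(1024), 2))  # a DO-style 1 TB of outbound traffic
```

Pushing the full 1 TB would add on the order of $120/month, so the bundled transfer matters a lot for traffic-heavy $5 servers and very little for quiet ones.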

------
bowlofpetunias
> The T2 instances use Hardware Virtualization (HVM) in order to get the best
> possible performance from the underlying CPU and you will need to use an HVM
> AMI.

I've always used paravirtual AMIs, as I understood that gets the best
performance for a Linux box.

Given that I try to use the same self-baked base AMIs for various purposes
(and instance sizes), I would either have to mix and match or switch
everything to HVM. However, I have no clue what the practical consequences of
that would be.

Can anybody enlighten me?

~~~
caw
HVM gives the best performance because you can take advantage of certain
hardware features through the hypervisor. It's basically more direct access to
the hardware, which makes it faster as you don't have as much hypervisor
overhead. Amazon's "enhanced" networking and SSDs need HVM to get a good chunk
of performance.

Yes, you'd have to build new AMIs with HVM. It'd be easiest if you had some
kind of configuration management so you didn't need as many AMIs baked. When I
build machines I use a script to handle creating and mounting any extra
volumes I treat as "nonstandard". I have only 2 custom AMIs - one for PV and
the other for HVM. You'll need at least both, because certain instance types
(t1.micro and m1.small come to mind) can only use PV.

------
gfunk911
This seems really amazing. This workload pattern matches almost all of my
small projects.

------
sdfjkl
This looks like it's a reaction to (and effective solution for) the problem
with t1 instances that made them largely useless (or a gamble at best) due to
sharing a CPU with instances that run at full load all the time.

------
Andys
Also known as Amazon Droplets

------
Vieira
This looks like a nice way to experiment with CoreOS, as it is not supported
by DO or Linode but an AMI is available.

------
philsnow
> This deceleration process takes place over the course of a 15 minute
> interval in order to provide a smooth and pleasant experience for your
> users.

The thrashing will increase gradually until user experience is pleasant.

------
Nakatomi_Plaza
Any recommendations for software builds? I usually go with c3.4xlarge for
building Android platforms but wondering if there are alternatives out there.

------
aurelianito
I thought that cost was dominated by memory utilization instead of CPU
utilization. How can AWS manage to pull this off?

~~~
jewel
That was my understanding too. At $9.50/mo for a 1 GB instance, a server with
96GB of RAM would bring in $912/mo.

A quick click around Dell finds that a mid-range 1U rackmount server (R320)
with that much RAM costs $3,135.

So a back-of-the-envelope calculation makes it seem workable, especially for
high-RAM low-CPU configurations, which is what this is.

There are other tricks that they might be employing, such as swapping out part
of RAM to SSDs behind the scenes, as well as compressing RAM contents. On low-
load servers like these, typical usage would imply that RAM would be mostly
static.
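
The back-of-envelope above, using the thread's own figures ($9.50/mo for a 1 GB instance, a $3,135 Dell R320 with 96 GB), comes out like this; it deliberately ignores power, bandwidth, staff, and the fact that the box won't be packed full:

```python
monthly_per_instance = 9.50  # $/mo for a ~1 GB t2.micro (thread's figure)
server_ram_gb = 96           # RAM in the quoted Dell R320
server_cost = 3135.0         # quoted hardware price

monthly_revenue = server_ram_gb * monthly_per_instance  # $912/mo fully packed
payback_months = server_cost / monthly_revenue          # hardware payback time

print(monthly_revenue, round(payback_months, 1))
```

The hardware pays for itself in under four months at full occupancy, which leaves plenty of margin for overhead; selling RAM while oversubscribing CPU is exactly the configuration that makes the numbers work.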

------
NhanH
What would be the recommendation for a cloud provider with high IO throughput
(as opposed to memory/CPU throughput)?

