
New AWS I3 Instances for Demanding, I/O Intensive Applications - manigandham
https://aws.amazon.com/blogs/aws/now-available-i3-instances-for-demanding-io-intensive-applications/
======
snewman
Wow. This is a _massive_ price drop from the I2 instances... so massive that I
question whether I'm reading it correctly:

    
    
        i2.xlarge: 4 vCPU / 14 ECU, 30.5 GB RAM, 1 x 800 SSD, $0.853 per Hour
    
        i3.xlarge: 4 vCPU / 13 ECU, 30.5 GB RAM, 1 x 950 SSD, $0.312 per Hour
    

Slightly less CPU, slightly more SSD, 63% cheaper. Or for another comparison:

    
    
        r3.xlarge: 4 vCPU / 13 ECU, 30.5 GB RAM, 1 x  80 SSD, $0.333 per Hour
    

r3, unlike i2, is listed as "current generation", but also seems strictly
worse than i3.
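
For anyone else double-checking, a quick back-of-the-envelope in Python confirms the reading (hourly rates taken from the listings above):

```python
# Sanity-check the price drop using the hourly rates quoted above.
i2_xlarge = 0.853  # $/hr
i3_xlarge = 0.312  # $/hr
r3_xlarge = 0.333  # $/hr

drop_vs_i2 = (i2_xlarge - i3_xlarge) / i2_xlarge
drop_vs_r3 = (r3_xlarge - i3_xlarge) / r3_xlarge

print(f"i3 vs i2: {drop_vs_i2:.0%} cheaper")  # → i3 vs i2: 63% cheaper
print(f"i3 vs r3: {drop_vs_r3:.0%} cheaper")  # → i3 vs r3: 6% cheaper
```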

~~~
aisofteng
I interviewed with Amazon several months ago (and turned down the offer, so I
have no affiliation); as part of the casual conversation with the people
there, it was mentioned to me that the company hadn't offered instances
optimized for I/O up until that point due to other priorities, but had
recently been focusing on it with the intention of releasing something like
this soon.

I was under an NDA on the topic and so never mentioned it; now that it's been
released, I am free to say that this has been worked on for a while and that
these sorts of performance gains were not only expected but a goal.

(Disclaimer: I didn't take the job, and none of my work or personal projects
run on AWS.)

~~~
Johnny555
I don't think it takes any special NDA-protected knowledge to say that a new
instance type was "worked on for a while", given how rarely they release new
instance types (the i2 family was announced in Dec 2013). Likewise, the i*
family is their "I/O intensive workload" family, so again, it's clear that
gains in I/O performance were a goal when designing the i3s; the i2 family was
definitely showing its age, even compared to a modern laptop.

------
ilaksh
Once I found out about NVMe providing up to 2x-10x better performance than
regular SSDs, I immediately started wondering when Cloud/VPS providers would
offer something based on that technology. I think that before U.2 (or M.2?)
was a thing, they were stuck with really expensive PCIe cards. Now I am
guessing that something like U.2/M.2 NVMe subsystems can make the price more
practical.

But back in September I asked on the Linode forum and also on the Uservoice
for Digital Ocean about NVMe storage. Basically I got crickets on Digital
Ocean, and "don't be an idiot" from one Linode user.
[https://digitalocean.uservoice.com/forums/136585-digitalocea...](https://digitalocean.uservoice.com/forums/136585-digitalocean/suggestions/16344982-provide-
nvme-backed-storage)

[https://forum.linode.com/viewtopic.php?f=7&t=14058&sid=4a574...](https://forum.linode.com/viewtopic.php?f=7&t=14058&sid=4a5743fcbf1599232af2d5b20ea97a16)

But I think that as more and more people get computers with M.2 built in, they
will eventually start to wonder why their PC's disk is 2x-10x faster than
their server's disk.

There are a couple of little providers like UltraVPS, ControlVM, and Gignode
that at least say they have NVMe available. I have been thinking about
UltraVPS because it looks like a deal, though of course I don't know whether
it really performs or is reliable.

Basically, my point is that in my experience, faster IOPS and sequential
reads translate into less database tuning, or zero tuning, depending on the
application I am coding.

Just to throw another wrinkle in here: I feel that pretty soon the inexpensive
VPS/cloud servers will not only have to move to NVMe, but within a few years
the expectation may be to include some kind of GPU hardware as a standard
thing, simply because neural network workloads seem so important for so many
applications these days. Unless we are going full peer-to-peer internet or
something, I am going to need that on most servers going forward pretty soon.

------
nemothekid
Are these comparable to Google's Local NVMe SSD? On Google, the drives alone
are as expensive as these i3 instances.

375GB*3 drives on GCE will cost you 0.3431/hr over a month (incl. auto
discount), which is already more than an i3.xlarge, and you still have to pay
for CPU/RAM.
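
To spell out that comparison, here's a rough sketch using the hourly figures above; 730 hours is an assumed average month, not an official billing convention:

```python
# Compare GCE local-SSD storage cost alone against a full i3.xlarge instance.
gce_ssd_hourly = 0.3431   # $/hr for 3 x 375 GB local NVMe, incl. discount (figure above)
i3_xlarge_hourly = 0.312  # $/hr for the whole instance: CPU, RAM, and 950 GB NVMe

hours_per_month = 730  # assumed average month
print(f"GCE drives alone: ${gce_ssd_hourly * hours_per_month:.0f}/month")   # → GCE drives alone: $250/month
print(f"i3.xlarge total:  ${i3_xlarge_hourly * hours_per_month:.0f}/month")  # → i3.xlarge total:  $228/month
```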

~~~
boulos
Just from the IOPS listed in the text, they're about half the speed per drive
as ours (3.3M/8 => ~400K), but as before they let you stripe together _way_
more than we do. I'll be curious to see what happens with wear and garbage
collection... (we're pretty conservative; these drives come in power-of-two
sizes, and the resulting usable byte counts suggest they're being more
aggressive or optimistic).

Definitely an awesome update!

Disclosure: I work on Google Cloud.

------
manigandham
16 cores + 122GB + 3.8TB for less than $1k/month is impressive.

On GCP: 16 cores + 104GB + 8x375GB (for 3TB total) = $1470/month or
$1226/month with continuous use discount.
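
Those monthly figures are easy to reproduce. A rough sketch follows; the i3 number assumes the 16-core size is priced linearly at 4x the i3.xlarge rate quoted earlier in the thread, which is an assumption, not a published price:

```python
# Rough monthly cost for the 16-core i3 (16 vCPU / 122 GB / 3.8 TB NVMe).
# Assumes linear pricing: 4x the i3.xlarge rate of $0.312/hr (assumption).
i3_xlarge_hourly = 0.312
i3_16core_hourly = 4 * i3_xlarge_hourly  # $1.248/hr, assumed
hours_per_month = 730  # assumed average month

print(f"i3 16-core: ${i3_16core_hourly * hours_per_month:.0f}/month")  # → i3 16-core: $911/month
print("GCP (16 cores + 104 GB + 8 x 375 GB): $1470/month, or $1226/month with discount")
```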

~~~
brianwawok
Maybe less so when you do the math and find you can buy a similar machine for
something like $4-5K and use it for three or so years?

~~~
manigandham
The on-prem vs. cloud argument is pretty tired at this point. Obviously you should do
the math and figure out what's best for you, but that's not really related to
AWS releasing new VM types.

If you're running on AWS for whatever reason, i3 VMs are great news.

------
examancer
If only they would have waited for Ryzen :-P

------
jlgaddis
I'd like to know what kind of motherboards they are using that support up to 8
NVMe drives!

~~~
wmf
Nothing special; each Xeon socket can provide 5 PCIe x8 slots. Don't try this
in a 1U though.

~~~
jlgaddis
Yeah, sorry, brainfart. I have only ever used NVMe drives in M.2 so I
completely spaced that you can put 'em in PCIe slots too (which is probably
more popular, actually?).

Thinking about it, that (2 x NVMe in PCIe) will probably be the first upgrade
I do to my new workstation in the future. It looks like those only use four
lanes each and I've got enough still available (80 total) to support way more
of these drives than I expect to ever need on my desk.

 _ETA:_ PCIe v3.0 x16 == 15.75 GB/s. I expect we'll see, at some point, PCIe
cards/adapters that support up to 4 NVMe drives (in one slot). Hell, they
might exist already.
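
The lane math checks out: PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding, so a quick sketch gives the numbers above:

```python
# PCIe 3.0 bandwidth: 8 GT/s per lane, 128b/130b encoding -> usable GB/s.
gt_per_s = 8.0
encoding = 128 / 130                    # 128b/130b line-code overhead
gb_per_lane = gt_per_s * encoding / 8   # bits -> bytes

print(f"x16 slot: {16 * gb_per_lane:.2f} GB/s")  # → x16 slot: 15.75 GB/s
print(f"x4 drive: {4 * gb_per_lane:.2f} GB/s")   # → x4 drive: 3.94 GB/s
print(f"x4 drives on 80 lanes: {80 // 4}")       # → x4 drives on 80 lanes: 20
```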

~~~
wmf
Ask and you shall receive: [https://www.servethehome.com/the-
dell-4x-m-2-pcie-x16-versio...](https://www.servethehome.com/the-
dell-4x-m-2-pcie-x16-version-of-the-hp-z-turbo-quad-pro/)

------
mmontagna
Wow, if these run as fast as they say, one could replace tens of GCP machines
with a single i3.xlarge, or you could go with an i3.2xlarge or 4xlarge just
for giggles and still come in far under budget.

~~~
manigandham
Tens of machines? i3s are definitely dense in performance, but GCE can be
stacked pretty well too.

The biggest issue beyond pricing though is that GCE has a limited total disk
bandwidth regardless of the number of local SSDs attached.

~~~
boulos
We should probably make our documentation clearer. You should see fairly
linear scaling as you go from 1x375 up to 8x375. Is that not the case? (We can
only go to 8x375 today, but that's not bandwidth being capped "regardless" of
how many drives you attach; it's just a cap on how many drives you can
attach.)

Disclosure: I work on Google Cloud.

