
16 TB and 20,000 IOPS EBS Volumes - scottilee
https://aws.amazon.com/blogs/aws/now-available-16-tb-and-20000-iops-elastic-block-store-ebs-volumes/
======
mrmondo
Still slow on the scale of things these days. I just provisioned a few servers
with over 500,000 IOPS / 2,000 MB/s read and write each, 100% SSD with 3-10
year warranties, and they use bugger all power. Very low running cost and
maintenance overhead, and they cost less than 8k a unit (1U chassis, redundant
power, 32GB RAM, 2x 6-core Xeon v3) - and I can guarantee the performance is
consistent and there when we need it.

I'm all for outsourcing hardware hosting ('cloud') to save costs and to allow
for quick provisioning of new instances - but when you need raw power, and in
cases where it's inefficient to scale horizontally, the latest generation of
PCIe NVMe SSDs are really very impressive. In a recent evaluation we performed
of our storage, it was actually going to work out significantly cheaper to
a) host our high-speed storage ourselves and b) buy SSDs and do away with
rotational drives.

~~~
tgeek
Can you completely snapshot those volumes at any time, recreate them and
attach them to new servers? Could you take those snapshots and easily copy
them around the world (again, assuming you could snapshot)? Are those SSDs
automatically replicated to two different storage devices behind the scenes to
give you near-instant failover? When they go boom, are you then driving out to
the datacenter to replace them (assuming you have replacements and don't need
to wait for them to arrive)? Can you do all of this without any upfront cost
or excess capacity?

Probably not. At all.

EBS is NOT hard disks inside a server. Comparing them as such misses all the
things that make it a SERVICE and not disks you buy from
Newegg/PCmall/<insert vendor here>. Yes, there are disks you can buy to
physically put in a server, and they are super blazing fast. In fact, AWS has
those in their i2 instances, and they get hundreds of thousands of IOPS as
well.

This isn't even comparing apples to oranges, it's apples to space monkeys.

~~~
mrmondo
Yes, we can and do snapshot them - at several levels, actually. I don't think
that's a particularly hard thing to do, so I'm not sure why it's relevant.

Yes, there is replication both to separate disk arrays AND separate physical
servers, with live failover and load balancing - again, nothing new here?

No, we don't send our storage to other countries - in fact that would be
illegal, and if we were to do so our clients would suffer, as Australia's
international peering is pretty woeful.

We also gain on-disk compression and encryption on a LUN-by-LUN basis as we
require it, storage is automatically provisioned to new application instances,
all the software is 100% open source and mature, we don't have to phone a
large corporation that doesn't really care about us, and we pass security
audits because we can prove where things are and how they're configured.

By the way, none of this is the 'Newegg' gear you referenced - we use Intel
DC P3600/P3700 PCIe storage. Oh, and as a bonus, there's no licensing and no
monthly invoices that need attention.

Is shared hosting / hardware outsourcing / cloud computing amazing - yes! Of
course it is!

But you must remember it is their intention to sell their product as the only
right answer and to tell you what you should care about. In some cases that
applies and in some it doesn't. The danger in jumping on the bandwagon and
becoming an Amazon 'fanboy' (I'm really sorry for using that term - I hate it)
is that you quickly become siloed from external opportunities and from
security / high-vertical-performance solutions.

If I were in a small team of devs working on launching a web app targeted at
an international audience, with highly unpredictable growth, an uncertain
future, and a skill set focused on developing great software - I wouldn't
think twice about using AWS/Rackspace etc...

But when you understand your environment well, when you have a limited budget,
when you have a predictable customer base with strict security requirements,
and when you're pushing databases pretty hard - would I use AWS? No, it's not
cost effective for us, nor is it legally (and perhaps morally) viable. Do we
waste lots of time looking after our hardware? No! It's 2015 - hardware is
_easy_.

~~~
tgeek
You say LUN, are these SAN devices, or is it direct attached storage? Was the
replication, load balancing, and snapshotting all something that you set up
and manage yourselves?

\--edit-- Ahh, you've been editing your comments, so the thread is a bit out
of whack! (no problemo)

Newegg:
[http://www.newegg.com/Product/Product.aspx?Item=N82E16820167...](http://www.newegg.com/Product/Product.aspx?Item=N82E16820167241)
;) (yes, it's not the same as some of the much higher-end stuff).

Fair enough, but again, your comment is about hardware that you are managing,
that you've built, that's glued together from a lot of different components,
both software and hardware, and this post is about a cloud service that
doesn't even compare. So your initial post comes off a bit as trolling for the
sake of trolling.

I've done my fair share of rack-n-stack, and I've now spent the past few years
"in the clouds" as it were. I wouldn't go back for anything, but I don't think
this makes me a fanboy. Sure, there is kit that you'd only ever be able to
build/buy yourself (for now at least), but most people will never need more
than 100k IOPS, let alone 500k+.

\--edit again-- In regards to security: if you think you are capable of
running more secure infrastructure in a datacenter yourself than on one of the
three major cloud providers' platforms (AWS, GOOG, MSFT), where they have some
of the best security teams in the world, then you are probably not as deeply
aware of what's possible in the cloud from a security standpoint. Banks,
medical institutions, government agencies, and so forth are all trusting their
infrastructure to the cloud, across many countries in the world.

~~~
mrmondo
Yeah, sorry - I didn't want it to end up sounding like a threaded argument,
and I was sort of brain-dumping as I went.

Hardware-wise, we use standard servers (Supermicro), packed with several
tiers of SSDs (Intel at the high end, SanDisk at the lower end).

Software wise, again all off the shelf, well understood tools: Debian Linux,
DRBD, iSCSI, LACP, LVM, Puppet.

Our compute servers are blades with Debian VMs running Docker containers of
our applications.

Edit: something we've gained greatly from that isn't off the shelf is that we
moved to running very modern Linux kernels - we have CI builds triggered as
new stable versions are released, and they are stock standard except that we
patch them with GRSecurity and ensure SELinux is enforcing.

All this doesn't cost much time to manage at all - we don't even have a
storage admin, and to be honest, if we needed one we'd be doing something
wrong. Apart from physical failure (which is very rare these days), there
really isn't anything to do with storage - it's almost boring!

~~~
mrmondo
I actually have to get some sleep now - it's after 1AM here in Australia. I
wanted to stress that I'm absolutely not against using cloud-hosted services -
just that they're not the answer to all situations, and there's a lot to be
gained from ensuring you don't get sucked in too much by the 'spin' that
vendors provide.

~~~
tgeek
np! At the end of the day, we all hate on-call.

cheers.

~~~
mrmondo
I used to, but we rarely get a single alert out of hours these days - if we
do, we're probably doing something wrong.

------
yetihehe
At OktaWave[1], 20k IOPS with 2 GB/s bandwidth is only the second tier, and
you can have up to 200k IOPS with 2 GB/s bandwidth. The only catch is that a
single volume maxes out at 300GB, but you can RAID0 them.

[1]
[https://www.oktawave.com/pricing.html](https://www.oktawave.com/pricing.html)
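As a back-of-the-envelope sketch (using the tier figures quoted above, and ignoring real-world striping overhead - the helper name is mine), RAID0 across several such volumes scales capacity and IOPS roughly linearly:

```python
def raid0_aggregate(n_volumes, vol_size_gb=300, vol_iops=200_000):
    """Idealized RAID0 totals: capacity and IOPS both scale with volume count."""
    return n_volumes * vol_size_gb, n_volumes * vol_iops

# Four 300 GB / 200k IOPS volumes striped together:
size_gb, iops = raid0_aggregate(4)
print(size_gb, iops)  # 1200 800000
```

The trade-off, of course, is durability: a single failed member takes down the whole RAID0 array.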

------
yeukhon
Did the IOPS (and, better yet, enhanced IOPS) help any big user? DataStax and
Elasticsearch Inc. recommend striping instance stores (doing RAID0), which in
the cloud is not the worst idea as long as you have multiple nodes and a
properly designed ring. That being said, you then lose the ability to create
snapshots because the volumes are not EBS. From my experience the IOPS isn't
very stable.

~~~
ceejayoz
Provisioned IOPS are quite stable, in my experience. That's their entire
point.

~~~
mrmondo
But with AWS' provisioned tiers you're kind of losing out on your cost
savings...

------
OneOneOneOne
How do they define 20,000 IOPS?

~~~
cthalupa
[http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-io-characteristics.html](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-io-characteristics.html)

"Amazon EBS measures each I/O operation per second (that is 256 KB or smaller)
as one IOPS"
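Under that definition, the IOPS cost of a request depends on its size - anything up to 256 KB counts as one, and larger I/Os are counted as multiple units. A minimal sketch of that accounting (the 256 KB cutoff is from the docs quoted above; the helper name is my own, and real EBS I/O merging/splitting is more involved):

```python
import math

EBS_IO_UNIT = 256 * 1024  # bytes; I/O ops at or below this size count as one IOPS

def iops_consumed(request_bytes):
    """How many IOPS a single I/O request of the given size is counted as."""
    return max(1, math.ceil(request_bytes / EBS_IO_UNIT))

print(iops_consumed(4 * 1024))     # 1 - a small random read
print(iops_consumed(256 * 1024))   # 1 - right at the cutoff
print(iops_consumed(1024 * 1024))  # 4 - a 1 MB request
```

So 20,000 IOPS of full 256 KB operations would be about 5 GB/s in theory, though EBS also enforces separate, much lower throughput caps per volume.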

------
sp332
That's within an order of magnitude of RAM. You're getting 16TB of RAM!

~~~
vidarh
I think you severely underestimate RAM bandwidth on modern servers.

~~~
sp332
OK, maybe not top-of-the-line RAM, but some servers do have RAM that slow!

~~~
ak217
PCIe- or SATA-attached SSD random access latency is around 0.2 ms. Typical RAM
latency is 100 ns, maybe 200 ns for a NUMA cross-node access. That's a
difference of 3 orders of magnitude. Add another order of magnitude for
network-attached SSD.

Bandwidth-wise, a single DDR3 channel delivers around 10 GB/s (and a typical
server has 4 to 8 of them). A single half-duplex 10GE link (the most you can
provision and effectively use on EC2) is 500 MB/s. So, generally, 1 to 2
orders of magnitude.
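Plugging in the rough figures from this comment (all approximate), the gaps work out as:

```python
import math

ram_latency_s = 100e-9    # typical RAM access, ~100 ns
ssd_latency_s = 0.2e-3    # PCIe/SATA SSD random access, ~0.2 ms
ram_bw_Bps    = 4 * 10e9  # four DDR3 channels at ~10 GB/s each
net_bw_Bps    = 500e6     # usable half-duplex 10GE, ~500 MB/s

lat_gap = math.log10(ssd_latency_s / ram_latency_s)
bw_gap  = math.log10(ram_bw_Bps / net_bw_Bps)

print(f"latency gap:   ~{lat_gap:.1f} orders of magnitude")  # ~3.3
print(f"bandwidth gap: ~{bw_gap:.1f} orders of magnitude")   # ~1.9
```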

~~~
sp332
OK, the latency is high. But I was seeing numbers in the 300,000 IOPS range
for ramdisks.

~~~
ak217
A naive guess is that most of that is filesystem overhead. Were you using
tmpfs?

~~~
sp332
It wasn't my benchmark, but that's my guess too.

