
Highly Available Block Storage - dineshp2
https://www.digitalocean.com/features/storage/
======
wiremine
Spun one up and ran some quick numbers on a 100GB volume:

    root@ubuntu-1gb-nyc1-01:~# time dd if=/dev/disk/by-id/scsi-0DO_Volume_volume-nyc1-01 of=test.dat bs=1024 count=10000000
    10000000+0 records in
    10000000+0 records out
    10240000000 bytes (10 GB) copied, 58.0655 s, 176 MB/s

    real    0m58.248s
    user    0m2.608s
    sys     0m41.604s

Some quick observations:

* Easy to add one when creating a droplet; by default they let you create volumes with these sizes: 100GB, 250GB, 500GB, 1000GB, 1.95TB; it's also really easy to specify a custom size.

* You can resize in any increment; it took about 4 seconds to go from 100GB to 110GB with no downtime; you obviously need to resize/manage the filesystem on the mounted volume yourself.

* [Edit 1] Deleting the droplet does NOT destroy the volume. Worth keeping in mind when you spin them up/down.

* [Edit 2] Remounting an existing volume to a new droplet was quick and painless.
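The resize in the second bullet still needs a step inside the guest; a minimal sketch, assuming the volume carries an ext4 filesystem (the device path is the one from the dd run above):

```shell
# DigitalOcean volumes show up under /dev/disk/by-id/.
DEV=/dev/disk/by-id/scsi-0DO_Volume_volume-nyc1-01
# ext4 can be grown online, so no unmount is needed:
sudo resize2fs "$DEV"
# XFS users would grow via the mount point instead:
# sudo xfs_growfs /mnt/volume
```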

~~~
Thaxll
Don't use dd for those tests; it's really bad, especially on VMs.

~~~
pepr
What should be used for I/O throughput tests instead?

~~~
e1ven
I usually use Bonnie++ when testing disk performance.
[http://www.coker.com.au/bonnie++/](http://www.coker.com.au/bonnie++/)
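A typical invocation, for reference (the directory, size, and user are assumptions; the size should be well above RAM so the page cache can't serve the reads):

```shell
# -d: directory on the volume under test
# -s: working-file size in MiB (2 GiB here, assuming a 1 GB droplet)
# -n 0: skip the small-file creation tests
# -u: user to run as when invoked via sudo
sudo bonnie++ -d /mnt/volume -s 2048 -n 0 -u root
```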

~~~
notacoward
The only thing worse than dd is bonnie++. Please, folks, use fio, or at least
iozone, with multiple threads and/or a queue depth greater than one.
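One hedged example of the kind of fio run being suggested here: random 4K reads with multiple jobs and a deep queue (the file path and flag values are illustrative, not a recommendation):

```shell
fio --name=randread --filename=/mnt/volume/fio.dat --size=1G \
    --rw=randread --bs=4k --direct=1 --ioengine=libaio \
    --numjobs=4 --iodepth=32 --runtime=60 --time_based \
    --group_reporting
```

--direct=1 bypasses the page cache, and --iodepth=32 keeps the queue depth well above one, which is exactly the point being made about dd.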

~~~
onnoonno
Are you sure? Bonnie++ has byte-wise and block-wise tests. Yes, the byte-wise
tests are CPU-bound (as expected), but I have not seen that for the block-wise
tests on any machine so far.

------
bjacobel
Reminder that just a few weeks ago DigitalOcean rolled over on one of their
customers and took down 38,000 websites after receiving a claim of
infringement from the NRA against a parody site hosted on surge.sh:

[http://motherboard.vice.com/read/nra-complaint-takes-
down-38...](http://motherboard.vice.com/read/nra-complaint-takes-
down-38000-websites)

~~~
corobo
Reminder that you have to act on abuse notifications sharpish. You're
providing a service; it's on you if you ignore abuse notifications.

"We received notice on behalf of a trademark holder that a customer of
DigitalOcean was hosting infringing content on our network. DigitalOcean
immediately notified our customer of the infringement, and the customer was
given a five day period to resolve the issue. The infringing content was not
removed within the specified period even though several notifications were
issued. Per DigitalOcean’s terms of service, a final reminder was issued to
our customer and, when no action was taken, access to the content was
disabled. The infringing content was subsequently removed by the customer and
all services were restored in less than two hours."

~~~
Someone1234
That statement is problematic, namely this:

> The infringing content was not removed within the specified period even
> though several notifications were issued.

You don't have to remove content under the DMCA, you can also file a counter-
notice which gets the content host off the hook and then the matter goes to
court[0].

But that also assumes DMCA which, if memory serves, was not in play here. It
was a trademark complaint, which DigitalOcean has no responsibility to
resolve.

Ultimately DigitalOcean's response, even with that statement, seems at odds
with how the law is actually written. The other party also claimed they did
respond to DigitalOcean, they just never removed the legal parody material
which is their right.

DigitalOcean's understanding of the NRA's rights is more expansive than the
law itself. Effectively their trademark policy is to automatically side with
the trademark holder, irrespective of fair use[1] (see page 9+).

[0]
[https://en.wikipedia.org/wiki/Digital_Millennium_Copyright_A...](https://en.wikipedia.org/wiki/Digital_Millennium_Copyright_Act#Title_II:_Online_Copyright_Infringement_Liability_Limitation_Act)

[1]
[https://apps.americanbar.org/litigation/committees/intellect...](https://apps.americanbar.org/litigation/committees/intellectual/roundtables/0506_outline.pdf)

~~~
franey
It's against DO's terms of service to "use the Services in violation of the
copyrights, trademarks, patents or trade secrets of third parties", which
appears to be the issue here.

[https://www.digitalocean.com/legal/terms/](https://www.digitalocean.com/legal/terms/)

~~~
kbenson
> Effectively their trademark policy is to automatically side with the
> trademark holder, irrespective of fair use[1] (see page 9+).

Did you miss that? It's not infringement if it falls under fair use. They were
not following their TOS, because they did not confirm the content was actually
infringing.

~~~
fweespeech
Expecting a hosting provider to wade into fair use waters hand-in-hand with
you is generally unwise.

The vast majority will not unless you are a large customer with your own legal
staff on retainer to provide the appropriate legalese/notices/etc.

~~~
kbenson
I don't expect that, but I do expect that a company not immediately kowtow to
an infringement request if there is some ambiguity as to whether it's
infringing.

That said, the original reporting on this and the statement from DigitalOcean
are at odds (Motherboard's update with DO's statement), and since I haven't
verified either, I'll retract any specific support for either side of this
particular instance.

~~~
fweespeech
> I don't expect that, but I do expect that a company not immediately kowtow
> to an infringement request if there is some ambiguity as to whether it's
> infringing.

That requires money for a lawyer to evaluate it. If the customer has their own
legal staff that does this and relays that opinion to the host, as well as
being large enough to cover any legal costs DO might incur, DO would be fine
with it.

You are basically saying you are entitled to using DO's legal staff and
financial resources in addition to the hosting you've paid for.

~~~
kbenson
> You are basically saying you are entitled to using DO's legal staff and
> financial resources in addition to the hosting you've paid for.

No, what I'm saying is that DO _must_ already do this to some degree if they
are handling requests, as otherwise I could send letters claiming
trademark/copyright infringement for any number of things and get many
customers shut down. If they have internal guidelines for what they do in
cases of alleged trademark/copyright infringement, I expect them to follow those. I
also expect that those policies do the minimum legally required of them.
That's not because it's cheaper and garners good will from customers (it
does), but because to do otherwise is taking sides in a legal situation
without being an appointed arbiter of the law. Not only is this excessive, but
it's anti-customer.

If DO is doing what they think they must by law, I have no problem with that,
as long as that is clearly explained. In the case we were previously talking
about, the statement from DO (at the motherboard article) is somewhat
ambiguous as to why they did what they did. _Per DigitalOcean’s terms of
service, a final reminder was issued to our customer and, when no action was
taken, access to the content was disabled._ Was the take down required by law,
or was DO overly aggressive in handling it? Without a statement as to why
(especially given some people's assertion that they went beyond what was
legally required of them), their reasoning is ambiguous and harder to call
into question. If they clearly state that they enforced their TOS based on
what they believe is legally required of them, then we can look at the law
and their actions and evaluate whether that's true; if it's not, DO can
learn from the experience or be called out as a company that is capricious in
its execution of the law.

What it boils down to is that "We received an infringement complaint. We
enforced our TOS and shut down access to the content in question." leaves a
lot open for assumption. I would be much happier if it was "We received an
infringement complaint _and, as we believe is legally required of us,_ we
enforced our TOS and shut down access to the content in question." It's a
small change, but it allows customers (and critics) a much clearer view on how
DO handles situations like this, and allows for the public to make an informed
choice on whether they think DO was correct in their actions (whether they
really were legally required to do so). It's subtle, but I think it's a very,
_very_ important distinction.

------
dastbe
Don't be confused: the article makes the mistake of comparing DO's new block
storage service with other companies' object stores. EBS is the competitor to
this, not S3. Same for GCE persistent disks and Azure drives.

Unfortunately this means the pricing comparison is just wrong.

~~~
priteshjain
Ohh

------
mwcampbell
I think this might be a mistake. Ever since Joyent's commentary on one of the
big Amazon EBS failures in 2011 [1] [2] [3], I've been suspicious of all
network-attached block storage. Then again, I haven't heard of any big EBS
failures recently; I wonder what changed.

[1]: [https://www.joyent.com/blog/on-cascading-failures-and-
amazon...](https://www.joyent.com/blog/on-cascading-failures-and-amazons-
elastic-block-store)

[2]: [https://www.joyent.com/blog/magical-block-store-when-
abstrac...](https://www.joyent.com/blog/magical-block-store-when-abstractions-
fail-us)

[3]: [https://www.joyent.com/blog/network-storage-in-the-cloud-
del...](https://www.joyent.com/blog/network-storage-in-the-cloud-delicious-
but-deadly)

~~~
boulos
Network block storage isn't inherently broken; the initial EBS implementation
was frankly just unreliable.

We've not had anything like those dark days with Persistent Disk. It's still
true that having your storage across the network opens you to _networking_
failures taking out your storage, but the gain in durability and maintenance
pays for it (in our case, live migration would just be crazy with local
spinning disks; we tried it and it didn't work).

Disclaimer: I work on GCE, and we want your business ;)

~~~
ngrilly
> in our case, live migration would just be crazy with local spinning disks;
> we tried it and it didn't work

It looks like Exoscale does live migrations with locally attached SSDs.

~~~
regularfry
Then they'll run into the same problem anyone does doing that: migrating
reasonably-sized block devices across a reasonable network takes an
unreasonable length of time.

Been there, done that, don't want to sit staring at consoles waiting for live
disc migrations ever again.

~~~
ngrilly
My guess is that they live migrate VMs only before a planned maintenance.
Let's say they use a 10 Gbps network, and they use only 1 Gbps of bandwidth
for migrating data, then migrating a 200 GB disk would take something like 1
hour, which sounds OK. Can you share some details about your experience?
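The arithmetic behind that estimate (numbers from the comment: a 200 GB disk over 1 Gbps reserved for migration traffic):

```shell
DISK_GB=200
LINK_GBPS=1
# 1 GB = 8 gigabits, so raw transfer seconds = GB * 8 / Gbps
XFER_S=$(awk -v gb="$DISK_GB" -v rate="$LINK_GBPS" 'BEGIN { printf "%d", gb * 8 / rate }')
echo "raw copy: ${XFER_S}s (~$((XFER_S / 60)) min)"
```

So the raw copy alone is under half an hour; the "1 hour" figure leaves headroom for re-copying blocks dirtied while the migration runs.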

~~~
regularfry
You're missing a subtlety: it's a _live_ migration. You've also got to migrate
the data the VM's writing while the migration is going on. Depending on how
your network's set up, it might well be possible for the VM to saturate out
the migration process. Plus this won't only be planned maintenance, it'll be
to get off dodgy hardware, too, including spannered RAID controllers where you
_really_ don't want to risk hanging around. Add in that you're likely going to
be moving sets of discs at once (so possibly a couple of TB at a time), rather
than individually, and you're very quickly looking at spending a day (or a
night) at a time watching in case there's a network blip which means you need
to restart any of them. This does not lend itself to a peaceful, happy
existence.

It doesn't take doing this very often to make you realise that this is
fundamentally backwards: you want the disc data already present on more than
one storage server so that if one goes pop, you're not stuffed. Once you've
done _that_ , you can make the observation that hardware RAID is no longer
necessary, and save yourself a layer of complexity.

~~~
ngrilly
Thanks a lot for your detailed answer. I understand your arguments, but Google
Compute Engine Local SSDs support live migration [1], which seems to prove
it's possible, despite the difficulties. Any advice about this?

[1]
[https://cloud.google.com/compute/docs/disks/#data_persistenc...](https://cloud.google.com/compute/docs/disks/#data_persistence_on_local_ssds)

~~~
regularfry
Yeah, it's possible. Just not pleasant. What they're doing (and how unpleasant
it is to operate) will depend on exactly what they mean by:

> Your instance might experience a short period of decreased performance

and how tolerant they are of unplanned reboots. I suspect what it means is
that they'll throttle your VM (or maybe pause your IO) so it can't interfere
with finalising the migration.

That being said, this is Google. They've probably thrown more man-hours at
this than anyone else would think sane. I'll note that this is in the context
of automatic migration away from "maintenance events" (whatever that covers) -
it sounds like they think they've automated away a lot of the reasons we were
having to keep an eye on things, but they're still vulnerable to hardware
failure (obviously).

------
Mister_Snuggles
This is EXACTLY the thing that I need for one of my droplets! I love how there
is nothing "special" about it - it's just a disk that you can attach to a
droplet. I'm sure that under the hood there's some kind of magic going on, but
it looks like it's nicely abstracted away. This is what I hoped block storage
would turn out to be - here's a block device, use it like one.

As soon as this rolls out to the region I've got that droplet in, I'm going to
pull the trigger on it. I might even spend the effort to migrate my droplet to
a supported region just to get this.

~~~
mikeash
Same here. My $5/month droplet is sufficient for all my needs, except it's a
bit limiting on storage. $2/month for an extra 20GB, doubling my storage,
would be great. I don't want to migrate anything and I don't care about more
CPU or RAM, I just want more space!

~~~
hashmp
Totally agree, this was their main limitation. Glad they have now solved it.

------
3pt14159
I have been asking for non-SSD on DO for a long time now. My heart jumped when
I saw the HN title, only to be dashed on the rocks.

What are us data nerds supposed to do? We want to take 10 terabytes, run a
batch process on it, keep the 20TB, then continue with about 5GB of working
data until the next month's terabyte comes in, then we want to batch through
the 21TB. Right now the price slider doesn't even go up to 21TB, and clicking
on the "need more storage" button doesn't go anywhere, but I'm assuming it
would be $2100/month, which is more than 3x as expensive as vanilla S3.

~~~
Veratyr
You're looking at the wrong market.

At Hetzner, you can rent a dedicated server with 2x3TB drives for about
$25/month ($0.005/GB, to scale to multiple servers, use Ceph) and a few larger
machines for under $100. At OVH you can buy object storage for around 1c/GB
and rent a few dedicated servers for less than $100/month, or use their cloud.

If you go to the _really_ low end, time4vps will give you a 1TB VPS for
2EUR/month, if you pay for 2 years (and they give you 4x the storage as
bandwidth).

I don't work for any of these companies but I have services with all of them.

~~~
3pt14159
Yeah, but I like the DO per hour billing, design, and API. Other than their
insecure-by-default SSH key thing,
[https://digitalocean.uservoice.com/forums/136585-digitalocea...](https://digitalocean.uservoice.com/forums/136585-digitalocean/suggestions/9307569-return-
the-droplet-s-ssh-public-key-as-part-of-api) that is.

~~~
happyslobro
How many hours does it take to load 10TB and then reduce it to 5GB? ;)

~~~
3pt14159
Actually because I use the DO API and lots of small instances that work
together, not that long. Right now I use S3, which blows because I hate the S3
cli and API, so I would love a better solution.

~~~
tedmiston
Not sure if it's _much_ better for your use case, but Backblaze introduced an
S3 competitor called B2 a few months ago.

Price calculation is straightforward:

* $0.005/GB/month for storage

* $0.05/GB download

* $0.004 per 10k downloads

There is a base free tier as well.

[https://www.backblaze.com/b2/](https://www.backblaze.com/b2/)
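Using those rates, a sample month (the sizes are hypothetical, and per-request fees are ignored) works out like this:

```shell
# 500 GB stored + 50 GB downloaded at the quoted B2 rates:
COST=$(awk 'BEGIN { printf "%.2f", 500 * 0.005 + 50 * 0.05 }')
echo "~\$${COST} for the month"
```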

~~~
3pt14159
Wow thanks for this! I'll definitely use these guys in the future.

------
misframer
It's the same price as AWS's General Purpose SSD EBS volumes.

[https://aws.amazon.com/ebs/pricing/](https://aws.amazon.com/ebs/pricing/)

~~~
hashmp
Yep.... DigitalOcean is much more competitive for instances and outgoing
bandwidth, though.

~~~
eropple
At the cost of significantly worse networking (VPC is _fantastic_ ), no object
storage mechanism, no equivalent to IAM for instance permissioning, and no
autoscaling (seriously? in 2016? I had to go look to make sure I wasn't
misremembering--that they recommend something like DOProxy should make the lot
of them faintly embarrassed).

DigitalOcean may have some value at the basement level of compute, but as a
professional in this area there is literally no situation where I would use
DigitalOcean right now because I value my time. AWS is already laughably
cheap, the tooling is overwhelmingly superior, the resources available are
better, and you aren't duct-taping together half-solutions and reinventing
every wheel. (This is less an endorsement of AWS and more an endorsement of
Not DigitalOcean; GCE is more than fine, Azure has some ugly bits around
autoscaling that I don't like but you can get by.)

------
andybak
This helps me with a nicer deployment setup. I was always keen on 'rebuild
from scratch' rather than 'update stuff and hope you're idempotent and have
captured all changes' but transitory data was always the problem. Now I can
start building a new updated droplet and the only downtime will be that needed
to detach and reattach the block storage containing the db etc.

Anyone see a flaw in this? (I know there are other ways to achieve similar
benefits - my files could be on S3 and the database could be a separate
droplet etc but these introduced various drawbacks and added complexity)

~~~
Jedd

> Anyone see a flaw in this?

Perhaps not a flaw, but some issues with your setup are implied.

If you're rebuilding from scratch _because_ you're not sure that you can
update things, then you're probably in need of a configuration management tool
(I'm a big fan of saltstack[1], mostly because I don't like Ruby or DSLs, but
there are lots of options out there[2])

If you're worried you're going to lose transitory data, it _sounds_ like you
don't have a trusted and tested backup/archival/recovery process in place. So
having it stored on a single EBS / DO BS / etc means you're still exposed. If
you're rebuilding and rolling data over, in this scenario, I'd be copying,
rather than relocating, any precarious data repositories.

[1] [https://saltstack.com/](https://saltstack.com/) [2]
[https://en.wikipedia.org/wiki/Comparison_of_open-
source_conf...](https://en.wikipedia.org/wiki/Comparison_of_open-
source_configuration_management_software)

~~~
vidarh
I tend to avoid those config management tools other than for basic
bootstrapping, exactly because while you can use them too to recreate from
scratch, when you don't do that you leave the door open for undocumented,
unknown state: most of them basically take a system in an unknown-but-
hopefully-mostly-consistent state and try to bring it to a known state.

But they'll only be in a known state in that case if your setup is extremely
comprehensive.

In practice I've seen too many config-managed setups where long-running
servers have ended up in unknown states: changes have been applied, and
subsequently changes have either been made outside of the toolchain, or been
made to the config in ways that don't let the tool know what has changed, or
the tool simply doesn't have a way of comparing machine states without
comprehensively enumerating everything on the server (e.g. people running
Ansible playbooks that add X, subsequently removing the requirement for X
from the playbook, and going on without considering whether or not X will
interact with the Y they've added later).

As a result, I see rebuilding from scratch as largely orthogonal to whether or
not you use a configuration management tool or e.g. build VM images that you
replace wholesale, or whatever you do: You should rebuild from scratch
regularly, as coupled with a test-suite it's the only realistic way of knowing
whether or not you've left anything out of your build process.

My biggest caveat with config management systems is that they tend to end up
encouraging live changes to a setup, instead of a build-test-deploy cycle.
Sometimes that's necessary, but to me that's a last resort.

~~~
brazzledazzle
I actually agree with your overall point but generally speaking if someone is
making changes outside of your standardized toolchain you have a human
problem. Emergencies aside, you use those tools for a reason. Straddling the
fence is almost the worst of both worlds.

~~~
vidarh
Agreed. But in my experience, the easier you make going outside the process,
the easier it becomes to invent excuses for why it is OK. A lot of my job
involves making the right thing to do the path of least resistance, because
when it isn't, humans overall tend to be the cause of a whole lot more of the
problems than the servers are.

------
skrowl
I like their straightforward pricing. $0.10 USD per GB per month. No IOPS
limits.

That said, how do you prevent a rogue droplet from going crazy and hogging up
all of the SSD I/O?

~~~
zbjornson
Unless I missed it, I saw "no need for complicated formulas to determine the
overall cost for transactions or IOPs limit," which I do not read as "no IOPS
limit." I was looking but could not find any word on performance.

~~~
brianwawok
I read it as: they either silently limit you, or have a free-for-all and let
the traffic duke it out for IOPS.

------
happyslobro
Setup: find its name, format and mount it, as if you were adding an SSD to a
desktop.

[https://www.digitalocean.com/community/tutorials/how-to-
use-...](https://www.digitalocean.com/community/tutorials/how-to-use-block-
storage-on-digitalocean)
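The linked tutorial's steps boil down to something like the following (the volume name and mount point are made up here, and mkfs destroys existing data, so check the device path first):

```shell
# Volumes appear under /dev/disk/by-id/; list them to find yours:
ls -l /dev/disk/by-id/
DEV=/dev/disk/by-id/scsi-0DO_Volume_my-volume   # hypothetical name
sudo mkfs.ext4 "$DEV"                           # one-time format
sudo mkdir -p /mnt/my-volume
sudo mount -o defaults,discard "$DEV" /mnt/my-volume
# Persist across reboots; nofail keeps boot from hanging if detached:
echo "$DEV /mnt/my-volume ext4 defaults,nofail,discard 0 2" | sudo tee -a /etc/fstab
```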

------
koolba
This has been a much requested feature and I'm sure it will be very popular.
I'm still reminded of this quote though:

" _He was a bold man that first ran a production database on a brand new block
storage service!_ "

------
johnwheeler
I love how DO focuses on what matters the most: Inexpensive VMs and scalable
block storage.

If I had to pick two, those would be them!

~~~
brianwawok
Well, except being 4 years late to the block storage game? Seems they have an
uphill fight. AWS and GCE match them on low-end droplet price and offer much
more. No local SSD, but I'm not sure what % of apps really need local SSD. DO
effectively forces you to pay for local SSD for all of your servers.

~~~
velodrome
Well, they beat Linode..

With DO, at least there is an additional option available for users.

With GCE and AWS, the outbound bandwidth is expensive. 1TB = $90 (AWS, GCE) vs
1TB included (Linode, DO).

~~~
e12e
This. I'm always a little confused by how people even evaluate the big three
clouds, with prices per request and all bandwidth "not included". It's so
strange coming from cheap dedicated servers with typically 10TB
bandwidth/month "included". I mean, what do people _do_ in the cloud that
requires 100GB of storage, and insignificant transfer?

------
aibottle
Thank god! Highly Available Block Storage. From Digital Ocean. Great! Now I
can finally store all the 300mb/s streaming in on my server. Oh wait. I
cannot, because DO cancelled the service again. Bummer.

~~~
cheapsteak
Cancelled which service?

------
cgag
Sweet. This makes DigitalOcean much more appealing as a potential substrate
for a Kubernetes cluster.

~~~
pstadler
I just migrated my Kubernetes/Rancher stack from NYC2 to NYC1 in order to use
block storage. Eager to see whether this plays well with GlusterFS.

------
Mister_Snuggles
I can't wait for this to roll out to more regions.

This is EXACTLY the thing I need for some stuff I'm working on!

------
ozy23378
Going to perform some very basic dd I/O benchmarks using:
[https://haydenjames.io/web-host-doesnt-want-read-
benchmark-v...](https://haydenjames.io/web-host-doesnt-want-read-benchmark-
vps/)

Will post results.

------
scurvy
What's the backend? Ceph?

~~~
marcstreeter
Ceph is not block storage -- Ceph is _object storage_.

~~~
vruiz
It's actually both, and a file system.

~~~
marcstreeter
Oh I guess I thought ceph was all about _eventual_ consistency. I didn't know
it was strongly consistent like what I expect from block storage.

------
mrmondo
How is it taking huge cloud providers so long to catch up with things we do
self hosted every day? It obviously has to be well engineered, yet it's
relatively simple. Woefully poor performance too.

------
simos
Some early benchmarks about the new block storage,
[https://simos.info/blog/trying-out-lxd-containers-on-
ubuntu-...](https://simos.info/blog/trying-out-lxd-containers-on-ubuntu-on-
digitalocean-with-block-storage/)

I did not get good speeds and I am wondering why that may be...

~~~
zbjornson
> The immediate benefits are that the latency is much lower with the new block
> storage

I think you might have misread the units: locally attached 50105 us (50 ms) vs
block 546 ms.

The throughput numbers are on par with AWS and GCP block storage. This seems
reasonable aside from the high latency.

------
drtse4
A bit pricey for the long term but great if you just need to add some disk
space to your vm and don't need the other improvements more expensive vms give
you.

I use DO mostly to compile stuff on Linux when I don't have access to a
physical server, and storage size is always a problem.

~~~
marcstreeter
The purpose of the block storage in this instance isn't about giving your
vm/droplet more space. It's separation. That way any data that's on that
device can be attached to another vm/droplet. It probably would be more cost
effective just to upgrade the vm/droplet if space were a concern. That's at
least how we've marketed the same feature for the past year or two via
Codero's portal. Not to say I don't like how DO has entered the space: keeping
it simple.

~~~
walkertraylor
Exactly! It's about having an easy upgrade path, generally reducing the amount
of work for common operations, and for more flexibility engineering your cloud
architecture. It's nice that it happens to be SSD, but highest performance or
lowest cost per GB isn't necessarily the only cost saving factor.

This is a main reason I still use Amazon AWS: I can create an instance and if it
doesn't perform, upgrade it until it does. Then when I'm finished, kill the
instance and save the volume. Next time I need it, just create the instance
for the job, perhaps at spot pricing, then kill it again.

~~~
drtse4
What you describe was already possible on DO using snapshots (that can be
stored for free), and that's what I usually do too.

------
daveguy
Edit3: Remountable/movable flexible storage for DO instances is what this
gives you and it's kind of pricey. The comparison to B2 is not valid. Leaving
the original mess for posterity.

\---

TWENTY times (edit) the price of B2 from Backblaze ($.10 vs $.005 per GB per
month). It is one of the more expensive options. But that gets you two things:

* (moot See edit 2) SSD! (significant iops improvement)

* (moot See edit 2) No transactional costs! (not sure if just between digital ocean instances, but they say none)

Improved performance and no transaction costs MAY (edit) be worth it for some
applications.

Edit1: I made it an order of magnitude cheaper in my head after looking at
Backblaze. It is nowhere close to the same price. Thank you for the catch,
scq!

Edit2: and I'm just all kinds of off on this. Block not object storage.
Essentially storage you can mount and move between digitalocean instances.
That makes no transaction costs moot. You still have to get data out of the
instance.

Thank you all for quickly catching how backwards this post was! I need coffee.
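For what it's worth, the corrected ratio checks out against the rates quoted above ($.10 vs $.005 per GB per month):

```shell
RATIO=$(awk 'BEGIN { printf "%.0f", 0.10 / 0.005 }')
echo "DO block storage is ${RATIO}x the B2 storage rate"
```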

~~~
crb
B2 is blob storage, like Google Cloud Storage or Amazon S3. What they have
launched is SSD block storage, like Google persistent disks or Amazon EBS.

(The TechCrunch article also got this confused.)

~~~
askmike
> Amazon EBS.

Not more like amazon EFS?

~~~
boynamedsue
No. EFS is NFS in the cloud which is NAS.

------
happyslobro
I don't suppose DO or anyone else is already working on an Ansible addon for
this?

------
lamarkia
I benchmarked the new block storage and it is not that much faster than the
virtio disk.

~~~
daveguy
Um. Can you provide context? What virtio setup? How much faster? How are you
accessing each? This is the reason benchmarks are often worse than useless.

------
fweespeech
$.10/GB block storage is too expensive.

[https://www.online.net/en/dedicated-server/rpn-
san](https://www.online.net/en/dedicated-server/rpn-san)

There are several places you can get ~5 TB for +/-10% of the 1TB price at DO.

DO is offering a SAN at Object Store prices. :/

~~~
tomschlick
So is that service. You linked to the non-SSD version. Their SSD version is
$0.12/GB.

