
Linode SSD Beta - kbar13
https://forum.linode.com/viewtopic.php?f=26&t=10406
======
jcampbell1
I know there is this theory that server hardware needs to be more durable and
therefore you should pay an order of magnitude more, but all of my server
workloads write somewhat frequently, read randomly, and delete almost never.
It is my understanding that commodity consumer SSDs should work fine for this
workload.

I assume Digital Ocean is using consumer SSDs, and it feels like it shouldn't
be a problem with the exception of the bad neighbor issue.

~~~
steve-howard
Actually I thought it was going the other way: "Old school" is that server
hardware should be reliable so it doesn't go down, and "New school" is that
hardware should be cheap and there should be a lot of it so that if one server
goes down we don't care.

~~~
jcampbell1
Both DigitalOcean and Linode are in the "old school" camp. They are in the
business of providing reliable hosting at good prices.

My question was along the lines of: Using consumer HDs in servers is a
disaster because the 24/7 read workload eventually breaks spinning platters.
Server grade magnetic disks are a must in servers. Consumer grade SSDs are
acceptable in servers because they don't wear out from 24/7 reads. Consumer
SSDs fail from constant deletes+writes. Server workloads don't produce many
deletes, therefore it is safe to put consumer SSDs in servers.

Is the above correct? Is the premium for server grade SSDs a myth? Should I
feel safe using Digital Ocean under the assumption they are using consumer
SSDs for multi-tenant servers?

~~~
wmf
Some workloads are write-intensive and some aren't. For a hosting provider
there's no way to know what the customers are going to do. I would expect that
SSDs attract customers who are going to actually give them a workout, though.

~~~
jcampbell1
> Some workloads are write-intensive and some aren't.

Did you read what I wrote? Do you mean delete-intensive?

~~~
wtallis
I don't think there's any truth to your notion that deletes are somehow
_worse_ for a solid state drive than other kinds of writes. Overwriting a
sector has the same effect on longevity as erasing it and filling it up again,
but in the latter case, you can use the ATA TRIM command to defer the flash
block erase latency (which is significantly higher than the flash program
latency). The only way in which deletes are "bad" is if you're comparing to a
workload that fills the drive once and then moves on to fill a different drive
- but that's not a fair comparison against doing all the writes to the same
drive.

Different workloads can somewhat affect how much write amplification results
from the wear leveling, but the best case there is actually to have no
long-lived data on the drive, i.e. lots of deletes.
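To make that concrete, here's a toy flash-translation-layer simulation (my own
illustrative sketch with made-up constants and a naive greedy garbage
collector; real firmware is far more sophisticated). It models out-of-place
page writes and whole-block erases, and shows that a drive full of long-lived
data suffers far more write amplification under random overwrites than one
where most of the space has been deleted/TRIMmed:

```python
import random

PAGES_PER_BLOCK = 64
NUM_BLOCKS = 128
CAPACITY = PAGES_PER_BLOCK * NUM_BLOCKS  # logical pages

def write_amplification(live_pages, host_writes=200_000):
    """Randomly overwrite `live_pages` distinct logical pages and return
    physical writes / host writes (the write amplification factor)."""
    blocks = [set() for _ in range(NUM_BLOCKS)]  # live logical pages per block
    where = {}                                   # logical page -> block index
    free = list(range(NUM_BLOCKS))
    current, fill, physical = free.pop(), 0, 0

    def place(page):
        nonlocal current, fill, physical
        if fill == PAGES_PER_BLOCK:              # current block is full
            if free:
                current, fill = free.pop(), 0
            else:                                # GC: erase the emptiest block,
                victim = min((b for b in range(NUM_BLOCKS) if b != current),
                             key=lambda b: len(blocks[b]))
                physical += len(blocks[victim])  # ...copying its survivors first
                current, fill = victim, len(blocks[victim])
        blocks[current].add(page)
        where[page] = current
        fill += 1
        physical += 1

    for p in range(live_pages):                  # fill the drive once
        place(p)
    physical = 0                                 # then measure steady state
    for _ in range(host_writes):
        p = random.randrange(live_pages)
        blocks[where[p]].discard(p)              # overwrite invalidates old copy
        place(p)
    return physical / host_writes

print(write_amplification(int(CAPACITY * 0.9)))  # mostly long-lived data: high WA
print(write_amplification(int(CAPACITY * 0.3)))  # mostly deleted space: near 1.0
```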

~~~
smoyer
I think he's assuming that since it's actually rewritten flash-memory bits
that degrade, if you don't delete data, then you don't rewrite. So without
deletes, all writes simply fill more of the drive and you shouldn't see
degradation.

Modern flash drives do indeed try to level the writes across all bits, but
he's talking about work-loads that don't rewrite at all.

~~~
sirclueless
I wouldn't call an application that writes once and never deletes anything on
the hard drive a write-intensive workload. Filling up a SSD with write-once
data means paying well over 50¢/GB for your writes, which isn't something you
can do if you are "write-intensive" -- if you're writing 100x the capacity of
the drive over its lifetime, _then_ you are really write-intensive, at which
point the distinction between delete-intensive and write-intensive is nearly
nonexistent for obvious reasons.
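A quick back-of-the-envelope with hypothetical numbers (say $130 for a 240 GB
consumer SSD, roughly period pricing; adjust to taste) shows where that figure
comes from:

```python
drive_cost_usd = 130.0  # assumed price of a 240 GB consumer SSD
capacity_gb = 240.0

# Write-once workload: each GB of data carries the drive's full purchase price.
print(drive_cost_usd / capacity_gb)             # ~$0.54 per GB written

# Write-intensive workload: 100 full-drive writes amortize the same cost.
print(drive_cost_usd / (capacity_gb * 100.0))   # ~$0.0054 per GB written
```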

------
fletchowns
This sounds pretty cool! What sorts of considerations does one make when
deciding between more RAM and SSDs?

 _Random IO is processed first through the SSDs (the thing that they are
really good at) while sequential IO short-cuts to the hard drives - which is
pretty slick._

I'm curious, what do you use to develop something like that? Is it built on
top of something? Built into the kernel? I wouldn't even know where to
begin...

~~~
bingaling
Sounds like SSDs are used as a cache, like ZFS does with its L2ARC.

~~~
jevinskie
Does ZFS use some sort of least-recently-used eviction to purge the cache? It
seems like differentiating between random and sequential IO is a bit different
from LRU.

~~~
wmf
The design is documented here:
[https://github.com/zfsonlinux/zfs/blob/master/module/zfs/arc...](https://github.com/zfsonlinux/zfs/blob/master/module/zfs/arc.c#L3993)
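For anyone who doesn't want to dig through arc.c: the ARC isn't a plain LRU. A
minimal sketch of the core idea (heavily simplified and purely illustrative;
the real ARC also keeps "ghost" lists to adaptively size the two queues, and
the L2ARC is fed from their tails):

```python
from collections import OrderedDict

class TinyArc:
    """Toy two-queue cache: blocks seen once live in a recency queue,
    blocks seen again are promoted to a frequency queue."""

    def __init__(self, size):
        self.size = size
        self.mru = OrderedDict()  # seen exactly once (recency)
        self.mfu = OrderedDict()  # seen more than once (frequency)

    def access(self, key):
        if key in self.mru:            # second touch: promote
            del self.mru[key]
            self.mfu[key] = True
            return "hit"
        if key in self.mfu:
            self.mfu.move_to_end(key)  # refresh position in frequency queue
            return "hit"
        self.mru[key] = True           # first touch: recency queue
        if len(self.mru) + len(self.mfu) > self.size:
            (self.mru or self.mfu).popitem(last=False)  # evict recency first
        return "miss"
```

A one-pass sequential scan only ever churns the recency queue, so it can't
evict the frequently-hit working set, which is roughly how this differs from
plain LRU.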

------
ghc
Every time I think I'm done with Linode, they draw me right back in. I guess
I'll wait to see about the pricing, but I imagine this will be a serious
challenge for Digital Ocean to overcome, since their main selling point over
Linode is cheap SSD VPSs.

~~~
dlau1
Waiting for digital ocean to give me some free upgrades =)

~~~
eksith
Have you been on Digital Ocean for a while? How do you like it so far?

~~~
nwh
I've been using them for a similar period as _xur17_.

Have not had a single issue, bar some packet routing at AMS1 that just led to
latency for a few minutes. Their API is getting quite nice to tie into, though
I've had to resort to screen scraping for some of the newer options.

------
mbi
Also worth mentioning that for 59€/mo Hetzner is offering a dedicated i7-4770
Haswell with 32GB RAM and dual SSDs in RAID 1.

[http://www.hetzner.de/en/hosting/produkte_rootserver/ex40ssd](http://www.hetzner.de/en/hosting/produkte_rootserver/ex40ssd)

~~~
Keyframe
NB: +99€ for setup.

How satisfied is everyone with Hetzner? I have a few friends who run setups on
their systems, mostly for heavy forums. I'm more interested in how you deal
with large backups. It seems to me it's just easier to buy machines in sets
for redundancy and every now and then move things over to Glacier.

~~~
mbi
Hetzner includes 100GB on a SAN in a separate datacenter with every dedicated
plan (500GB is 10€, 10TB is 80€). Of course you could also push to Glacier,
but that'll get counted against your outbound bandwidth (2€/TB after 20TB/mo).

~~~
Keyframe
Do they charge for traffic between machines at Hetzner?

------
andrewcooke
kind-of related, is anyone using bcache with linux yet? how easy is it to get
working? does it speed things up as expected?

[http://arstechnica.com/information-technology/2013/07/linux-...](http://arstechnica.com/information-technology/2013/07/linux-3-10-out-with-better-ssd-caching-and-radeon-support/)

~~~
Sami_Lehtinen
Well, I've been using it for over a year now. Yes, it really does speed things
up. I've found that people grossly overestimate the need for SSD space. I'm
using a 64 GB SSD with a 3TB drive, and the hard disk is really rarely
touched. I've even enabled power-saving spin-down for it, so I know when it
starts. In normal daily usage the HDD isn't being touched at all; it's only
when something like Linux updates are being run.

I've also enabled write-back caching without a maximum time limit. If you use
write-through caching, it naturally causes the HDD to run all the time. I've
chosen to cache everything, not only random reads, because I have plenty of
space with a 64 GB SSD. I don't know what the people who claim they need a
larger SSD than that are doing. Maybe they're working with large data sets or
have absolutely massive games or so.

In summary: yes, I love it. The SSD is never full, I'm not running out of disk
space, and I do get pure SSD performance over 99% of the time. Only if I pick
up some movies or music that have sat around for months without being accessed
is there HDD access, of course. Setup was OK, because I did it when I replaced
my computer, so I built everything from scratch anyway. I'm going to blog
about that, but I have a huge backlog of stuff to get blogged.

P.S. Some cache vendors (like Seagate's hybrid drives) recommend using
write-through caching, because in case the SSD dies you'll still have a fully
working file system on the HDD. With write-through caching, things are going
to be very badly messed up if SSD dies. Practically a totally unrecoverable
situation. But that's why we got backups, right?
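For reference, the settings described above are plain sysfs writes once the
bcache device exists. A hedged sketch (the device name and values are
assumptions for a typical setup; check the bcache documentation for your
kernel):

```python
from pathlib import Path

# Tune an existing bcache device (run as root; adjust "bcache0" to match).
bc = Path("/sys/block/bcache0/bcache")

# Write-back caching, as described above; "writethrough" is the safer default.
(bc / "cache_mode").write_text("writeback")

# 0 disables the sequential bypass, i.e. cache everything, not only random IO.
(bc / "sequential_cutoff").write_text("0")
```

The caveat above still applies: with write-back, a dying SSD takes the only
up-to-date copy of recently written data with it.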

~~~
chetanahuja
_" With write-through caching, things are going to be very badly messed up if
SSD dies"_

You probably meant the reverse of that (With _write-back_ caching... etc.).

~~~
Sami_Lehtinen
Yep, true.

------
tod222
> Chicks dig scars

Chicks? Really? Come on Linode, you should be better than this.

~~~
rektide
Purify! Purify!

------
WatchDog
Sounds over-engineered, slower, and possibly over-priced. A consumer SSD will
be faster, and as long as it is trimmed correctly and not fully utilized, it
should be sufficiently reliable.

~~~
rektide
For people who need a large bulk of data but leave most of it cold and unused,
front-caching SSDs are brilliant. Paying four times as much for twice the
performance and far better reliability is a no-brainer for this (what's the
opposite of a bottleneck? An accelerator? A data reserve+pump?).

What would you guess is the average interval between accesses for any given
byte-on-disk a Linode customer has? If it's hours, days, or weeks, I'd call it
reckless to be spending money to put those bytes on expensive SSD systems.

Make the hot stuff fast; be price-conscious with the rest.

------
ksec
Things I really wish they could add or improve.

- PHPBB? What year is this? The website design and structure seriously need
  some thought and work.
- LongView: either have a trial-only version or bump the free tier's retention
  to 24 hours. What is the point of a 30-minute graph?
- Get rid of the add-ons; that pricing is just plain stupid.
- Give options to increase memory without buying a new plan, at $10/GB up to a
  maximum of double the current capacity. So a $20 1GB plan could be increased
  to 2GB of memory for $30, with everything else the same as the $20 plan.
  That alone would make Linode competitive against DO.
- Linode CDN: a CDN served from those 6 Linode DCs, with data coming off your
  transfer pool. Maybe any data served over the CDN could count as triple the
  amount from your pool.
- SSD speed: from the data on ServerBear, this new SSD tier isn't performing
  as well as its competitors'.
- I am sure the NodeBalancer could do with a price decrease or a bump in
  concurrent connections.

DO has all of the above in the pipeline for release this year, so let's hope
Linode reacts quicker.

------
nwmcsween
Linode has "developed" nothing. Linode is using bcache as a tiered storage but
there are problems such as certain IO patterns will bypass bcache and hammer
the disks causing slower-than-disk-alone IO speed.

------
hosay123
From the description it sounds like they're just using bcache for the storage
layer, or one of its equivalents (IIRC Facebook came out with a very similar
patch). Still pretty cool.

------
api
So SSDs are bad for server loads... what secret sauce does Digital Ocean have
that Linode doesn't? Did they write their own storage layer that's doing
something cool?

------
kulinilesh456
SSDs are expensive, and the good SSDs are really, really expensive. Although
cheaper SSDs exist, they wear out more quickly, potentially slow down as they
wear, and have lower overall throughput. Not a good combination for use in
multi-tenant server workloads.

------
orijing
> Random IO is processed first through the SSDs (the thing that they are
> really good at) while sequential IO short-cuts to the hard drives - which is
> pretty slick.

Any idea why the sequential benchmark numbers improved 4-5x when it is still
"short-cutting" to the HDs?

~~~
lucian1900
Possibly because the HDDs are free to only do long sequential reads, as
opposed to having to seek to some random place every now and then.

------
SilliMon
Sounds promising. SSDs have good potential as a cache for bigger backend
spinning disks.

------
ing33k
While it's a welcome move, it should have been done some time ago.

------
jacques_chester
Too late.

About 8 months too late.

~~~
Kudos
Too late for... ?

~~~
jacques_chester
For me, at least.

I run a small blogging network. Linode have upgraded RAM, added cores, added
disk space. They've done everything except improve random-access I/O, which is
a major bottleneck for Wordpress installations thanks to MySQL's penchant for
joining tables on disk regardless of indices.

I moved to DigitalOcean about 8 months ago simply to get access to SSDs. In
most other respects I preferred Linode.

~~~
apapli
Hey fellow Australianite.

I've been looking at using DO instead of Amazon; the main stumbling block for
me is that I cannot figure out whether they offer any configurable firewall,
i.e. I want to modify port rules, mostly to block them.

Does DO offer this, and how have you found them so far?

My use case is similar to yours - hosting for multiple shopping carts running
on MySQL, hence the appeal of SSDs.

~~~
voltagex_
Another Australianite here.

You can just bind to localhost if you don't want things to be open to the
world, or modify iptables.
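For example, a minimal sketch of the bind-to-localhost approach (hypothetical
port; any server framework exposes the same choice):

```python
import socket

# Binding to 127.0.0.1 keeps the service reachable only from the droplet
# itself; binding to 0.0.0.0 would expose it publicly and need iptables
# rules to lock it down.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 8080))
srv.listen(5)
```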

Your other issue in Australia is the latency to DO's servers: ~200ms for the
US and 350ms+ for a European droplet. Cloudflare may help there, though.

~~~
jacques_chester
It's a blog network. Latency matters, but not so much that I can justify
paying the ruinous bandwidth rates Australian hosts want. I'd be looking at
several hundred extra dollars per month for what is really a quite modest
operation.

