Redis with an SSD swap – not what you want (antirez.com)
87 points by janerik 1542 days ago | 23 comments

I saw a few comments in the post asking what the point of the blog post was. After all, I already expected very poor results, and I had already tested it less formally in the past.

The point is simply to show that SSDs can't, currently, be considered a slightly slower version of memory. Their performance characteristics are much more those of, simply, "faster disks".

Now those new disks are fast enough that if you design a database specifically to use SSDs, you can get interesting performance compared to spinning disks. However, the idea of using the disk as a place to allocate memory will not work well, and complex data structures requiring many random-access writes will not work either.

Code can be optimized for SSD usage of course, but this poses huge restrictions on what you can and cannot do. This shows how the current Redis strategy of providing complex fast operations using just memory makes sense. In the future, as SSDs converge more with memory, this may change.

As antirez knows, but others may not:

Redis' model is particularly vulnerable to the disk slowdown because a page fault blocks all requests.

Normally being single-threaded isn't a big deal for Redis because you are likely bound by network I/O or CPU, but using SSD swap is equivalent to using blocking synchronous disk I/O, which nobody would do :D

Could there be more aggressive memory allocation (basically a region per key, moving whole keys to new regions when their data structure outgrows the block)? Sure, but you're still going to pay dearly for the cost of a miss. This approach would help if you want only your 'active' data structures in memory and let the OS page out cold keys, but it would require a major rework of Redis internals (or so I thought six months ago when I last considered this as a fun project).

> The point is simply to show that SSDs can't, currently, be considered a slightly slower version of memory. Their performance characteristics are much more those of, simply, "faster disks".

I come to the opposite conclusion when thinking about SSD's place between memory and disk. Let's talk about random I/O, which is the relevant metric for Redis: Yes, memory is a lot faster than SSDs (roughly 2+ orders of magnitude), but SSDs are also a whole lot faster than disk (roughly 2+ orders of magnitude). That makes them sound like they are sort of "in the middle".

But let's look at another crucial property, something I'll call the "characteristic size": the number of bytes such that the "seek cost" equals the cost of reading the bytes. You get this number by dividing the scan speed by the IOPS. I'll work in very rough numbers:

Memory: 20 GB/s / 20M IOPS = 1 KB

SSD: 500 MB/s / 100K IOPS = 5 KB

7K disk: 160 MB/s / 160 IOPS = 1000 KB

As you can see, SSDs are much more like memory than disk in this crucial parameter, which dictates what data structures and algorithms will be most efficient.
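The characteristic-size figures can be reproduced with a quick calculation; the throughput and IOPS numbers below are the rough estimates from this comment, not measured values:

```python
# Characteristic size: the number of bytes readable in the time one seek costs,
# computed as sequential throughput divided by random IOPS.

def characteristic_size(throughput_bytes_per_s, iops):
    """Bytes per I/O at which seek cost equals read cost."""
    return throughput_bytes_per_s / iops

media = {
    "memory":  (20e9,  20e6),   # ~20 GB/s, ~20M IOPS (rough estimate)
    "ssd":     (500e6, 100e3),  # ~500 MB/s, ~100K IOPS
    "7k disk": (160e6, 160),    # ~160 MB/s, ~160 IOPS
}

for name, (bw, iops) in media.items():
    size_kb = characteristic_size(bw, iops) / 1000
    print(f"{name:8s} ~{size_kb:.0f} KB")
```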

So, actually, I do think of an SSD as more like slower memory than a fast disk.

* edit: Typo

You are just changing the metric of measurement to reach your conclusion. The OP said "throughput is closer to disk than memory", which your own numbers (20 GB/s vs 500 MB/s vs 160 MB/s) line up with very well: a ratio of 40:1 vs roughly 3:1. You counter with "characteristic size is closer to memory than disk".

You mention that characteristic size is the crucial parameter, but isn't that only the case when you are considering a single storage medium? Since you always have memory, wouldn't a caching strategy between memory and SSD provide superior performance compared to only worrying about how the data is laid out on the SSD?

Overall I would agree that SSDs have similar seek performance to RAM, to the point where you can live with fragmentation, but I think the OP's point that their throughput is barely better than a disk's is still valid.

You make a good point about bandwidth: that SSDs are more like disks. However, the OP doesn't mention bandwidth, nor is it relevant to his experiment.

What would it mean for an "SSD to be like slow memory"? To me, it would mean that both bandwidth and seeks run proportionally slower. This is why I'm using the "characteristic size" metric: to evaluate that proportionality (and to give it a physical interpretation).

I don't understand the utility of your homegrown metric.

Yes, a 7K disk can output 160 MB/s ... if it's doing a continuous read, with zero seeks per second.

If it's actually doing 160 seeks/second, it's not going to have time to read 1MB (taking a further 1/160th of a second) after each seek.

So this metric means "how much data you can read per I/O if you want to cut your IOPS by about 50%".

How is that useful above & beyond the input numbers?
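That 50% figure can be checked with a line of arithmetic, using the rough disk estimates from upthread:

```python
# Check of the claim above: reading exactly the "characteristic size" per seek
# halves your effective IOPS, because transfer time then equals seek time.

def effective_iops(rated_iops, bandwidth, chunk_bytes):
    """IOPS achievable when each seek is followed by a chunk_bytes read."""
    seek_time = 1.0 / rated_iops
    transfer_time = chunk_bytes / bandwidth
    return 1.0 / (seek_time + transfer_time)

# 7K disk: 160 IOPS, 160 MB/s, characteristic size of 1 MB
print(effective_iops(160, 160e6, 1e6))  # -> 80.0, half the rated 160 IOPS
```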

The comparison gets closer to memory when you look at PCIe-based flash storage.

2.5 GB/s bandwidth; 540K 512-byte reads/s (70 microseconds); 1100K 512-byte writes/s (15 microseconds)

Thanks for posting "negative" results. I think it's good science.

Interesting. Have you tried running the same tests on other databases as well? A comparison with other databases, such as MongoDB/memcached and even MySQL/PostgreSQL, would be great.

Is this really something that needs testing?

Compare a good SSD, a Samsung 840, to a normal PC using dual-channel 1600 MHz DDR3.

Maximum sequential read speed: 0.5 GB/s vs 25 GB/s

Random read speed: 0.01-0.1 GB/s vs 3 GB/s

Latency: 30000-40000 ns vs 6-65 ns

So we're dealing with (best case) a bandwidth difference of factor 30 and a latency difference of factor 500.

Now this isn't taking other things into consideration, such as SSD performance degradation and the need to run garbage collection or TRIM.

"Is this really something that needs testing?"

Yes. Running tests to validate your assumptions is a big part of robust software engineering. In this case the results were unsurprising but not uninteresting.

I'll do some testing this weekend by mounting an SSHFS as my swap partition and removing all but a 256MB RAM module, then opening several dozen instances of Firefox. My calculations show that this will have terrible performance, but I want to ensure that my software engineering is robust.

Testing your assumptions is something that you're supposed to do when you hit a wall, not when you're driving through a field.

If you've been using computers for long, you should already know VM thrashing can murder a system, to the point of making it unresponsive and needing a reboot. So this is an assumption you have backed by direct experience. And of course, probing it won't tell you anything.

But do you have enough information about how Redis accesses memory under the benchmark in question, combined with the OS page replacement strategy, combined with the characteristics of SSDs, to know the results beforehand? You can guess, for sure; but do you know?

If we all follow your approach, we'll never be surprised unless we get stuck; and if we know what we think we know as well as you seem to think we know it, we shouldn't get stuck in the first place. We should have assumed that we would have gotten stuck, and avoided it.

The article has relatively low value in terms of information content, but the mindset is to be commended. It should have given the author better intuitions about the 3 factors mentioned above. Modern, non-budget systems very seldom thrash; there's a younger generation coming along who've never experienced systems frozen in that way.

I think that there is more to the pool of information than the points you rightly outlined. Other key pieces of data are that Redis is designed to be extremely fast and that it has a strong reliance on the speed of the memory it is operating on. Those facts, coupled with the statistics sniglom mentioned above, very strongly indicate that performance will be terrible.

An analogy I can think of is testing if a stock Ford Fiesta can reach the speed of sound. You know what the engine is capable of, the environment it is operating in and the tires it is running on - you simply don't need to floor the accelerator to come to a conclusion.

That saying about picking ones battles comes to mind. The mindset is certainly of a sharp character, but what good is a knife without a hand to guide it? The map is not the territory but it does save a lot of time if used strategically.

The difference is that SSD backed key value stores are something people are actually interested in (Google LevelDB, Tokyo Cabinet, Twitter fatcache), and Firefox with SSHFS swap partition is not.

I'm risking showing a lack of understanding, but I think it would be really nice to have some kind of Redis API that allows archiving certain keys to disk. Perhaps the same way that keys can EXPIRE, they could get archived into secondary storage. Another API would allow retrieving keys from secondary storage.

Of course you can do this in your own code, but then you step outside Redis. I think it would be nice to bake this into Redis, knowing that once loaded back from secondary storage you get exactly the same object, avoiding the whole (de)serialization process. Of course you won't achieve the same performance, but at least it's a known penalty.

We already have what you described in Redis 2.6: DUMP and RESTORE commands :-)

Dude. This is awesome. Sorry for my severe RTFM deficiency.

I'll go and play with this now... :)

> As soon as it started to have a few GB swapped, performance became simply too poor to be acceptable.

Acceptable is a fuzzy standard. Different applications have different needs, and not all applications require thousands of transactions per second. I'd presume there is an I/O rate below which the performance remains stable. Do you know what this rate is, and how it compares to transfer speed or latency of the SSD?

This is interesting, and expected for evenly distributed request patterns. How about more typical request patterns that follow power-law distributions? I would guess they'd lead to far fewer page faults. I could write some math, but does the benchmark tool let you choose a distribution of keys? That would help check that type of pattern.

Great analysis BTW.

Sensible places to use SSD for Redis: RDB persistence, AOF persistence.

Not sensible places to use SSD: swapfiles.

Swapping to SSD will also trash your SSD, because you are continually rewriting it.

Writing to your SSD will not "trash" it. Even in 2008, Intel drives were rated for 100GB/day for 5 solid years. http://www.anandtech.com/show/2614/4 And controllers have only gotten smarter since then.

Edit: The physical tech has also improved since then with "high-endurance MLC" cells. http://www.anandtech.com/show/5518/a-look-at-enterprise-perf...

Actually, SSDs are the best place for a swap file: 4K random reads and writes. If a few random 4K writes are enough to make you worry about your SSD failing, then you bought the wrong SSD (i.e., a non-Intel or non-Samsung one).
