Desktop HDD vs SSD for Postgresql (5amsolutions.com)
57 points by cowmixtoo on Aug 8, 2010 | 33 comments



SSD firmware needs to do clever things with caching and write combining to avoid premature wear. These algorithms are black boxes whose behaviour is very poorly modeled by the usual tuple of contiguous transfer rate / random 4K access rate / seek time etc. measured by artificial benchmarks. You don't know how the drive is really going to react until you hit it with the mix of operations that match your real workload.

Personally, I've found the Intel SSDs to work best at the primary use case, speeding up small random reads, even when other drives have appeared to exceed them in random read benchmarks. Bulk contiguous transfer looks really impressive, but mechanical drives with high data density are pretty fast at that too. What you really need the SSD for is random reads, and you need to verify that the random read performance holds up when mixed with other access patterns.
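For concreteness, the "random 4K access rate" those artificial benchmarks measure boils down to something like this sketch (my own illustration, not any real benchmark tool; a scratch file stands in for the drive, and a real test would hit a raw device with O_DIRECT to bypass the page cache):

```python
import os
import random
import tempfile
import time

# Sketch of a random 4K read microbenchmark. This version reads from a
# small scratch file, so it mostly measures the page cache; it only
# illustrates the access pattern a real disk benchmark generates.
BLOCK = 4096
NUM_BLOCKS = 256
NUM_READS = 1000

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(BLOCK * NUM_BLOCKS))
    path = f.name

fd = os.open(path, os.O_RDONLY)
start = time.perf_counter()
for _ in range(NUM_READS):
    offset = random.randrange(NUM_BLOCKS) * BLOCK
    os.pread(fd, BLOCK, offset)  # one 4K read at a random offset
elapsed = time.perf_counter() - start
os.close(fd)
os.unlink(path)

print(f"{NUM_READS / elapsed:.0f} random 4K reads/sec (page-cached, illustrative)")
```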


> SSD firmware needs to do clever things with caching and write combining to avoid premature wear.

This has not been an issue for several years now. SSDs last at least as long as spinning-platter disks, especially the server-grade SSDs from the big brands. I've never heard of anyone running into these SSD wear-leveling issues in a production environment.


Unless I'm mistaken, you've completely missed the point of what I wrote. The reason wear leveling is not an issue is because the firmware is smart enough to avoid the problem, and those smarts interfere with the presumed model used by simplistic artificial benchmarks.


Sorry, you are correct. I misunderstood.

I don't think the wear leveling will affect the benchmarks that much though - even if it's horribly inefficient your drives will still beat the snot out of any spinning platter drive on the market today.


I fully agree when it's in competition with a mechanical drive, but it's an important consideration when choosing which SSD to buy.


Nowadays I use SSDs in all my home machines (MacMini, laptop, gaming desktop). It really sped them up, since seek times are so much lower. My Unix server has been running on an SSD for more than a year now, and I haven't had any problems with it.

Just for fun, I once configured two Intel SSDs in a stripe to see how fast it would be. After tweaking the stripe block size a bit (this made a huge difference!) the stripe maxed out at 400 megabyte/sec sustained read. That's almost a CDROMs' worth of data, each second ;-)


What was the best stripe size that worked for you?


> That's almost a CDROMs' worth of data, each second ;-)

Hmmm, since when does ~ .6 == 1? ;-)

It would be more accurate to say: "That's over half a CD-ROM's worth of data, each second ;-)"


>Hmmm, since when does ~ .6 == 1? ;-)

After engineering or physics school. Try a factor of 10 to get my attention :)


I've been looking into this recently (ref: http://news.ycombinator.com/item?id=1567330)

I just tested the speed of the elastic block storage on my EC2 instance with HDTune: http://i.imgur.com/Lbgjy.png

vs. the crap-assed 64GB SSD I bought for my netbook last fall: http://i.imgur.com/r2eaw.png

I've been frustrated with IO speeds on EC2 the last few months, but this pushed me over the edge. I'm buying components to roll my own server.


I'm curious if you'll end up saving money.


You may get better disk performance on the Rackspace cloud.

http://www.thebitsource.com/featured-posts/rackspace-cloud-s...


Will you host it yourself? If so, what kind of connection will you have?


In short -

> SSD storage devices are five to fourteen times faster than their rotational brothers using a default 8.4.X Postgresql configuration

This should've really been at the top of the article ;)


Not related to the article: the typography on this page gives me a headache (I'm on Win7/Chrome 5). Ten different font sizes, random kerning and line spacing, serif mixed with sans-serif, bold/italics/bold-italics used without any consistency. Argh.


I'm VERY sorry about that.

This was my first stab at Blogger as a blogging platform. The 'compose' interface was a real pain to use, so I typed the whole thing in OpenOffice first. The move from OpenOffice to Blogger was not smooth, hence all the weird spacing, font size changes, etc.

Also, our CSS template is sort of funky too. We are going to fix all that stuff in the next week.


Try doing something to the line-height CSS attribute. It probably needs a bit more spacing between the lines of text for easier reading.


Though the tests were conducted with Postgres, I suspect these results will be interesting to anyone running a database.

This is the sort of thing that causes me to inquire about the possibility of accessing external hardware from my VPS on Linode. VPS hosts should really consider offering SSD options for a premium, IMHO.


Perhaps you're not familiar with how tight the margins are in the VPS business. SSDs may be small when you're sticking one in your desktop but when you're running 40 VPSs on a 1U box, they're huge. Density is king in VPS land, and adding an extra drive for each customer (or simply having to have the ability to add an extra drive for each customer) would kill that density.

If you need special hardware, get a dedicated box.


You're quite right about my unfamiliarity. Still, it seems that you could have only a few boxes with this option and, as I said, charge quite a premium. I suppose if I really believed this, I would execute on it...


"A few boxes" of anything isn't worth it. Standardization is critical to having any sort of profitability. As soon as you start spinning up a small number of custom boxes, you're adding a lot of work. Enough work that the "premium" you mention would be pretty large (large enough that it would probably wipe out any price advantage in the first place).

If you really need super fast I/O, then why are you using a VPS in the first place? Virtualization adds a non-trivial amount of overhead to I/O, and there's really no way around that (technologies like VT-d and virtio are helping, but virtualization is still virtualization).


What you say makes perfect sense. For me the decision to use VPS was motivated by cost and simplicity. But when it comes to my database server, IO will bottleneck me long before CPU or bandwidth or other such concerns. That will probably lead me to some sort of single-master, multi-slave setup — but I suppose I just feel like that could be substantially delayed if I were to have my DB on an SSD.


1. You can kill a consumer-grade SSD surprisingly quickly if it's being hammered with writes all day in a server. (Or at least, you can exceed the manufacturer's recommendations, and do you really want to do that in a server?)

2. An SSD should be compared to a RAID of HDDs for a realistic test.

3. The article says that they want to use it to store operational stats and log files - in this case I'd spend time testing PostgreSQL's transaction and sync tunables and probably arrive at some acceptable settings which work fine on a HDD.

With this sort of workload, it is easy to make Postgres combine multiple transactions into fewer I/Os.
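A sketch of the tunables I mean (names from an 8.4-era postgresql.conf; the values are illustrative assumptions, not recommendations, and some settings trade a few seconds of recent transactions on a crash for throughput):

```ini
# postgresql.conf -- sketch for a stats/log workload that tolerates
# losing the last few transactions on a crash
synchronous_commit = off   # transactions return before the WAL hits disk
commit_delay = 1000        # wait up to 1000 microseconds to group commits
commit_siblings = 5        # ...but only if >= 5 other transactions are active
wal_buffers = 8MB          # batch more WAL before writing
```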


The first SSD tested reaches 130MB/s reading and writing. IMHO it is running in SATA-1 mode (jumper? BIOS setting?) and that's what kills its performance. Any SSD should reach a much higher read throughput.


I didn't include this in my post, but here is the deal: the first time I ran the raw read/write test, the results were pretty close to the manufacturer's claim. On the 2nd try (and the next 20 after) I got EXACTLY what is presented in the post.

Weird.


Doing a sequential write to the entire disk probably made the SSD think the disk was full of data. Unless you ran the ATA secure erase command after each benchmark or turned on TRIM, that will hurt performance. http://www.anandtech.com/print/2738 explains why, around page 6.


The other (or 'fast') SSD did not have this issue, though. I re-ran the raw read/write test at least 15 times over two days and got very consistent (and very good) results.


The second SSD probably has better* firmware or flash or both. It's basically maxing out the SATA II bus no matter what you throw at it.

* By "better" I mean it performs better in the benchmarks you used. A firmware that wins benchmarks often doesn't do correspondingly well in the real world. For example, Intel's SSDs are sub-par in sequential read/write benchmarks, but they perform much better in most application benchmarks.


The manufacturer's claim probably only applies to a fresh unused SSD.


Also, all devices that were tested were plugged into the exact same external e-SATA enclosure on the same e-SATA port.


Some SSDs are simply slow. Remember when Apple first introduced SSDs: while they reduced heat/noise and made the machines more shockproof, they were actually slower than HDDs.

I'm going to guess that the second SSD being tested is an OCZ Vertex LE, which is one that I have. It's a ridiculously fast SSD; however, it has a lot of variance based upon the compressibility of the data (as the SandForce controller's advantage largely comes from dynamic streaming compression).
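A toy illustration of why compressibility matters to a compressing controller (zlib here is just a stand-in; SandForce's actual scheme is proprietary): repetitive data collapses to almost nothing, while random bytes barely shrink at all, so the amount of flash actually written varies wildly with the data pattern.

```python
import os
import zlib

# Compare how 1 MiB of zeros vs 1 MiB of random bytes compresses.
# zlib is a stand-in for whatever the controller does internally.
zeros = bytes(1 << 20)        # highly compressible
rand = os.urandom(1 << 20)    # effectively incompressible

print(len(zlib.compress(zeros)))  # collapses to roughly a kilobyte
print(len(zlib.compress(rand)))   # slightly larger than the input
```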


Both SSDs were from Patriot.


What are the implications for the web and Tim Berners-Lee's vision for linked data once the cloud switches to SSDs? Won't sites like reddit see a lot fewer problems since the DB access won't bring the hard drives to their knees?



