
It's an interesting project. I wish we had it 5+ years ago when it mattered.

The only place I'd use a spinning disk as my primary drive is on a server and that's something you rarely boot. If you're not using an SSD skip a month of Starbucks and buy one. Disk IO is far and away the biggest bottleneck for most users. An SSD will increase your productivity profoundly.

> I wish we had it 5+ years ago when it mattered.

You know... some people still use spinning metal. Also, my notebook is my main work machine, and it only fits one 2.5" device (though I'm tempted to experiment with running the OS off an SD card, if not for speed (which is unlikely) then for battery life). I don't care if Emacs comes up in two seconds instead of four.

Even with SSDs, keeping files in contiguous blocks may get you some extra mileage out of your storage (I'd assume this program may even purposely avoid that: it would make sense to interleave the blocks of files that are opened simultaneously during boot in the order the blocks, not the files, are read). Each file would have a simpler structure: instead of a list of blocks (or extents), the whole thing would sit in a single extent.
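For the curious, on Linux you can check how many extents a file actually occupies with `filefrag` (part of e2fsprogs). A rough sketch, using a throwaway scratch file:

```shell
# Create an 8 MiB scratch file, then ask the filesystem how many
# extents it landed in. A freshly written file on a non-full disk is
# often a single extent; a long-lived, much-appended file may be many.
dd if=/dev/zero of=/tmp/extent-demo bs=1M count=8 2>/dev/null
filefrag /tmp/extent-demo
```

Each extra extent is one more piece of metadata to chase when the file is read back.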

Switching my laptop disk to an SSD was probably the best investment I made (laptop-related, anyway). 64 GB or even 96 GB SSDs are quite cheap. You can buy an external, USB-powered enclosure for the (presumably) big platter disk and use it like that.

Many recent laptops have mini-PCIe slots. You can put a credit card sized SSD in that slot as your boot disk, and then use the 2.5" drive bay for spinning media with lots of space. Here is an example one from Intel with pictures: http://www.newegg.com/Product/Product.aspx?Item=N82E16820167...

You cannot put an mSATA drive in a PCIe slot without a PCIe-to-SATA controller. You can get these (e.g. http://www.sunmantechnology.com/system-ip/usb-i2c-fml.html), but the fact that they use the same slot does not mean they are compatible.

There are indeed important details. This link shows the compatibility and potential issues if you use a Lenovo laptop: http://forum.notebookreview.com/lenovo-ibm/574993-msata-faq-...

If your laptop has an optical drive which you don't use, you may be able to get the best of both worlds.

Put your HDD in the optical drive bay, and the SSD in the main bay. Adapter frames exist, at least for Apple laptops.

Good idea, though general consensus seems to be to put the SSD in the optical bay, and keep the HDD in the main bay, which (at least in Apple laptops) is the only one that supports the sudden-acceleration drop sensor.

My old MBP had a PATA optical drive bay connector, so I didn't have a choice if I wanted the best performance.

Nope. No optical drive in there. It's a Vostro v131.

It would be lovely to fit both an SSD and an extra battery in all that space.

I bought a 64 GB Crucial M4 SSD for a boot drive for my Linux desktop about a year ago. About a month ago, it started giving read/write errors eventually leading to kernel panics. (Didn't lose any data.) I switched back to the hard drive.

I don't see a huge difference in productivity for normal desktop use. An SSD is definitely quieter, and somewhat faster, but caching means most of us are already not hitting the disk that hard. (Obviously if you're doing intensive work with big data that doesn't cache well, that's a different story.)

For most people, I would say not to get a flash drive until the memory has been maxed out. At that point, a flash drive will probably help more than the price difference on a faster CPU.

That's excellent advice. It's pointless to spend money on an expensive disk before you max out your RAM. Unless the OS is really brain-dead, it will cache reads fairly aggressively, and a cached read is still much faster than any SSD can be.

Most of the time, that is. There are situations where you'll have write-heavy workloads or a dataset larger than your RAM can hold. But then I'd also assume you're out of the personal-computer league anyway.

I would disagree that most people won't benefit from an SSD. If I were building a computer, I think I might set aside money for one before even considering the other components. Thankfully, SSDs are now relatively cheap. It would cost me $200 to get 32GB of RAM. A good 60GB SSD can be had for around $75.

Fast writes do make a difference for normal desktop workloads, and fast reads are noticeable despite OS caching. If nothing else, you always have to read something from disk at least once, and you always have to write your dirty pages to disk eventually. The difference becomes more noticeable as contention for disk access increases. No matter what I'm doing on my computer with an SSD, I never feel like I have to wait. On my other computers, even browsing the web while performing an `apt-get upgrade` can feel unbearably slow.

60 GB is not that much storage space if you consider the amount of music and photos an average user can generate. My photos alone are over 20 GB right now.

Then there is the frequency with which you reboot your computer. It's true every item must be read at least once, but, with enough RAM, it's read only once per boot: if you keep your machine on for a month at a time, reads to /bin will hit the disk only once every month or so. Disk writes can be trickier, since waiting for the physical write may halt the thread for a while, but, unless you specify writes to be synchronous, there is no reason not to trust the OS with the data and let it flush the cache when it's more convenient. And subsequent reads of the data you wrote won't hit the disk again until the memory is needed for something else. Reads from RAM are still orders of magnitude faster than reads from disk.
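You can see the page cache at work from a shell. A small sketch with a scratch file (the path is just for the demo):

```shell
# Write a 64 MiB scratch file, then read it twice. The first read may
# have to touch the disk; the second is served from the page cache in
# RAM and normally returns far faster.
dd if=/dev/zero of=/tmp/cache-demo bs=1M count=64 2>/dev/null
time cat /tmp/cache-demo > /dev/null   # first read: may hit the disk
time cat /tmp/cache-demo > /dev/null   # second read: page cache
```

The gap between the two timings is the "reads hit the disk only once per boot" effect in miniature.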

I agree that once you have more RAM than your usual workload plus the most frequently used files, adding more memory will have little effect, and, when you get there (say 8 GB or so), you are better off spending your money on a good SSD. Given the failure modes I keep reading about, I suggest getting a smaller SSD, large enough to fit your software, and a larger hard disk where you rsync it from time to time.

I did buy one, then it broke.

Once you factor in the re-installs the latency of spinning rust looks a bit better.

All drives can fail. Not including servers I've managed I estimate I've had more than 10 hard drives fail, all of which were well below the manufacturer's stated MTBF.

My first SSD (low to mid range) failed within the first 3-6 months but the replacement has been going for almost 2 years now.

Sure, but there's no doubt that SSDs on average wear out faster, which makes them that much more expensive a choice.

It may be worth it in the long run, but you have to keep that in mind when making the choice.

The replacement is always better... somehow.

Is this similar to how lost things are usually in the last place you look for them? It does make sense that people would continue replacing something until it worked!

Which SSD did you buy?

I'd only trust an Intel at this point, though even they're not much more reliable than a traditional hard drive on average (now that they've shipped a few buggy firmware revisions).

My SSD makes for a nice drink coaster.

Don't buy garbage SSDs, and make sure your system is utilizing them correctly. Putting swap on an SSD = horrible idea.

Actually, putting swap on an SSD == great idea. If you need swap, there aren't any better places to put it. I quote from [0] (for Windows), because it is a fairly large wall of text and the relevant point might be missed:

_Should the pagefile be placed on SSDs?_

Yes. Most pagefile operations are small random reads or larger sequential writes, both of which are types of operations that SSDs handle well. In looking at telemetry data from thousands of traces and focusing on pagefile reads and writes, we find that

> Pagefile.sys reads outnumber pagefile.sys writes by about 40 to 1,

> Pagefile.sys read sizes are typically quite small, with 67% less than or equal to 4 KB, and 88% less than 16 KB.

> Pagefile.sys writes are relatively large, with 62% greater than or equal to 128 KB and 45% being exactly 1 MB in size.

In fact, given typical pagefile reference patterns and the favorable performance characteristics SSDs have on those patterns, there are few files better than the pagefile to place on an SSD.

On the other hand, attempting to defragment an SSD is truly a bad idea, and pointless at that, unless you are defragmenting free space, which does become important as the SSD fills with data.

[0] http://blogs.msdn.com/b/e7/archive/2009/05/05/support-and-q-...
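On the Linux side, putting swap on the SSD can be as simple as a swap file on the SSD-backed root. A sketch, with a throwaway demo path (a real setup would use something like /swapfile and requires root for the final step):

```shell
# Create and format a 64 MiB swap file (mode 600 so only root can
# read it; real swap can contain anything that was in RAM).
dd if=/dev/zero of=/tmp/swapfile-demo bs=1M count=64 2>/dev/null
chmod 600 /tmp/swapfile-demo
mkswap /tmp/swapfile-demo

# swapon /tmp/swapfile-demo   # activating it needs root on a real system
```

Add a matching line to /etc/fstab to make it permanent.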

This deals with performance, but not durability. The system would pump data into swap at a fast rate during a thrashing situation, wearing out the SSD.

It's not a good idea for an MLC drive, but a decent SSD should fail into read-only mode when enough sectors are lost.

It wasn't even the flash that failed; the controller stopped my BIOS from POSTing until the SATA polling timed out.

Yes, it was a cheap SSD, but the difference between the high end and the low end should be performance, not "breaks vs !breaks". When that is true I'll buy another SSD.

That is not to say that SSDs are "bad", they are just not for me, yet. I like my computers to be low maintenance. I stopped fiddling with hardware for performance a few years ago.

Almost all of my spinning disks in the last 20+ years have failed. Some right away, some in a few months, some in a few years, but they all have failed.

I wish [I] had it 5+ years ago when it mattered [to me].

Most people don't drop 75+ dollars on Starbucks a month, and rent, food and healthcare expenses are a tad more important than buying faster, more expensive hard drives. Free software optimizations would actually be very much appreciated by a great deal of people with scarce disposable income.

This reminds me of that article in a New York publication a while back written by a woman marveling at how much money she saved by not eating out every meal. Like the advice given wasn't an obvious necessity already for 95+% of people.

SSDs are still expensive for the masses, so fast boot for Linux is still very much relevant for everyone who can't yet afford one. It's also great for embedded devices running Linux.

There actually were projects that did this years ago. One of them was called fcache (by Jens Axboe), which had the same goal but achieved it in a filesystem-independent fashion by using an extra partition and placing all data read during boot linearly in that partition.

Well, I only boot my home/work computer once a day. During the day I just hibernate/resume, which is quite fast.
