It's an interesting project. I wish we had it 5+ years ago when it mattered.
The only place I'd use a spinning disk as my primary drive is on a server, and that's something you rarely boot. If you're not using an SSD, skip a month of Starbucks and buy one. Disk IO is far and away the biggest bottleneck for most users. An SSD will increase your productivity profoundly.
You know... Some people still use spinning metal. Also, my notebook is my main work machine, and it only fits one 2.5" device inside it (though I'm tempted to experiment with running the OS off an SD card, if not for speed (which is unlikely) then for battery life). I don't care if Emacs comes up in 2 seconds instead of four.
Even with SSDs, keeping files in contiguous blocks may get you some extra mileage out of your storage: each file would have a simpler structure, with the whole thing in a single extent instead of a list of blocks (or extents). That said, I would assume this program may even purposely not do that, since it makes sense to interleave blocks of files that are opened simultaneously during boot, in the order the blocks (not the files) are called.
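On Linux you can actually see how many extents back a file, using filefrag from e2fsprogs (a quick sketch; the path is just an example I made up):

```shell
# Create a 20 MB test file; on a mostly-empty ext4 filesystem it will
# usually be allocated contiguously, i.e. as a single extent.
dd if=/dev/zero of=/tmp/extent-demo bs=1M count=20 2>/dev/null
sync

# Report how many extents the filesystem used for the file.
filefrag /tmp/extent-demo
```

A heavily fragmented file reports many extents; after defragmentation it would ideally report just one.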
Switching my laptop disk to SSD was probably the best investment I made (laptop-related, anyway). 64 GB or even 96 GB SSDs are quite cheap. You can buy an external, USB-powered casing for the (presumably) big platter disk and use it like that.
Good idea, though general consensus seems to be to put the SSD in the optical bay, and keep the HDD in the main bay, which (at least in Apple laptops) is the only one that supports the sudden-acceleration drop sensor.
I bought a 64 GB Crucial M4 SSD for a boot drive for my Linux desktop about a year ago. About a month ago, it started giving read/write errors eventually leading to kernel panics. (Didn't lose any data.) I switched back to the hard drive.
I don't see a huge difference in productivity for normal desktop use. An SSD is definitely quieter, and somewhat faster, but caching means most of us are already not hitting the disk that hard. (Obviously if you're doing intensive work with big data that doesn't cache well, that's a different story.)
That's excellent advice. It's pointless to spend money on an expensive disk before you max out your RAM. Unless the OS is really brain-dead, it will serve frequently-read data from cache, and that is still much faster than any SSD can be.
Most of the time, that is. There are situations where you'll have write-heavy workloads, or a dataset larger than your RAM can ever be. But then I'd also assume you are out of the personal computer league anyway.
I would disagree that most people won't benefit from an SSD. If I were building a computer, I think I might set aside money for one before even considering the other components. Thankfully, SSDs are now relatively cheap. It would cost me $200 to get 32GB of RAM. A good 60GB SSD can be had for around $75.
Fast writes do make a difference for normal desktop workloads, and fast reads are noticeable despite OS caching. If nothing else, you always have to read something from disk at least once, and you always have to write your dirty pages to disk eventually. The difference becomes more noticeable as contention for disk access increases. No matter what I'm doing on my computer with an SSD, I never feel like I have to wait. On my other computers, even browsing the web while performing an `apt-get upgrade` can feel unbearably slow.
60 GB is not that much storage space if you consider the amount of music and photos an average user can generate. My photos alone are over 20 GB right now.
Then there is the frequency with which you reboot your computer. It's true every item must be read at least once, but, with enough RAM, it's read only once per boot - if you keep your machine on for a month at a time, reads to /bin will hit the disk only once every month or so. Disk writes can be trickier, since waiting for the physical write may halt the thread for a while, but, unless you specify writes to be synchronous, there is no reason not to trust the OS with the data and let it flush the cache when it's more convenient. And subsequent reads to the data you wrote won't hit the disk again until the memory is needed for something else. Reads from RAM are still orders of magnitude faster than reads from disk.
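The buffered-versus-synchronous distinction is easy to observe with GNU dd (a sketch; the sizes and paths are arbitrary):

```shell
# Buffered write: dd returns as soon as the data is in the page cache,
# so the throughput it reports is essentially memory speed.
dd if=/dev/zero of=/tmp/buffered.bin bs=1M count=100

# Same write, but with an fsync() before exiting (conv=fsync): dd now
# also waits for the drive to acknowledge the data, so the reported
# throughput is the disk's - this is where a slow disk actually hurts.
dd if=/dev/zero of=/tmp/synced.bin bs=1M count=100 conv=fsync

# 'sync' flushes whatever dirty pages remain, as the OS does periodically.
sync
```

On a spinning disk the gap between the two reported speeds can be dramatic; on an SSD it's much smaller, which is exactly the point about synchronous writes above.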
I agree that once you have more RAM than your usual workload and the size of the most frequently used files, adding more memory will have little effect and, when you get there (say 8 GB or so), you are better off spending your money on a good SSD. Given the failure modes I keep reading about, I suggest getting a smaller SSD, large enough to fit your software, and a larger hard disk where you rsync it from time to time.
Actually, putting swap on an SSD == great idea. If you need swap, there aren't any better places to put it. I quote from  (for Windows), because it is a fairly large wall of text and the relevant point might be missed:
_Should the pagefile be placed on SSDs?_
Yes. Most pagefile operations are small random reads or larger sequential writes, both of which are types of operations that SSDs handle well.
In looking at telemetry data from thousands of traces and focusing on pagefile reads and writes, we find that
> Pagefile.sys reads outnumber pagefile.sys writes by about 40 to 1,
> Pagefile.sys read sizes are typically quite small, with 67% less than or equal to 4 KB, and 88% less than 16 KB.
> Pagefile.sys writes are relatively large, with 62% greater than or equal to 128 KB and 45% being exactly 1 MB in size.
In fact, given typical pagefile reference patterns and the favorable performance characteristics SSDs have on those patterns, there are few files better than the pagefile to place on an SSD.
On the other hand, attempting to defragment an SSD is truly a bad idea, and pointless at that. Unless you are defragmenting free space, which does become important as the SSD fills with data.
I wish [I] had it 5+ years ago when it mattered [to me].
Most people don't drop 75+ dollars on Starbucks a month, and rent, food and healthcare expenses are a tad more important than buying faster, more expensive hard drives. Free software optimizations would actually be very much appreciated by a great deal of people with scarce disposable income.
This reminds me of that article in a New York publication a while back written by a woman marveling at how much money she saved by not eating out every meal. Like the advice given wasn't an obvious necessity already for 95+% of people.
There actually were projects which did this years ago. One of them was called fcache (by Jens Axboe), which had the same goal but achieved it in a filesystem-independent fashion, by using an extra partition and placing all data read during boot linearly in that partition.