It's an interesting project. I wish we had it 5+ years ago when it mattered.
The only place I'd use a spinning disk as my primary drive is on a server and that's something you rarely boot. If you're not using an SSD skip a month of Starbucks and buy one. Disk IO is far and away the biggest bottleneck for most users. An SSD will increase your productivity profoundly.
You know... some people still use spinning metal. Also, my notebook is also my main work machine and it only fits one 2.5" device inside it (although I'm tempted to experiment with running the OS off an SD card; if not for speed, which is unlikely, then for battery life). I don't care if Emacs comes up in 2 seconds instead of four.
Even with SSDs, keeping files in contiguous blocks may get you some extra mileage out of your storage: each file ends up with a simpler structure - a single extent instead of a list of blocks (or extents). (I would assume this program may even purposely avoid full contiguity per file, since it makes sense to interleave blocks of files that are opened simultaneously during the boot process, in the order the blocks - not the files - are called.)
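The extent-bookkeeping point can be sketched with a toy model (not real filesystem code): a file's list of physical block numbers collapses into (start, length) extents, and a defragmented file needs only one.

```python
def extents(blocks):
    """Collapse a file's list of physical block numbers (in file-offset
    order) into (start, length) extents; adjacent blocks merge."""
    runs = []
    for b in blocks:
        if runs and b == runs[-1][0] + runs[-1][1]:
            runs[-1] = (runs[-1][0], runs[-1][1] + 1)  # extend current run
        else:
            runs.append((b, 1))                        # start a new extent
    return runs

# Fragmented file: three extents to track.
print(extents([10, 11, 50, 51, 52, 90]))  # [(10, 2), (50, 3), (90, 1)]
# Defragmented: the same six blocks, one extent.
print(extents([10, 11, 12, 13, 14, 15]))  # [(10, 6)]
```

On a real ext4 system, `filefrag` reports the actual extent count per file.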
Switching my laptop disk to an SSD was probably the best investment I made (laptop-related, anyway). 64GB or even 96GB SSDs are quite cheap. You can buy an external, USB-powered casing for the (presumably) big platter disk and use it like that.
Good idea, though general consensus seems to be to put the SSD in the optical bay, and keep the HDD in the main bay, which (at least in Apple laptops) is the only one that supports the sudden-acceleration drop sensor.
I bought a 64 GB Crucial M4 SSD for a boot drive for my Linux desktop about a year ago. About a month ago, it started giving read/write errors eventually leading to kernel panics. (Didn't lose any data.) I switched back to the hard drive.
I don't see a huge difference in productivity for normal desktop use. An SSD is definitely quieter, and somewhat faster, but caching means most of us are already not hitting the disk that hard. (Obviously if you're doing intensive work with big data that doesn't cache well, that's a different story.)
That's excellent advice. It's pointless to spend money on an expensive disk before you max out your RAM. Unless the OS is really brain-dead, it will cache reads fairly aggressively, and serving them from RAM is still much faster than any SSD can be.
Most of the time, that is. There are situations when you'll have write-heavy workloads or a dataset that's larger than your RAM can hold. But then I'd also assume you are out of the personal-computer league anyway.
I would disagree that most people won't benefit from an SSD. If I were building a computer, I think I might set aside money for one before even considering the other components. Thankfully, SSDs are now relatively cheap. It would cost me $200 to get 32GB of RAM. A good 60GB SSD can be had for around $75.
Fast writes do make a difference for normal desktop workloads, and fast reads are noticeable despite OS caching. If nothing else, you always have to read something from disk at least once, and you always have to write your dirty pages to disk eventually. The difference becomes more noticeable as contention for disk access increases. No matter what I'm doing on my computer with an SSD, I never feel like I have to wait. On my other computers, even browsing the web while performing an `apt-get upgrade` can feel unbearably slow.
60 GB is not that much storage space if you consider the amount of music and photos an average user can generate. My photos alone are over 20 GB right now.
Then there is the frequency with which you reboot your computer. It's true every item must be read at least once, but, with enough RAM, it's read only once per boot - if you keep your machine on for a month at a time, reads to /bin will hit the disk only once every month or so. Disk writes can be trickier, since waiting for the physical write may halt the thread for a while, but, unless you specify writes to be synchronous, there is no reason not to trust the OS with the data and let it flush the cache when it's more convenient. And subsequent reads to the data you wrote won't hit the disk again until the memory is needed for something else. Reads from RAM are still orders of magnitude faster than reads from disk.
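The read-once-per-boot behavior can be illustrated with a toy page cache (a hypothetical model, not kernel code): only the first access to a block pays for physical I/O.

```python
class ToyPageCache:
    """Minimal read cache: the first read of a block hits 'disk';
    every later read is served from memory."""
    def __init__(self):
        self.cache = {}
        self.disk_reads = 0

    def read(self, block):
        if block not in self.cache:
            self.disk_reads += 1              # slow path: physical I/O
            self.cache[block] = f"data-{block}"
        return self.cache[block]              # fast path: RAM

cache = ToyPageCache()
for _ in range(1000):                         # e.g. /bin/ls run many times per boot
    cache.read("/bin/ls")
print(cache.disk_reads)  # 1 -- only the first access touched the disk
```

In the real kernel the cache is also bounded and pages get evicted under memory pressure, which this sketch deliberately ignores.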
I agree that once you have more RAM than your usual workload and the size of the most frequently used files, adding more memory will have little effect, and, when you get there (say 8 GB or so), you are better off spending your money on a good SSD. Given the failure modes I keep reading about, I suggest getting a smaller SSD, large enough to fit your software, and a larger hard disk where you rsync it from time to time.
Actually, putting swap on an SSD == great idea. If you need swap, there aren't any better places to put it. I quote from the article (for Windows), because it is a fairly large wall of text and the relevant point might be missed:
_Should the pagefile be placed on SSDs?_
Yes. Most pagefile operations are small random reads or larger sequential writes, both of which are types of operations that SSDs handle well.
In looking at telemetry data from thousands of traces and focusing on pagefile reads and writes, we find that
> Pagefile.sys reads outnumber pagefile.sys writes by about 40 to 1,
> Pagefile.sys read sizes are typically quite small, with 67% less than or equal to 4 KB, and 88% less than 16 KB.
> Pagefile.sys writes are relatively large, with 62% greater than or equal to 128 KB and 45% being exactly 1 MB in size.
In fact, given typical pagefile reference patterns and the favorable performance characteristics SSDs have on those patterns, there are few files better than the pagefile to place on an SSD.
On the other hand, attempting to defragment an SSD is truly a bad idea, and pointless at that. Unless you are defragmenting free space, which does become important as the SSD fills with data.
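Free-space defragmentation can be pictured with an allocation bitmap: compaction doesn't change what is stored, it just leaves the free blocks as one contiguous run (a toy model - on real SSDs this kind of consolidation happens inside the FTL's garbage collection).

```python
def compact(bitmap):
    """Move all allocated blocks ('X') to the front, leaving the free
    space ('.') as a single contiguous run at the end."""
    used = bitmap.count("X")
    return "X" * used + "." * (len(bitmap) - used)

# Scattered free space: five one-block holes.
print(compact("X.X..XX.X."))  # XXXXX..... -- same data, one free run
```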
I wish [I] had it 5+ years ago when it mattered [to me].
Most people don't drop 75+ dollars on Starbucks a month, and rent, food and healthcare expenses are a tad more important than buying faster, more expensive hard drives. Free software optimizations would actually be very much appreciated by a great deal of people with scarce disposable income.
This reminds me of that article in a New York publication a while back written by a woman marveling at how much money she saved by not eating out every meal. Like the advice given wasn't an obvious necessity already for 95+% of people.
There actually were projects which did this years ago. One of them was called fcache (by Jens Axboe), which had the same goal but achieved it in a filesystem-independent fashion by using an extra partition and placing all data read during boot linearly in the extra partition.
Ten years ago, BeOS booted in ten seconds without any special tricks. That was on late 90's and early 00's hardware. Certainly it didn't access hundreds of megabytes of libraries and files in order to boot as a modern Linux does, but it was also specifically designed that way even by the contemporary standards.
And don't get me started about the 16-bit era... Machines were slower then but using them was generally faster.
That's why people love tablets and smartphones. You're much faster using them to check your mail or the weather, etc., than using a top-notch PC.
[Now of course that doesn't include booting but when did you reboot your tablet or smartphone the last time?]
It's more of a problem with hardware vendors more interested in coding to "works on Windows" than "complies with spec."
And honestly, startup is as fast as resume these days - the problem is applications that don't remember their last known state and window managers that won't remember the rest. Why don't we fix that instead of chasing down all the suspend bugs?
Restoring an app's last state? Like a terminal emulator recreating a tmux session wrapping a screen session with a package manager working under sudo? SSH un-HUP-ing processes on a remote machine? A Python VM resuming all scripts in the precise place and state they were stopped?
The problem is that there is no such thing as isolated "application" on Linux or any other real OS.
I've had some problems with Intel graphics on my Dell in Fedora 14. Sometimes video, sometimes full-screen Flash, and sometimes sleep caused it to crash. On my older IBM with ATI, everything worked just fine. Anyway, in Fedora 16 the graphics and sleep work just fine for me.
It's not a fair test; haven't tried a recent Ubuntu, but: suspend to ram and suspend to disk were broken on Ubuntu 10.04 and 10.10 with my Eeepc 900. Arch + TuxOnIce has worked great for suspend to disk for about a year now.
I try to avoid using sleep/resume when I'm away from home, because it partially defeats the purpose having full-disk encryption on my laptop. A thief who steals it when it's powered off has no access to my files. On the other hand, a thief who steals it when it's asleep might be able to get around the login once it wakes up.
So yes, it sucks to wait 30-40 seconds for a reboot.
Wouldn't the ideal solution then be to modify the OS to purge the disk encryption keys from memory on sleep? If you're concerned about unencrypted file contents in memory, purge the page/buffer cache while you're at it.
Then ask the user to re-enter the key on resume and get back to business...am I missing some obvious problem here?
I guess depending on one's level of paranoia, there might be sensitive non-file data sitting in memory...you could then quit the applications you're concerned about, and have the kernel wipe any unallocated memory before sleeping (I think by default it doesn't wipe pages until they're reallocated to something else, on Linux at least).
Obviously with flushing caches and quitting applications and so forth you're trading off some of the benefit of keeping the system alive, but presumably it still beats a cold boot every time you come back to your laptop.
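The key-purging idea, sketched in miniature: hold the key in a mutable buffer so it can be zeroed in place before suspend, then re-derive it from the passphrase on resume. This is purely illustrative - a real implementation lives in the kernel/crypto layer (on Linux, cryptsetup's luksSuspend/luksResume do roughly this for LUKS volumes), and the SHA-256 "KDF" here is a stand-in for a real one like PBKDF2.

```python
import hashlib

def derive_key(passphrase: str) -> bytearray:
    # Hypothetical KDF stand-in; real systems use PBKDF2/scrypt with a salt.
    return bytearray(hashlib.sha256(passphrase.encode()).digest())

def on_suspend(key: bytearray) -> None:
    """Zero the key material in place before sleeping."""
    for i in range(len(key)):
        key[i] = 0

def on_resume(passphrase: str) -> bytearray:
    """Prompt the user again and re-derive the key."""
    return derive_key(passphrase)

key = derive_key("hunter2")
on_suspend(key)
print(all(b == 0 for b in key))  # True -- nothing left in RAM to steal
key = on_resume("hunter2")       # user re-enters passphrase on wake
```

Using a `bytearray` rather than `bytes` matters: immutable objects can't be wiped in place, so copies of the key could linger in memory.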
sudo pmset -a destroyfvkeyonstandby 1 hibernatemode 25
From the pmset man page:
destroyfvkeyonstandby - Destroy File Vault Key when going to standby mode. By default File vault keys are retained even when system goes to standby. If the keys are destroyed, user will be prompted to enter the password while coming out of standby mode. (value: 1 - Destroy, 0 - Retain)
hibernatemode = 25 (binary 0001 1001) is only settable via pmset. The
system will store a copy of memory to persistent storage (the disk), and
will remove power to memory. The system will restore from disk image. If
you want "hibernation" - slower sleeps, slower wakes, and better battery
life, you should use this setting.
So, under Lion, turn on FileVault, run that command and always sleep your Mac (close the clamshell, Apple Menu > Sleep, or Option-Command-Eject) when you want to be secure.
If your computer crashes under resume after having done so, something's amiss. Remember that you'll need to auth twice on wake-from-sleep if you are logged in – once to unlock the volume, and again to unlock your user's session.
Since my SSD-equipped, btrfs using laptop (Thinkpad X60s running Debian testing, kernel 3.2.15) boots from power on to graphical login in 27 seconds (13.5s of which is the time taken to get through the BIOS boot sequence) I suspect it's something specific to the parent poster's system.
I have btrfs (with lzo compression) on a rotating disk, and the boot feels a little slower (one or two minutes total?) for reasons I haven't really examined. I'll have to check if something messed with ureadahead.
Yeah but you don't have to just sit there and twiddle your thumbs while your machine boots. You can just do something else. I'm sure we waste far more time during the day doing other things. We don't necessarily obsess over those types of time inefficiencies. If you wanted to save time you could brush your teeth in the shower etc.
As has been mentioned full disk encryption loses a lot of its efficacy if you just put your laptop in standby all the time. I'd add that as long as a faster boot doesn't compromise your system in other ways, why wouldn't you want it?
When hibernating my laptop it writes the memory contents into swap - which is also encrypted. Yes, de-hibernating is slower if contents need to be read from disk. OTOH it's still faster than booting from disk.
Although note, at the part where it says push CTRL+ALT+F1 to get to a new terminal login, that didn't work for me. I had to go to the default one (CTRL+ALT+F7) and type "logout" and then go to CTRL+ALT+F6.
I have a web page I keep track of common things I do to my Ubuntu installs (shameless plug: http://ubuntu.mindseeder.com) and I'm definitely going to add this so I don't forget!
The easiest way I've found to increase boot speed and application load times, even under heavy system load, is to use the pf kernel patchset. With a properly configured kernel using BFS, BFQ, and LZMA compression, my system is amazingly fast. Even when compiling, with both cores on my laptop at 100%, my system is fully usable. If you have a MacBook Pro, my kernel configs are here: https://github.com/meinhimmel/kernel-configs
AFAIK, ureadahead just keeps track of which files/blocks are touched during bootup/startx and pre-reads them into a cache at the very beginning of the boot sequence so you will effectively boot from cache.
This still requires seeking to several semi-random areas of the disk while prereading, which e4rat fixes by physically moving the needed blocks adjacent to each other.
Since ureadahead looks at blocks and not files, it has a potential speed advantage if the boot process is opening some large files but not reading them in full.
Edit: except the ureadahead packfile only points to the blocks and files, it does not provide a way to inline them. So e4rat is almost certainly faster. It's a shame it is ext4 specific.
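The difference can be sketched with a toy seek counter: prereading in recorded order still jumps between scattered physical positions, while relocating those blocks adjacently in access order (the e4rat approach) turns the whole pass into one sequential sweep. Block names and positions below are made up for illustration.

```python
def seeks(layout, access_order):
    """Count non-sequential jumps when reading blocks in access_order,
    given a dict mapping block name -> physical position on disk."""
    jumps = 0
    prev = None
    for name in access_order:
        pos = layout[name]
        if prev is not None and pos != prev + 1:
            jumps += 1                       # head has to move: a seek
        prev = pos
    return jumps

order = ["init", "libc", "sshd", "login"]          # boot-time access order
scattered = {"init": 100, "libc": 9000, "sshd": 42, "login": 7000}
relocated = {name: i for i, name in enumerate(order)}  # e4rat-style layout

print(seeks(scattered, order))  # 3 -- readahead alone still pays these
print(seeks(relocated, order))  # 0 -- one sequential sweep
```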
Apparently Scott James Remnant, the ureadahead developer, considered feeding the collected info to a defragmenter: http://ubuntuforums.org/showthread.php?t=1434502 ; this would be nice, as it means a single package is responsible for the feature, and filesystems perform to the best of their ability whether or not they have ext4-like fine-grained control of defragmentation.
For some reason, I had the idea that ureadahead's pack files actually contained the contents of the blocks that needed to be read during boot, turning readahead into a sequential operation. After reading the manpage today, I see that I was mistaken.
So when will we see this in our smartphones? I don't understand and, frankly, find it ridiculous that I have to wait about a minute or up to 90 seconds for my smartphone to boot, a device that is completely flash-memory based and that has no variability in hardware. I still remember the article about that industrial Linux PC that booted in less than one second, including initializing video4linux and two cameras etc., simply by optimizing kernel parameters, boot order, and driver timeouts. That must have been 2005 or so. Now it's 2012 and my phone takes longer to boot than my laptop (late 2010 MBP w/ SSD).
By placing the files together you may get some extra performance even on an SSD, but it depends on how the drive is internally organized - you may see locality effects if the drive prefetches more than the block you asked for into a cache that is faster than the flash memory itself.
With an SSD, the bottleneck for booting is usually hardware detection and initialization, not reading data off the disk. My system takes about 4-5 seconds from the time the bootloader hands off control to the kernel to the time the kernel starts executing the initrd, and another 6-7 seconds to mount the SSD and hard drive, establish the network connection, start system services, and present a login prompt (though starting X and changing resolutions takes another second or two on top of that). I probably can't make that more than 20% faster without getting a faster DHCP server or tweaking various delays and timeouts that exist for good reasons.
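For anyone wanting to see where their own boot time goes, per-phase durations can be recovered from cumulative timestamps, like the bracketed seconds in dmesg output. The sample lines below are invented to mirror the numbers in this comment; on systemd machines, `systemd-analyze blame` does this kind of accounting for services automatically.

```python
import re

# Hypothetical dmesg-style log with cumulative seconds since kernel start.
dmesg = """\
[    0.000000] kernel start
[    4.500000] initrd handed off
[   11.200000] mounts, network, services up
[   12.900000] login prompt
"""

events = re.findall(r"\[\s*([\d.]+)\]\s*(.+)", dmesg)
prev = 0.0
for stamp, label in events:
    t = float(stamp)
    print(f"{t - prev:5.1f}s  {label}")   # time spent in each phase
    prev = t
```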
I saw a demo of this at Intel Labs on a Windows machine about 15 years ago and it was very impressive. I don't think they optimized startup, but application launch was incredibly fast with their disk layout optimization.
Isn't this essentially what DiskKeeper does on Windows?
I don't know about DiskKeeper, but I believe it's part of the "Lenovo Enhanced Experience" on ThinkPads. Amazingly, my factory install with crapware booted from the BIOS screen to the Windows desktop in just over 7 seconds. I haven't been able to get my clean install below 10 seconds, I think because I'm missing the filesystem tweaking.
The OSX feature apparently notices files which are slowly appended to (downloads) and defragments them. It does not reallocate a set of files (such as the ones touched at boot) to be adjacent on disk, which is what e4rat does.