The only place I'd use a spinning disk as my primary drive is in a server, and that's something you rarely boot. If you're not using an SSD, skip a month of Starbucks and buy one. Disk IO is far and away the biggest bottleneck for most users. An SSD will increase your productivity profoundly.
You know... some people still use spinning metal. Also, my notebook is also my main work machine, and it only fits one 2.5" device inside (though I'm tempted to experiment with running the OS off an SD card, if not for speed, which is unlikely, then for battery life). I don't care if Emacs comes up in two seconds instead of four.
Even with SSDs, keeping files in contiguous blocks may get you some extra mileage out of your storage: each file ends up with a simpler structure, a single extent instead of a list of blocks (or extents). (I'd assume this program may even purposely avoid full contiguity, since it would make sense to interleave the blocks of files opened simultaneously during boot in the order the blocks, not the files, are called.)
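If you're curious how fragmented a given file actually is, filefrag from e2fsprogs will tell you; a quick sketch (the path is just an example):

    # Report how many extents the file occupies (1 extent = fully contiguous)
    sudo filefrag /usr/bin/emacs

    # -v additionally prints each extent's logical and physical offsets
    sudo filefrag -v /usr/bin/emacs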
Put your HDD in the optical drive bay, and the SSD in the main bay. Adapter frames exist, at least for Apple laptops.
It would be lovely to fit both an SSD and an extra battery in all that space.
I don't see a huge difference in productivity for normal desktop use. An SSD is definitely quieter, and somewhat faster, but caching means most of us are already not hitting the disk that hard. (Obviously if you're doing intensive work with big data that doesn't cache well, that's a different story.)
Most of the time, that is. There are situations where you'll have write-heavy workloads or a dataset larger than your RAM can ever hold. But then I'd also assume you're out of the personal-computer league anyway.
Fast writes do make a difference for normal desktop workloads, and fast reads are noticeable despite OS caching. If nothing else, you always have to read something from disk at least once, and you always have to write your dirty pages to disk eventually. The difference becomes more noticeable as contention for disk access increases. No matter what I'm doing on my computer with an SSD, I never feel like I have to wait. On my other computers, even browsing the web while performing an `apt-get upgrade` can feel unbearably slow.
Then there is the frequency with which you reboot your computer. It's true every item must be read at least once, but, with enough RAM, it's read only once per boot - if you keep your machine on for a month at a time, reads to /bin will hit the disk only once every month or so. Disk writes can be trickier, since waiting for the physical write may halt the thread for a while, but, unless you specify writes to be synchronous, there is no reason not to trust the OS with the data and let it flush the cache when it's more convenient. And subsequent reads to the data you wrote won't hit the disk again until the memory is needed for something else. Reads from RAM are still orders of magnitude faster than reads from disk.
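You can see the cache effect directly; a quick experiment, assuming Linux and some big file lying around (the path is hypothetical):

    # Flush dirty pages and drop the page cache so the next read hits the disk
    sync && echo 3 | sudo tee /proc/sys/vm/drop_caches

    # Cold read: limited by the disk
    time cat /var/tmp/bigfile > /dev/null

    # Warm read: served from RAM, orders of magnitude faster
    time cat /var/tmp/bigfile > /dev/null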
I agree that once you have more RAM than your usual workload plus the most frequently used files, adding more memory will have little effect, and, when you get there (say 8 GB or so), you are better off spending your money on a good SSD. Given the failure modes I keep reading about, I suggest getting a smaller SSD, large enough to fit your software, and a larger hard disk where you rsync it from time to time.
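The rsync part is a one-liner; a minimal sketch, assuming the hard disk is mounted at /mnt/backup (the mount point is hypothetical):

    # Mirror the SSD onto the hard disk: -a preserves metadata, -x stays on
    # one filesystem, --delete drops files that no longer exist on the source
    sudo rsync -ax --delete / /mnt/backup/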
Once you factor in the re-installs, the latency of spinning rust looks a bit better.
My first SSD (low to mid range) failed within the first 3-6 months but the replacement has been going for almost 2 years now.
It may be worth it in the long run, but you have to keep that in mind when making the choice.
I'd only trust an Intel at this point, though even they're not much more reliable than a traditional hard drive on average (now that they've shipped a few buggy firmware revisions).
_Should the pagefile be placed on SSDs?_
Yes. Most pagefile operations are small random reads or larger sequential writes, both of which are types of operations that SSDs handle well.
In looking at telemetry data from thousands of traces and focusing on pagefile reads and writes, we find that:

> Pagefile.sys reads outnumber pagefile.sys writes by about 40 to 1.
> Pagefile.sys read sizes are typically quite small, with 67% less than or equal to 4 KB and 88% less than 16 KB.
> Pagefile.sys writes are relatively large, with 62% greater than or equal to 128 KB and 45% being exactly 1 MB in size.
In fact, given typical pagefile reference patterns and the favorable performance characteristics SSDs have on those patterns, there are few files better than the pagefile to place on an SSD.
On the other hand, attempting to defragment an SSD is truly a bad idea: it's pointless (there are no seek times to reduce) and it burns write cycles for nothing. The exception is defragmenting free space, which does become important as the SSD fills with data.
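On Linux, the closest everyday equivalent to keeping an SSD's free space usable is TRIM rather than any defragmenter; a minimal sketch, assuming the filesystem and drive support it:

    # Tell the SSD which blocks the filesystem no longer uses, so its
    # garbage collector can reclaim them (-v reports how much was trimmed)
    sudo fstrim -v /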
Yes, it was a cheap SSD, but the difference between the high end and the low end should be performance, not "breaks vs. !breaks". When that's true, I'll buy another SSD.
That is not to say that SSDs are "bad"; they are just not for me, yet. I like my computers to be low-maintenance. I stopped fiddling with hardware for performance a few years ago.
Most people don't drop 75+ dollars on Starbucks a month, and rent, food and healthcare expenses are a tad more important than buying faster, more expensive hard drives. Free software optimizations would actually be very much appreciated by a great deal of people with scarce disposable income.
This reminds me of that article in a New York publication a while back written by a woman marveling at how much money she saved by not eating out every meal. Like the advice given wasn't an obvious necessity already for 95+% of people.
And don't get me started about the 16-bit era... Machines were slower then but using them was generally faster.
And that was on a 1 MHz 8-bit processor.
There were people who could toggle a bootstrap loader into an Altair in less time than a modern x86 PC takes to boot.
Does super fast booting matter much these days? It's just about ceased to matter to me completely.
So yes, it sucks to wait 30-40 seconds for a reboot.
Then ask the user to re-enter the key on resume and get back to business...am I missing some obvious problem here?
I guess depending on one's level of paranoia, there might be sensitive non-file data sitting in memory...you could then quit the applications you're concerned about, and have the kernel wipe any unallocated memory before sleeping (I think by default it doesn't wipe pages until they're reallocated to something else, on Linux at least).
Obviously with flushing caches and quitting applications and so forth you're trading off some of the benefit of keeping the system alive, but presumably it still beats a cold boot every time you come back to your laptop.
Unfortunately, Lion/FileVault 2 no longer supports it, and if you try to force the options, the computer simply crashes on resume.
    sudo pmset -a destroyfvkeyonstandby 1 hibernatemode 25

From the pmset man page:

> destroyfvkeyonstandby - Destroy File Vault Key when going to standby mode. By default File Vault keys are retained even when system goes to standby. If the keys are destroyed, user will be prompted to enter the password while coming out of standby mode. (value: 1 - Destroy, 0 - Retain)

> hibernatemode = 25 (binary 0001 1001) is only settable via pmset. The system will store a copy of memory to persistent storage (the disk), and will remove power to memory. The system will restore from disk image. If you want "hibernation" - slower sleeps, slower wakes, and better battery life - you should use this setting.
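To double-check that the settings stuck:

    # Print the active power management settings; hibernatemode (and, on
    # versions that support it, DestroyFVKeyOnStandby) should be listed
    pmset -g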
If your computer crashes under resume after having done so, something's amiss. Remember that you'll need to auth twice on wake-from-sleep if you are logged in – once to unlock the volume, and again to unlock your user's session.
Hibernate takes a full 48 seconds on my laptop. A clean boot takes 8-9 seconds.
If I'm sitting on the couch, the instant-on nature of my tablet is the main reason I'll reach for it rather than my laptop to look something up.
That kind of math just doesn't work.
Resume from hibernate is another area where I'd like to see improvements. I hibernate instead of suspend because I never know if I'll make it back to an outlet in time.
And honestly, startup is as fast as resume these days; the problem is applications that don't remember their last known state and window managers that won't remember the rest. Why don't we fix that instead of chasing down all the suspend bugs?
Why chase down suspend bugs? Because they are bugs.
I'm with you on fixing window managers, generally speaking.
The problem is that there is no such thing as an isolated "application" on Linux or any other real OS.
When my laptop hibernates, it writes the memory contents into swap, which is also encrypted. Yes, de-hibernating is slower if the contents need to be read from disk. OTOH, it's still faster than booting from disk.
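For reference, hibernating to encrypted swap usually boils down to three pieces of configuration; a sketch with hypothetical device names (details vary by distro):

    # /etc/crypttab - unlock the swap partition as a LUKS volume at boot
    # (a passphrase-based key; a random key would break resume)
    cryptswap  /dev/sda2  none  luks

    # /etc/fstab - use the unlocked mapping as swap
    /dev/mapper/cryptswap  none  swap  sw  0  0

    # kernel command line - where to look for the hibernation image
    resume=/dev/mapper/cryptswap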
The joys of student life!
I followed this guide: http://www.howtogeek.com/69753/how-to-cut-your-linux-pcs-boo...
One note, though: at the part where it says to push CTRL+ALT+F1 to get to a new terminal login, that didn't work for me. I had to go to the default one (CTRL+ALT+F7), type "logout", and then go to CTRL+ALT+F6.
I have a web page where I keep track of common things I do to my Ubuntu installs (shameless plug: http://ubuntu.mindseeder.com), and I'm definitely going to add this so I don't forget!
Edit: Actually, the directions say to remove Ureadahead when installing e4rat, so maybe they are quite similar.
This still requires seeking to several semi-random areas of the disk while prereading, which e4rat fixes by physically moving the needed blocks adjacent to each other.
Edit: except the ureadahead packfile only points to the blocks and files; it does not provide a way to inline them. So e4rat is almost certainly faster. It's a shame it is ext4-specific.
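For anyone curious, e4rat's workflow is collect, reallocate, preload; a rough sketch based on its docs (the log path is the default, yours may differ):

    # 1. Boot once with the collector as init (on the kernel command line)
    #    to record which blocks get read during startup
    init=/sbin/e4rat-collect

    # 2. Physically relocate those files into contiguous, boot-ordered extents
    sudo e4rat-realloc /var/lib/e4rat/startup.log

    # 3. Boot with the preloader as init so everything is read ahead in one pass
    init=/sbin/e4rat-preload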
Apparently Scott James Remnant, the ureadahead developer, considered feeding the collected info to a defragmenter: http://ubuntuforums.org/showthread.php?t=1434502 ; this would be nice, as it means a single package is responsible for the feature, and filesystems perform to the best of their ability whether or not they have ext4-like fine-grained control of defragmentation.
You can see the random vs. sequential difference in any SSD review, e.g. http://thessdreview.com/our-reviews/ocz-vertex-3-240gb-max-i... shows 18 MB/s in random reads vs. 500 MB/s in sequential reads.
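If you want to reproduce that kind of number on your own drive, fio makes it easy; a quick sketch (the file name and sizes are arbitrary):

    # 4 KB random reads - the hard case for spinning disks
    fio --name=randread --rw=randread --bs=4k --size=1G \
        --filename=testfile --direct=1

    # 1 MB sequential reads - the easy case
    fio --name=seqread --rw=read --bs=1M --size=1G \
        --filename=testfile --direct=1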
Isn't this essentially what Diskeeper does on Windows?