Defrag Like It's 1993 (shiplift.dev)
362 points by centerorbit on Dec 16, 2021 | 268 comments


I remember the sense of deep satisfaction and virtue when I defragged my hard drive. Then I learned FreeBSD. I kept trying to figure out how to defrag the partitions. When I learned that the filesystem took care of it continuously my mind was blown.


That would have been a strong reason for me not to use FreeBSD. It's like taking someone who says, "I have a stressful job. The only time I can really relax and feel worry-free is when I'm cooking."

And then hiring a personal chef for that person. Sure, the food will taste better, but at what cost? At what cost?

(Actually, watching defrag stopped being satisfying around the time I got my first 50GB HDD. It took too damn long to watch, like a 2-year-old playing Tetris in slow motion.)


> That would have been a strong reason for me not to use FreeBSD.

Well, with this new website, you can now enjoy defragging without having to suffer the consequences of a bad file system.


Yeah, but it's like a gambling addiction. If there are no stakes, no risk of loss? ... ... It takes the heart out of it. It's just not the same.

Maybe I should just defrag my SSDs these days. Sure, they don't need it, but it's fast and bad for the drive, which gives it that same thrill of gambling with the integrity of my data. Of course I'd need to disable cloud backups to Backblaze and local backups to a RAID array, but if I try, I think I can still chase the thrill of the 'frag.


SSDs don't need to get defragged, but they do need occasional trimming, and that can be done either continuously or batched. Sadly, there's no fancy graphical application to show you what was trimmed in the batched version.

If you use a filesystem with checksums, you can also scrub it. The stakes are there: is my data bit rotted? Will the errors resist recovery? It also takes about the same time as the defrag, but again I don't know of any graphical tools.


Make a 'defrag monkey' script. Take a random file, read it into the RAM, delete the source file, write it back.


I think that I'm happy with:

    xfs_fsr -v


With Linux and BSD I stopped gaming on AAA titles and I went full indie long ago.

Even "worse", now I am translating some English text adventures into Spanish.

And playing them too, OFC.

Today I have more fun doing actual projects (programming, writing, playing with telecom stuff) than badly simulating them in video games.

With DOS/Windows you simulated a life in games. With Linux, you made it real. Seriously. Slackware is still a "game changer". You don't have to play Cyberpunk 2077 if you don't want to; you can make it real life. I do that with the PocketCHIP, Gopher and text gaming/roguelikes/MUDs up on a mountain at night. Incredible experience.

Or better, listening to the ISS with a WebSDR and decoding SSTV images with QSSTV. That's the ultimate cyberpunk experience.


On FreeBSD you can build/update ports instead of defragmenting your disk.


And then spend hours fixing the system, which I guess is like if the defragmenter corrupted your files


Never had that problem... not for years, at least.


gtk3-webkit got updated again? There goes my day.


No ccache?


haha...true, time to fire up all the distcc machines ;)


I've noticed that myself, that capacity increased much faster than throughput. It would be interesting to see this graphed over time.


The number of tracks per platter increased, but the RPM didn't. Assuming the heads can only read one track at a time per platter, it will take (number of tracks per platter / rpm) minutes to read the whole disk. Therefore it will take longer to read the whole disk. QED


Drives don't read from multiple platters in parallel, so it's actually num_tracks * num_platters * 2 / rpm to read in the best case. Real world is more than that since it takes time for the drive firmware to shift between different platters and tracks.

*edit: not entirely true: there's the Seagate MACH.2 drive series that has 2 separate head assemblies per drive. They're pretty expensive and hard to get ahold of.
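Back-of-the-envelope, with made-up but plausible numbers (these are assumptions, not specs of any real drive), the full-disk read time from that formula looks like this:

  # Rough full-disk sequential read time per the formula above:
  # one revolution per track, two surfaces per platter, ignoring
  # head-switch and seek overhead. All numbers are illustrative.
  def full_read_minutes(tracks_per_surface, platters, rpm):
      return tracks_per_surface * platters * 2 / rpm

  print(full_read_minutes(30_000, 3, 7200))    # ~50 GB era drive: ~25 min
  print(full_read_minutes(500_000, 9, 7200))   # modern high-capacity drive: ~1250 min (~21 h)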


> defrag stopped being satisfying around the time that I got my first 50GB HDD

In the rare circumstances when I was in need of an actual defrag of a heavily polluted drive > 40GB, I opted to take a filesystem-aware image with Norton/Symantec Ghost and just re-image it back. Worked fine, and by the time of 100-200GB drives there were USB HDDs, so I wouldn't even need to unscrew the PC case.


You could always make a ramdisk and defrag that.


Boy, I remember moving to XP and the visible disappointment after finding out it didn't have defrag like 98SE did.


My mid-90s was FreeBSD. Two floppies and a 14.4k modem for the install. A 486 DX2/66 with 20 megs of RAM, and the thing cooked. I used that server up until 2002 as a proxy server (this was before the age of updating software for security). I finally didn't need it; it simply never broke.


For me it was the late 90s. A friend of the family bought for me Walnut Creek's FreeBSD CD set, with all the stickers and other goodies. They sent it all the way to my godforsaken small town in Mexico. As a 12 year old it was soooo cool. I remember spending hours installing it (including configuring and compiling the kernel in an ncurses-based interface). I printed the whole FreeBSD manual and my mom bound it for me.

Great times!


The printed FreeBSD Handbook with the 2.5 CD-ROM was amazing at the time. An open source OS with a book that was better than some of the commercial OS offerings. We set up the college's first mail server with that combo.


Would you have been able to get FreeBSD otherwise? Like, mid 90s I remember downloading Slackware from a BBS. Floppy after floppy. Was that available in your area at the time?


I don't think so. BBSs were not really a thing in Mexico... particularly in a city as small as mine. Maybe in Mexico City. But long distance calls were super expensive (mom used to talk to my grandma, living in another city, only once a week for 30 mins).

I think I might have been able to get something at the local public university computer centre. It was the beginning of the internet (14.4k modems to connect to the university, which was the only internet provider).

Great fun times!


Remember those times.

Some PCs just last. A 2.4 GHz Core2 Quad I had, I think from 2008, is still going strong and can run most modern software just fine.

It's amazing how little progress there has been in the last 13 years. In the nineties you had to upgrade every two years or so just to be able to run current software.

I think a 2021 computer has a good chance of still being completely usable in 2040.


That is probably the Q6600; I had it as well. Very good CPU at the time, and I think I had it for 6-7 years as my main computer. The next time I got a (quite expensive) 6-core CPU. And the difference was mindblowing. Not because of the extra cores, but the responsiveness and single-core performance were out of this world.

I'm currently building my next computer, and the difference again is just staggering. And that is not even factoring in that the next one has got 12-cores.

It is not that progress hasn't been slow (I guess it has, if you compare it to the pace of the past), but I think it has more to do with computers back then still being very capable.

You can still fire up some CAD software and do proper work on the Q6600. I mean, right until you want to listen to music or do some very light web browsing. Then it will suddenly become quite painful. The amount of waste today is incomprehensible. No, seriously, properly incomprehensible.


> You can still fire up some CAD software and do proper work on the Q6600. I mean, right until you want to listen to music or do some very light web browsing. Then it will suddenly become quite painful. The amount of waste today is incomprehensible. No, seriously, properly incomprehensible.

I recently set up my old Q6600 as a station for CNC machinery, and that thing flies with LinuxCNC on a small SSD. There's no issue running Firefox and a music player at the same time (though I tend not to have other things running when milling something, more out of cargo-cult than actual tests of the impact on the real-time behaviour needed for the CNC machine control). Some operations in Inkscape are of course slower than on more recent hardware, but otherwise it's still very usable.


Obviously that was some hyperbole, but still: running Spotify on a 12-core CPU with 64 GB RAM, a top-of-the-line SSD, and an internet connection that is faster in both access time and bandwidth than a hard drive from 14 years ago is still slower than Winamp on a machine from 2008.

And of course you can surf the web. But even with a 5GHz turbo it is still slower than a fast connection was in the late 90s (on 90s hardware and browsers). And it's not that the hardware hasn't advanced in that time (we are talking orders of magnitude).


Spotify’s web app is horrible for performance. Even when idle, it keeps several cores fully occupied. They do not care to fix it either, as it has been this way for as long as I have used them. I even reported the issue multiple times and got nothing but a canned response. They just do not care.

Absolute garbage, and entirely representative of all other consumer class software.


Yeah, I'm also using a recent 8-core Ryzen and the difference is definitely obvious for more demanding tasks. Been considering Threadripper, but this is plenty fast.

I do think I could still manage on a Q6600 with 16 GB RAM and an SSD. Might need to downscale VMs and browser tabs, but manageable in a pinch. Heck, in the early nineties I was happy to have an Amiga with 1 MB of RAM, and that thing flew in comparison to the C64. :-)


I'm still using an overclocked i5-2500K circa 2011 in my desktop. It does everything fast enough for me (even modern games). I haven't felt the need to upgrade.


I was planning on upgrading my 3570K five years ago. I'm still running it.


Intel Q6600 I presume? Great CPU. The PC I built in 2008 has the same. I recently retrieved it from my parents’ attic and updated it to Windows 10 - still works like a charm.


I've had a Core2 Duo and 8 gigs of RAM since 2009. I added an SSD years ago and a better GPU, and the thing remains just as fast as any other laptop (I run Linux, so I guess it's lighter on resources than Windows, but I'm not sure). I use that computer 10 hours a day for all my stuff (no gaming though). I just hope the PSU won't fry everything when it dies.


Joke's on you. I'm currently using a Q9400 as my daily driver :-) What you say is totally right, and the only thing that feels slow is 1080p 60 fps video decoding in web browsers, because it has to do it in software AFAIK.


My main PC is now 11-12 years old. It's an HP Z600 upgraded to the max: I bought a second CPU for $15, an additional 24GB of RAM (now 48GB) for like $100, and a new GPU for $350 some time ago, and it still runs like a champ.


Also got an HP Z-something (can't remember) PC, a bit later, with a Xeon CPU (or two?) and 24 GB RAM. Otherwise great, but it's very power hungry and heats up a room in a hurry.


I always liked to game and even with running everything on Medium or Low I needed to upgrade my PC every 3-5 years. But now, my CPU is from 2012 and my video card is from 2017, and I'm fine.


I had a FreeBSD do-it-all machine from 1998 to 2006 on some Pentium-ish thing with an AT motherboard. I moved a few thousand miles and it didn't get unboxed until 2020, when I needed an AT PSU for something unrelated. Prior to cannibalizing it, I powered it up long enough to check that it would still send email and ran a brief X session to hit YTMND in a 15-year-old Firefox.


And my disappointment in btrfs when it reminded me of how long it took to defrag a hard drive was limitless. 100 out of 500 GB used, free disk space 0, had to spend an afternoon running rebalance with various parameters to get to a point where it could run a full rebalance without running out of memory. Of course our IT seems to love it since it comes with all possible bells and whistles so I already dread the next time it happens.


Is this an anecdote from some years ago, or recent? Are they running a really old kernel? btrfs had these sorts of problems, but some major fixes in this regard were committed like two years ago.


The issue was earlier this year, but the systems I saw it on were probably both LTS with an older kernel. Good to know that it was fixed.


No, probably not an old kernel... but like everyone, I use btrfs, and that never happened to me.


This looks like Norton Utilities' Speed Disk. My 8-year-old self really thought that rearranging blocks to be sequential made the disk faster.

In reality though, the increase in performance was barely noticeable.


Aren’t you selling it short? Spinny disks, especially in the days before NCQ/TCQ, have a vast imbalance between reading sectors sequentially and positioning the head to a new track. NCQ/TCQ later helped a bit by optimizing the path the head takes (With Physics!) when multiple operations are in the queue, but a completely fragmented disk drive seems like it will underperform noticeably against one where everything is laid out neatly in a linear fashion, at least in some access patterns (reading large files in one go).

But I’d be interested in any real data anyway.
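Not real data, but a toy model with assumed numbers (roughly 14 ms per head reposition, 5 MB/s sustained transfer, 32 KB clusters on an old IDE drive) gives a feel for the imbalance:

  # Toy model: time to read one 10 MB file, contiguous vs. fully fragmented.
  # Figures below are assumptions, not measurements.
  SEEK_MS = 14.0          # average seek + rotational latency
  THROUGHPUT_MB_S = 5.0   # sustained transfer rate
  FILE_MB = 10.0

  def read_time_ms(fragments):
      # one head reposition per fragment, plus streaming time for the data
      return fragments * SEEK_MS + (FILE_MB / THROUGHPUT_MB_S) * 1000

  print(read_time_ms(1))    # contiguous: ~2014 ms
  print(read_time_ms(320))  # shattered into 32 KB pieces: ~6480 ms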


I can't conduct the experiment anymore obviously, but to qualify my perceptions -- if you try to do a linear read from a 90% fragmented filesystem, performance would definitely suck and defragging would obviously provide perceptible benefit.

Most of the time however, my disk was probably only about 30% fragmented (if I recall correctly from Speed Disk). I was running DOS on a FAT file system where there was no swap file or anything that would cause massive fragmentation except deleting and installing programs, and maybe some temp files created by certain programs. The performance delta for me was barely noticeable on a 40MB Seagate IDE drive.


> I can't conduct the experiment anymore obviously

Why not?

  $ virt-make-fs --type=msdos /usr/share/doc /var/tmp/disk.img
  $ export file=/var/tmp/disk.img
  $ nbdkit eval \
       after_fork=' echo 0 > $tmpdir/read ' \
       get_size=' stat -Lc %s $file ' \
       pread='
           diff=$(($4 - `cat $tmpdir/read`))
           echo $4 > $tmpdir/read
           if [ $diff -lt -10000 ] || [ $diff -gt 10000 ]; then
               # simulate a seek
               sleep 0.1
           fi
           dd if=$file skip=$4 count=$3 iflag=count_bytes,skip_bytes
       '
  $ sudo nbd-client localhost /dev/nbd0
  $ sudo mount /dev/nbd0 /tmp/mnt
The tricky thing is probably making a deliberately fragmented disk image for testing using modern tools.
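One way to induce it, assuming the image ends up mounted read-write at /tmp/mnt as above: fill the volume, free every other file, then write one large file into the holes. A rough sketch; whether the allocator actually scatters the final file this way depends on the filesystem driver, but FAT's simple allocation makes it a good candidate.

  # Sketch: fill the volume with small files, free every other one, then
  # write one large file into the resulting holes so it ends up fragmented.
  import os

  MNT = "/tmp/mnt"   # assumed mountpoint of the test image

  i = 0
  try:
      while True:   # 1. fill the disk with 64 KB files
          with open(os.path.join(MNT, f"pad{i:05d}.bin"), "wb") as f:
              f.write(os.urandom(64 * 1024))
          i += 1
  except OSError:   # out of space
      pass

  for j in range(0, i, 2):   # 2. punch 64 KB holes everywhere
      os.remove(os.path.join(MNT, f"pad{j:05d}.bin"))

  with open(os.path.join(MNT, "victim.bin"), "wb") as f:
      for _ in range(i // 4):   # 3. this file has to be scattered across the holes
          f.write(os.urandom(64 * 1024))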


In Vista, defragging was one of my go-tos for getting the thing to act nicely.

I would use Process Monitor from Sysinternals to measure which files are loaded as the system boots. Then use JkDefrag/MyDefrag to take said list and rebuild the drive into different zones. Try to put small files that are used together in the same area of the disk. Put huge things like ISOs/VHDs/archive zips etc. near the end of the drive. Put small frequently used files near the front, in usage order if possible. Smash out any gaps if possible; this keeps NTFS from creating more fragments. Defrag any files that 7zip unarchives immediately, as it has a bad habit of creating highly fragmented files (Contig from Sysinternals). Try to put the file index in its own 'area', as one reason it runs so badly is the drive fragmentation it causes to itself. It got so bad I would put the index on its own partition sometimes. Try to put files that fragment frequently in their own 'zone', as NTFS will tend to pick the next available gap near where the file is (not always).

That made a real noticeable difference in perceived speed.

These days with NVMe and SSDs I might just defrag the files themselves once a year if I feel like it. There is a difference, but the perceived difference is pretty much not there anymore. The difference is that there is a chain of sectors in the MFT that Windows will have to traverse if you ask for something in the middle of a file. That is about the only thing you save with newer drives. ext4 has a similar issue, but it seems to be better about picking spots where it will not fragment. It will however fragment badly in low-space conditions. Most filesystems will.


With FAT, you are constantly jumping between the table at the beginning of the disk to get the next cluster number and the data, unless disk caching pulled it in RAM.
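A minimal sketch of that chain walk for FAT16 (simplified: 2-byte entries, no caching, only the end-of-chain marker handled):

  # Simplified FAT16 chain walk: every cluster costs a hop to the table at the
  # front of the volume, then a hop back to the data area for the payload.
  import struct

  END_OF_CHAIN = 0xFFF8   # FAT16 entries >= 0xFFF8 terminate the chain

  def read_chain(img, fat_offset, data_offset, cluster_size, first_cluster):
      data = bytearray()
      cluster = first_cluster
      while cluster < END_OF_CHAIN:
          img.seek(fat_offset + cluster * 2)                    # jump to the FAT...
          next_cluster = struct.unpack("<H", img.read(2))[0]
          img.seek(data_offset + (cluster - 2) * cluster_size)  # ...then to the data
          data += img.read(cluster_size)
          cluster = next_cluster
      return bytes(data)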


I remember that being a reason why the MFT in NTFS is positioned in the middle of the drive so that on average the head has to travel less. And NTFS also supports putting small file contents directly into the MFT entry, which also cuts down on head movement. Nowadays all that probably is moot and a relic from ancient times (although I still have enough hard drives running at home).


Not really.

First, the middle of the volume/partition is not the middle of the disk on any multi-partitioned disk. Then, the NTFS $MFT is not (and never has been) in the middle of the partition; its location varies depending on the size of the volume/partition, but above a certain size, around 5/6 GB, it is at a fixed offset, usually LCN #786432, which - on a normal 8 KB/cluster NTFS volume - amounts to an offset of 6,291,456 KB or 6,442,450,944 bytes, rarely the middle of the volume, unless its size is around 13 GB.
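Quick sanity check of those numbers (512-byte sectors assumed):

  clusters = 786432            # LCN of the $MFT
  cluster_bytes = 8 * 1024     # 8 KB clusters
  offset = clusters * cluster_bytes
  print(offset)                # 6442450944 bytes (6 GiB)
  print(offset // 1024)        # 6291456 KB
  print(offset // 512)         # 12582912 sectors of 512 bytes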


Have you found the difference between formatting NTFS in older Windows versions compared to the recent releases?

Seems to me the MFT was in the middle of volumes commonly in older versions, and lately much closer to the beginning of the volume.

Could have something to do with the feature of shrinking a volume which became more common, and you can now more often shrink an NTFS volume by more than half, which was often impossible in earlier years.


I don't have enough data/experience for more recent Windows versions, but at least up to 7 (and starting from Windows 2000; the NTFS in NT 3.51 and 4.00 had some differences) the MFT start address is variable up to a certain volume size (as said, if I recall correctly the switch to "fixed" is between 5 and 6 GB).

If you "think in hex", the address in the PBR is expressed in clusters (VCN) as "0xC0000" (at offset 0x30, or 48 dec, you should find "00000C0000000000"), which is a nice, round number.

Maybe you remember the "old" NTFS format (NT 3/4) where the mirror of the boot sector (aka $BootMirr, not the $MFT) was exactly in the middle of the volume, but since Windows 2000 this copy of the first sector of the PBR was moved to the "gray zone" at the end (inside the partition but outside the volume[1]).

As a side note (and JFYI) thanks to (from 7 onwards) NTFS resizing capabilities it is possible to "force" the $MFT to very early sectors, see this only seemingly unrelated thread here:

http://reboot.pro/index.php?showtopic=18022

[1] some info on the matter in this other, as well seemingly unrelated, thread:

http://reboot.pro/index.php?showtopic=18034


HPFS definitely put the root directory in the "seek center": http://spider.seds.org/spider/OS2/HPFS/dirs.html

This now makes me wonder whether the middle disk block (numerically) is equal "seek times" away from disk edges.


But a defragged FAT jumps much, much less, because the cluster slots allocated to the same file will be close to each other, often in the same FAT sector. I would be surprised if DOS didn't optimize for this case.


Defrag absolutely helped in the winxp days if your HDD was always >90% full. Every chunk of data written to a full HDD would get spread across the platter, cutting your read and write IOPS to a fraction of what you expect. It was especially noticeable on laptops since their platters spin slower.


For most file systems performance suffers at the latest at the 90% full mark, if not before.


Faster disk access was one selling point. But as I recall the primary reason to defrag was to prevent problems caused by long FAT chains. And when something did go wrong, contiguous files were easier to recover from the damaged filesystem.


It made a massive difference on spinny disks because of the elevator algorithm and the time it took to relocate physical heads and spin the disks around.


I suppose it solves the support staff's problem of convincing a customer that they fixed an issue.


A fragmented hard drive was so much faster than a defragmented floppy disk that I was happy either way.


Also quieter. I remember the satisfaction I felt after defragging the drive and no longer hearing the drive thrashing around on the smallest read operation.


UFS is fragmentation resistant by virtue of its allocation algorithms. It still gets fragmented under certain patterns or if free space gets low. I'm not sure if FreeBSD did any online defragmentation back then; maybe it does now with soft updates? I know Linux never did online defragmentation with ext2/3 (despite their being inspired by UFS), although it did have an offline e2defrag tool.

But yes these designs generally seemed to be much better than FAT under DOS or Windows at avoiding excessive fragmentation.


The first bad sector I encountered in the 90s left me shaken for days. We stared eye to eye. The bad sector won. I had broken something my father worked a month for. But it didn't break. The bad sector was marked bad. The computer continued working. I rejoiced.


I remember the first time Norton Disk Doctor "repaired" a bad disk. Looked like magic to me.


So did OS/2. People wanted the defrag of Windows 95 or Norton Utilities, but OS/2 defragged in the background.

SSD drives don't need to be defragged.


NVMe drives still like some TLC optimisation, i.e. running TRIM on a regular basis.


I’m perpetually kind of bothered that I can’t defragment my ZFS HDD. I’m aware it’s not actually a problem in most cases and it’s not like I’m having performance issues, but it feels like a potential ticking time bomb, albeit one the rational part of me knows will never go off.


Brings back good memories... I used to go directly to the electronics section at Walmart while my parents shopped, start defrag on all the computers and then watch from the CD aisle with a smirk as an associate would come across them, exclaim "my god!" and restart the computers with a power off and on.


The modern version is to pair with Bluetooth speakers on display, then ambush-play weird shit from the next aisle for reactions.


Same with their demo TVs that have Chromecast. Found that a lot end up connected to the public wifi.


Potential consequences are what?


If the tv is vulnerable you can install your payload. Once somebody buys it and brings it home you are in their network.


You can cast videos from the weird part of YouTube to the TVs.


Is there a non-weird part?


The boring one


WiFi printers are also fun


Reminds me of that time when I went into the Apple Store and came up to a Mac, opened a window (idk I think it was something harmless like Finder), took a screenshot and then set that screenshot as the wallpaper. I'm sure there were many people that day who tried to close the 'uncloseable' window that was purely part of the background...


I did that on a high school Windows lab PC, put the task bar on top of the screen, set it to auto-hide, and killed explorer.exe so the Desktop icons were only part of the background.

A prof had to ask me to set it back because they couldn’t figure it out.


I imagined this entire sequence as a scene from The Simpsons, sound effect and all - thank you.


Oh, you made me laugh out loud, that is brilliant!


Power cycling during a defrag seems like a good way to lose data.


That's the joke.


Friggin weird how this sort of thing just goes over some people's heads.


But not an effective one. Generally defrag routines copy data and then update the file system. Power loss would have to occur during the file system write in order for it to be destructive. (And in the case of journaling file systems, likely still recoverable.)


Yes, but the system is writing roughly half the time, and defragging was mostly a thing before the advent of journalling file systems.


Power loss while a defragmenter is writing file data will not cause data loss, even on the most basic of file systems like FAT or HFS. Duplicating data into free space is effectively a no-op until the file system is updated to point from the old location to the new location.
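In sketch form, the ordering looks like this (the disk/metadata helpers are hypothetical placeholders, just to show where the only risky window is):

  # Copy-then-repoint relocation, as a defragmenter does it. All method names
  # here are hypothetical placeholders, not any real API.
  def relocate_extent(disk, fs_meta, file_id, old_blocks, new_blocks):
      # 1. duplicate the data into free space: a no-op to the file system,
      #    so a crash anywhere in this loop loses nothing
      for src, dst in zip(old_blocks, new_blocks):
          disk.write_block(dst, disk.read_block(src))
      disk.flush()

      # 2. only now repoint the allocation entries at the new copy;
      #    this small metadata write is the one window that matters,
      #    and a journaling file system makes even that recoverable
      fs_meta.set_extents(file_id, new_blocks)
      fs_meta.flush()

      # 3. release the old blocks; a crash after step 2 at worst leaves
      #    them unreclaimed until the next consistency check
      fs_meta.mark_free(old_blocks)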


I think most versions of Windows that included a defragmenter as part of the OS were XP and later, all of which would have used NTFS, which is a journaling filesystem.


DOS defrag goes back a lot farther than that. It even predates FAT32, let alone NTFS.


That's an obvious sign of OCD.


I think in this case that would expand to Ordinary Childish Deviousness, though.


Obsessive Compulsive Defragmentation


I miss the incredible slickness of these old DOS programs. Single-character commands, judicious use of color, and efficient presentation. Pure art.


The "magic" of doublespace / norton utilities, XTree Gold hex-view and other programs amazed my young self.


Well when the target platform only had 512KB ram, or 640K if they could afford it, and a 30MB drive cost $400, you had to squeeze every last bit out of your program.

Norton Utilities were slick. Slicker than any MS-DOS tools, or even the Apple DOS 3.0 or ProDOS (or ProntoDOS) utilities, for that matter.


I never understood why the beautiful & easy-to-use QBasic Editor never inspired a copycat for linux. It would've easily beaten out nano.


Midnight Commander "mcedit" comes close to the aesthetics (blue background, grey dialogs with shadows, etc.) but is not a perfect copy.

A perfect clone of the MS-DOS editor for UNIX/Linux does not exist. Maybe this is the new toy project for the "written in Rust" crowd.


Joe's Own Editor has been a thing since 1988

https://joe-editor.sourceforge.io/


Well, one of the appealing parts of the QBasic editor was the friendly bright blue color scheme, along with the always-present menu bar at the top. Just a great editor for newbies.

Seems like Joe has the helpful menu system, but it feels overall more "serious" in comparison, especially with the minimal white on black color scheme.


I think that is because back then, terminals weren't capable of handling key combinations like shift-arrows (for selection), ctrl-c (for copy), and mouse support wasn't great either.



Sounds like their disk was very fragmented. Especially at 4:48


Great music for programming, really gets me in the mood


Really nice :-) I'm hearing echoes of the Thunderforce IV OST (https://www.youtube.com/watch?v=pLDBwMusg4o) which can only be a good thing.


What's that base64 string all about? A naive decode just produces what looks like binary gibberish



On the face of it, it's regular ASCII code. But I wonder if they hid something inside?


Thank you so much for this pointer! It's glorious.


It keeps reading unused blocks instead of just the used ones. It's also writing way more blocks than there were originally, nearly filling the disk. That's good irony.


I think instead of just used or unused blocks, something to indicate how much of each block was used would help.


The version of Defrag from DOS 6.22 I remember differentiated between unused, partially used, and used (full) blocks. When you'd see a lot of partially used blocks get condensed into a few full blocks, it felt like your disk was getting SO tidy!


A bit like trying to defrag an SSD and shooting write amplification through the roof.


It also doesn’t finish; I was hoping for the zoomie part - it never came.


Can you imagine how powerful defrag could have been if we had the internet back then? There could be a public list of files that don't change or get deleted a lot, a list of files that change a lot, that get deleted a lot, etc.

Then it could not only regroup files into contiguous blocks, but it could put the most stable files first so that subsequent defrags are faster!


But you don't want dull files that don't change often and only get read once at boot to be first. The start of an HDD was read faster than the end. You want files that are accessed frequently first.


It depends on what you’re optimizing for; reducing boot times is not an uncommon goal.


Hmmm good point. I guess we’d want to flip it.


Throughput and seek time were the big problem.

You probably want a few alternating stripes of files and free space. Or do what most people did, which is keep a mostly empty hard drive.


You must have a different definition of most. Everyone I knew was always needing a new drive because the current one was full. Empty drive? What's that? The thing in the box. 30 seconds after turning it on, it was 80% full!


Defrag with "telemetry"... sounds about right for "current" software.

"By clicking Accept, you agree to share the information about your files with us and any relevant third parties [read: anyone who wants to pay us money to get info on you]".


What? We had the internet back then...


Sure, but not easily accessible enough that you could make a DOS program that used internet data to operate.


Dude, we had Windows 95 in 1995

Online gaming with my Dreamcast


I had been on the internet for several years in 93!


Eternal September started in (September) 93.


and? Here in New Zealand, in Wellington, in 91, we got CityNet - https://downtothewire.co.nz/library-1991/index.html


You can monitor a machine's IO to get a good enough idea of that.

The other win is putting files accessed together (like when the computer is booting) close to each other to minimize seeks. I suppose the actual goal of defragmenting is minimizing seek time.


Such a distributed block chain in the 90s could have changed the course of history.


just use modification times or the archive bit


I just showed this to my 32 year old son. He used to sit on my lap and watch as my hard drive defragged. Made him smile.


I'm 32, and at 10 years old I loved watching Windows 98 defrag. <ramble> I also loved FlashGet downloading files 10 threads at a time. A 10 MB resumable download in 5 hours was amazing for me. A few years later I was ricing Linux with a dangerous tool that prepends binaries with their shared object dependencies so that every program launch is a contiguous read. Ubuntu then added it as a post-trigger to its package manager. There was a hellish period between when Ubuntu removed that trigger and when I finally got an SSD (Indian markets still get HDDs).


I (30) also just sent this to my dad. It made him very happy as we reminisced about all the old machines we've gone through over the decades.

Thank you for spending such quality time with your son


That's awesome!


  sudo e4defrag /
  sudo dd if=/dev/zero of=/zeros; sudo rm -f /zeros
While write intensive, I run this on my ZFS VMs on occasion to keep the zstd compression ratio high. No pretty interface, but it's still satisfying when I see the volume's compression ratio is close to entropy. Even in 2021. Even with all-flash.


I know that trick, and it is NOT working with /dev/zero on ZFS (compression); use /dev/urandom instead.


I know you're joking, but if any layreader comes across this: random data does not compress.


No need to sync before rm'ing?


Not afaik. The dd operation returns when the virtual disk is full and linux won't run anything after the semicolon until the previous command returns. I'm not running any async disk behavior on the client.


In case the second sudo wants to ask for password but there's no disk space for logging, it's safer to:

sudo bash -c 'dd if=/dev/zero of=/zeros; rm -f /zeros'


I typically don't put sudos in my scripts, but have found filling all free space is particularly hazardous. I ran into rm permission issues once and had to sneak my way into a recovery environment to remove the file. I haven't run into sudo timing out during the zeroing, but it is a hazard. This is a good tip.


Any effects on life time of the SSD?


All write operations reduce the life of the SSD. I use 50 GiB volumes on a 4x NVMe RAID-Z1. Each drive is rated for 3500 TBW. I also only call this script when I want to reclaim space, such as after downloading many large dependencies for a one-time build. My home use case is read-heavy, so I'm really not concerned. When one drive goes, I can replace it. If they all go at once: I have remote encrypted B2 backups.


I thought defragging SSD/flash-based devices was a big no-no, since that would reduce the lifecycle considerably? Is that what TRIM is for?


I always read (from sources like LKML, not random blog posts) that defrag on SSD is simply useless because you never know the actual physical placement of your blocks. You can write a sequential 512 MB chunk of data in a single file and have it exploded in 500k+ 4k blocks in different locations (or vice versa — a bunch of random writes to different sectors can be compacted into a single linear block).

If you really need to defrag SSD (which somewhat helps IME if it was heavily used up to 100% capacity), copy everything to other storage, wipe it (preferably with NVMe/SATA commands, or at least do a full TRIM), and then copy everything back.


If your first assertion is correct, and SSDs use an internal mapping of physical and logical sectors, then your suggestion of wiping/rewriting would not work.

I know that per SMART, disks (HDD and SSD) have to have extra space that will be used if an active sector goes bad, but I don't know about total internal remapping on SSDs.


TRIM is a solution to a different problem. This is a higher-layer operation that saves storage usage regardless of the media. I hardly consider an operation where I understand the tradeoffs a "big no no". I'm an engineer, not a six-year-old.


I'm pretty sure TRIM is supposed to prevent this problem, but TRIM is often broken in VMs.


Trim is not supposed to help with ZFS compression, it occurs at a layer below what the FS sees (but the FS hooks are necessary for the controller to understand the logical use).

The issue TRIM helps with is the following: while reads and writes are performed at the page level (2-16k), a page can not be overwritten (it can only be written to when empty) and an SSD can only erase entire blocks (128~256 pages).

This means that when you perform an overwrite of a page, the SSD controller really has an internal mapping between logical pages (what it tells the FS) and physical pages (the actual NAND cells), and it updates the mapping of the overwritten logical page to point to a new physical page (in which it writes the "updated" data), marking the old physical page as "dirty".

So as you use the drive, the blocks get fragmented, more and more full of a mix of dirty and used pages, meaning the controller is unable to erase a block in order to reuse its (dirty) pages. So it garbage-collects pages, which is a form of completely internal defragmentation: it goes through "full" blocks, copies the used pages to brand new blocks, then queues the old blocks for erasure.

The issue with this process is that the SSD only knows a block is unused when it's overwritten. Historical protocols carried no more information than that, since hard drives used the same unit for everything and could overwrite in place, so the FS managed allocation directly. This means that if you delete a file, the controller has no idea the corresponding pages are unused (dirty) until the FS decides to write something unrelated there. So if you do lots of creating and removing (rather than create-and-never-remove or create-and-overwrite), the controller lags behind and has a harder and harder time doing physical defrag and keeping empty blocks to write to. This leads to additional garbage collection and thus writes, and all the "removed but not overwritten" pages are still considered used as far as the controller knows, so they're copied over during GC even though no one can or will ever read them.

TRIM lets the FS tell the controller about file deletions (or truncation or whatever), and thus allows the FS to have a much more correct view of the actually used blocks, thereby allowing faster reclamation of empty blocks (e.g. if you download a 1GB file, use it, then remove it, the controller now knows all the corresponding blocks are dirty and can be immediately queued for erasure) and reducing unnecessary writes (as known-dirty pages don’t have to be copied during GC, only used pages).
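A toy illustration of that last point (arbitrary numbers, one 256-page block): without TRIM the controller has to treat deleted-but-not-overwritten pages as live and drag them along during GC.

  # Pages the controller must copy out before it can erase one block.
  PAGES_PER_BLOCK = 256

  def gc_copies(live, deleted_not_trimmed):
      # 'deleted_not_trimmed' are pages whose files are gone, but without TRIM
      # the controller was never told, so it must preserve them like live data
      return live + deleted_not_trimmed

  # 64 live pages, 128 pages of deleted files, the rest already marked dirty
  print(gc_copies(64, 128))   # no TRIM: 192 pages rewritten, extra wear
  print(gc_copies(64, 0))     # with TRIM: only 64 pages rewritten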


> Trim is not supposed to help with ZFS compression, it occurs at a layer below what the FS sees

The problem that was described was reclaiming space from a VM. I took the comment about compression ratios to be a measurement of the entire VM disk image, because otherwise if you just want a big meaningless ratio then dump a petabyte of zeroes into a new file.

A VM that isn't set up to TRIM will leave junk data all over its disk image, bloating it by a lot. If the guest OS understands how to TRIM, and the VM software properly interprets those TRIM commands, then it can automatically truncate or zero sections of the disk image. If either of those doesn't understand TRIM, then you need to fill the virtual disk with zeroes to get the same benefit. (And possibly run an extra compaction command after.)

This is at a completely different layer from what you elaborated on, because it's a virtual drive. It's good to have virtual drive TRIM even if you're storing the disk image on an HDD! It's a different but analogous use case to real physical drive TRIM.


It's only a problem if the delta between adjacent bits grows too large. "Wear-leveling" is a type of auto-defrag that will shuffle data around intentionally to keep these deltas equivalent. "Program-erase" cycles (the lifetime of the gate oxide layer that you literally destroy) are an order of magnitude less worrisome, and the larger the drive, the less this matters.


Don't defrag SSDs; it's nearly pointless, and all it's really doing is lowering the lifetime of your drive.


I'm not defragging SSDs. I'm defragging virtual drives.


Those virtual drives are on a physical drive somewhere unless they're in RAM.


Your point is that it's pointless to defrag SSDs. Well I'm not. I'm defragging virtual drives, which is not pointless, regardless of the medium. So your point is wrong.


And here's the Win95 version: http://hultbergs.org/defrag/


My goodness, was defragging really that mechanically loud, or is this exaggerated for effect?

For such a long time (as in, well into the tail end of Windows XP) I was stuck with an old Pentium computer that could handle Windows 98SE at most. Had maybe 2GB of HDD to work with which sounded luxurious to me until my father was issued a 4GB flash drive at work. I defragged this every first Sunday of the month to keep things running smooth. Ofc the effect might as well be purely psychological but hey, as I said it did last me a long time.

The mechanical sounds I remember are finer, not like pop snoring no? :)


Yes, my hard disc at that time (similar experience) was quieter. The sound is either exaggerated or from some even older 5.25" disc.


I still have some 80 GB and 250 GB Maxtor drives in a drawer that really are that loud. I remember when I replaced my 80GB root disk with a 1 TB HGST one, the sudden silence in my office...


I miss it to be honest. All of the computer-ey sounds made it a lot of fun, and it's not something I thought I'd miss until they were gone.

Every once in a while I hear my AIO spin up and some water start moving around and that's nice (not sure if it's supposed to do that but it's done it since day one). But that's all I have to look forward to other than the fan noise.


You can perfectly well build a noisy tower computer in 2021, with lots of fans and of course the good old hard discs (instead of SSDs). :-)


It was that loud on certain hard drives, yes.


I just realized this is my first white noise machine that would put me to sleep after sitting and messing with my computer all night. I'd finally do the defrag and listen to that hard drive or whatever it was making that noise. And go peacefully to sleep. Just to be awoken a couple hours later by someone letting me know I was late for school.


Wow, my browser is so much faster now


They should port SoftRAM to the web too, I could do with some additional memory.


SoftRAM wasn't as badly thought out as you thought. The idea was to use compression as a warm layer between "in memory" and "on disk". That idea is actually implemented in Windows 10/11 as "memory compression", and is in Linux (zswap) and macOS as well. The only problem was that SoftRAM was half-baked, and their compression algorithm was memcpy (i.e. nothing). Raymond Chen has a much longer write-up:

https://devblogs.microsoft.com/oldnewthing/20211111-00/?p=10...


Fascinating read, and Raymond seems like an immensely productive person.


Not sure if equivalents are still recommended for Windows users, but I've seen zram recommended to Linux users quite a few times.

Someone also wrote a script to start killing processes before the system can hang when it runs out of RAM.

https://askubuntu.com/a/1018733


> Someone also wrote a script to start killing processes before the system can hang when it runs out of RAM.

Isn't this exactly what the OOMKiller does?


OOMKiller has a bunch of issues. Its heuristics don't apply well across the wide range of workloads Linux supports (mobile/Android? web server? database server? build server? desktop client? gaming machine?), each of which would require its own tuning. (More background at https://lwn.net/Kernel/Index/#Memory_management-Out-of-memor...)

That's why some orgs implemented their own solutions to avoid OOMKiller having to enter the picture, like Facebook's user-space oomd [1] or Android's LMKD [2]

[1] https://github.com/facebookincubator/oomd

[2] https://source.android.com/devices/tech/perf/lmkd


In my experience, by the time the OOMKiller actually comes into play, the system has already stalled for minutes if not more. This especially applies to headless servers; good luck trying to SSH into a machine that keeps trying to launch a service configured to consume too much RAM.



I had a bunch of problems with the OOM killer on a server of mine. It seems to have been due to not having any swap partition. Linux seems to assume you do, and the OOM strategy behaves really poorly when you don't. It doesn't need to be large: the machine has 128 GB RAM and the swap partition is 1 GB.


1GB is a giant swap partition; Linux regularly ran with swap partitions of tens of MB in the 90s. The only reason to scale swap with RAM is if you want to fit core dumps (or suspend-to-disk) in swap.


Tens of MB was a lot in the '90s though. A mid-'90s consumer hard drive usually clocked in at a few hundred megabytes, and RAM could be in the dozens of MB.


Right, but swap size should scale with random access times for disks, not disk or RAM size.


Why is that?


zram shouldn't really be recommended anymore, IMHO. zswap is the more modern alternative: it does not require a separate virtual device, can resize its pool dynamically, is enabled by default on some distributions, and (IIRC) supports more compression algorithms (trade CPU time for a higher compression ratio or vice versa).

https://wiki.archlinux.org/title/Zswap


Applying memory compression to browsers is a promising idea; the browser heap is not that high-entropy, and it's largely a white box from the browser runtime's POV.



It's not the same :)

I remember the whole experience was cathartic, and my DOS PC definitely felt faster after doing it (though it most likely wasn't).


I had a list of things to do at work if it was mid- to late-afternoon and I just wasn't 'feeling it' for the task I was assigned. Update documentation, research new releases of our tools/libraries, defrag my hard drive. Compilers really were happier with some defragmentation (half of which they probably created in the first place).


Remember hearing the sound of the physical disk spinning, whirring and clunking? It felt like an actual machine.


Is there a Windows 95 version?

https://www.youtube.com/watch?v=dc_SDyLYq3U



I stared at that for like an hour straight as a kid.


For a second there I thought it was going to be under the "entertainment" menu.


Oh god yeah! hahaha I remember the pain (of the wait)


So satisfying to see a nicely defragged hard drive.


This feels unsettling without hearing disk needles clicking.


It bothers me that it's mainly moving unused blocks around.


This is actually correct behavior; you can observe it in an original video like this: https://www.youtube.com/watch?v=QRlgjMmdbV0.

"Unused" blocks is probably a misleading term; "partially used" is more precise.

The simulation performs significantly more reads from "unused" blocks, so I'm not sure if it's exaggerated/incorrectly modeled, or if the virtual disk is an edge case (I suspect the former).

Here's another video, with the same behavior: https://www.youtube.com/watch?v=syir9mdRk9s.


I guess I'm not seeing it in that video. The video you've posted is pretty clearly reading from used blocks and writing to unused blocks, in contrast to this webapp.


The screenshot [here](https://imgur.com/a/HS0C7ex) is taken at second 8, from the central area of the video; it shows an `r` block. If you play at 0.25x from second 8 (https://youtu.be/syir9mdRk9s?t=8), you'll notice that the `r` is displayed on top of an "unused" block.


This. The W symbol for Write isn't actually shown, and I have a feeling (not sure) that sometimes the symbol S appears instead of r; not sure what that is.


The variable width font face throws off the effect on mobile


Creator here: Yes, that's bugging me too. I think the default "monospace" font on Android is def not monospace. I'm going to look into embedding a font instead.


On Chrome (Windows) the box frames at the bottom are not right-aligned either.

Have you thought about using <canvas> and just writing OEM DOS style characters where they need to be like some sort of obscene franken-video-buffer?


It works fine on my iPhone, perhaps it’s not setting a correct proportional font on android?


I think you're right. I tried this on Firefox for Android. Looks great on Firefox for OSX, though!


Firefox's desktop-site switch improves the rendering, but still...


What is the optimal algorithm for defragmentation? Some sort of sort?


Obviously you want to consolidate fragments. But especially for HDDs there are different tradeoffs in placing stuff near the outside compared to towards the center, which also depend on workload, i.e. streaming vs random access, heavy read/write vs mostly read-only.

With PerfectDisk[1] you could decide how you wanted the disk laid out. And it did make a measurable difference in the cases I tested.

For SSDs fragmentation is less of an issue, but it's not completely gone. Especially on Windows with NTFS, where one can run into issues with heavily fragmented files[2][3].

We actually have this very issue with a client in production these days, due to the DB log file causing heavy fragmentation.

[1]: https://www.raxco.com/home/products/perfectdisk-pro

[2]: https://support.microsoft.com/en-us/topic/a-heavily-fragment...

[3]: https://support.assurestor.com/support/solutions/articles/16...


There wasn't just one. Some defrag tools from the days of DOS would actually let you pick. A commonly used one was to try to place the contents of the same directory adjacent to each other on the disk surface, to minimize latency.

I can't remember which it is, but either the center or the outside edge of a disk reads faster, so another common trick was to place all the files necessary to boot the system there, to reduce boot times.


The linear velocity at the edge of the platter is higher, so it is faster than the center. However, this only applies to platters that have the same data density on the inside and outside tracks.


Depends on the drive technology. For SSDs you want to avoid extra writes as those wear down the cells. For traditional hard drives you can consolidate empty space to reduce cluster waste, or go for broke and just place everything in order (most fun to watch). Probably the best way would be to group frequently accessed files near each other so the heads traveled less distance.

So far as the algorithm, what I observed is the program would look at how much room was needed to place the next file, then clear that many blocks out of the way by moving them to the end of the drive. Then copy the blocks for the file to form a contiguous stretch of blocks. And repeat. You wanted to have as much free space as the largest file (plus a little), but I think some of them were able to move large files in pieces.

Back in the days of Windows NT we had a network share with 8+ million files on it (a lot for the time) and we had a serious fragmentation problem, where it could take a second or more to read a file. Most of the defraggers just gave up, or weren't making any progress, but we eventually found one that worked (PerfectDisk, maybe?)
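In pseudocode, that pack-from-the-front strategy is roughly the following (a sketch with hypothetical disk helpers, not any particular tool's actual algorithm):

  # "Clear a hole at the front, then copy the file into it", repeated.
  # disk.move_block(), disk.is_used() etc. are hypothetical placeholders.
  def defrag_pack_front(disk, files_in_desired_order):
      cursor = 0   # next free position at the front of the disk
      for f in files_in_desired_order:
          need = f.size_in_blocks
          # 1. evict whatever foreign data sits in [cursor, cursor + need)
          #    to free space near the end of the drive
          for blk in range(cursor, cursor + need):
              if disk.is_used(blk) and not disk.belongs_to(blk, f):
                  disk.move_block(blk, disk.find_free_block(from_end=True))
          # 2. pull the file's scattered blocks into the hole, in order
          for i, src in enumerate(f.block_list()):
              disk.move_block(src, cursor + i)
          cursor += need
      # free space now sits in one region behind the packed files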


Curious here too! I wouldn't be surprised if there was some kind of dynamic programming algorithm similar to 0-1 knapsack.


What is being measured?

You've got contiguous space, speed, total time, wear.


Bubble sort obviously.


The only thing even more satisfying than defrag was optimizing your drive interleaving. I have fond memories of using SpinRite II (https://en.wikipedia.org/wiki/SpinRite) to optimize the interleave on my over-RLL'd ST225s and ST251s on a Perstore controller! To think 30MB was such a huge amount of storage.


I need that as a screensaver!

Oh wait, we don't have screensavers anymore either...


I do. My desktop PC (running Windows 10) goes black, and then floats a nice digital clock and date around the screen in white. It does this for about 5 minutes, and then my monitor goes to sleep. It's a nice transition between the two states.


On a tangent, I'm surprised how badly SD cards are failing to keep my photos intact. I found a couple of old ones and saved them to cloud storage, but it was too late - at least 10% of the JPEGs are screwed up with artifacts, and some of them are not readable. I wonder if these people had ever heard of redundancy? Is it cosmic rays?


I feel like the whole naming of "Secure Digital" and "Solid State Disk" was specifically chosen to distract from the fact that there is absolutely nothing "secure" or "solid" about the underlying technology (tiny capacitors that WILL discharge on their own).

Disclaimer: I did write benchmarks with SD cards in the past and was astonished by the failure rates. There also was (is?) little correlation with price; it seemed like there was huge variance in the quality of different batches of memory chips.


I would like such a screensaver, but the Bs would cause me anxiety. I lived long enough to get accustomed to regular defragmentation (never used this-looking app though, only SpeedDisk and later Defraggler) but not long enough to consider the presence of bad clusters the norm - they always meant a catastrophe for me.


Reminds me of zFRAG [0], which I found had an update recently to add bad sectors. The game allows you to both defrag automatically and manually (by dragging and dropping).

[0] https://losttraindude.itch.io/zfrag


Hah! Something to replace https://www.twitch.tv/twitchdefrags/ which seems to have stopped broadcasting. Now we just need the right sound. :-)


Creator here: I'm working with an audio engineer to offer a selection of sounds from WD to Seagate to SSD (current default). I first tried to just use old samples but the quality just wasn't good enough.


I'd recommend some of the classic deathstar disks and their click of death, just so those of us with PTSD can relive it.


I have a deathstar in the basement. It was working last time I placed it in its cardboard grave of obsolescence. Who knows what would happen if I powered it up today?

I don't really know what to do with all my old hard drives. I love SSDs because they don't vibrate and make noise.


Hehe, just tried a 10y+ HDD of mine. Put it on a PATA-USB adaptor and *bam*, it works! The noise is somewhat unsettling though, maybe just because I am not used to it anymore.

I REALLY wonder if an SSD would still work after lying unpowered for ten years; they are made of capacitors after all ...


Don't recall if this was a deathstar but it certainly clicked. Warning, loud!

https://r-1.ch/hdd.mp3


I associate click of death with IOMega drives


You can recreate Micropolis SCSI drives by dropping a marble in a coffee can.


Yeah, sorry. My ISP decided it didn't want me to keep that alive anymore ¯\_(ツ)_/¯


I defragged an SSD partition recently. First time in MANY years I ran a defrag utility. There is still a reason to do it: if you want to shrink a partition using stock utilities, you need contiguous free space.


I don’t know whether the tools exist, but that doesn’t require defragging. It just requires moving stuff from the top occupied blocks into free space lower down.

That’s less work and somewhat easier to program. There still will be edge cases where doing that increases the size of directory structures, so it’s not fully trivial.


For Windows 10, 'defrag' can do this: defrag /X

If I remember from the last time I used it, it is not nice about it. It may create fragments. It consolidates the free space to the end of the drive.

Does not always work. Especially if the drive in question is active.


BtrFS can actually shrink while moving impacted files. I think the target space is just rebalanced.


Cool, now I can create that Linux partition!


Unmovable files! I remember that the DOS boot sector did not understand FAT, so some files had to be at exact locations.

TBH I'm a bit uncertain on how today's GRUB goes from the boot sector, which is too small to understand filesystems, to a beast that can load the kernel from practically any filesystem.

I think that GRUB 1 used the 32kB DOS compatibility region to store whatever didn't fit in the MBR. The 32kB was enough for it to boot itself to a state where it understands filesystems.


For BIOS boot, these days basically all partitions are aligned to a multiple of a megabyte, leaving basically the same gap after the MBR but bigger, so GRUB puts its core in the same place.

For UEFI boot, it's the firmware's job to understand the partition table and FAT32 so that it can open up the boot partition and make a list of bootable files. Then it can run one or give the user a menu. So GRUB just has to put a single file in the right directory.


Thank you to the dev.

I was actually just thinking about this this morning. After finally getting past single and dual floppy systems I used to defrag my 40MB disk every night and felt rightness with the world. I remember wishing I could choose which files go where because of outer parts of the platter being faster than the inner parts. Whether that was true in practice didn’t matter, just that feeling of fully tuning my machine.


Vopt921 is an excellent defrag utility for Win 10 / 8 / 7 / Vista / XP. It also includes many handy clean-up options to remove lots of Windows-generated junk. The late author made it freely available to all. https://www.majorgeeks.com/files/details/vopt.html


Do you know if it defrags the MFT on NTFS partitions? Asking because I've run into issues with MFT fragmentation before, and it took me a long time to find tools that would actually attempt to fix it, and the process took a long time.


I hadn't thought about defragging a hard drive in years. What happens on mobile OS? Do they run routine defrag processes in the background?


There is a background cleanup process for SSDs called TRIM, but it does something different.

I mean magnetic hard drives are very similar to vinyl recordings, so it's clear why contiguous file placement has lower latency, and thus why defragmentation is a big performance boost. I believe since SSDs are organized differently (many NAND chips), their access latency doesn't benefit as much from a contiguous file placement.

Furthermore, defrag necessarily has to write. Since SSD cells have a finite number of writes before they do not retain data, defragmentation shortens your SSD's lifecycle.


SSDs have their own logical to physical mapping tables. When you trim, it is telling the SSD you no longer care about that entry in the table so the physical space can be freed up. In terms of latency there is no seek time difference from one location to another hence defrag has no benefit. In fact defrag will just cause more wear.


Fragmentation can even be beneficial on SSDs. When the contents of a logical file are physically scattered, the internal parallelism of many flash devices can be exploited. If your files are physically compact, you'd lose that parallelism.


I used to work on SSD FTLs so I have some idea of the internals. Generally the writes are striped across the flash devices. The main thing is you need command queueing to exploit the parallelism. You can do a few tricks like stride detection to prefetch in some cases, because it makes the benchmarks look good.


Defragging puts files in a line so a spinning disc with a magnetic head can read them cleanly in one sweep.

Your phone works simply by asking for data at addresses. Not saying it's perfect; files can be fragmented, so you may send a few more requests to get them. But it's not something to lose sleep over or run a utility on.


Yeah, I think it's more important to keep your usage under roughly 85-90% of your storage's max size.


For a few decades now, suitable filesystems have worked in a way that makes defragmentation essentially unnecessary. I don't think you ever really had to defragment on Linux; by the time it came into existence its filesystem was already advanced enough (though I know nothing about ext, which existed briefly before ext2, and neither about the Minix filesystem that Linux used briefly even before that).


You should give "How to Fragment Your File System" by a lot of people a read.

http://csis.pace.edu/~jyuan2/paper/betrfs5.pdf

"In this article, we demonstrate that modern file systems can still suffer from fragmentation under representative workloads, and we describe a simple method for quickly inducing aging.

Our results suggest that fragmentation can be a first-order performance concern—some file systems slow down by over 20x over the course of our experiments. We show that fragmentation causes performance declines on both hard drives and SSDs, when there is plentiful cache available, and even on large disks with ample free space."


The web is full of people shouting "_____ filesystem doesn't get fragmented!"

Apparently they've never let a filesystem get above 70% full or so, especially one on a fileserver that has seen years of daily use by a dozen+ people.

I've heard it for HFS, HFS+, XFS, ext3, ext4, etc., and every time it was a lie. Every filesystem fragments after regular use unless you have a fuckton of free space.

It's pretty hard to justify to one's bosses over-provisioning disk space to enough of a degree to render fragmentation unlikely, and users just expand their data to fit it anyway.


2,281 extents found

Hmm, that does not seem right; it's off by an order of magnitude (10x)...


Guess I was myself part of what the second sentence mentions: "System implementers and users alike treat aging as a solved problem." :)


During my teen years playing around with my PCs, this was literally my ASMR.

EDIT: I realize that it needs the sounds of the hard drive itself being defragged.


This is gold. Thank you for sharing. I can remember when for me, Zen meant defragging my HDDs before I went to bed.


I am old enough to remember when "defrag days" were part of my job.

Literally nearly an entire work day lost, spent waiting for defrag to finish on my FAT32 hard drive. We used some program with a very minimal UI that was supposed to do it faster/better than the utility built into Windows.


This is great... There was nothing worse than seeing the dreaded "B" symbol for bad blocks.

https://i.ytimg.com/vi/lxZyxxHOw3Y/maxresdefault.jpg


I wish ZFS had a defrag option. On a server I run that would allow me to move all the used blocks to the beginning of storage and use the rest for hypervisor level snapshots.

But I'll still take ZFS over any other file system :)


Ah, that defrag :(


How about this one, HN compliant?

https://www.youtube.com/watch?v=7nKDQISSl1o


Much better :)


This was actually a really poor approach, right? Wouldn't you want files that change frequently to be padded at the end to limit future fragmentation?


Heck, watching that do its job felt good


Like watching an AI speedrun 1-dimensional Tetris.


If anyone could find the User Friendly strip where sysadmin guy defrags the cinema, I would be very grateful.


Turning chaos into order has some appeal.

Defrag is relaxing and hands off, but reverse engineering is a hard drug.


Arrgh, those "unmoveable" blocks.. I used to try all sort of hacks to get rid of them.


I'd really love to have this as a screensaver. Such memories of sitting staring at these screens!


... but it would probably be an electron app and drain your battery ...


I remember vaguely Diskeeper on OpenVMS, still wonder if it helped or not.


Does anyone ever see the W writing state? I don’t. :(


I lament that it seems to start out with an 80-90% free disk and ends with a 60-70% full disk, and reads mostly blocks that are free. And when it's done it just restarts with a fragmented disk...


I guess if the author/anyone was really bored, they could write a more realistic simulation, with an object that simulates the disk and its sectors/clusters, another that simulates the file allocation table, and even another that simulates the disk with the I/O seek/read/write times (even taking into consideration the time needed for the head to move from different places...).
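Something like this as a skeleton, if anyone is bored enough (all names and timings are made up):

  # Skeleton for a more faithful simulation: a cluster map, a FAT-style
  # chain table, and a timing model that charges for head travel.
  from dataclasses import dataclass, field

  @dataclass
  class TimingModel:
      seek_ms_per_1000_clusters: float = 3.0
      xfer_ms_per_cluster: float = 0.05

      def cost(self, head, target):
          return abs(target - head) / 1000 * self.seek_ms_per_1000_clusters \
                 + self.xfer_ms_per_cluster

  @dataclass
  class SimDisk:
      n_clusters: int
      timing: TimingModel = field(default_factory=TimingModel)
      fat: list = field(default_factory=list)    # next-cluster chain, -1 = end of file
      used: list = field(default_factory=list)   # which clusters hold data
      head: int = 0
      elapsed_ms: float = 0.0

      def __post_init__(self):
          self.fat = [-1] * self.n_clusters
          self.used = [False] * self.n_clusters

      def access(self, cluster):   # read or write one cluster, tracking time
          self.elapsed_ms += self.timing.cost(self.head, cluster)
          self.head = cluster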


I do.


Hah, makes me miss my grandpa


Needs the read/write arm noises for a fuller experience...


shhrrrrr shrrrrr sshhrrrrr tic shrrrrr /s

I remember now, you could even hear the reverberation in the metal case!



