I.e. just put an SSD in the thing and be done with it. Just make sure to get a big enough device if you're worried about write exhaustion, as write lifetime is directly related to device size.
[ Sound of penny falling on head and subsequent facepalming ]
Of course, why didn't I realize that. Good point. I have an SSD here with some data on it; if I can figure out where to stash that I might install it, heh.
Some excerpts on the failure mode:
...all of the drives wrote hundreds of terabytes without any problems. Their collective endurance is a meaningful result....
...The Corsair, Intel, and Kingston SSDs all issued SMART warnings before their deaths, giving users plenty of time to preserve their data...
...Samsung's own software pronounced the 840 Series and 840 Pro to be in good health before their respective deaths...
...If you write a lot of data, keep an eye out for warning messages, because SSDs don't always fail gracefully. Among the ones we tested, only the Intel 335 Series and first HyperX remained accessible at the end. Even those bricked themselves after a reboot. The others were immediately unresponsive, possibly because they were overwhelmed by incoming writes before attempted resuscitation....
...Also, watch for bursts of reallocated sectors. The steady burn rates of the 840 Series and 840 Pro show that SSDs can live long and productive lives even as they sustain mounting flash failures. However, sudden massacres that deviate from the drive's established pattern may hint at impending death, as they did for the Neutron GTX and the first HyperX....
Thanks for the info. This is separate technology from the eMMC or whatever that's prevalent in new "cheap" laptops now, where it's used as the main storage. Like the laptops that come with 16GB storage haha.
I'm aware of ACD (https://redd.it/5s7q04) but I'm in Australia, where "real" internet (ie, upload speeds above 100KB/s) is a dream for 95%+ of the country (including me), so the ~5-8TB of (un-deduplicated) data I have here would take at least 1090 days to upload if I did it 24/7 and had perfect and unwavering 100KB/s upload. During which my entire connection would be unusable because TCP. Heheh.
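For anyone wanting to plug in their own numbers, here's the back-of-envelope arithmetic (assuming decimal units and a perfectly sustained rate; a real-world figure like the ~1090 days above would also fold in protocol overhead and interruptions):

```python
def upload_days(total_bytes: float, rate_bytes_per_sec: float) -> float:
    """Days needed to upload total_bytes at a constant rate.

    Assumes decimal units (1 TB = 1e12 bytes) and zero overhead.
    """
    return total_bytes / rate_bytes_per_sec / 86_400  # 86,400 seconds/day

# 5 TB and 8 TB at a steady 100 KB/s:
low = upload_days(5e12, 100e3)   # ~579 days
high = upload_days(8e12, 100e3)  # ~926 days
print(f"{low:.0f}-{high:.0f} days")
```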
The "send me a HDD" route won't work because the data has been sliced-and-diced in various ways (the worst case example being a disk image tar+split'd to fit on a FAT32-formatted HDD) and I need to de-Rube-Goldberg everything first. That would take me about 6 months, and I'd definitely want a properly mirrored setup for that.
Moral of story: I probably don't want your old computer with its tiny HDD. (And if I could go back in time and tell myself that the hotswappable disk thing I was hearing about in 2006 most emphatically did not apply to my two 486s, and that plugging that HDD in with power on would not only take out the PSU but also the floppy drive controller, that may have helped things a bit... what I had to do in the end was duplicate the HDD contents between the two machines in question so I had my files available on both - the triplicates-of-my-duplicates snowball started there, I think.)
I honestly won't be surprised if, once I've deduped everything, I only have about 50GB of truly important stuff, and a few hundred gigs of "I can redownload that if I need it".
I guess your alternative could be to use local storage.
haha, I cry about my own problems with 1Gbps connection.
My plan is to write a multithreaded disk [re]indexer (since full indexing sadly seems to be the only solution at the moment; no filesystems have a "fast dump" mode yet), along with a bunch of utilities that try to make sense of the generated index.
(By "[re]indexer" I mean that, if you pass the indexer an old database, and any file has the same size and mod/ctime as what's in the old database (or matches any other rule you specify), the checksum for that file won't be recomputed. You could also pass in an old database and simply tell the indexer to scan a specific set of subdirectories on disk that you know have changed, and the indexer would simply copy the old database entries to the new one for every path/file outside the ones you've specified.)
Building an index and then working off of that has a lot of benefits - instantaneous visualization without needing to rescan the disk on every program start, realtime similar-file searching, listing every file with the same checksum as you browse around, etc.
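That same-checksum listing falls out of the index almost for free. A minimal sketch, assuming the hypothetical `{path: (size, mtime, checksum)}` index shape:

```python
from collections import defaultdict


def duplicates(index: dict) -> dict:
    """Group paths by checksum, keeping only checksums seen more than once.

    `index` maps path -> (size, mtime, checksum).
    """
    by_sum = defaultdict(list)
    for path, (_size, _mtime, checksum) in index.items():
        by_sum[checksum].append(path)
    return {c: sorted(paths) for c, paths in by_sum.items() if len(paths) > 1}
```

Since this only touches the in-memory index, it can run live while you browse, with no disk rescans.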
Obviously this tool will wind up on here when I eventually (:P) do write it. I probably should think about starting on it already but in all seriousness I'll likely be motivated to start working on it once it looks like I'll finally be able to put my ZFS pool together. (Hopefully that plan doesn't backfire, and I don't get the remaining $ all at once and abruptly wind up with a bunch of disks and no tool... and we wind back to the first approach, of just starting now)
Local storage is definitely the goal, once I can afford it (expensive medical issues that prevent me from straightforwardly getting a traditional job are really fun, FYI xD). I don't really find cloud storage flexible enough for my requirements at this point.
Sadly the NBN has been sufficiently politically meddled with that it's not likely to be 1Gbps - and indeed current installations are only 100Mbps with no path past that. But I heard recently in a news story (on the radio, of all places) that if people need an upgrade path with the NBN, one will be available. Nobody mentioned how much said upgrade would cost, but I'd be willing to drop a few hundred on it once I have a job. What issues could you possibly have with your connection?! lol