
I'm currently using a T43 that a friend gave me.

I completely agree with you that older hardware results in better-designed and faster software. (If I had the money, I'd deliver a pallet of T42s to Google's front door as a protest-prank.)

I'm hesitant to put an SSD in mine, though. The disk is swapping about 65% of the time I'm using the machine (even though I have 2GB)... I fear I'd wear through the flash cells in 6 months. I'm completely serious.




No problem there; the flash wear problem is overstated to begin with, and SSD prices have plummeted, so even if you were to wear the drive out in a few years (months would be close to impossible on an old, bandwidth-restricted machine like this) you could just buy a new one. My 120 GB Intel flash drive in the T42p (maxed out at 2GB of RAM) has worked flawlessly for about 3 years now, with no sign of giving up. The S.M.A.R.T. data on this device is unreliable, unless you really believe it has an uptime of 904608h+33m+15.150s, 5468289 uncorrectable errors (increasing by 2 per second), and so on. More believable are the host-write counter of 81785 (in 32MB units) and the NAND-write counter of 3726 (in 1GB units). Media wear sits at 0%.

I.e., just put an SSD in the thing and be done with it. Just make sure to get a big enough device if you're worried about write exhaustion, as write lifetime is directly related to device size.
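
(For anyone who wants to sanity-check those counters, here's the back-of-envelope arithmetic as a Python sketch. The counter values are the ones from my drive above; the 3000 P/E-cycle endurance figure is a generic assumption for MLC-era flash, not this drive's actual spec:)

    # SSD endurance back-of-envelope, using the counters quoted above.
    # "host write 32MB (81785)" means the counter ticks once per 32MB.
    host_writes_tb = 81785 * 32 / 1e6   # ~2.6 TB written by the host
    nand_writes_tb = 3726 * 1 / 1e3     # ~3.7 TB actually written to flash
    write_amp = nand_writes_tb / host_writes_tb   # ~1.4x write amplification

    # Assumption: ~3000 program/erase cycles, a common ballpark for MLC
    # flash of that era. Bigger drives last longer because writes spread
    # over more cells -- hence "lifetime is directly related to size".
    capacity_tb = 0.120
    rated_nand_writes_tb = capacity_tb * 3000   # ~360 TB

    years_so_far = 3
    lifetime = rated_nand_writes_tb / (nand_writes_tb / years_so_far)
    print(f"projected lifetime at this rate: ~{lifetime:.0f} years")

At this rate the drive outlives the laptop by a couple of orders of magnitude, which is the point.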


> [E]ven if you were to wear the drive out in a few years - months would be close to impossible on an old, bandwidth-restricted machine like this

[ Sound of penny falling on head and subsequent facepalming ]

Of course, why didn't I realize that? Good point. I have an SSD here with some data on it; if I can figure out where to stash that data, I might install it, heh.


Would you get a warning before the SSD completely failed or became unusable?


In theory, yes. In practice, maybe. TechReport ran an endurance test on SSDs which gives some insight into the failure modes. The last article in the series is below; links to the earlier installments can be found through it:

techreport.com/review/27909/the-ssd-endurance-experiment-theyre-all-dead

Some excerpts on the failure mode:

...all of the drives wrote hundreds of terabytes without any problems. Their collective endurance is a meaningful result....

...The Corsair, Intel, and Kingston SSDs all issued SMART warnings before their deaths, giving users plenty of time to preserve their data...

...Samsung's own software pronounced the 840 Series and 840 Pro to be in good health before their respective deaths...

...If you write a lot of data, keep an eye out for warning messages, because SSDs don't always fail gracefully. Among the ones we tested, only the Intel 335 Series and first HyperX remained accessible at the end. Even those bricked themselves after a reboot. The others were immediately unresponsive, possibly because they were overwhelmed by incoming writes before attempted resuscitation....

...Also, watch for bursts of reallocated sectors. The steady burn rates of the 840 Series and 840 Pro show that SSDs can live long and productive lives even as they sustain mounting flash failures. However, sudden massacres that deviate from the drive's established pattern may hint at impending death, as they did for the Neutron GTX and the first HyperX....
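
If you want to automate that "watch for bursts of reallocated sectors" advice, here's a rough sketch (assuming smartmontools is installed; attribute 5 / Reallocated_Sector_Ct is the conventional name, but vendors vary, so treat this as a starting point rather than something universal):

    import subprocess, sys

    def reallocated_sectors(device):
        """Pull Reallocated_Sector_Ct (SMART attribute 5) from smartctl -A."""
        out = subprocess.run(["smartctl", "-A", device],
                             capture_output=True, text=True).stdout
        for line in out.splitlines():
            if "Reallocated_Sector_Ct" in line:
                return int(line.split()[-1])  # raw value is the last column
        return None

    # Log this periodically; a sudden jump that deviates from the drive's
    # established pattern is the "impending death" signal described above.
    print(reallocated_sectors(sys.argv[1]))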


Man, it sounds somewhat involved; it's not as easy a drop-in replacement as a regular HDD. Still, I get the benefit... so what do you do? Estimate the lifetime based on the drive's class? How would you do that? Track an average read/write count per day or something?

Thanks for the info. This is a different technology from the eMMC (or whatever it is) that's prevalent as the main storage in new "cheap" laptops now. Like the laptops that come with 16GB of storage, haha.


Make sure to have an up-to-date backup of the important bits on the drive (I'm using rsnapshot [1] with a 1h interval, backing up to a mirrored server); otherwise, just use the thing for what it was intended to do. I've had quite a few magnetic drives fail without warning as well, so I assume storage to be unreliable and act accordingly.
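
For reference, my setup looks roughly like this (a sketch, not my literal config; rsnapshot's config file is tab-separated, and here the backup server pulls from the laptop over ssh):

    # /etc/rsnapshot.conf on the backup server (fields must be TAB-separated)
    snapshot_root   /srv/snapshots/
    retain          hourly  24
    retain          daily   7
    backup          user@laptop:/home/user/     laptop/

    # crontab entry driving the 1h interval:
    # 0 * * * *  /usr/bin/rsnapshot hourly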


Heh, my storage situation is interesting. I'm currently trying to squirrel away spare change where and as I can so I can get a ZFS pool; it's mildly irritating that the "perfect" price/performance ratio means you need to get $1k in drives before you can begin.

I'm aware of ACD (https://redd.it/5s7q04), but I'm in Australia, where "real" internet (i.e., upload speeds above 100KB/s) is a dream for 95%+ of the country (including me), so the ~5-8TB of (un-deduplicated) data I have here would take at least 1090 days to upload, even if I did it 24/7 with a perfect and unwavering 100KB/s uplink. During which my entire connection would be unusable, because TCP. Heheh.
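
(The arithmetic, for the curious; this is raw transfer time only, and protocol overhead plus real-world rate dips are what push it past the 1090-day mark:)

    # Raw upload time for 5-8TB at a flat 100KB/s, before any overhead.
    for tb in (5, 8):
        days = tb * 1e12 / 100e3 / 86400
        print(f"{tb}TB: {days:.0f} days")   # 5TB: 579 days, 8TB: 926 days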

The "send me a HDD" route won't work because the data has been sliced-and-diced in various ways (the worst case example being a disk image tar+split'd to fit on a FAT32-formatted HDD) and I need to de-Rube-Goldberg everything first. That would take me about 6 months, and I'd definitely want a properly mirrored setup for that.

Moral of the story: I probably don't want your old computer with its tiny HDD. (And if I could go back in time and tell myself that the hot-swappable disk thing I was hearing about in 2006 most emphatically did not apply to my two 486s, and that plugging that HDD in with the power on would take out not only the PSU but also the floppy drive controller, that might have helped things a bit... what I had to do in the end was duplicate the HDD contents between the two machines in question so I had my files available on both; the triplicates-of-my-duplicates snowball started there, I think.)

I honestly won't be surprised if, once I've deduped everything, I only have about 50GB of truly important stuff, and a few hundred gigs of "I can redownload that if I need it".


Do you use fdupes? I wish it would gather all the duplicates into the same "copy" folder rather than leaving them scattered, with whichever one happens to be the "index" copy left in place.
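
(Something like this quick sketch would do the "gather them into one folder" part; the layout and names are made up, and it moves files around, so try it on a copy of the data first:)

    import hashlib, shutil
    from collections import defaultdict
    from pathlib import Path

    def gather_dupes(root, copies_dir="copies"):
        """Keep the first of each duplicate set; move the rest into one folder."""
        by_hash = defaultdict(list)
        for p in sorted(Path(root).rglob("*")):
            if p.is_file():
                # hashes whole files in memory -- fine for a sketch only
                by_hash[hashlib.sha256(p.read_bytes()).hexdigest()].append(p)
        dest = Path(root) / copies_dir
        dest.mkdir(exist_ok=True)
        for digest, paths in by_hash.items():
            for i, p in enumerate(paths[1:]):  # paths[0] stays as the "index" copy
                shutil.move(str(p), str(dest / f"{digest[:12]}.{i}.{p.name}"))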

I guess your alternative could be to use local storage.

Haha, and here I am crying about my own problems on a 1Gbps connection.


I ran fdupes on a random directory once, very interested to see what it would do. After spending an extraordinary amount of time checksumming everything (I don't even remember if it had a progress bar), it spat out a very confusing text file I wasn't able to make sense of at all.

My plan is to write a multithreaded disk [re]indexer (since full indexing sadly seems to be the only solution at the moment; no filesystems have a "fast dump" mode yet), along with a bunch of utilities that try to make sense of the generated index.

(By "[re]indexer" I mean that, if you pass the indexer an old database, and any file has the same size and mod/ctime as what's in the old database (or matches any other rule you specify), the checksum for that file won't be recomputed. You could also pass in an old database and simply tell the indexer to scan a specific set of subdirectories on disk that you know have changed, and the indexer would copy simply copy the old database to the new for every path/file outside the ones you've specified.)

Building an index and then working off of that has a lot of benefits - instantaneous visualization without needing to rescan the disk on every program start, realtime similar-file searching, listing every file with the same checksum as you browse around, etc.

Obviously this tool will wind up on here when I eventually (:P) do write it. I probably should start on it already, but in all seriousness I'll likely be motivated to begin once it looks like I'll finally be able to put my ZFS pool together. (Hopefully that plan doesn't backfire, with the remaining $ arriving all at once and me abruptly winding up with a bunch of disks and no tool... which would wind us back to the first approach, of just starting now.)

Local storage is definitely the goal, once I can afford it (expensive medical issues that prevent me from straightforwardly getting a traditional job are really fun, FYI xD). I don't really find cloud storage flexible enough for my requirements at this point.

Sadly, the NBN has been sufficiently politically meddled with that it's not likely to be 1Gbps; indeed, current installations are only 100Mbps with no path past that. But I heard recently in a news story (on the radio, of all places) that if people need an upgrade path for the NBN, one will be available. Nobody mentioned how much said upgrade would cost, but I'd be willing to drop a few hundred on it once I have a job. What on earth issues do you have with your connection?! lol


BTW, +many on the suggestion to deliver some old iron (or rather, titanium and magnesium) to Google HQ. For some reason they are among the worst offenders when it comes to creating sluggish, JS-heavy pages. Given the performance of their earlier, JS-free search site, this is both confusing and disappointing.




