While I understand some of the vibes the author picks up (can one guy who writes all his programs in assembler really have the answers to hard drive maintenance and recovery?), the article does not seem to go much beyond this vibe and speculation-based reasoning. It would be more convincing if there were experiments testing the claimed benefits of using SpinRite, specifically around the performance improvements attributed to running a scan. Personally I think the claimed data recovery capabilities of SpinRite are less relevant these days due to file systems like ZFS that have scrubbing and data recovery built in. The distributed block and blob storage systems that clouds are built on have similar mechanisms as well.
I'd like to recommend Security Now, the podcast that the author of SpinRite, Steve Gibson, hosts. While Steve at times has a non-consensus take on things (other commenters have noted his preference for out-of-support versions of Windows), he is pretty good at explaining the technical details of the topics he covers. It's a good way to keep up with security news. Probably only the Security, Cryptography, Whatever podcast is better (more in-depth discussions of low-level topics with experts), but it is less consistently published.
Could you edit your comment and remove the last paragraph? It’s interesting info until you veer into personal insults, which doesn’t help you inform others.
> ZFS scrubbing is designed to counter issues that crop up with bit flips caused by extremely unlikely hardware errors and things like cosmic rays - which are extremely rare, but when you have petabyte scale storage, they actually become plausible risks.
It's not rare at all and you don't need petabytes to hit it in your home. I also doubt it's caused by cosmic rays.
People seem to have this view that hardware is rock solid all the time. It isn't. Components fail.
> ZFS scrubbing is designed to counter issues that crop up with bit flips caused by extremely unlikely hardware errors and things like cosmic rays - which are extremely rare, but when you have petabyte scale storage, they actually become plausible risks.
Scrubbing has multiple purposes. Sure, cosmic rays, but it also helps detect poor cabling (if somehow that didn't throw checksum errors in regular use), wild writes and degrading media.
Reading every sector that holds filesystem data is similar to SpinRite's apparent function of reading all sectors, though of course SpinRite won't know whether a successful read holds bad data, and a ZFS scrub doesn't read all sectors. According to the claims, it sounds like SpinRite can issue uncorrected reads to a weak sector multiple times, run statistics, guess at the right value and write it back? That seems possibly useful, especially if the drive is likely to reallocate the sector when it's written to. Issuing writes to all sectors of a spinning disk can help the drive firmware reallocate problematic sectors, although there are free tools to do that.
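If the statistical part works the way it sounds, it would amount to something like this toy sketch (Python; read_raw is a hypothetical stand-in for whatever issues the raw/uncorrected read, something modern drives largely don't expose):

    from collections import Counter

    def vote_sector(read_raw, lba, attempts=32):
        """Re-read one weak sector many times and majority-vote each byte position.

        read_raw is a hypothetical helper: it returns 512 bytes that may be
        partly wrong, or None when the drive gives up entirely on that attempt.
        """
        samples = [r for r in (read_raw(lba) for _ in range(attempts)) if r is not None]
        if not samples:
            return None  # nothing ever came back; the sector is truly gone
        # For each byte offset, take the most frequently observed value.
        return bytes(Counter(col).most_common(1)[0][0] for col in zip(*samples))

Majority-voting per byte only helps if the errors are random rather than systematic, and only if the drive will hand back uncorrected data at all.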
> The fact that you don't understand the difference between what amounts to a very dressed-up bad-block scan and ZFS scrubbing shows how under-qualified you are to be forming opinions on these subjects.
I thought I was pretty clear that I thought SpinRite was no longer relevant because better file systems are available. Perhaps I should have elaborated in my comment that it is specifically the checksumming and redundancy provided by ZFS (and other filesystems) that make its scrubbing able to recover corrupted data. Data corruption that SpinRite would obviously be blind to.
I haven't RE'd SpinRite myself, but I remember reading from some others who did and concluded that it was basically very similar to ddrescue's algorithm, but with some additional use of "read uncorrected" commands and statistical analysis. The latter is implemented by other DR software too:
If you listen to the Security Now podcast, Steve Gibson often says it hasn't seen an update because it hasn't needed an update. I'd imagine there's at least some truth to that. Drives change all the time, but does hard drive interface technology change all that much?
Yes, he SAYS that, but it's not true, which is part of what irks people. It has needed an update for a long time. He or Leo have long said or implied on the podcast that 6.0 is more or less "bug free." But, for example, read the SpinRite forums about unrecoverable division overflow errors, divide-by-zero errors that pop up with a red background. "Try updating your BIOS" or "try it in another computer" is the typical advice given, not a fix. And I think there is a drive size limit that has long been an issue.
> If you listen to the Security Now podcast, Steve Gibson often says it hasn't seen an update because it hasn't needed an update. I'd imagine there's at least some truth to that. Drives change all the time, but does hard drive interface technology change all that much?
It definitely needed one. I briefly had it several years ago, and I don't think it supported the large disks that were common then (maybe 2TB was its limit), and it also did not support GPT partition tables.
Years ago I bought Spinrite. Past the 30-day return policy, I was disappointed to learn that it was limited to 2TB HDDs. At that time most of my HDDs were larger. I asked about an upgrade and was told "Real Soon Now." Years ago. I asked for a refund and was told (paraphrasing) "you took too long to determine that Spinrite doesn't meet your needs."
Back in the day, before integrated drive electronics, Spinrite was very useful for adjusting the sector interleave to improve disk performance. Once drive electronics and management firmware improved, Spinrite became useless.
> Steve Gibson often says it hasn't seen an update because it hasn't needed an update.
I remember more than 20 years ago, there used to be a website grc.com as well as grcsucks.com, a site that was very much criticising the former.
GRC contained, next to a link to HDD recovery software, a section titled 'Shields Up' that you could use to run a port scan against your own IP address, with some advice by Gibson and some 'rants' about Microsoft supporting raw sockets, even though he had warned them that it was a very bad idea.
The whole Gibson franchise felt snake-oilish back then, in big part because it sounded snobbish, very 'listen to me, the smart guy!'. But still I have no idea whether or not it was legit info...?
My recollection of his networking utilities is similar. The raw sockets thing was obvious nonsense and demonstrated a fundamental inability to understand security (software/capabilities of remote parties are completely independent from your local environment - removing a feature from Windows that can facilitate attacks would not affect Windows's defensive vulnerabilities!). You'd meet plenty of types of these people in the 90's/00's, possessing security "certifications" but no actual mental model of how anything worked. I once had to suffer a university head of IT who was scared of individuals running Linux connected to the campus network because "there was no company to sue".
Gibson's whole schtick felt like a snake oil salesman that got a cult following of a bunch of Windows noobs that found his simple utilities useful (perhaps SpinRite was one of these) and then extrapolated from that into believing the vacuous technobabble marketing. In the circles I ran in, "Gibson" and "Shields Up!" were more punchlines for jokes than anything else. The concurrent "Hack the Gibson" meme didn't help that either.
I've thoroughly enjoyed posts on this blog. Based on the title I was actually worried that this post was going to be talking about GRC uncritically, and was greatly relieved when it did not.
It's a great blog, discovered it a few months ago and read every post within a few weeks. Love the random tangents - sometimes more interesting than the topic under discussion.
While Gibson is overly pompous, I should point out that SpinRite works below the file system structure, and not all filesystems are as robust as ZFS, etc. Second, there are two main SpinRite modes: Read/Check and Read/Write/Correct. SSDs should obviously never use the second mode. I suppose the first mode might be used to check whether there are problems on an SSD.
SpinRite - last time I used it, was painfully slow - like days or even weeks to run. He's been working on a faster SpinRite 6.1 for at least 10 years now. FWIW, here's the current (2021) roadmap - https://www.grc.com/miscfiles/GRC-Development-Roadmap.pdf
> What does Spinrite actually do that is materially different than ddrescue?
Pretty sure Spinrite repairs/recovers, as you imply in your very next sentence (which I agree with, it's not a good idea to play with the filesystem of a failing disk.) I've never used it, though, so I may be wrong.
The real question for me is why would somebody pay for it over using ddrescue to get an image, and TestDisk/PhotoRec to do the filesystem recovery (on the image)? They're free and very good.
I had a Team Group SSD that very occasionally would commit a successful write, only for it to be followed by a read failure several weeks/months later. Eventually it got to the point where some blocks just wouldn't read at all (or read back corrupted data). I ended up getting an RMA replacement.
On the replacement drive I used the badblocks utility to run a destructive read/write test, ensuring every sector on the SSD survived a full write and read back.
Probably not best practice, but how do I check that the SSD is fine in the first place, especially a blank SSD? My issue is that reading a blank SSD is likely to work just fine: presumably, if no data has been written yet, the SSD controller can short-circuit and just return a zero-filled response. This means the underlying media isn't tested at all, if I am understanding it correctly.
The first SSD I got seemed fine (even SMART kept parroting that everything was fine, although some of the more detailed SMART data showed worrying trends), and it was only when I noticed that some files on the NTFS partition were not reading correctly that I started to suspect disk failure. At best it would read "fine" but return corrupted data, but over time it started to simply hang on a read and fail.
Luckily I had md5 sums of some of these files and was able to confirm that several files had become corrupted between when the files were written (and the md5sums computed) and several weeks later, which is how I ended up running badblocks on the first drive to confirm the defect. I wish I had used ZFS and not NTFS.
> Probably not best practice but how do I check the SSD is fine in the first place, especially a blank SSD?
The only way to be certain the memory cells are checked is to overwrite the whole drive, flush all disk caches (power cycle the system), read all the written bytes back, and check that the values read are the same as the values that were written. This could be accomplished, e.g., by setting up block-level encryption on the whole drive (e.g. on Linux, LUKS), writing zeroes to the open (decrypted) volume, and, after the power cycle, opening (decrypting) the volume again and checking that all bytes read back as zero. (A rough sketch of the same idea, without LUKS, follows below.)
A simpler, less reliable, but still worthwhile test would be to do the same, except instead of checking the read values, just throw them away (e.g. on Linux, redirecting to /dev/null). The disk firmware should still try to read all the sectors, and if it is not lying too much, show read problems/reallocated sectors in the SMART data.
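A minimal sketch of the write-then-verify idea in Python, without LUKS. /dev/sdX is a hypothetical target (this destroys its contents), the keyed pattern makes it impossible for the firmware to fake reads by returning zeros, and you'd power cycle between fill() and verify():

    import hashlib, os

    DEV = "/dev/sdX"   # hypothetical device under test; this DESTROYS its contents
    BLOCK = 1 << 20    # work in 1 MiB chunks

    def pattern(block_no, size=BLOCK, seed=b"test-seed"):
        # Position-dependent pseudorandom data: reproducible from the seed,
        # but nothing the controller can shortcut for unwritten cells.
        return b"".join(hashlib.sha256(seed + f"{block_no}:{i}".encode()).digest()
                        for i in range(size // 32))

    def fill(dev=DEV):
        # Sketch: assumes the device size is a multiple of BLOCK.
        with open(dev, "wb", buffering=0) as f:
            n = 0
            try:
                while True:
                    f.write(pattern(n))
                    n += 1
            except OSError:   # hit the end of the device
                pass
        os.sync()

    def verify(dev=DEV):
        bad = 0
        with open(dev, "rb", buffering=0) as f:
            n = 0
            while (data := f.read(BLOCK)):
                if data != pattern(n)[:len(data)]:
                    bad += 1
                n += 1
        return bad  # number of mismatching 1 MiB blocks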
The original product was for low-level formatting and checking of WD, MFM, and RLL (pre-IDE) drives, where the drives either had or could alter the magnetic arrangement of tracks and sectors. For example, after taking an MFM drive from an MFM controller and placing it on an RLL controller, DOS wouldn't be able to read, write, or format it. However, SpinRite could low-level reformat it to work, and it would be both faster and higher capacity (thanks to the RLL controller).
> last time I used it, was painfully slow
Yep. That's the price of formatting, writing, and reading every single sector while waiting for it to come around at least 3 times on a 3600 RPM HDD.
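Back-of-envelope, in Python, assuming a hypothetical 40 MB drive and about 3 revolutions of work per 512-byte sector:

    sectors = 40 * 1024 * 1024 // 512     # ~81,920 sectors on a 40 MB drive
    revs = sectors * 3                    # at least three revolutions per sector
    seconds = revs / (3600 / 60)          # 3600 RPM = 60 revolutions per second
    print(f"{seconds / 3600:.1f} hours")  # ~1.1 hours, for just 40 MB

Scale that up to later, larger drives and multiple passes and you're quickly into days.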
This whole "discussion" indicates only one real thing: people who have been scammed by a sales tactic will most likely double down on their past poor decisions instead of reevaluating the facts, as they are now emotionally invested in their past decisions.
I never used Spinrite, yet I can clearly see many people online just jumping on the takedown train and berating Mr. Gibson as a charlatan and snake-oil salesman, with very poor arguments, often without offering any actual experience with the software, just their ideas of why it can't be useful. Those who used it and really found it useless have much higher credibility, but those seem to be a very vocal minority, while there are many people who find the software useful.
The original SpinRite was a lifesaver back in the days of 100 meg magnetic disks, I paid for a copy and used it to resurrect both my own and my friends' hard drives a few times. It was indeed able to (usually) recover bad sectors that the OS refused to fix.
Many years ago, I made the mistake of using SpinRite once to try and recover a hard disk failing with bad sectors.
It was only through the process that I realised that instead of letting the hard disk run and get worse, I should have instead tried to image off all the data ASAP with ddrescue.
> I should have instead tried to image off all the data ASAP with ddrescue.
It's interesting, since for those familiar with the claims of SpinRite and how 'it'll keep working as long as it takes to read a bad sector' (a key part of its promotion), ddrescue can do the same thing (and resume where it left off), but unlike SpinRite it's simultaneously making an image of the drive as it does so.
But if you already knew the drive was damaged, why push it to the point when you can't possibly have the time to image it, which will take forever? Feels like advising somebody to push through their asthma until they're about to lose consciousness.
This. I naively thought that SpinRite could "fix" the bad sectors (forcing the drive controller to copy the data to a spare sector) and let me copy out of the data through a normal process after, but bad sectors are often an indicator of a larger mechanical issue.
Did your disk get worse as a result of running Spinrite? My understanding is that the author and many people suggest using it to get the problematic drive working better, and to be able to read data it couldn't read before. According to testimonials online, this sometimes does succeed. Of course, it can also cause the drive to get worse or fail completely. But we don't have statistical data to determine which outcome is more probable, so it would be unfair to denounce the advice or the author (as some people do) just because the advice didn't work to the user's satisfaction in their particular case. I agree that Spinrite or any such program should, as the first step before doing anything risky, advise the user to try to make an image of the disk and save it to another, reliable disk.
I get the huckster vibe he described as that was my impression long ago when I’d see it advertised all over, hear Steve speak, etc. I did in fact buy Spinrite once when my HDD was pretty well screwed. It didn’t improve things, but credit where due: GRC did immediately honor the money back guarantee.
Your analogy is terrible. Software is infinitely reproducible, and SpinRite undeniably took a lot of effort to write (even if it “doesn’t work”). Steve Gibson hate HN threads are always awful because the greybeards come out of the woodwork and throw intellectual integrity out the window because talking about how bad the guy is is sufficiently fashionable…or was, 20 years ago.
Isn't part of the reason modern hard disks seem more reliable that they have much more sophisticated file systems on them? In the old days, merely turning a machine off at the wrong moment might introduce inconsistencies that could cause data corruption later. Or on Lin*x, when's the last time you had to look for the wreckage of files in the lost+found directory? Does anyone even remember what it's for?
I've had even modern HDs go gradually bad and the remaining undamaged files are generally readable without trouble.
The author of this piece seems to think highly of his abilities but by his own admission doesn’t understand how spinrite works. Information that’s not that hard to come by or imagine.
On Security Now, Gibson explains that for SSDs, SpinRite rewrites marginal cells to avoid subsequent error-correction delays, speeding things up.
Notice the author doesn't say "I tried it on ten dead drives heading for the garbage heap and it did nothing"; no, he says, basically, "I don't think it could work." Unless you actually wrote similar code you'd have no idea. Did this person even attempt to develop a low-level disk tool of any kind, much less one of production quality?
He also complains about the website and SG being old-fashioned. Well, let's see your website when you're 70 and have decades of career behind you. I still prefer vintage to a React/Electron monstrosity.
>Gibson explains for SSDs that SR rewrites marginal cells
Gibson knows this is a lie, but he keeps repeating it. You can't target a particular flash cell on an SSD; it's impossible. There is an FTL between you and the raw flash.
What you can do is force the SSD to reinitialize the whole flash with ATA Secure Erase or NVMe Secure Erase. You don't need to spend $80 on snake-oil software pretending to do something else in order to do a Secure Erase.
I used that word because they aren't actually sectors, but it's immaterial whether it is a cell or the enclosing sector. They can be overwritten as a full block, of course.
Whether this actually resets the voltages might vary by drive, who knows—would need to see hard data to properly verify. Without that data, both Gibson and critics are talking out their asses.
Gibson is a known quantity and more reliable than internet nobodies however. It sounds feasible and I (so far) have no tangible reason to disbelieve him. If you want to prove him wrong, do the work.
So we went from "SpinRite rewrites marginal cells" to "it reads all the data on the SSD hoping the SSD firmware will magically fix itself"? aka $80 for dd if=/dev/nvmexxx of=/dev/null
You've now veered into the territory of the other posters which I already addressed.
No one (sane) calls Dropbox a fraud because rsync exists, or Adobe because Gimp exists. Or McDonald's a fraud when wild ungulates and arrows/knives exist. These products simply target groups who desire convenience.
If you think normal folks are comfortable with the CLI, much less dd (probably the most dangerous command), you are decidedly out-of-touch. Restoring access/speed to important data conveniently via menu/progress-bar is often worth 10x that price. I've seen people buy brand-new several thousand-dollar computers to fix such problems. e.g. We were given a free iMac because the owner didn't know how to buy/install memory.
I like how you're providing the knee-jerk haters with rational push-back, using facts, logic, and also quite an eloquent language. I strive to do the same when I can. I think this is important in discussions on many topics, including political ones, because people online (but also IRL) are too prone to adopt some hateful stance towards someone to get the dopamine hit, without giving them the benefit of doubt and rational equitable analysis.
Thanks, I do have a bad day once in a while, though have been doing this particular kind of discussion since the slashdot days. So it doesn't tend to rile me up.
It’s 2024. You can’t develop a low-level disk tool, because there is no such thing as a low-level disk interface anymore. Using the rawest available interface, your tool will be several levels of abstraction away from the level Spinrite claims to operate at. It’s like saying you’re going to get AWS S3 to rewrite an object by accessing it through the HTTP API. I mean, sure, maybe, but any copy will do that just as well.
It may “work”, but the question is whether it works better than simple ddrescue.
Can you say explicitly here what exactly Mr. Gibson or the Spinrite software claims about the level Spinrite operates at? Because I've heard him say Spinrite uses either the slow BIOS interface or his own (IDE, AHCI?) drivers. These modes of operating the disk are, as far as I know, quite possible. Operating systems use them as well.
Doesn’t matter, people are familiar with what they are familiar with. And that is still M$ Windows, to a huge degree.
Just had a client who only allowed MS App 2FA email for "security reasons," meaning the IT head couldn't be bothered to understand anything else, not even better, free, open standards like TOTP or FIDO2.
I remember running Spinrite regularly on my 286 as a kid in the early 90s like some sort of maintenance ritual, watching the magical blob pulsing like it was doing something deeply important. Meanwhile, each time it ran it would either 1) test a sector that had been marked bad and then unmark it as ok, or 2) test the same sector and mark it bad again.
The idea that that same app is doing something useful 30+ years later strains credulity.
The interfaces by which the system communicates with the drive have been heavily abstracted though, which means spinrite is now similar to a block copy with failure retries.
I'm guessing the author of this has somewhat of an issue with Steve Gibson and GRC, and has obviously spent some time mulling over how to write a very wordy and seemingly in-depth bashing of their SpinRite software. However, if you, like myself, have seen it work, and actually take a previously unusable hard drive to a usable state that allowed a successful recovery of data, or in recent times, take an SSD with poorly performing read and write speeds to a significant improvement after running SpinRite on it, you will be able to skip much of the diatribe in this post and see that it's more of a character assassination of GRC and Gibson himself. Is the software 100% guaranteed to work? Nope, and I probably wouldn't recommend it for critical enterprise data recovery if you have the budget for commercial recovery services, but as a low-price maintenance tool it works well for many. The author of this post seems pretty knowledgeable, and probably has a lot to offer, which is why it's a pity his ego and spiteful nature seep into his writing.
This article didn't read like character assassination to me, personally - most of the time spent on GRC/SpinRite (after the overall topic of disk recovery is introduced) seems to be either observations about Gibson's style with which I think many would agree - e.g.
"It doesn't help that Steve Gibson's writing is pervaded by a certain sort of... hucksterism. A sort of ceaseless self-promotion that internet users associate mostly with travel influencers selling courses about how to make money as a travel influencer."
Or substantive critical points about the software, e.g.:
"This gives the flavor of the central problem with SpinRite: it claims to perform sophisticated analysis at a very low level of the drive's operation, but it claims to do that with hard drives that intentionally abstract away all of their low level details."
And I think it's fair to ask someone who is selling a piece of software for $89 to provide some backing for their claims beyond ones that would only pertain to largely-obsolete hardware.
I think you are dead on. I recall -- perhaps incorrectly -- that Gibson has been just silly amounts of incorrect on some things, but SpinRite itself, I've never heard anything but "... and then everything worked like a minor miracle." And you're correct, Gibson has a certain, uh, Wolfram-y habit of selling himself whenever possible, which doesn't help matters, but I hope people can manage to separate the personality from the product.
Observations of Gibson's style are all negative, except at the very end, where the user interface is praised. But that last element reads more like a quick self-cleansing ritual for the author of the piece, rather than an effort to provide a balanced description.
Gibson's style may very well be overly self-congratulatory and deserving of criticism, and many could agree on that. But this piece still reads like an inordinate amount of effort just to show somebody and their work in a negative light, without actually checking their product and evaluating it rationally and equitably. Even if there are bad things to say about Mr. Gibson's style or his software, the software may still be working and useful, and no attempt at serious evaluation was made.
> And I think it's fair to ask someone who is selling a piece of software for $89 to provide some backing for their claims beyond ones that would only pertain to largely-obsolete hardware.
I agree it is fair to ask, and Mr. Gibson seems like a reasonable, easy-to-talk-to person. Did you try? He has a podcast, Twitter and a newsgroup discussion forum.
Listen to the "Security Now" podcast. He explains how SpinRite works every 3rd or 4th episode, with testimonials on every show.
He seems open minded and mostly harmless, both in his tool (which I find works better than free alternatives), and in his armchair security analysis. Sometimes though he oddly contradicts his own best practices, like nearly blind faith in LastPass for years based on (IIRC) a white paper and the early execs being very chummy and accessible. Thankfully the audience calls out the questionable stuff.
The podcast is called “Security Now” but what it should be called is “privacy now” because Mr. Gibson fails to understand a lot of contemporary security problems yet is quite sure that Windows collecting telemetry is the most severe problem on the planet today.
unless you use his software to fix it, that is.
Every episode having a 15-minute commercial for spinrite (via testimonials which all sound like they were written by the exact same person) should be more than enough for anyone to start to question the guy.
I haven't listened to that show for a while now, but it seemed to be the only show out there that explained computer security news in detail. I remember him explaining the speculative-execution exploits really well when they first appeared. Do the people I know who work on blue and red teams listen to him? No, they already know that stuff, and yeah, he could be more up to date, but he does his research, does his homework, and is a great pedagogue.
If he has implemented mitigations for all of the applicable risks of the software he's using, how is that "not the behavior of a security expert"?
To my mind, a security expert is someone who understands the functional details of specific vulnerabilities, and explains how to mitigate them, not someone who makes vague, cargo-culty judgments about entire applications or OSes.
He was browsing the web, that's pretty high risk. And sticking to reputable sites isn't enough when their ads could contain malware. While it sounds like he doesn't use XP anymore, (IIRC) he was using it for the Internet well beyond its EOL.
He also admitted to having trouble getting his dev environment working on newer OSes. My guess is he was rationalizing the choice to stick with XP to avoid the friction of upgrading development tools. Which is odd, since he's not afraid to delay things for years and ultimately has upgraded his environments anyway.
Steve Gibson was always a bit of a laggard in adopting things, though. He was writing pages about how assembly language produces small programs in the late 90s, when that advantage was no longer relevant, running a newsgroup server and hooking up a web UI as a web forum, and so on.
Considering final application size as well as CPU and RAM usage will always be important, whether people believe them to be or not.
I won’t ever go so far as to recommend that others write stuff in Assembly, but I’d love to be able to do that.
CPU and RAM will matter as long as users are billed by those metrics. More RAM will always be more expensive than less RAM, and faster CPUs will always be more expensive than slower CPUs. If you write software that is used at scale, I would consider it a moral failing not to consider how many resources your application uses at scale and not to make some effort to increase its efficiency in some way.
Accordingly, I have almost zero respect for JavaScript developers, especially server-side JavaScript developers. Server-side JavaScript developers know that JS is inefficient and they choose to use it, anyway. How much coal has been burned exclusively to allow JavaScript developers to run Node on the server, instead of some other, more efficient language? A LOT, I guarantee it.
Performance and efficiency matter a lot at scale. At the small scale, no user has ever complained that their application was too fast or that it didn’t use enough RAM.
When you invoke a Lambda trillions of times per year, every last byte of RAM and every millisecond of CPU time matters. My employer has a few Lambdas which are invoked tens of trillions of times per year, and we saved a lot of money moving from Python to compiled languages. We’d save a lot more if we knew how to write assembly.
> especially server-side JavaScript developers. Server-side JavaScript developers know that JS is inefficient and they choose to use it
I'm in no way a JS fan, but this take is wrong. The main reason JS is on the server side is that it makes the transition between server side and client side trivial. Not everyone runs SaaS with billions of requests every second.
In terms of not only money and time, but also resources and energy spent, this increase in software productivity is worth it in most cases.
The advantage of writing code in assembly was relevant then, and remains relevant now.
Given the vast regressions in usability and compatibility of software generally that we've seen in the past 10-15 years, someone maintaining and extending the functionality of superior older technology is doing something unequivocally useful.
And yet AFAIK he seems to be doing fine.
If you run the same stuff, only allow and visit the same addresses, and disable ECMAScript, in addition to other mitigation measures such as 2FA, then I don't really see the problem.
> That is not the behavior of a security expert.
Your image of "security experts" must come from movies. I know security experts IRL. Their security at home amounts to not using their work computer for personal stuff, and 2FA.
You’ve never had an ad on a webpage serve you malware via a browser exploit that does not require JavaScript, I see. Nor ever used a compromised supply chain. You think that luck will hold out forever? It won’t.
Turning off JavaScript and using 2FA everywhere are good steps, but just as having a firewall and saying "I have a firewall, I'm completely safe" is myopic, saying "disabling JavaScript and using 2FA make me secure" is just as myopic.
You must apply security fixes. Sticking to Windows XP because you prefer it over newer operating systems is absolutely foolish if you connect it to the Internet in any way.
If Steve Gibson were a security expert, Windows XP would simply not have been an option the instant it went out of support.
He has had some very fun episodes over the years. Blue pill back in the Vista days blew my mind.
Another episode, "BlueKeep", had me calling everyone I knew in charge of Windows domains, with many thanks coming back my way, because it was a pretty big deal to get patched on unsupported systems.
If you think of Steve Gibson as more of a technically minded journalist and less of a "security expert", then the show is very enjoyable. There are a lot fewer grave errors now than there used to be, his voice is pleasant, and he usually covers relevant and interesting news.
There is in-depth information on its workings on the website itself, in the newsgroups and in the podcast. If the author of the article were to look, it would remove any "magic" from its workings. The author apparently has an axe to grind, for whatever reason; having said that, it may be for a very good reason, but for transparency's sake this should be included in the article. Instead it's just a weird ramble about what he thinks of other tools and that he thinks Spinrite is a "scam" without technically explaining why, boiling it down to essentially a technically worded opinion piece.
Which, while not directly dated in the content of the document, references a "screaming Pentium II 333 MHz", which would theoretically put it ~1998. Is the claim that operating at a "low level" on hard drives in 1998 is the same as in 2024?
The simplest explanation I have heard for what SpinRite does is that on spinning-rust drives, it simply tries to access the same bad data over and over until it finally (sometimes) gets a result. Which makes sense that it would work (sometimes), because hard drives that are going bad tend to do so intermittently.
This is more or less also what (GNU) ddrescue does[0]. It first tries to do a linear copy of the full disk, skipping any errors, then goes back and tries to re-read the error sectors until you either cancel or it succeeds. It also keeps track of everything it's doing so you can stop and start the process without it redoing work.
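The core of that approach fits in a few lines. A toy sketch in Python (not real ddrescue, which works on ranges, has smarter trimming/scraping phases, and wouldn't keep every offset in RAM):

    import json, os

    def rescue(src, img, mapfile, sector=512, passes=5):
        """Copy what reads cleanly, log failed offsets to a map file,
        then keep retrying only the failures. Resumable across restarts."""
        fd = os.open(src, os.O_RDONLY)
        size = os.lseek(fd, 0, os.SEEK_END)
        if os.path.exists(mapfile):
            todo = json.load(open(mapfile))        # resume where we left off
        else:
            todo = list(range(0, size, sector))    # first pass: the whole disk
        out = open(img, "r+b" if os.path.exists(img) else "w+b")
        out.truncate(size)
        for _ in range(passes):
            failed = []
            for off in todo:
                try:
                    os.lseek(fd, off, os.SEEK_SET)
                    out.seek(off)
                    out.write(os.read(fd, sector))
                except OSError:                    # unreadable this pass
                    failed.append(off)
            todo = failed
            json.dump(todo, open(mapfile, "w"))    # checkpoint the map
            if not todo:
                break
        return todo                                # offsets never recovered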
As someone that’s listened to Security Now on and off for 16 years, with some light memory jogging I’d probably find that I know far more about SpinRite than would ever be useful to me.
This is what I was about to say. I've used it on some drives, and it worked 4 out of 5 times for drives that I had given up all hope for.
These hit-piece articles are all the same: very well-contrived phrases that stop short of making definitive statements and rely heavily on the reader making assumptions, as a means to avoid libel lawsuits.
My issue with Steve Gibson is that he spews technobabble, exploiting the delta between "stuff people who work at drive manufacturers know" and "stuff computer users, even highly educated ones, know about how hard drives work", in order to sell what basically amounts to a commercial version of badblocks with a bunch of fancy graphical animations.
Spinrite kinda worked back in the days of MFM drives, which had to be low-level formatted with the sector/track information the controller then used to figure out where the head was on the drive; that sector information was refreshed during writes. But it was still quasi-snake-oil, using a lot of mumbo jumbo to say "I just note the original value of a sector, write it a zillion times, and then move to the next. This causes the MFM controller to refresh the sector tracks." Yes, those drives did benefit from low-level formats done in the conditions the drive would be operated in: with that particular controller, at that temperature range.
He claimed that Spinrite could detect not just whether a particular bit was a 0 or 1, but get the analog value directly from the drive by "bypassing" the BIOS to talk to the controller directly. And Spinrite used to have an ASCII "graph" showing these supposed values.
Post MFM (IDE, SCSI, SATA, FC, etc.), controllers are built into the drive, and low-level formatting was handled by the drive's controller itself; the drive is simply sent a low-level format command. Gibson might have still had some claim to legitimacy left there.
But then...drives shifted to using servo tracks written at the factory. The drive itself is physically incapable of doing anything to those servo tracks, and if you degauss the drive, you permanently destroy the drive because the servo tracks are wiped. The drive certainly doesn't expose via its IDE/SATA/SCSI interface any of the super-duper-low-level stuff he continued to claim to be accessing.
He kept spewing the same nonsense...that his utility would boost the strength of the analog 'signal' on the drive by writing it a whole bunch.
People who worked at drive manufacturers tried to work with Gibson because they were under the impression that he simply hadn't kept up with changes in hard drive technology, when the reality was (probably) that his product was snake oil and he knew it, or he was deluding himself. Example: https://radsoft.net/news/roundups/grc/20060123,00.shtml
Any value Spinrite has is achieved by simply trying to read the same data over and over. If there's a failing block, the drive will remap it, and boom, your not-quite-fully-failed drive is "working" again. Huzzah! Except... you can do the exact same thing by simply running badblocks, free and open source, or, if you're trying to recover data, use ddrescue or one of its variants, also open source. It's basically a "dd" that doesn't give up, hoping that the drive might successfully read a particular area if you try enough times. The better variants use a binary search to try to get every possible sector. I've used it, and it works well; I've had drives where I was able to get everything except well under 1 MB of data, given enough time to run.
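As I read the "binary search" bit, the idea is something like this sketch (Python; read_sectors is a hypothetical raw reader that either returns the data for a sector range or raises on error):

    def salvage(read_sectors, lo, hi, out, sector=512):
        """Try to read sectors [lo, hi) in one request; if the drive errors
        out, split the range and recurse, so every individually readable
        sector still gets recovered into the bytearray `out`."""
        if lo >= hi:
            return
        try:
            out[lo * sector:hi * sector] = read_sectors(lo, hi - lo)
        except OSError:
            if hi - lo == 1:
                return                  # a single truly dead sector: give up
            mid = (lo + hi) // 2
            salvage(read_sectors, lo, mid, out, sector)
            salvage(read_sectors, mid, hi, out, sector)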
These days he's even claiming that Spinrite can improve SSD performance by repeatedly reading/writing data, which is absurd. All that is happening is that Spinrite is a) wearing out the flash and b) maybe influencing which drive sectors are migrated to the SSD's SLC cache (most drives use an area of flash configured as SLC as a cache for reads/writes, because it's significantly faster and more wear-tolerant than areas configured as MLC, TLC, or QLC). And since a flash cell's electrical charge is reduced with each read, flash controllers already refresh a cell automatically when necessary as sectors are read.
So, your takedown is "it does things you can do with badblocks"? Not everybody wants to use a command-line application that can't interpret large integers properly; some people prefer a nice user interface and an active developer who can work with larger hard drives.
> he's even claiming that Spinrite can improve SSD performance by repeatedly reading/writing data, which is absurd.
Is it really so absurd if it works? Did you do some careful tests of Spinrite on SSDs, and did you find it never improved their performance? You seem to be describing 1) your mental model of the SSD drive, and 2) your belief that this model prevents Spinrite from working as advertised. How does it prevent that? If the SSD firmware does relocate data to other cells when read problems are detected, or refreshes the charge of the cells where the data is, why can't performance improve?
"I feel" are the first two words, therefore it is opinion article, the kind of opinion that does not stick to the facts, and rewards opinionated hivemind consent manufacturing. I stopped reading after those two words cuz it's dangerous signaling of ideologies in my nonfactual nonobjective opinion.
Granted, but to be fair, I feel like the whole piece is "I feel like spending an inordinate amount of time describing Mr. Gibson and his software in a bad light", without an attempt at an equitable evaluation of the product itself.
I feel that jkhanlar's still going to read the rest of this comment where I call that behavior short sighted and stupid even though they said they wouldn't, because I also started my comment with "I feel". But the problem is, not only did they stop reading there, but they felt it necessary to inform the rest of us about it. Which only makes them look even more like an idiot. Thankfully, by applying their own logic to their post, and halting reading of their comment after the first two words, which are also "I feel", we can save ourselves the trouble. Unfortunately, we don't know to stop there unless we've read the comment, so we're stuck in a paradox.
You're criticizing a mode of behaviour (not reading the thing and commenting on it anyway) as stupid. Which I agree with.
Then you say the problem is they informed us about that stupid behaviour of theirs. I'm not so sure that is the problem here. Maybe it is their strange encrypted way to ask for discussion, or help. And we should explain that behaviour is bad and should evolve for better.
But then you're proposing we should adopt that behaviour to save us the trouble with them. Thus you're proposing using behaviour which you criticize on others, or in general, because it sounds clever/funny in the present case. But it isn't, because as you've realized, it does not work.
If you want to save people trouble from stupid posts, my advice is, explain why they are stupid, but do not propose using any stupid behaviour, including behaviours suggested in other posts, even if it looks like it could work towards the end goal. The reason is that end goal is not important enough, and suggesting people adopt stupid behaviour in one case to save trouble, is still stupid, and unfortunately, promotes use of that stupid behaviour in general.
Reading the first two words isn't sufficient work to arrive at such a conclusion. Thus you seem to have jumped to a conclusion based on just two words, or, more likely, you are showing off/asking for interaction with a seemingly cleverly constructed text that has all the negative attributes it criticizes in the other text.
I did not stop reading your comment, because I didn't think it was dangerous to do so, and I thought it was funny. I write here to you because I think now it is not that funny, and you should abandon this behaviour and change for the better. For example, before commenting on an article, I recommend you first read it all to understand what it is that the article says. You will then be in a much better position to make a useful and funny comment here.
I will always be nostalgic about Steve Gibson and Leo Laporte on The Screen Savers airing on G4.
I loved the GRC website as a kid, but was not surprised, many years ago, when I learned that Gibson sometimes spoke from a place of authority where he was a bit out of his depth. The way he has spoken about Vitamin D comes to mind.
The metadata from this page [0] is from 2009 and makes some, uh, interesting assertions about how Big Pharma is why folks don't "know" about Vitamin D.
According to this page, there is plenty of research to show that Vitamin D is a cure-all that folks just don't know about:
> We see expensive commercials every night during prime time for expensive and patented pharmaceuticals ... but we never see any similar advertisements mentioning that study after study has shown that simply (and inexpensively) maintaining sufficient levels of Vitamin D can work to prevent rickets (that's well known) but also 17 types of cancer ... lower blood pressure, improve immune system function (prevents colds and flu), autoimmune function, inflammation, multiple sclerosis, autism, allergies, preeclampsia, both type 1 and type 2 diabetes, osteoporosis (also well known) depression, muscle and bone weakness and generalized pain.
The natural supplement industry is a multi-billion dollar industry, and has been for a long while. If Vitamin D were the panacea that Mr. Gibson represents it to be here, I'm sure that GNC would be running ads about treating "autism and cancer" with Vitamin D non-stop.
Further, if Vitamin D did all of the above, I can assure you that pharmas would be researching the mechanism of action to create effective derivatives.
If a treatment is effective and it isn't a super rare disease, you'll find a pharma somewhere selling it somehow.
FYI, if your SSD suddenly stops showing up (especially after a power off event), then the first thing to try is power cycling it: https://dfarq.homeip.net/fix-dead-ssd/
I had an SSD go bad in another way, with requests constantly timing out or returning corrupt data. That started happening more and more frequently, with the system crashing, and after rebooting fsck would show tons of "fun" messages about some file or another being 141831373107584798 bytes large with a modify date years in the future. Managed to get most of the files off to an external hard drive, put the dead one in a drawer and forgot about it.
When I recently read about this trick, I figured it couldn't do any harm even if it didn't solve this specific problem - it didn't, and by now the drive failed pretty much immediately when trying to read anything.
But that got me trying other things, and what appears to have solved it was using the "hdparm" utility on a Linux install CD to send trim commands for every single sector - pretty much the closest thing to a "low level format" you can get today, AFAIK. For some reason, even with all the confirmation options it demands for safety reasons ("--please-destroy-my-drive"), it will only let you do 65535 sectors at a time, so this required a script to feed it many separate ranges covering the entire space (a sketch follows below).
Ended up with a seemingly perfectly usable blank drive. Obviously, I still don't entirely trust it to keep working, but it's okay to put in a crappy old laptop with nothing important on it.
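In case it helps anyone else, the range-feeding script boils down to something like this (Python; the hdparm flags are the ones I remember, the device and LBA count are of course placeholders, and this destroys everything on the drive):

    import subprocess

    DEV = "/dev/sdX"          # hypothetical drive to wipe; destroys all data on it
    TOTAL_LBAS = 500_118_192  # total sector count, e.g. from `blockdev --getsz`
    CHUNK = 65535             # hdparm refuses ranges longer than 65535 sectors
    PER_CALL = 64             # arbitrary batch size, keeps the command line short

    ranges = [f"{lba}:{min(CHUNK, TOTAL_LBAS - lba)}"
              for lba in range(0, TOTAL_LBAS, CHUNK)]
    for i in range(0, len(ranges), PER_CALL):
        subprocess.run(["hdparm", "--please-destroy-my-drive",
                        "--trim-sector-ranges", *ranges[i:i + PER_CALL], DEV],
                       check=True)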
That's basically "let the firmware try to sort itself out", which might work if there was temporary corruption in the FTL metadata and it knows enough to try to recover that by doing some sort of block-scanning, but if there's further damage beyond that, it won't have any effect. Still worth a try, however.
I found myself agreeing with much of this article, but I have to say, I used SpinRite to great effect maybe 20 years ago on a drive that had the “click of death”.
I tried all kinds of methods of recovery, and nothing worked until I tried SpinRite. I can’t speak to how it did what it did (who knows, maybe it was a total fluke), but it gave me the opportunity to scrape all of my data off of it before buying a replacement drive.
Click of death means the HDD's bootloader is unable to find the service area and load the actual firmware/setup data from the platters. The only software able to fix that will have a database of drive models and firmware for reinitialization (PC-3000). Spinrite does none of that; it's just an MFM formatting utility that grew into snake oil over the years.
Before talking about whether SpinRite was ever any good or not, it's good to consider the hard drives that were in use at the time it became popular. These early drives pretty much all came with a ST506 interface, as well as in MFM and RLL variants (there were also ESDI and SCSI drives, but since these were exclusively used on high-end systems, they're safe to ignore due to their rarity. The distinction between MFM and RLL can also be ignored, as RLL drives were really only differently-specced MFM drives that allowed the controller to do some rudimentary compression on the bitstream in order to expand usable capacity).
The thing about the ST506 interface is that it was really, really simple: you (the disk controller) could seek to a track, the drive would tell you when that was done, then you could select a head, and the drive would tell you when the start of the track passed under that, and then you could read or write bits to the thus-selected cylinder. Again ignoring some finer points like precompensation and read recovery, you really only cared about three drive parameters: the number of tracks, the number of heads (multiplying these gave you the number of cylinders), plus how many bits you could approximately write to each cylinder. If you take a look at the OEM manual for the ST225 (a very popular ST506 drive at the time), the simplicity just jumps at you: https://archive.org/details/seagate-st-225-oem-manual-oct-85
You'll also notice, though, that there is no real mention of 'sectors' anywhere just yet. That's because that wasn't a drive concept, but something managed by the controller, which was responsible for dividing cylinders into sectors holding 512 bytes of user data. That division would mostly be timing-based, but to allow for error detection and recovery, the controller would add some metadata to each sector: typically, a sector number and simple checksum, which was sufficient to perform timing recovery and retry reads as required.
After connecting a new drive to a given controller, the user would therefore need to run a 'low level format', typically by invoking some code in the controller card ROM BIOS, or by running a vendor-supplied utility: this would then go through all cylinders and write the metadata, including all-zero user data, for each sector. For some sectors, this would yield a write fault, and such sectors would be added to a bad sector list, resulting in a drive with a capacity slightly below that advertised. Another parameter used for this low-level format was the 'sector interleave': instead of dividing a cylinder into sectors 1-2-3-4-5 and so on, the controller would do something like 1-100-200-300-400-2-101-201-301-401-3, with the exact numbers depending on the capacity of the drive, obviously, but mostly the speed of the host system. Because PCs were really slow at the time, demanding tasks like 'reading 2 sectors sequentially' would result in a miss on the second sector, meaning having to wait for an entire disk rotation before trying again. By interleaving sectors, this miss could be avoided, greatly improving performance, but also hindering performance in case the selected interleave never or no longer (i.e. after an upgrade) matched the host speed.
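To make the interleave concrete, here is a toy model (Python) using the classic 17-sector MFM track: with a 3:1 factor, consecutive logical sectors sit three physical slots apart, giving a slow host two extra sector times to get ready between reads:

    def interleave(nsectors=17, factor=3):
        """Lay out logical sectors 1..nsectors around a track with an N:1
        interleave: after placing a sector, skip ahead by `factor` slots
        (wrapping), taking the next free slot on a collision. A toy model of
        what the controller's low-level format wrote into the sector headers."""
        slots = [None] * nsectors
        pos = 0
        for logical in range(1, nsectors + 1):
            while slots[pos] is not None:      # slot already taken: slide forward
                pos = (pos + 1) % nsectors
            slots[pos] = logical
            pos = (pos + factor) % nsectors
        return slots

    print(interleave())
    # [1, 7, 13, 2, 8, 14, 3, 9, 15, 4, 10, 16, 5, 11, 17, 6, 12]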
Now, let's turn to SpinRite. Its author, Steve Gibson, understood the relationship between disk, controller and BIOS really well, and cleverly improved on the default experience, which only provided destructive low-level formatting and virtually no diagnostics tools. By combining this understanding with a database of drive parameters and controller BIOS details (mainly: what is the address and calling convention of the per-sector read, write and format routines) and a fancy GUI, he provided some real value to early hard drive users.
Initially setting up a drive with SpinRite required no obscure DEBUG commands or utilities, nor guessing of the optimal sector interleave: SpinRite would perform some tests to figure the latter out, and then perform the low-level format, with a progress bar and all, which was unheard of in vendor tooling. Even better, Gibson figured out how to do a non-destructive low-level format, and that was the true SpinRite superpower. Upgraded to a faster system and now stuck with a nonoptimal sector interleave? SpinRite could fix that for you! Did your drive degrade a bit (due to platters/heads getting out of alignment and/or the stepper motor wobbling), SpinRite could literally restore it to factory-fresh performance!
So, fancy GUI and incredibly impressive marketing aside (which Gibson was really good at as well), the basic algorithm that gave SpinRite its legendary reputation was quite simple: invoke the controller BIOS to read all sectors for a few tracks into a ring buffer in RAM (employing as many retries as needed to recover the data if at all possible), re-order those sectors if changing the sector interleave, invoke the controller BIOS low-level format for the first half of these sectors (because you don't want to low-level format too close to data you haven't touched yet!), then write back the data. Rinse, repeat, with some clever handling of bad sectors and saving checkpoint data to an unused disk sector, so restarting the system during a SpinRite run rarely resulted in data loss.
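Schematically, and purely as a sketch (read_track / llf_track / write_track are hypothetical stand-ins for the controller BIOS routines; the real thing also handled bad sectors and checkpointing):

    def nondestructive_llf(ntracks, read_track, llf_track, write_track, lag=2):
        """Reading stays `lag` tracks ahead of the reformat, so the low-level
        format never runs next to data that hasn't been safely buffered yet."""
        ring = {}                                  # small in-RAM ring buffer
        for t in range(ntracks):
            ring[t] = read_track(t)                # retry-heavy read of every sector
            behind = t - lag
            if behind >= 0:
                llf_track(behind)                  # rewrite sector headers/interleave
                write_track(behind, ring.pop(behind))  # restore the user data
        for t in sorted(ring):                     # drain whatever is still buffered
            llf_track(t)
            write_track(t, ring.pop(t))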
This worked really well for a long time, but broke down completely when disks stopped using ST506 and migrated to "Integrated Drive Electronics" (IDE) interfacing. With IDE, the responsibility for managing "sectors" moved from the controller to the disk itself. This greatly simplified the controller-to-disk interface, as the former no longer needed to know about track or head counts: instead, the disk simply presented sequential sector numbers, LBAs. LBA-to-physical-sector mapping became a disk vendor responsibility, which it remains to this day in newer interfaces like SATA, SAS and NVMe.
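The classic CHS-to-LBA translation gives a feel for the change: "geometry" became a pure addressing convention. A sketch (Python):

    def chs_to_lba(c, h, s, heads, sectors_per_track):
        """Classic CHS-to-LBA translation: sectors are 1-based, cylinders and
        heads 0-based. The physical mapping behind the LBA is the drive's
        business alone."""
        return (c * heads + h) * sectors_per_track + (s - 1)

    # e.g. the first sector under head 1 of cylinder 0 on a 16-head,
    # 63-sectors-per-track drive:
    assert chs_to_lba(0, 1, 1, 16, 63) == 63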
IDE did away with a number of things, including sector interleave, which due to the increase in host speeds was no longer relevant. But it also made it impossible to perform a 'low level format' type operation, since that was now fully a drive responsibility. Of course, the drive firmware might provide equivalent functionality to the controller, but, especially in the early days of IDE, it definitely didn't do so.
This meant that SpinRite's magic no longer worked on IDE drives. Sure: it could still attempt to read all your data (and with enough retries, that did allow for data recovery in some situations), then write it back (which might trigger a sector re-allocation in the IDE drive, but who knows), but that was about it. This did not stop Gibson from continuing to market it, and 'The Internet' from continuing to embrace it. There were rumors of SpinRite having special backdoor access to IDE controllers and such, but that was basically all nonsense. SpinRite was now `ddrescue` with some `chkdsk` and `smartmontools` thrown in, just with a much nicer user interface.
So, SpinRite started out as an extremely useful tool that was worth its money, then degraded to a no-longer-magical shell of its former self that continued to be over-hyped, often passionately, for about a decade after the point it should have faded into obscurity. There is a lot of hazy discussion around these facts, but simply by looking at how the underlying tech evolved, the picture becomes pretty clear...
> One time I had two NVMe drives in two different machines do this to me the same week.
I had been buying Silicon Power drives and they all failed prematurely (9 months at the longest). I switched to Samsungs and haven't had a single drive fail in years now.
The cheap Samsung enterprise m.2 drives you see on eBay have been trouble for me and others. The things have a tendency to suddenly fail with the firmware version displaying as “ERRORMOD” (error mode) and only showing 1GB of unusable space. They can potentially be put back into a usable state, but it’s 100% data loss when you see that message.
Samsung’s consumer and enterprise SSD departments were (unsure if still true) basically separate entities that didn’t talk with each other, which was further confused and compounded by the OEM customized firmware nonsense which results in a lot of great hardware essentially having buggy firmware with no fixes.
I believe their very latest (non OEM) enterprise m.2 drives have streamlined the availability of updating the firmware, so the problem is seemingly less for those.
For the silicon power drives, if they just suddenly stopped showing up, then you may try doing a power cycle to see if that brings them back: https://dfarq.homeip.net/fix-dead-ssd/
I’ve had Intel and consumer Samsung drives I’ve brought back with power cycling. Older NVMe Drives seem especially prone to becoming unresponsive in older NVMe enclosures. Seems like less of an issue these days though.
The Silicon Power drives all went read-only. Power cycling never brought them back, and they had various levels of data loss. It was clear at least one chip had failed and the drive was doing its best to at least let me recover what I could off of it, but given how many went this way, I just gave up entirely on the brand.
I've only bought the pro level of the Samsung consumer drives (nvme m.2) and they've been stellar. I have no experience with their enterprise drives. I also wouldn't buy drives off of eBay. I don't think of eBay as cheaper than amazon/b&h/etc, and the risk of a fake is just so much higher on eBay.
There is something strange in what the article says about modern drives dying hard and fast now. They used to just slow down a whole bunch, and you knew you had time to grab info off them. (I assume the seal just broke eventually and they lost the special inert gas in there?)
But now they just die suddenly, which I feel is worse. I think I have seen USB drives that die in a fashion where they become read-only, which seems like a half-decent idea.
Modern SSD drives typically die from controller failure, e.g. unclean power-offs corrupting internal state and crashing the firmware, internal counters overflowing after time, memory errors in the internal DRAM, etc. All due to the complexity of the FTL needing to pretend it's a regular disk to the OS.
Disk encryption is what kills off these kinds of tools, they're useless on encrypted volumes. Some disk vendors themselves still ship specialized tools for smart block image recovery, if you've gone looking for recovery tools and don't want to fork out on forensics tools that's a good place to start when tools like ddrescue fail.
It was an essential utility for low-level formatting IBM PC and compatible hard drives with certain split HDD controllers, like MFM and RLL, before the advent of IDE. It served a purpose on those drives+controllers, where it (re)wrote track and sector layouts, but it could brick some of the transitional IDE-like hard drives that weren't meant to ever be low-level formatted. Later HDDs had no need for such a utility.
It could also be done on some controllers from debug without any UI or sector testing like so: https://kb.iu.edu/d/aaoa
The advantage of SpinRite was that it was a gold-plated tool for a specific purpose and could check for errors along the way.
Anyone who had to deal with the aftermath of a failed HDD RAID set certainly was glad SpinRite was around.
After the rise of shingled (SMR) writes and SSDs, the software was not as applicable to modern systems.
Notably, most good SSD manufacturers build automated, hidden maintenance routines into the firmware, and those only run while the drive is powered. Thus, some may be surprised to learn that leaving an SSD with no power for more than a year is really bad practice.
Flushing the SSD cache after a system update, once a week in off-peak service hours, is usually good practice. Most Linux distros will similarly defer trim operations to a scheduled weekly cleaning cycle (which lowers drive wear); see the sketch below.
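For the curious, a minimal sketch of that weekly trim pass, assuming a Linux host with util-linux's fstrim available (systemd distros typically run the equivalent from fstrim.timer):

    import subprocess

    def weekly_trim() -> None:
        # "fstrim --all" discards unused blocks on every mounted
        # filesystem that supports TRIM; "--verbose" prints how much
        # was trimmed per mount point. Requires root.
        subprocess.run(["fstrim", "--all", "--verbose"], check=True)

    if __name__ == "__main__":
        weekly_trim()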
If you still run old spinning HDDs in equipment, then SpinRite can still save you a $2k recovery bill.
I've done repairs that way sometimes, but older OSes can wear an SSD out a lot faster than normal. It is a less-than-ideal solution, but it does speed up old platforms a lot (thus, we usually provided the recovered disk-image backup on another drive).
Most modern HDDs keep proprietary sector error-mapping in flash on the controller chip. Thus, unless you have the vendor software, you are not going to access anything helpful, or get away with swapping in a donor-drive controller PCB.
Sometimes, people learn the hard way about backing up regularly, and testing the recovery set periodically. The rise in remote workstations has had its downsides for sure.
To all the sceptics: I've been using SpinRite since 2007 on many HDDs and it has saved us _many_ times. Steve Gibson knows hard drives and their recovery better than anyone. Highly recommend it if you need to recover a drive.
He has used the same marketing speak for 40 years, since MFM drives were the norm, even though hard drive technology has changed fundamentally at every layer.
If you have questionable media, the safest thing to do is make an image, not try to “repair” the device in place.
I’m not a sceptic, I just know what I’m talking about. Anyone with a slightly detailed knowledge of modern hard drives knows that the SpinRite claims are hogwash.
It would be harmless if it didn’t actually waste precious time on marginal media instead of simply making an expedient image.
Yet there are testimonials here in this thread that it can make a damaged drive worse. For some use cases the professionals are worth their high fees. SpinRite is for low stakes situations or folks who cannot afford better.
Disagree. SpinRite is for a hard drive that still runs but the computer does not recognise. i.e. it's a great diagnostic tool!
If the drive doesn't power up or "clicks" (because the read head is broken), then yes, of course take it to the "pros" first to see what they can do.
But the "pros" use SpinRite and charge you $50/hour.
And if you live somewhere remote where there are no competent technicians ... ¯\_(ツ)_/¯
I've used SpinRite to recover "photo backup" drives for relatives where their local PC repair shop said "nothing can be done". For that relative, it's their life on that HDD; it couldn't be more high-stakes than that (to them), and I trust it without any hesitation. SpinRite just goes to work and a few hours (sometimes days) later the drive magically works again.
Data recovery experts definitely do not use SpinRite (I cannot speak for every corner computer shop and random 13-year-olds on the net claiming to be data recovery experts).
I've found ddrescue does a good job recovering data from bad media by doing strategic retries (roughly the multi-pass idea sketched below).
But mostly... I haven't had need of a utility like this since I started using ZFS.
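To illustrate that "strategic retries" idea, here's a toy sketch only - not ddrescue's actual algorithm - assuming the source is a disk image file whose size os.path.getsize can report: copy the easy blocks first, then keep coming back for the ones that failed.

    import os

    BLOCK = 64 * 1024

    def rescue(src_path, dst_path, passes=3):
        size = os.path.getsize(src_path)
        bad = set(range(0, size, BLOCK))   # offsets not yet recovered
        src = os.open(src_path, os.O_RDONLY)
        dst = os.open(dst_path, os.O_WRONLY | os.O_CREAT)
        try:
            for _ in range(passes):
                for off in sorted(bad):    # sorted() copies, safe to mutate
                    try:
                        os.lseek(src, off, os.SEEK_SET)
                        data = os.read(src, min(BLOCK, size - off))
                    except OSError:
                        continue           # still unreadable; retry next pass
                    os.lseek(dst, off, os.SEEK_SET)
                    os.write(dst, data)
                    bad.discard(off)
                if not bad:
                    break
        finally:
            os.close(src)
            os.close(dst)
        return bad                         # offsets that never read cleanly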
I don't think less demand for this type of tool means hard drives are better now. In today's world of periodically trashing broken devices rather than attempting repair, most normies just ignore this problem space. They might put something really important on an online service; other than that, they accept data loss when shit breaks and get a whole new laptop or whatever.
In 2006 I worked for a company that regularly used SpinRite for recovery and it did indeed work miracles. I wouldn't assume it has any utility today however.
Back when hard drives were MFM and up to 20 MEGAbytes and then RLL controllers came to make 20MB into 30MB and drives would constantly fail because they didn't even auto-park the heads on powerdown, SpinRite saved my butt many times.
I bought it up to version 5 if I vaguely remember correctly and I think I still have a sealed spare copy around somewhere (used to resell to customers).
If SpinRite was "fake" it sure did something for me.
TLDR: a 30% success rate for an $89 expense over 200 drives in 15 years is more than enough anecdotal evidence to say it was worth it.
I bought it 15 years ago and have run it on about 200 drives.
My anecdotal results split into roughly equal thirds for drives (both spinning and SSD) that had measurable slowdown from new:
1. No visible improvement of the situation.
2. Moderate improvement in speed, or identification of "bad sections of the drive".
3. Significant improvement in speed or readability.
For the third group especially, backup times were horrible or the backups failed outright, so doing the backup before the rescue was not possible; we had no option but to try the rescue first.
In those cases, after the rescue the backup worked. I then backed up the backup, because one copy is not enough (minimum three on different media, at least one offsite). Then restore/image the disk, boot up the system with a new disk, and put the old one on a shelf until you have appropriate backups of the new one. Then destroy the old one so no one can use it.
These get trotted out every time Gibson is mentioned in any capacity. For the most part, this stuff is 2+ decades old. It feels like character assassination because it's written exclusively as "debunking" or calling out Gibson's missteps. It makes no effort to follow up and state whether he's corrected himself, changed his opinion, or shared further information. These folks have it out for this guy - whether or not he's wrong.
Regarding character assassination and those links: walks like a duck, quacks like a duck...
Some people had a feud with Gibson 20 years ago, in what looks like the old days of feuds in Internet security, where you were either a "good" hacker hacking stuff for fun or a bad sellout earning money with security products.
If this sounds ridiculous check "project mayhem hackers" on Google, it was peak juvenile rebellion.
I once checked those links, and it all looked like people ranting about misunderstandings and differences of opinion, while taking the side that they had the perfect understanding and the perfect opinions... an immature approach, I would say.
It's two decades old, but so is the stuff on his site.
Go look at Shields Up - it's all written as if people are still plugging their computers directly into their DSL modems and cable modems. I couldn't find a single acknowledgement that his service isn't necessary for the 99% of users on the internet who are behind a NAT router. Nor does he acknowledge that most ISPs have long since adopted policies of blocking traffic on common Windows network service port numbers.
Anyone behind a NAT router doesn't need to worry about any of this unless they've modified its configuration to forward some ports.
He's also now hawking SpinRite to people with SSDs, claiming it helps their reliability. That's complete nonsense.
For what it's worth, the FAQ has a section about NAT. If you click on 'Help' it also specifically mentions the use case of: "... checking and verifying your NAT router's WAN-side security..."
Indeed, I would not love to see my own list of mistakes over the decades, especially from putting on a live show. The perfect-forward-podcast will never exist.
Bullshit. You can go to the GRC website and forums and see that he has never changed his opinion on the snake oil that is SpinRite for modern drives.
And the very fact that he continues to sell something with claims that cannot possibly be true for modern (past 30 years) hard drives (not to mention SSDs) is alone enough evidence.
I cannot read his mind. But my guess is that he actually believes a lot of the nonsense he is spouting, it isn’t malice. That doesn’t make him any less of a crackpot though.
Can you give a specific example of a claim that "cannot possibly be true" and explain why? And please do not just post links to an article by someone else; I'm interested in what claim you found incorrect and what chain of checks and arguments led you to that belief.
This was quite an unpleasant read. A long wall of text on data recovery from HDDs and SSDs and how they've changed... just to set the stage for what seems to be the intended message of the article - the character assassination of Gibson and his product SpinRite?
> And solid state drives... while there is no doubt performance degradation over their lifetimes, failure modes tend to be all-or-nothing.
Not all drives are perfect until they suddenly die. In my experience, even modern-ish SSDs (mainly Samsung 870 EVOs...) do develop problems continually. I've seen them fail quite quickly, and I've also seen a RAID controller reject several of them after some time of working well; the drives still lived and could be used elsewhere, but their SMART data had bad numbers, including bad sectors (I know, it's an SSD!). It's not an all-or-nothing failure with SSDs.
> So what about SpinRite?
Yeah, what about it? Why spend so much time (our time, too) prepping the ground for taking down something you do not really use, or have not analyzed in detail? Why do you care?
> he doesn't seem to have updated his website in some time... By association, I suspect that GRC's flagship product, SpinRite, doesn't get a lot of active maintenance either.
What kind of novel thought construction is that? These things need not be correlated. And if you checked, you would find he actually does update SpinRite, most recently to 6.0 and 6.1.
> It seems reasonable on the surface, but it wouldn't make much sense with a drive with internal error correction.
It does make sense, because error correction isn't perfect, and sometimes you do get different data from the same sectors after repeated reads. (I've witnessed this on an 18TB drive; I don't know if it was due to a bad drive or some bug somewhere along the way to my monitor, but it did happen. The drive works well now.)
And if the problem is an unreadable sector and you want to read it, the more times you try - with the head approaching from different directions, under different internal conditions (temperature, firmware state, platter) - the greater the chance one read will succeed. A rough sketch of the voting idea follows.
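Here's a sketch of that "read it many times and vote" idea. raw_read is a hypothetical stand-in for however you get an uncorrected sector read out of the device, and the sketch assumes every attempt returns the same number of bytes:

    from collections import Counter

    def majority_vote_sector(raw_read, attempts=32):
        reads = [raw_read() for _ in range(attempts)]
        # Vote byte-by-byte across all attempts; the most common
        # value at each position wins.
        return bytes(
            Counter(column).most_common(1)[0][0]
            for column in zip(*reads)
        )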
> even tools dedicated to that purpose (like the open-source badblocks) are becoming rather questionable in comparison to the preemptive capabilities of modern HDDs.
Tools "like badblocks" - writing and reading the whole drive - are not questionable. They are the only quick way of checking if your new drive from the supplier has defective media. I don't do it with all drives, but when I have time and I want the drive to be solid, I do the test, and drives that log bad sectors get returned to supplier.
> their mentions of how SpinRite is a very powerful tool that one shouldn't run on SSDs too often, absolutely reek ...
Maybe there's a good reason for that, just as there is a reason for not running too many write benchmarks on SSDs. Flash degrades on writes, and maybe even a little on reads (read disturb).
> I don't think SpinRite started as a scam, but I sure think it ended as one.
Based on what? It seems to be based on old claims from an old webpage and weak logical arguments about why those claims aren't believable for modern hard drives - without testing and reporting on the new SpinRite against real drives with problems.
I've never used SpinRite, but this article did not convince me of anything, except that the author doesn't like Gibson and his product, and wrote a bad blog post about it.
It is my opinion. What else? I mean, what was the point of your response? You didn't even offer an opinion on the topic. You offered an observation and lazily concluded whatever-it-was with a rhetorical question?