(b) doesn't really matter; working reality trumps the theoretical spec.
Do file systems directly issue SCSI commands? I would've thought they tell the storage driver to do something and the driver would do it with the most efficient means available.
And yes, some filesystems do - ESX, for example, uses what they call VAAI, which is a set of optional (standardized) SCSI functionality, like WRITE SAME, COMPARE AND WRITE (iirc), and server-side copy.
Is there an alternative non-optional strategy for achieving secure delete (or revocation semantics of some kind)? If not, this is a fundamental capability that you can't paper over by slapping an abstraction layer on top any more than you could turn a 1TB HDD into a 2TB HDD with an abstraction layer. If so, it seems to me like the bug is very much in the hard drive / standards, not in the operating system.
Issue normal data writes of blocks that are filled with zeros. The same path that gets regular data to the drive just fine will, of course, also work for data that happens to be all zeros.
The above is a discussion about whether the filesystem driver or the block device driver would issue the SCSI commands.
This would never happen from userspace.
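Purely as an illustration of that zero-fill fallback (in practice the kernel's block layer issues the writes, not a userspace tool), here's a rough Python sketch; the device path and range are hypothetical and the operation is destructive:

    import os

    def zero_fill(path, offset, length, chunk=1 << 20):
        """Overwrite [offset, offset+length) of a block device with plain
        zero writes -- the fallback when WRITE SAME / UNMAP isn't available."""
        zeros = bytes(chunk)
        fd = os.open(path, os.O_WRONLY)
        try:
            os.lseek(fd, offset, os.SEEK_SET)
            remaining = length
            while remaining > 0:
                remaining -= os.write(fd, zeros[:min(chunk, remaining)])
            os.fsync(fd)  # make sure the writes actually reach the drive
        finally:
            os.close(fd)

    # Hypothetical (and destructive): zero the first 16 MiB of /dev/sdX
    # zero_fill("/dev/sdX", 0, 16 * 1024 * 1024)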
Is it though? There's probably a big drawback in terms of resource consumption if this isn't supported, and not every environment may be OK with that.
This command is not mandated to be supported. Therefore, if an OS assumes it is supported, that's an OS problem, not the drive's.
IMHO, these standards probably define mandatory and optional commands for certifying disks as compatible with the spec.
If the command is optional, then it's OK, but if it's not, then there's a bug fix that WD should be making.
smartctl isn't really designed to handle the SCSI protocol, I think. It can do basic things, but for anything deep you're better off using sg3_utils.
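If you do want to check what a drive advertises before trusting it, the Logical Block Provisioning VPD page (0xb2) is where UNMAP / WRITE SAME(16) support is reported. Here's a rough sketch wrapping sg_vpd from sg3_utils - the page abbreviation and the exact output labels can vary between sg3_utils versions, so treat the string matching as an approximation:

    import subprocess

    def lbp_support(device):
        """Read the Logical Block Provisioning VPD page (0xb2) via sg_vpd
        and look for the UNMAP / WRITE SAME(16) support flags."""
        out = subprocess.run(
            ["sg_vpd", "--page=lbpv", device],
            capture_output=True, text=True, check=True,
        ).stdout
        return {
            "unmap": "(LBPU): 1" in out,            # label wording may differ by version
            "write_same_16": "(LBPWS): 1" in out,
        }

    # Hypothetical usage (needs root and a SCSI/SAS/UAS device):
    # print(lbp_support("/dev/sdX"))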
Thanks again. :)
The OP shows errors that are reported to the OS by the drive when it attempts to use the command. Even if it can't pre-determine support for the command, it can fall back upon receiving an error.
The issue is that some command opcodes may be doing double duty on a different drive. Famously, a few CD-ROM drive vendors reused the "clear buffer" command to instead mean "update firmware". Linux used support for "clear buffer" to detect whether a drive was a CD-ROM or CD-RW drive. As a result, using such a CD-ROM drive under Linux would quickly cause it to become permanently bricked.
You can't trust the response because it's likely that at that point, the damage is already done. Even if you get one, you might not know what it means.
That applies to any command that the drive does not advertise support for via the appropriate SAS and SATA commands. In some rare cases you might maintain a manual whitelist of commands a drive supports beyond what it advertises, but you should never try to discover support automatically at runtime.
I still don't get this. If the damage is already done, then how is issuing the fallback going to change things? Again: I'm not arguing about whether discovery should be done or not. All I'm saying is, if the device says invalid opcode, you should use the fallback, whether or not there was any discovery that led you to use the initial opcode.
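To make the control flow concrete, here's a rough userspace analogy of "try the efficient opcode, fall back on error", using Linux's BLKDISCARD ioctl (the kernel returns EOPNOTSUPP when the device or transport can't do it). This is only an analogy for the pattern being argued about - discard does not guarantee zeroed data, so don't read it as a secure-erase recipe. The device path and range are hypothetical:

    import errno, fcntl, os, struct

    BLKDISCARD = 0x1277  # _IO(0x12, 119): discard a byte range on a block device

    def discard_or_zero(path, offset, length):
        """Try the efficient path first; if the device doesn't support it,
        fall back to plain zero writes instead of giving up."""
        fd = os.open(path, os.O_WRONLY)
        try:
            try:
                fcntl.ioctl(fd, BLKDISCARD, struct.pack("QQ", offset, length))
                return "discarded"
            except OSError as e:
                if e.errno != errno.EOPNOTSUPP:
                    raise  # a real failure, not merely "unsupported"
                os.lseek(fd, offset, os.SEEK_SET)
                zeros = bytes(1 << 20)
                remaining = length
                while remaining > 0:
                    remaining -= os.write(fd, zeros[:min(len(zeros), remaining)])
                os.fsync(fd)
                return "zero-filled"
        finally:
            os.close(fd)

    # Hypothetical, destructive usage:
    # print(discard_or_zero("/dev/sdX", 0, 1 << 20))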
But it is much easier to rely on what is known to work than to issue potentially unsupported commands, to the point that there is no reason to have a fallback other than "rediscover what the drive supports".
I don't get why you would even want to use a fallback command on a drive that is in a potentially unknown or undefined state.
If discovery led to an invalid opcode, the drive is faulty, end of story. The SAS and SATA standards are very clear on what is permitted and what is forbidden, and that falls very far on the side of "not allowed".
Discovery has of course improved this, so we know what a hard drive can and cannot do. Hard drives that lie about what they support shouldn't carry the seals and trademarks of SATA or SAS, as they must be certified by those entities.
I haven't noticed any abnormal behavior, but I also don't mount/luksOpen them with -o discard (if that's what it takes to trigger this).
Weird. That's interesting. Why did I think WD and other companies would just keep churning out SATA-interface drives forever and ever?
Seagate's 3.5" Backup Plus Hub drives intentionally mask the disk so it shows up as a different, generic Seagate device, probably to keep people from messing with the disk settings or identifying the actual drive inside.
In addition to the devices, the host controller has to support UASP too - in the early days of USB 3, it was a separately licensed option.
At best, perhaps it's a method of enforcing product segmentation - preventing users from buying & shucking what are strangely cheaper external drives (discounted so that WD can sell you, or sell 'you' via telemetry, some value-add software). I don't see a technical reason why a drive could be physically unable to support this SCSI command.
Don't ask me how I know this.
Also they have a lot more tolerance for disk failure than almost anyone.
But I don't understand how their maybe not being top quality would make part of the SCSI protocol disappear from the firmware. That would mean these drives had different firmware developed for them, with features missing, which makes no sense.
They implement certain parts rather than passing them through, and they do indeed have separate USB firmware.
There's no SATA connector, so you can't salvage the drive or the enclosure. But there are SATA test points, so you could wire it that way in theory.
Toshiba does the same; I found out the hard way after prying open one of them to salvage a hard drive for my PS4.
Kind of surprising that the drive control board in the Passport has the USB connector built right in. It makes me wonder a few things:
1. What are volumes like for 2.5" spinning rust drives? I understand that the vast majority of 3.5" drives go into servers, desktops, or storage devices where they operate on a SATA bus, so the small volume of USB drives are most cheaply made with a housing that uses the economies of scale of that industry and adds a USB conversion motherboard. A decade ago, I would have said most 2.5" drives are used with SATA connectors in laptops, but who's buying laptops that don't use solid state storage anymore?
2. What's the cost difference for a drive control board with optional pads for both SATA and USB, only one installed at a time, vs one that only supports SATA?
3. Can you pull off the control board and replace it with one from the same lineup that uses SATA, like you would in a data recovery operation where some IC on the board burned out? Or is the mechanical component also specialized?
My first-generation 320GB Passport disk also has different firmware for enclosure-based operation.
IIRC some WD disks don't have SATA ports; the USB ports are soldered directly onto their drive boards.
WD is an interesting company.
I have a completely unusable 2TB drive at home that for some reason only gets detected by MacBooks, not by Windows or Linux PCs.
I now have a collection of internal drives in enclosures, and the first two, out of old laptops, have outlasted any external drive I've ever had.
Only one external drive I've had in my life has been good. That's a Seagate. Dunno if it's a fluke, but I'll just buy that brand in the future until I find out.
It is surprisingly difficult to ensure deliberate data loss
This can be the case for something like an ATA Secure Erase command, which is why the Sanitize commands were introduced to ATA, SCSI and NVMe. Those do explicitly mandate that all user data be erased, including from all caches and any storage media that is not normally accessible to the host system (i.e., old blocks that haven't been garbage collected yet).
2) While it's been "common knowledge" since Gutmann that data from old writes can be recovered (thus the advice to write multiple passes of random data), this turns out to have been iffy in Gutmann's day and an outright myth today. Multiple university teams have tried and failed to recover data using advanced techniques (such as SEM tomography) after a single zero pass. Generally the success rate for single bits is only slightly better than random chance. Gutmann himself criticized multi-pass overwriting as "a kind of voodoo incantation to banish evil spirits" and unnecessary today.
3) By far the larger concern in data recovery, for platters as well as SSDs, is caches and remapping performed in the firmware. As a result, the ATA secure erase command is the best way to destroy data because it allows the controller to employ its special knowledge of the architecture of the drive. However, ATA SE has been found to be extremely inconsistently implemented, especially on consumer hard drives. The inability to reliably verify good completion of the ATA SE is a major contributor towards preference for "self-encrypting" drives in which ATA SE can be reliably achieved by clearing the internal crypto information, and the US government's recommendation that drives can only reliably be cleared by physical destruction. Physical destruction is probably your best bet as well, because self-encrypting enterprise drives come at a substantial price premium and you still lack insight into the quality of their firmware. In other words, the price of a drive with an assured good ATA SE implementation is probably higher than the price of a cheap drive and the one you'll replace it with after you crush it.
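If you do want to try these on your own hardware anyway, the usual ATA Secure Erase dance with hdparm looks roughly like the sketch below: set a throwaway user password, then issue the erase. The device path and password are placeholders, the drive must not be in the "frozen" state (check hdparm -I first), and of course this wipes everything:

    import subprocess

    def ata_secure_erase(device, password="p"):
        """Typical ATA Security Erase sequence via hdparm: set a temporary
        user password, then issue SECURITY ERASE UNIT with that password."""
        def run(*args):
            subprocess.run(["hdparm", *args, device], check=True)
        run("--user-master", "u", "--security-set-pass", password)
        run("--user-master", "u", "--security-erase", password)

    # Destructive, hypothetical usage (requires root, wipes the whole drive):
    # ata_secure_erase("/dev/sdX")

And the newer SANITIZE command mentioned above can be driven with sg_sanitize from sg3_utils; the option names here are from memory, so check your version's man page before trusting this sketch:

    import subprocess

    def sanitize_block_erase(device, wait=True):
        """Issue a SCSI SANITIZE (block erase) via sg_sanitize. Unlike ad-hoc
        overwrites, SANITIZE must erase all user data, including caches and
        blocks that are no longer mapped to any LBA."""
        cmd = ["sg_sanitize", "--block", device]
        if wait:
            cmd.insert(1, "--wait")  # poll until the drive reports completion
        subprocess.run(cmd, check=True)

    # Destructive, hypothetical usage (requires root):
    # sanitize_block_erase("/dev/sdX")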
It's true that multiple overwrites are overkill. But for SSDs, it has been shown that it's possible to read data after a full overwrite.
There is a potential benefit to multi-pass random writes to SSDs in this case, but the paper shows exactly why you shouldn't rely on it: the improvement in security from random overwrites is stochastic at best and cannot be guaranteed without full knowledge of the controller's behavior, as can be seen in the paper's drives that continued to contain remnant data after many passes.
As the paper finds, multi-pass overwrite is not a valid technique to sanitize SSDs, and is still cargo-cult security.