Hard drives used to be sold in powers-of-2 bytes. I distinctly remember when all the manufacturers suddenly started stating drive sizes in powers of 10, and the explanation that made the most sense wasn't a sudden desire to conform to SI prefix standards, but a desire to market their drives as having greater capacity.
The article is a strawman: no one disputes that the SI prefix G- means 10^9. The example "A 200 GB hard drive holds..." is blatant question-begging ("it's 10^9 because it's 10^9").
The real questions are: Why did hard drive manufacturers move from a (misnamed) base 2 to base 10? Were they confused before and then saw the light (decades later)? Why did all OSes and utilities use base 2? Why do most still use base 2? Why can't we use base 2 now?
Making the consumer think he gets more for his money doesn't give you an advantage over your competitor (if he's doing the same thing), but it does put more money into the industry as a whole. Imagine a home ice cream machine fad: all manufacturers might rise more or less equally, but they all make more money now that people believe their lives are enhanced by putting their money into ice cream.
As disk sizes started to increase, the difference started to matter more and more. 1KiB is only 2.4% more than 1KB, but 1TiB is 10% more than 1TB.
I think it makes sense that this didn't become significant until disk sizes started reaching GB levels, which explains the timing of manufacturers' switch to base 10.
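For what it's worth, the gap compounds: each prefix step multiplies the binary/decimal ratio by another 1.024. A minimal sketch (plain arithmetic, nothing from the article) that reproduces the 2.4% and 10% figures above:

    # How much larger each binary prefix is than its decimal counterpart.
    for power, prefix in enumerate(["K", "M", "G", "T", "P"], start=1):
        binary = 2 ** (10 * power)    # KiB, MiB, GiB, TiB, PiB
        decimal = 10 ** (3 * power)   # KB, MB, GB, TB, PB
        gap = (binary / decimal - 1) * 100
        print(f"1 {prefix}iB is {gap:.1f}% more than 1 {prefix}B")

    # 1 KiB is 2.4% more than 1 KB
    # 1 MiB is 4.9% more than 1 MB
    # 1 GiB is 7.4% more than 1 GB
    # 1 TiB is 10.0% more than 1 TB
    # 1 PiB is 12.6% more than 1 PB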
Besides the difference being bigger for bigger disks, there is a huge performance penalty on x86 for disks whose sectors are not a multiple of 1 KiB. And since earlier models were MB-sized, they couldn't even get those few percent back.
Except when they do. Amazon Web Services uses 2^30-byte GBs for bandwidth and EBS disk sizes. They measure EC2 ephemeral disk sizes in 10^9-byte GBs, though...
Re: AWS using 2^30 for bandwidth. That's a serious error - I have never seen bandwidth measured in anything other than SI prefixes. In fact, on the few occasions I've seen data-transfer measurements called out as GiB, I've done a bit of research and discovered that the author was incorrect, and that the actual transfer was really in GB. Memory is the only place you should ever see GB mean 2^30 - and it would be nice (though unlikely) if everyone could just switch to GiB when referring to memory, and then GB=GB and GiB=GiB.
Is there any proof that hard drive manufacturers actually switched? It's always just repeated as truth. I remember megabytes being all over the place in the days of floppy disks: 1000 * 1000, 1000 * 1024, or 1024 * 1024. Hardware engineers have always been more likely to use powers of 10 (e.g. megabits), whereas software developers have always been more likely to use binary megabytes.
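The 3.5" HD floppy is the classic example of that mixed convention (my example, not from the comment above): the same 1,474,560 bytes come out as 1.47, 1.44, or 1.41 "MB" depending on which definition you pick, and it was marketed using the 1000 * 1024 one.

    floppy = 2 * 80 * 18 * 512      # sides x tracks x sectors/track x bytes/sector
    print(floppy)                   # 1474560 bytes

    print(floppy / (1000 * 1000))   # 1.47456  -- strict base 10
    print(floppy / (1000 * 1024))   # 1.44     -- the mixed unit it was sold as
    print(floppy / (1024 * 1024))   # 1.40625  -- strict base 2 (MiB)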
Why did hard drive manufacturers move from a (misnamed) base 2 to base 10?
A similar question: why do all gas stations sell gas for $x.yy9? It's impossible to actually charge 9/10 of a penny, but in the U.S. they all do it.
If you ask the people doing it, you'll get the answer that it serves the consumer better, and that it's just a coincidence that it happens to make their product look better/cheaper/whatever than it really is.
Indeed, the original PC/XT 306-4-17 drive (306 cylinders, 4 heads, 17 sectors/track) holds 10,653,696 bytes, or slightly more than the 10MB (10,485,760 bytes) it was advertised as.
Ditto with flash devices; due to their addressing architecture, their capacities are inherently powers of two. I have here a 16MB USB drive from when they first came out, and it stores exactly 16,777,216 bytes, or 32,768 512-byte sectors. Back then, flash memory was all SLC and reliable enough that only the few spare bytes on each page were needed for remapping/ECC; the OS filesystem's bad-block management could handle the rest.
In fact, I've got a 20MB SCSI drive here which I should hook up to verify...
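Quick sanity check on the numbers in this thread (the only assumption is the standard 512-byte sector; the geometry and capacities are as stated above):

    SECTOR = 512

    # PC/XT drive: 306 cylinders x 4 heads x 17 sectors/track
    xt = 306 * 4 * 17 * SECTOR
    print(xt)                # 10653696 bytes
    print(10 * 2**20)        # 10485760 bytes (10 MiB) -- so slightly more than advertised

    # 16MB USB flash drive: exactly 16 MiB
    flash = 16 * 2**20
    print(flash)             # 16777216 bytes
    print(flash // SECTOR)   # 32768 sectors of 512 bytes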