To me this reeks of Western Digital style dishonesty[1][2]. I was wondering how long it would take for the new parent company to rub off on SanDisk and I guess I have my answer. My suggestion is to stay the hell away from anything Western Digital (and now SanDisk) if accurate specs are important to your use case.
What supplier would you recommend? WD and Sandisk have been my go-to for all things storage, but I'm willing to pay a premium of up to 20% for better quality.
My experience with WD spinning disks has been great. I've had two Sandisk SD cards fail which was a bummer but they both went into a read-only mode so I didn't lose any photos. I've switched to Samsung's high-endurance line for those.
Here's my problem: I'm glad to pay a premium for Sandisk if it guarantees reliable, honest, and accurate devices. I can buy generic flash drives for a fraction of the cost.
If Sandisk is starting to cheat, why would I pay the premium? And if I'm willing to pay a premium, whom should I buy from?
Sandisk, prior to WD, was expensive but honest. I was concerned this might happen, since WD has a bad track record here.
Same story with many brands.
Rather than raise their prices, they cut quality. Do companies not understand why people were buying them to begin with?
Bounty paper towels are the latest brand that I noticed. They always were superior, thick large sheets. Now they are on par with generics.
Remember "brands" are not a thing; all companies are people, and people care about themselves. If you join a company and cut costs to "save money", you'll probably get a big bonus and then jump ship somewhere else so you don't suffer the consequences.
At a certain point in the lifespan of the brand the reason the brand is well known becomes less the quality of the product than the recognition and "stickiness" of the brand itself.
A certain mindset and approach has taken over a lot of companies where they want to pull a fast one on their customers and improve margins at the same time. They rely on the brand name to keep customers coming back (because research has shown that people are less likely to try a new brand in a product space over their chosen one, "ol' reliable", even if they complain about its falling quality) while lowering the quality of that product until the brand identity is all that differentiates the product from a competitor. This improves their margins relative to competitors, at the expense of the quality of the product.
At least until customer perception sours. Then the brand engages in "artificial" rebranding by "returning their product to quality" and advertising it as such. A recent and famous example of this cycle was Domino's pizza, which ran a stellar marketing campaign built on...their 'old' pizza sucking. Many hyper-margin-conscious companies rinse and repeat this cycle.
It’s like the probably fake story of the MBA who removed an olive from each salad at an airline. The way the story should end is with another MBA removing the rest of the olives, then another the space between seats, then another starting to charge per bag…
Their consumer-level portable flash drive stuff has been trash for a decade or two...overpriced, slow, high failure rate, and terrible/gimmicky cases.
It's only their high-end stuff that was decent. For a long time, they were kings of the high-end CompactFlash and SD card market for photo/video work with their Extreme and Ultra lines. Lexar was the other major player there.
A lot of other companies offered cheaper, better performing products particularly for USB flash drives.
I don't need that many write cycles, though. I need a good controller, which can give me both high speed and high reliability with more bits per cell.
So "should be" is not very enticing, especially if they want to charge 10x. As far as I can tell you can get generic flash chips and run them as SLC, so they should only need 3x the flash and 1x as much everything else.
And actually looking at the prices here, it's about $120 for 32 gigacells. If you get the SLC model then you're not paying 10x the cost per byte, you're paying 40x the cost per byte.
Samsung has lines that are marketed for use in security cams, these have a high read/write cycle lifetime. They’re worth it for embedded devices like raspberry pi.
I switched to Samsung from SanDisk about 7-8 years ago (when I first discovered this cheating on one of my SanDisk USB sticks) and haven't had any issues.
BTW, there is no need to buy anything fancy for RPi, I just buy the cheap stuff, but I buy much higher capacity than what I need. Overprovisioning this way is cheaper and more reliable in my opinion vs. buying something more exotic.
For example for the most basic RPi install, I'll buy a 64GB microSD card. The card may only ever see less than 10 GB worth of data, but I know it'll last for years with plenty of chatty logs written to it daily.
Source: I've been running RPis from the very first version, and have never had a single card fail on me, ever.
About two decades ago, there was a nice article about a hierarchy of memory cards. The chips are binned:
- Reliable chips were placed in name-brand products
- Defective chips had the defective parts disabled and were placed in products one tier down. A 16MB chip might be sold as an 8MB one.
- Unreliable chips were given various error-correcting techniques and were placed in cheap, off-brand parts, as well as non-critical applications (such as, at the time, answering machines). A common technique was to scatter addresses, so errors were randomly distributed. For answering machines, this meant a little bit of static. For other applications, this meant a bit of ECC would make them quasi-reliable.
Cards and chips down the line failed a lot more than at the top.
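The address-scattering idea is easy to sketch. This toy Python example is my own illustration (not from the article); it uses bit reversal as the scrambling permutation, which is just one hypothetical choice, to show how a physically clustered run of bad cells gets spread across the logical address space:

```python
# Toy sketch of address scattering: a fixed permutation of logical
# addresses spreads a physically clustered run of bad cells across
# the whole logical address space, so errors look like random noise
# (a little static) instead of one long corrupted stretch.
def scatter(addr: int, bits: int = 10) -> int:
    """Map a logical address to a physical one by reversing its bits
    (a simple, hypothetical choice of scrambling permutation)."""
    out = 0
    for i in range(bits):
        out = (out << 1) | ((addr >> i) & 1)
    return out

N = 1 << 10
bad_physical = set(range(100, 110))  # a clustered run of 10 bad cells

# Which logical addresses land on bad cells? They end up far apart.
bad_logical = sorted(a for a in range(N) if scatter(a) in bad_physical)
print(bad_logical)
```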
I have had a few memory cards fail before. I am glad to pay for top-line parts. The premium I'm willing to pay is in the low tens of dollars, which is what Sandisk used to do.
I've used Kanguru in the past but haven't really looked at USB drives since switching to IODD external SSDs.
You might also look specifically for "Windows To Go Certified" flash drives - not because you're actually going to run Windows from them but because they should meet some decent minimums for speed and flash endurance.
It's long been suspected that PC hardware companies have been doing this sort of stuff.
Camera manufacturers are also known for shipping review sample lenses that have been hand-assembled and perfectly calibrated, nowhere near the quality of calibration/assembly that ends up coming off the production line.
I've tried to find relevant EU law about this. IANAL, of course.
According to the 'misleading and comparative advertising' directive - https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CEL... - it seems this could fall under articles 2b and 2c, which define misleading and comparative advertising respectively. The definition of misleading here is:
[..] any advertising which in any way, including its presentation, deceives or is likely to deceive the persons to whom it is addressed or whom it reaches and which, by reason of its deceptive nature, is likely to affect their economic behaviour or which, for those reasons, injures or is likely to injure a competitor [..]
This is a directive, unfortunately, so the actual law resides with (and differs between) each member state. OTOH, this means inhabitants can complain to their regulator and don't need to sue SanDisk themselves.
I was hoping this one would help. It says which errors are allowed when measuring things. Unfortunately it only speaks of weights and (physical) volumes, not other measurements like number of bits. https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CEL...
In general, I don't have much idea about how to navigate the EU consumer protection landscape. So while I do have a lot of protections, it is sometimes hard to know which ones. Does anyone know a good site or something?
So they need to overprovision, and compensate for effects - that is fine.
However, only advertising "usable size" is truthful, everything else is misleading.
It is sad that the legal system tolerates such cheating, because it makes it harder to find the actual properties of the product, or even impossible to know how the product will actually work.
It's shockingly common nowadays to advertise things with an upper limit, rather than something like expected value. They're just saying, "we won't tell you what it is, but there's no way it's more than this". Bizarre that it seems to be effective.
Was just looking at this with internet service last night. Though I can understand where they're almost forced into that, since explaining to a lot of people that you can't get 1Gig speed from a random server that isn't serving that fast would be impossible.
But, the cable company doesn't disappoint -- since I can't find the price of an upgrade without calling, nor mention of upload speeds.
Cable companies are the worst. I wish politicians would take a look at their practices (and their more than likely illegal price fixing), but their lobbyists are strong.
> I saw a billboard for a lottery, it said "estimated lotto jackpot 55 million." See, I did not know that shit was estimated. That would suck if you won and they go, "oh, sorry, we were off by two zeros. We estimate that you are angry!"
> I saw a billboard for a lottery, it said "estimated lotto jackpot 55 million." See, I did not know that shit was estimated.
It’s estimated because they don’t know exactly how many tickets will be sold leading up to the next drawing. I’ve not tracked it closely, but the times I’ve checked it was off by under 1-2 percent. If it bothers you that they said “estimated $55M” and instead you win $54,333,187.42, is it really that bad?
I can imagine a perverse population of customers whose elasticity function prevents the estimate from ever being correct by always buying too many tickets.
> If you don't know the answer, underestimate and overdeliver.
Make up your mind, is estimating bad or not? The example I gave isn’t the only possible outcome; sometimes it’s 1-2% more than quoted when more tickets were sold.
Not sure how this is about democracy or otherwise. Regulation exists in various countries, for example regarding internet plan speeds. In Germany, ISPs can’t just slap on “up to 300 Mbit/s” on the plan and then say “oh it’s only 50 Mbit/s at your house, tough luck”. Performance sucks (conditions apply)? Pay less or cancel immediately (without notice period).
Ah, but they can slap “up to 300 Mbit/s” on the plan and then hope you won't notice, which most users won't. If the worst that can happen to them is that you get out of a long contract (which shouldn't even be the norm to begin with), then that's hardly an argument that we have effective regulation.
There's nothing bizarre here: there are enough people in those democracies who believe regulating this is either not a good idea or not a high enough priority.
I don't think that's true at all. In most representative democracies (UK here) we just don't have the ability to vote on such fine-grained legislation. If you asked the person in the street "should companies be held to absolutely strict truth in advertising for quantitative claims", once you explained what it meant (!), I'd warrant the vast majority would agree, yes they should.
It's anti-democratic, and anti-capitalist to allow misinformation on products. Markets can't optimise if you allow misinformation.
>If you asked the person in the street "should companies be held to absolutely strict truth in advertising for quantitative claims", once you explained what it meant (!), I'd warrant the vast majority would agree, yes they should.
This overestimates support because you're vague on what the policy actually is, so everyone thinks it's going to be their preferred variant that gets implemented. See: the Brexit vote, which got a majority for "yes", but in reality none of the individual proposals got majority approval.
If hypothetically 10% of the population said A but you slice up B into specific enough buckets then A wins even if the overwhelming majority dislike A.
Brexit was only a policy question if you combined two different questions. “Should we stay in the EU?” and “What kind of foreign policy should we have?” People answering Yes to the first question also had plenty of diversity in how they wanted to answer the second question.
> If hypothetically 10% of the population said A but you slice up B into specific enough buckets then A wins even if the overwhelming majority dislike A.
Yes, if you're only allowed to vote for a single option. If you're allowed to vote yes/no for each option, or rank them from best to worst, then this problem doesn't happen.
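A toy count with made-up numbers illustrates the split, using approval-style counting as one of the alternatives mentioned:

```python
# Made-up ballots: 20 voters back A; 80 voters each pick one specific
# B variant, but every B voter would accept any B over A.
plurality = {"A": 20}
for i in range(1, 9):
    plurality[f"B{i}"] = 10      # 80 anti-A votes split 8 ways

winner = max(plurality, key=plurality.get)
print(winner)                    # "A" wins on 20% support

# Approval-style count: each B voter approves all eight B variants,
# so the split no longer matters.
approval = {"A": 20}
for i in range(1, 9):
    approval[f"B{i}"] = 80
approval_winner = max(approval, key=approval.get)
print(approval_winner)           # one of the B variants wins
```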
Well, you can ask your representative to represent you, right?
I live in a place with direct democracy and it's much less regulated than the UK. Therefore the link between having the ability and actually regulating more is not obvious to me.
> anti-democratic, and anti-capitalist to allow misinformation on products
Is it though? I mean, it might be bad, but I don't see how it's necessarily anti-democratic, as it has little to do with the governance model.
They do, especially outside of the United States, but there’s also a ton of corporate money spent to promote libertarian ideology so a non-trivial number of voters believe that regulation is inherently harmful.
Read the top comment about misleading or deceptive conduct in the EU. There is a similar regime in Australia and companies don’t do it lest they face steep fines and lawsuits with good payouts and very few defences available. Good democracies regard this kind of law as a necessary part of capitalism, because it harms not only the consumer but the entire system. Maybe yours hasn’t figured it out yet.
I dare say that "actual storage less" (and other equivalent phrases) are now past the point of covering for the discrepancy.
5%+ of missing capacity is definitely large enough for the actual expected capacity to be used for marketing.
This is also something that should find its way into smartphones. Samsung's are notorious for using significant amounts of the advertised storage for bundled marketing apps that cannot be removed.
> I dare say that "actual storage less" (and other equivalent phrases) are now past the point of covering for the discrepancy.
I think the original reason for that disclaimer was due to filesystem formatting, not for the GB -> GiB discrepancy or providing less usable bytes on the block device than advertised.
If you read the article, the same is true in the SanDisk case. The module is still a 16 GB module, and “A portion of the total capacity is used to store certain functions including optimizations of the memory that support performance and endurance and therefore is not available for user storage.”
I think it's slightly different in that it changed, though. A 16GB iPhone always meant that there was a 16GB storage module inside the device and then some of it was used for the OS (how much has varied over time, which I agree is a problem).
In the SanDisk case, at least from what I understand, it's that 16GB used to mean something like a 17GB module with 1GB used for system functions, whereas now it's a 16GB module with 1GB used for system functions.
Since there are no 17 GB modules (to my knowledge), they would have had to combine modules of different sizes. The f3 output in the article also states 16 GB for both drives. The old drive already only exposed 14.9 GB (GiB) of usable space. So it seems to me that it is only a quantitative change, not a qualitative one.
Phones are actually worse in that a mere OS update can decrease the amount of free space, which is something that presumably does not happen with USB drive firmware updates.
Are all the sizes really that even? Despite weird layer counts and stacking counts and typically doing 3 bits per cell?
Still, a 16GiB module would be 17 billion bytes. No need to combine different sizes. Unless you're saying a single module would have a decimal capacity?
If you take a 16GiB module and sell it as 16 billion bytes then that gives you 7% overprovisioning. Overprovisioning 10% would give you a 15.4 billion byte drive. If you want 16GiB then it's a quantitative change of how much you lose. But if you go by their promise of 16 billion bytes then it's a qualitative change that they broke the promise.
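A quick check of that arithmetic:

```python
GiB = 2**30
module = 16 * GiB                     # raw flash: 17_179_869_184 bytes
advertised = 16_000_000_000           # "16 GB" read as 16 billion bytes

# Selling a 16 GiB module as 16 billion bytes leaves ~7% spare:
spare = (module - advertised) / advertised
print(f"{spare:.1%}")                 # 7.4%

# Reserving a flat 10% of the raw flash instead:
usable = module * 0.90
print(f"{usable / 1e9:.2f} billion bytes")   # ~15.46, the figure above
```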
Even iPhones aren't immune to getting bad blocks. The system files category is a catch-all that includes the reserved blocks for remapping by FTL when blocks go bad. As storage tiers get bigger, so does the number of reserved cells, in absolute terms.
Why do I as the user care about how much storage is physically on the system? Are they going to start counting the firmware storage on the WiFi chip next?
As the user I care about how much storage I can use, and how durable it is. If the storage is used by the OS it isn't helpful to me, other than the "runs iOS" feature point.
(This does get slightly complicated if the OS space changes with updates; it isn't entirely clear what number is best to explain to the user in that case. But I can assure you that if an OS update takes 30GiB more space, users aren't going to care that the storage is still in their phone.)
> Why do I as the user care about how much storage is physically on the system? Are they going to start counting the firmware storage on the WiFi chip next?
Most Android phones use separate partitions for OS and user data, so they could advertise how much user storage they are giving you (as they are very unlikely to re-partition during an update). However, AFAIK all manufacturers report the full storage size, not just user storage.
I have had the same happen to me with Toshiba MicroSD cards. I ended up returning them.
I also bought a Transcend 128GB USB stick, which I unfortunately did not return (yet). I have communicated with Transcend support and they said it was an issue with partitioning and filesystems, which is wrong, because the tools I use show the capacity at a lower level. They kept stubbornly repeating these statements.
I may end up returning it which shouldn't be a problem even after almost 2 years because the defect has been there since the beginning.
Here's what diskutil has to say about it:
% diskutil info /dev/disk4
Device Identifier: disk4
Device Node: /dev/disk4
Whole: Yes
Part of Whole: disk4
Device / Media Name: Transcend 128GB
Disk Size: 123.7 GB (123718336512 Bytes) (exactly 241637376 512-Byte-Units)
Device Block Size: 512 Bytes
For comparison, I get this for a Lexar 32GB USB stick:
Disk Size: 32.0 GB (32008830976 Bytes) (exactly 62517248 512-Byte-Units)
You can clearly see that more than 4GB are missing on the Transcend 128GB stick. It should be around 119.2GiB (128GB) but it is 115.2GiB (123.7GB)
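For anyone reproducing the comparison, the shortfall follows directly from the byte counts diskutil prints; a quick sketch using the numbers above:

```python
GiB = 2**30

transcend = 123_718_336_512           # "128GB" stick, per diskutil
lexar = 32_008_830_976                # "32GB" stick, per diskutil

print(f"{transcend / GiB:.1f} GiB")   # 115.2 - what the stick exposes
print(f"{128e9 / GiB:.1f} GiB")       # 119.2 - what "128GB" should be
print(f"{lexar / 1e9:.2f} GB")        # 32.01 - the Lexar over-delivers

missing = 128 * 10**9 - transcend
print(f"{missing / 1e9:.2f} billion bytes short")   # 4.28
```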
I'd be curious what the output is of the f3probe tool used in the article. I assume you have a 128GB module with the missing bits used as reserve for when blocks fail.
I'm fine with that if it's sold as a 124GB drive, not 128GB. I like to buy Intel drives because they'll sell you a 480GB drive and under the hood it's a 512GB module with tons of reserve.
Which is great. And if they would have followed their own logic (1GB == 1 billion bytes) and labeled it as a 15GB or 15.5GB stick everything would be fine.
I don't think remapping defective blocks really needs additional memory; regular memory sticks already have to remap flash internally because flash memory can only be erased in multi-MB blocks.
The author looks at one of the drives, but they bought 2. What size was the other one?
It may just be that there's some distribution of "usable flash blocks" in every flash chip, and they happened to get one which fell at the bottom end of the acceptable range. You'd have to sample quite a few before you could be certain this isn't just an outlier.
Not strange at all; first they make the batch, then they test them to see how many of the bits are actually writable, and sort them according to how well they turned out.
This is also how phone screens are made (a huge panel is produced, then lit up; some pixels are stuck, but phone-screen-sized pieces can be cut out of it where there's enough good pixels).
This is also how most CPUs and GPU models are made - test each sub-component, disable the parts that fail testing, then price the result according to how much of it works.
In my experience, it doesn't handle "unusual" boot disks well. For example, images not using GRUB / ISOLINUX, or the usual modern windows bootloader. I've had issues with Haiku OS and Redox OS for some examples. I still keep a VenToy disk for most uses, and another one for when VenToy doesn't work
Yes I would sort of expect it to struggle/fail with more unique disc images. I only ever use it with official ISOs for this reason. If I don't see the OS listed on the compatibility list I tend to not waste my time seeing if it works or not.
I'm not OP, but I've had all sorts of bad experiences getting "non-official" bootable USB media to work properly, in various edge-cases and weird hardware.
And by "official", I mean an .iso or .img directly dd'ed onto a drive - this is the setup that I've found to be most reliable.
I haven't tried Ventoy, and I suspect things are slightly nicer these days now that EFI has matured, but I'm personally not prepared to spend any longer than necessary debugging why my bootable USB isn't working properly.
This makes sense, I guess when I read they didn't "trust" VenToy my brain went to "I worry it might modify my install ISO with malware" or similar rather than the reliability/compatibility aspect.
I've not used VenToy with anything but official ISOs so I can't comment on more unusual uses, but I've never had an issue with the official images of Fedora, Debian, Windows, Ubuntu, etc.
For me VenToy has been great at making life a little easier and saves me time. I have a 128GB USB drive with two dozen ISOs on it. Much easier to work with the one drive vs half a dozen I have to keep rewriting ISOs onto.
For what it's worth, I use Ventoy and it works very well for me. The only problem is it fails to boot if you have secure boot enabled, but I didn't even try to look for a solution since I don't use that anyway. So maybe there's an easy way to have it signed so it works with secure boot too.
I read that as VenToy being relatively new to them, compared to years of using a stick per OS, and they've not found enough time to play with it to prove it would work in all circumstances they might need/want it to.
Fair point. I would highly recommend the author take an hour and try it out. The amount of time VenToy has saved me over the past few years has to be in the 100+ hour range by now. Fantastic little tool.
Depends on what he's using it for. Ventoy won't boot on lots of intel Macs and certain questionable EFI systems. If they're refurbishing machines and time is more important having separate USB sticks for each OS is worth not having to fiddle with it.
This is a good point and feels like the kind of thing Conway would have written about.
One good way to find rational approximants is to look for large values in continued fractions. Looking at log_2(10) we get the continued fraction representation
[3;3,9,2,2,4,6,2,1,1,3,1,18...]
The 9 isn't particularly large, but if you truncate just before it you get
log_2(10) ~= 10/3
and hence
2^10 ~= 10^3.
If you keep going and cut just before the 18 you find that
2^325147 ~= 10^97879
This is correct to six places, as you can easily verify
$ python3 -c 'print(str(2**325147)[:8])'
10000003
No doubt manufacturers will use this to further swindle us as soon as we have storage that requires 40 KiB pointers.
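For anyone who wants to check the convergents themselves, they can be folded up from the quoted terms with `fractions.Fraction` (terms as given above, through the one just before the 18):

```python
from fractions import Fraction

# Continued-fraction terms of log2(10) quoted above.
terms = [3, 3, 9, 2, 2, 4, 6, 2, 1, 1, 3, 1]

def convergent(terms):
    """Collapse [a0; a1, ..., an] into a single exact fraction."""
    x = Fraction(terms[-1])
    for a in reversed(terms[:-1]):
        x = a + 1 / x
    return x

print(convergent(terms[:2]))   # 10/3      ->  2^10 ≈ 10^3
print(convergent(terms))       # 325147/97879  ->  2^325147 ≈ 10^97879
```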
This is simple to resolve. If I buy a 16 GB flash drive with fewer than 16 billion bytes of usable capacity then I will demand a refund and return the device at the retailer's expense. I will also complain to trading standards.
I don't know if it is possible where you live, but can you demand a refund without returning the device?
That's how people in China countered fake 2TB thumb drives on taobao. They knowingly buy the device and ask for refund (because it's fake) without returning. They have an online forum full of "which merchant is selling fake drives today".
The trick is there are "Mass Production Tools" (I think it actually means firmware configurator) for the flash controller used in these scam drives, which can turn them back to their original capacity (usually 4GB or less). So by doing so they effectively win some free tiny-ish thumb drives.
In England & Wales, goods must correspond to their description [Sale of Goods Act]. If not then the seller must offer a full refund including the cost of returning the mis-sold item [Consumer Rights Act and other acts relating to the implementation of the EU Distance Selling directives].
If all or part of the payment for an item is made with a credit card then the credit card issuer is jointly and severally responsible ["Section 75"]. So if a retailer doesn't play along, they will find themselves answerable to their payment processor; the customer doesn't have to get involved.
It's not unheard of for sellers to issue a refund and ask the customer to dispose of a faulty or mis-sold item instead of bothering to return it. But that's only if the customer is OK with it; if I wanted to return, say, a washing machine and the seller refused to co-operate, then I'd be within my rights to dispose of it at my expense and refund the seller the balance minus my reasonable costs.
Hmm. I also have a 16GB SanDisk Ultra made in China like the old one. I don't remember exactly when I bought it but it was several years ago. Mine shows up with a usable size of 14.77 GB (30965760 blocks) from the same f3probe command the article runs. Testing my terrible no name 16GB stick shows 14.65GB usable.
There are tools out there that let you reconfigure the flash controller to have less overprovisioning. You're losing reliability this way of course, but if you want to get exactly 16'000'000'000 bytes out of the stick there are ways to accomplish this.
Do you know tools to change the amount of SLC in a modern NVMe? A part of the MLC is often configured as SLC, and used for caching. I'd like to set up drives to be 100% SLC, even if it divides the usable storage by say 4 for QLC
The same way that unlimited data plans are capped at a few GBs/month, unlimited SMSes and minutes have limits as well, unlimited bank transactions have a limit of a few ATM transactions per month, and so on…
If it's a mobile data plan, I think you just haven't reached that limit yet. Try a constant download throughout the month and see what happens. I would guess the speed will go down.
At least for my "unlimited" data plan there is contract clause mentioning "reasonable usage".
The only part about "reasonable usage" is that the ISP reserves the right to "temporarily control" your traffic once it exceeds 200 GB/month during a period of congestion "if the congestion is exceptional or temporary".
If they just slapped an overall speed cap on you after 200 GB/month, they'd be breaking their own terms of service.
And that's for a mobile plan. The fixed connection has no "reasonable usage" provisions and just has your generic "reserves the right to shape traffic" stuff.
Unlimited also generally run a lower QCI priority overall so that any other users on the tower will push you out if things are busy. Most people don't even notice this bit. The r/NoContract subreddit has some posts tracking plans' QCI priority levels, and you need a rooted phone to get that info as it isn't published anywhere. I'm happier running a prepaid limited line with QCI 8 and being able to get data when the unlimited plans can't.
There's a reason why AT&T Firstnet can do the things it does.
There is also the benefit of AT&T taking those federal FirstNet dollars; they sunk a decent amount of money into infrastructure, so these priorities/capacity matter less now that they have much higher capacity than they ever had in the past.
>Unlimited also generally run a lower QCI priority overall so that any other users on the tower will push you out if things are busy.
It's very hard to obtain a phone plan here that is not some form of unlimited. The cheapest phone plans either have absolutely no Internet, or are capped to like 0.25 or 0.5 Mb/s speeds, meaning that your data cap is just a function of your slow speed.
I don't know where your here is but true, that's another way to do it.
In the US there's competition showing how fast the service is in advertising. Which of course has fine print; "during times of congestion you may experience slower speeds" which is where these priority levels kick in.
That's if it's on the box. If this can't reasonably be expected and is hidden in an FAQ somewhere, not obvious when purchasing, then a box that says 12 eggs should contain on average 12 eggs. Dunno to what extent variability is allowed (when not purchasing huge amounts); for discrete items like eggs probably not, but for gram weights it's usually +/- some amount.
Was thinking the same, but with it not being called out in the article at all, I was wondering whether to trust these numbers. The tool says "probe", not "benchmark". Not sure what to make of it
Tangent, but I think the practice of shrinkflation should be made illegal altogether (although not sure how you would enforce it). If you need to increase prices, fine, but it's a complete waste of packaging materials to just give less of something for the same price in the same package, not to mention the environmental harm.
Reducing package size can be in the interest of the user sometimes, e.g. if most people need less or won't use the whole package in time for perishables.
However it should be required to advertise the size reduction on the new packaging, including before and after price per amount.
I find this is what supermarkets do. They sell you basil and other fresh herbs in MASSIVE packs that you will likely only use 1/4 of for any meal. The rest goes bad and gets tossed out.
It's basically a scam to make you feel like you are getting an ok deal because you are getting so much. But really it is just a way to charge you more without seeming like a price raise.
I wonder why there are so few usb flash drives that have a physical "read only" lock.
That was so useful to defend against viruses.
A cheap flash drive will cost say 15 dollars and one with a physical button/lock to make it read only will be 150?
The cost of materials should be maybe 5 more dollars...
On a side note, if you go into one of those "print shops" that can print things for you from a USB stick, how can you sanitize your stick from the multiple viruses/worms that you will pick up?
Only thing that comes to my mind is to use a computer where the hard drives are physically disconnected and some live CD linux. But even then it is unclear if you can really format the usb flash memory to make it safe.
If you're on top of an OS, I can't say (depends on the OS)... but reading the switch is the responsibility of the reader, not the card. The card has no idea if it's write-protected (by the switch mechanism) or not. [1]
Just using the same SPI operations you always use for write should work fine. There's not even a lock-status register or anything to check/clear first for the switch mechanism.
Perhaps it's not within the capabilities provided by the OS or the driver, because they're "well behaved", but the position of the switch is in essence a boolean sent to the host device and not a barrier to electrical signals that would write data.
If you zero the USB drive out using `dd` or an equivalent tool that literally writes zeroes to each LBA, it's safe. To avoid excessive wear and tear you can really just get away with zeroing the first couple of MBs: without a valid partition table or recognized filesystem, no OS will do anything when it's plugged in other than ask you to format it.
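A sketch of that wipe, demonstrated on a scratch image file rather than a real device (on an actual stick you'd point `of=` at the device node, e.g. `/dev/sdX`; double-check with `lsblk` first, since this overwrites data):

```shell
# Zeroing the first few MB destroys the partition table and filesystem
# superblocks, so no OS will mount or auto-run anything from the drive.
img=$(mktemp)
head -c $((8 * 1024 * 1024)) /dev/urandom > "$img"   # stand-in "drive"

dd if=/dev/zero of="$img" bs=1M count=8 conv=notrunc,fsync 2>/dev/null

# Verify every byte is now zero:
cmp -s "$img" <(head -c $((8 * 1024 * 1024)) /dev/zero) && echo wiped
```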
It would be nice to have something like caniuse.com or dpi.lv, but for “usable storage space on device”. I doubt it would shame most manufacturers into actually providing what they advertise, but it might just create enough visibility for one or two manufacturers to commit the extra resources to delivering on the promise.
This has been going on my entire career, starting in at least 1996 or so. Maybe SanDisk was an exception? People have been complaining about this for a very long time so I don’t think it’s quite right to call it Shrinkflation.
But, maybe I’ve been conditioned into complacency. I do agree, manufacturers should have to post usable space. I don’t see this changing though.
Edit: Removed note about 1,000,000,000 bytes. The article isn’t specifically about that, but I’m not convinced that’s ever been the only factor.
What the article describes is not that. They're saying it's extra shrinkage ON TOP of the usual 1,000,000,000 bytes = 1 GB thing drive makers have already done for a long time.
This is just GiB vs GB. The usable / announced size is more or less 16 GB (roughly 14.90 GiB), which you can also verify by multiplying the number of blocks by the block size (512 B).
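To make that arithmetic concrete, a quick sanity check in Python (31,250,000 is the block count a 16 GB device would report; substitute the figure from your own drive's `fdisk`/`diskutil` output):

```python
BLOCK_SIZE = 512
blocks = 31_250_000          # hypothetical block count for a 16 GB stick

total_bytes = blocks * BLOCK_SIZE
gb  = total_bytes / 1000**3  # decimal gigabytes (what the box says)
gib = total_bytes / 1024**3  # binary gibibytes (what many OSes show)

print(f"{gb:.2f} GB == {gib:.2f} GiB")   # 16.00 GB == 14.90 GiB
```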
Most likely they produce a flash chip with an actual 16 GiB.
But to reliably use that chip in a USB flash drive, the flash controller needs both:
1) some spare blocks to use to replace blocks that fail during usage.
2) some amount of space to store its own data (spare block map, in use block map, failed block map, etc.) and the controller just uses part of the existing flash chip for those purposes.
The difference between 'module' and 'actual' is likely "controller data overhead".
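As a rough illustration of that overhead (the reserve percentages here are invented for the example, not SanDisk's actual figures), the raw-chip vs. usable arithmetic might look like:

```python
RAW = 16 * 1024**3            # a true 16 GiB of physical NAND
spare    = int(RAW * 0.05)    # hypothetical 5% spare-block pool
metadata = int(RAW * 0.02)    # hypothetical 2% for controller maps

usable = RAW - spare - metadata
print(f"raw:    {RAW / 1000**3:.2f} GB")     # 17.18 GB
print(f"usable: {usable / 1000**3:.2f} GB")  # 15.98 GB
```

Even a real 16 GiB chip ends up advertising roughly 16 decimal GB once the controller carves out its reserves.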
Module does not seem to be a physical characteristic of the storage device, but rather its capacity rounded up to the nearest power of 2, for legitimate devices.
Sure, not saying there isn't any discrepancy-- just that the discrepancy is in the measuring methods. I was showing that there _is_ a way to produce a 16GB number (albeit in terms of GB, not GiB) given the number of blocks physically available, that's all.
Are there any other reasonable explanations for the discrepancy? Partitioned differently? I might be a bit too critical, but it seems far-fetched to me that they've withheld 579 MB only for no one to notice before. I'm predicting this will become major news on tech blogs. People love "company X misleads customers". It just seems too good to be true.
Unlike RAM, where you always get exact power-of-2 number of bits per chip, modern flash storage normally ships with defects, plus error-correction codes to deal with those.
The number of defects varies. Chips coming from the same factory, even the same batch, are likely to have different numbers of defects, and will be binned accordingly.
Devices with a larger (but manageable) number of defects will simply have a larger ECC region reserved - leaving less space to show to the host computer. OP's new USB stick is like that, I reckon.
There's more going on here: the read and write times have also changed. This looks like a completely new hardware spin. And I'd bet it's got TLC flash now instead of MLC, or something of that sort.
The 32GB and 64GB SanDisk variants are actually cheaper on Amazon than the 16GB one. Yes, the advertising is misleading, but OP could've saved money and quadrupled his space if he wanted to.
I did not buy mine from Amazon, I was ordering other things from a local IT retailer and had room in the budget for two of these USB sticks. The price made sense in this context for me and I had little use for that extra space.
Ordering from Amazon is like gambling. Will you receive a genuine product as pictured or described, or a broken out-of-the box piece of crap? Only Jeff knows!
I'd rather buy from a local electronics retailer. Sure, might be a few bucks more expensive, but at least they have a brick-and-mortar location where I can come and complain if the products suck.
The problem I have is that other retailers tend to be more awkward about returns, whereas Amazon have never questioned them, even when the manufacturer has refused a repair[1]. It feels more like a case of pick your poison, rather than having an overall better alternative.
[1] A fairly expensive pair of headphones failed 18 months into a two year warranty, but the manufacturer put it down to (IIRC) wear and tear. I contacted Amazon, and they agreed to swap them for a new pair.
I've used only Toshiba drives the past 10 or so years. Not a single issue so far, so I'll happily swear by that brand. Currently I use a stack of their L200 2TB 2.5" drives because of how little power they use (<2 watts) and because of not having intermediary flash storage like e.g. Seagate's 2.5" drives do - because I'm convinced that the flash portion is just a liability which will fail sooner than the actual mechanical drive. The L200s are SMR drives so low write speeds (about 25 MB/sec seq. at worst) will occur intermittently if that is a concern.
Apparently multiple antiviruses flag it [0]. A tool that can inject whatever during the OS installation process should be closely scrutinized. Some also have trust issues with software that comes from China.
Well, antivirus tools usually also flag pentesting tools like nmap (just an example; I didn't actually test it, but I wouldn't be surprised if it is flagged), because those are "hacking tools" even though they don't do anything by themselves. So it would be more interesting to know why it gets flagged; merely saying that it is flagged doesn't mean much, since, as in the previous example, they could decide to flag something just because it is unusual.
One thing that isn't clear from this article is where he bought it and what the listing said. All he says is that he bought one of the same size and brand. If the listing says 15GB, then I don't see a problem; it's up to you to see that and perhaps choose a different option.
I would expect the same for smartphones and computers.
It was really frustrating back in the day when your brand-new 8GB Android smartphone only had 1GB of free space out of the box.
(The 8GB edition of the Sony Xperia M4 Aqua only had 1.25GB of space for the user!)
Nowadays devices have larger storage, but an Android with mandatory apps and 'necessary data', or a Windows 11 install, can take around 20GB away.
That can't be neglected on low-end systems. There are still devices with 32, 64 or 80GB out there! (your Redmis or Galaxy A XXs)
Since most smartphone manufacturers won't let you use another OS or a custom ROM, all of that space should be counted as theirs and not advertised to the customer!
It's the same for computers. Most users buy the computer with Windows preinstalled and will never replace it.
Yes, the margin for potatoes is there because potatoes are discrete. A single potato weighs somewhere around 100-300 grams, so hitting exactly 1kg is not trivial. But bytes are bytes. If it says 16GB, I want at least 16GB.
I think allowing a small margin is fair, for the same reason as potatoes. When you manufacture a module, some portion of it will be unusable and disabled during testing.
In CPU-land this is done by manufacturers already - they manufacture many of the same generation of processor as the most performant version, and then test the individual items produced, disabling poorly constructed parts and downgrading the specification as they do it until they're left with a range of products at various performance points.
Totally agree. Something similar should also apply to laptops and smartphones. If your phone comes up with 30GB of bloatware on a 64GB device you should definitely not be allowed to advertise 64GB of space.
I understand that a 16 GiB (16 × 2^30 bytes) chip is present (delivered to the user), but some of it is broken and not available (industry standard). Therefore capacity is less than 16 GiB. I also believe there is a threshold the manufacturer uses to throw away chips that are too broken to be sold. OK so far.
So the properly declared capacity should be the threshold value (a minimum), or a threshold-to-maximum declared range. I understand that this proper capacity is not a very nice round number.
The problem has a history: around 2000, manufacturers started using GB (10^9) instead of GiB (2^30), which gives them some margin (7%) for chip defects. Later that margin was not enough, so they started using even more misleading specs like this. It seems the margin is at least 10% nowadays instead of <7% in the past.
Personally, I would be satisfied with a stated margin (like 7%, 10%, 12%, etc.) relative to maximum capacity, e.g. "USB flash drive, 16GB max capacity, at least 90% usable."
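The gap between the binary and decimal prefixes does grow with each step, which lines up with the ~7% (GB) and ~10% (TB) margins mentioned above; a quick check in Python:

```python
# How much bigger each binary prefix is than its decimal counterpart:
for name, power in [("K", 1), ("M", 2), ("G", 3), ("T", 4)]:
    margin = (1024**power / 1000**power - 1) * 100
    print(f"{name}iB vs {name}B: {margin:.1f}% larger")
# KiB vs KB: 2.4% larger
# MiB vs MB: 4.9% larger
# GiB vs GB: 7.4% larger
# TiB vs TB: 10.0% larger
```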
> The problem has a history: around 2000, manufacturers started using GB (10^9) instead of GiB (2^30), which gives them some margin (7%) for chip defects
It actually started somewhere around 1995 -- with mechanical disk makers all choosing to advertise GB drives using the power of ten instead of the power of two, while maintaining the same labeling on the outer box.
And the reason was not to account for "chip defects" -- it allowed them to ship slightly less capacity, but print on the box a number that appeared larger to those who did not read the 4 point type fine print on the back.
I think it would be fair to make the regulation either way, so long as they put 1kg flour in the bag.
I'd imagine trading standards would only check to about +/-0.5% (5g), whilst the flour stuck to the bag's surface is probably around 1g for a kilo bag?
Well come on, imagine flour was a bit more sticky and if you put 1kg in the bag, only 900g is usable because the rest sticks to the walls. Would that be 1kg bag of flour or 900g bag of flour?
> Using the rules of food packaging… 958 / 1000 * 16GB = 15.76GB.
The figure was 985g, not 958g, but that's what you seem to have used in your calculation anyway. (Amusingly, when I pulled out bc to test which way you computed it, I immediately made exactly the same typo.)
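For the record, both readings of the arithmetic in Python confirm that 985 g must have been the figure actually used:

```python
# The quoted "958" would give a different result to two decimal places:
assert round(958 / 1000 * 16, 2) == 15.33   # the typo'd figure
assert round(985 / 1000 * 16, 2) == 15.76   # the figure actually used
```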
I don't think it qualifies as "socialist" to require that products sold match the description.
Maybe, at an extreme Mr Fantastic-style stretch, it comes closer to what many people classify as "socialist" to also require clarity in the description, i.e. the description needs to be written in a way that everyone understands it, not just the very brightest[*] readers.
The same company selling such a product would not appreciate it if I paid them "up to the requested amount"; they want it all. Then they need to deliver what they said they were selling.
I won't even get into how not requiring that completely skews the idea of competition, which capitalism is supposed to thrive on ... Okay, maybe I just did, a little. :)
[*] Or "most fit", "best educated", or whatever. Talking about human intelligence is always hairy, hopefully I got the meaning across.
That's not socialist, that is just consumer protection. Capitalism doesn't just come in one form ('Ayn Rand'-style absolutist capitalism).
Individual consumers don't have the clout to make a dent in their sales. So, you set some rules as a society (such as don't lie in the product description) to ensure that individuals have a recourse when they are scammed.
Sadly OP is behind the times here. The storage industry has pretty much co-opted GB to mean 1000^3 bytes, which is why you see folks in the know referring to GiB for the power-of-two numbers. This is super frustrating, but it's been like that for decades (literally: this was finally "officially" resolved in the late '90s by the IEC).
It's frustrating that SanDisk used to give you extra bytes and they stopped -- and everyone, including me, HATES it when products get worse with no external indication that they changed. But let's be honest: it's kind of SanDisk's primary MO to buy the cheapest NAND they can find and sell it on the consumer market.
> The storage industry has pretty much co-opted GB to mean 1000^3 bytes
G has always been 1000^3, M 1000^2, and K 1000, in the storage and communication industries. OS designers, and programmers more generally, started using 1024 instead for convenience, but that came later. The storage industry is doing it right, using the correct meaning of SI units, and programmers co-opted GB to mean 1024^3, it isn't the other way around.
There were times when the storage industry and programmers worked together to really stuff things up and cause further confusion by mixing & matching: the 1.44MB of HD 3.5" floppy disks was actually 1474560 bytes so 1.44*1024*1000.
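That mixed convention checks out numerically; a quick verification in Python (using the standard 80-track, 2-side, 18-sectors-per-track, 512-byte-sector geometry of an HD 3.5" floppy):

```python
# "1.44 MB" HD floppy capacity from its physical geometry:
capacity = 80 * 2 * 18 * 512
assert capacity == 1_474_560
assert capacity == round(1.44 * 1000 * 1024)  # decimal-k times binary-k
print(capacity / (1024 * 1024))               # 1.40625 "true" MiB
```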
OK, it had always been that until the '90s, and then came a mistake that wasn't correct by either definition, and it only applied to one type of storage, not the other types of storage or any form of non-storage data communication. The cock-up that is the naming of 1.44MB disks (and "720KB" and "360KB" ones before them) doesn't alter the fact that other storage media (hard drives, CDs, …) were not counted in what would later be disambiguated as MiB.
Even then it was only the marketing portion of the storage industry playing that game, when not talking to the general public floppy disks were referred to by their maximum rated information carrying capacity, 2Mbyte, not their OS-formatted capacity at all.
You are entirely correct, and the person you are responding to is wrong. But the author does themselves no favour by including this incorrect tidbit in the article:
> [...] Operating Systems define 1 GB as 1,073,741,824 BYTES.
Mac OS, iOS, Ubuntu, and Debian operating systems at the very least all use base 10 for representing disk and storage space.
Windows uses GB (gigabytes) to mean GiB (gibibytes), MB (megabytes) to mean MiB (mebibytes), and so on, because basically no one in the real world adopted the IEC's renaming scheme.
Linux (including Android) and the BSDs (including macOS, and iOS?) use GB (gigabytes) to mean decimal GB (gigabytes), a convention basically no one in the real world adopted because multiples of 1000 are meaningless.
So the vast majority of people and Windows to this day understand a kilobyte as 1024 bytes, a megabyte as 1024 kilobytes, and so on. Meanwhile, Linux and the BSDs and drive manufacturers/vendors understand a kilobyte as 1000 bytes, a megabyte as 1000 kilobytes, and so on because it's pedantically correct (and the conversion factors are commercially convenient).
This dissonance in understanding leads to endless "why is my drive smaller than what's printed on the box?" complaints.
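Those complaints are easy to reproduce with nothing but the two conversion factors; a small Python sketch (`reported_size` is a made-up helper name):

```python
def reported_size(advertised_gb: int) -> float:
    """What an OS counting in 1024s shows for a drive sold in decimal GB."""
    return advertised_gb * 1000**3 / 1024**3

# A "1 TB" (1000 GB) drive shows up as roughly 931 "GB" in Windows:
print(f"{reported_size(1000):.0f}")   # 931
print(f"{reported_size(16):.2f}")     # 14.90
```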
> But the author does themselves no favour by including this incorrect tidbit in the article:
As I understood it, that tidbit comes from SanDisk and probably targets Windows users. I also believe Windows does (or did) use base 2 for file and disk sizes.
[1] https://arstechnica.com/gadgets/2020/05/western-digital-gets...
[2] https://arstechnica.com/gadgets/2020/09/western-digital-is-t...