Mistakes to Avoid When Buying a MicroSD Card (2018) (makeuseof.com)
118 points by peter_d_sherman 63 days ago | 72 comments

> Counterfeit cards correctly report the capacity shown on the packaging, but actually contain far less. You won’t notice this until the card fills up unexpectedly quickly.

Oh, it's much worse than that. Counterfeit cards are often manipulated to lie to the OS about their capacity as well. They don't "fill up" at all, they just throw away your data, or overwrite it silently.

This is my experience. Bought a SanDisk card to pop in my phone before I went on holiday. Changed my camera settings to save to the SD card. Great, I can stop worrying about filling up my phone's internal storage with photos now.

Then the card corrupted all of my photos.

It must've been a counterfeit. SanDisk is a reputable company headquartered in the US and owned by a US company. I don't know consumer law (and IANAL in general), but I'd imagine if they were selling fraudulent SD cards they'd probably get hit with fines, a class action lawsuit, and possibly worse.

Happened to me too. Before we blame SanDisk, note that the problem also exists in older Android OS versions.

That is downright evil. I suppose in order to find an evil card one would have to fill it with a file and then verify that file's integrity, a very time-consuming process for larger cards.

Or, instead of files, just write to the block device directly: one big random file, byte-for-byte the supposed size of the device, then read it back and compare. Something like:

  dd bs="$(blockdev --getsize64 "$device")" count=1 if=/dev/urandom of=random-file
  dd if=random-file of="$device"
  cmp random-file "$device"
Using cmp instead of something like md5sum is better since it would stop on the first differing byte, instead of reading the whole thing for both.

EDIT: I'm not sure if my use of dd's bs and count is a good idea, though. It might be better to do:

  head -c "$(blockdev --getsize64 "$device")" /dev/urandom > random-file
to get more reasonable buffer sizes. Documentation on head says -c works on bytes, not characters, so that's good.

For that second line, I often prefer to do:

  pv random-file > "$device"
to get a neat progress bar.

EDIT 2: As discussed in other threads, one should make sure to invalidate the kernel's cache of the device between steps 2 and 3. My safest bet on how to do that is to reboot the computer.
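A variant of the above that avoids the device-sized temp file entirely: derive a reproducible pseudorandom stream (an openssl AES-CTR keystream over /dev/zero, keyed by a fixed passphrase), write it straight to the card, then regenerate the identical stream for the comparison. This is only a sketch: `check_card`, the passphrase `sdcheck`, and the device argument (e.g. /dev/sdX) are all placeholders, it destroys all data on the card, and the cache caveat above still applies before the compare step.

```shell
# Hypothetical sketch: fill a card with a reproducible pseudorandom stream,
# then regenerate the same stream and compare, needing no huge temp file.
# DESTROYS all data on the given device. Re-insert the card (or otherwise
# invalidate the cache, as discussed above) before the compare step.
check_card() {
    local device="$1" size
    size="$(blockdev --getsize64 "$device")" || return 1

    # Deterministic stream: same passphrase + no salt => same bytes each run.
    stream() {
        openssl enc -aes-256-ctr -pass pass:sdcheck -nosalt -pbkdf2 \
            </dev/zero 2>/dev/null | head -c "$size"
    }

    stream > "$device" || return 1
    echo "wrote $size bytes; re-insert the card, then press Enter"
    read -r
    stream | cmp - "$device" && echo "capacity verified"
}
```

Like cmp in the snippet above, this stops at the first differing byte, so an evil card fails fast.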

Yeah, that's the only guaranteed way to find them I've found. There are a couple of programs out there that do exactly that: just start writing files to the SD card and then read them back. I think they should be a bit more intelligent, though. Right now they seem to write out the whole stated/test size and then try to read it back; you could detect bad devices much sooner by doing smaller read tests at set intervals.

The German computer magazine heise made a tool[0] to verify the capacity by filling the card up with random files:

[0] https://www.heise.de/download/product/h2testw-50539

You don't even need such a tool if you're on macOS/Linux or have msys/cygwin/WSL installed on Windows. Just do

    dd if=/dev/urandom bs=1048576 count=5000 | tee test.bin | md5sum
(substitute bs=1048576 count=5000 with the actual size of the sd card) then do

    md5sum test.bin 
and check whether the two hashes match.

Watch out there: the kernel will cache what it wrote out, and all you'll end up checking is the cache and not the card. dd has a "nocache" option but it's only a request. "direct" might work; I don't know.

Maybe someone will come along and provide references to a definitive answer on what the behaviour will be.

My point is: it's not trivial to ensure that you're actually testing the right thing.

The trivial solution is to do the two MD5 sums on two different machines.

Wouldn't removing the card, putting it back in on the same machine and then doing your second hash be sufficient?

You can always unmount and mount the SD card between writing and verifying. The unmount should force the kernel to truly flush everything and do the physical write.

> The unmount should force the kernel to truly flush everything and do the physical write.

True, but this doesn't necessarily guarantee that the subsequent read will come from the card and not the cache.

It might as currently implemented; I don't know. But without specific evidence that it does and will continue to do so indefinitely, I wouldn't assume it for verification purposes.

I make the same point again: it's not trivial to ensure that you're actually testing the right thing.

I would hope that unmounting invalidates the cache, because the kernel can't know that the device wasn't modified between mounts. I like the sibling's idea of using two separate computers though.

The kernel is still in control of the underlying block device, and handles all reads and writes through that block device even when it is unmounted. So it can know that the device wasn't modified between mounts (or, more likely, it'll keep the block device cache maintained).

Unplugging and replugging the card really should invalidate the cache though.

This discussion got me thinking that it might be possible that storage devices might expose a datum that lets the kernel know if there were any writes since it last saw the storage device. Something like total number of writes to the device, last modified time, or something. I'm just wondering out loud. It really seems safest to reboot the computer to destroy whatever cache one might have.
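Short of a reboot, Linux does expose knobs for this. A sketch (my assumption of the right incantation, not a guarantee; needs root, and takes the card's block device, e.g. /dev/sdX, as its argument):

```shell
# Hypothetical helper: flush dirty pages and drop the page cache so a
# subsequent read comes from the device, not RAM. Requires root.
flush_card_cache() {
    sync                                 # flush all dirty pages to storage
    blockdev --flushbufs "$1"            # flush and invalidate this device's buffers
    echo 3 > /proc/sys/vm/drop_caches    # drop pagecache, dentries and inodes
}
```

Even so, physically removing and re-inserting the card remains the more convincing option.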

In any case, if one reboots the computer, that has to remove that possibility.

Maybe eject the card between tests?

Add the `oflag=sync` flag to dd.
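Going a step further (a sketch on my part, not something I've validated against a counterfeit card): GNU dd also has `oflag=direct` and `iflag=direct`, which request O_DIRECT I/O that bypasses the page cache entirely, so the read-back should really come from the card. `random-file` and the device argument (e.g. /dev/sdX) are the placeholders from the thread above.

```shell
# Hypothetical sketch: write with O_DIRECT + sync, then read back with
# O_DIRECT and compare against the original random file.
write_then_compare() {
    dd if=random-file of="$1" bs=1M oflag=direct,sync
    dd if="$1" bs=1M iflag=direct,count_bytes count="$(stat -c %s random-file)" \
        | cmp - random-file
}
```

`count_bytes` makes dd's count argument a byte count, so the read-back stops exactly at the file's size instead of running to the end of the device.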

Not to mention that counterfeit cards are normally much slower, further increasing the tediousness of this process.

> hardware that supports microSDXC slots won’t automatically support every size of card in this format. The Samsung Galaxy S9, for example, officially supports cards up to 400GB. There’s no guarantee that your 512GB card will work.

This part drives me nuts. I wish devices would just follow the spec and accept up to the maximum capacity. But it seems like device manufacturers only test against the biggest card available when they make the device, and as time marches on, you have to search Amazon reviews to see which cards work and which devices have hardcoded smaller capacity limits.

> But it seems like device manufacturers end up testing the biggest card when they make the device

I mean, they can't test hardware that doesn't exist yet, so what is the alternative?

If there's a hardcoded maximum, that's one thing, but we don't know that's what is going on, do we?

The lack of available hardware to test hasn't stopped PC manufacturers from supporting media larger than available for a given interface.

There are occasionally hiccups, like the whole debacle where the MBR partition table (from 1893) was limited to a few terabytes and the industry switched over to GPT, but even today if you put in a >2TB disk into an ancient machine that does not support GPT it can usually at least use the portion of the disk that it can address.

Just read the spec and write your software to support it. It's not like we're asking for support of terabyte SD cards on machines released when the maximum SD card size was 128MB.

> Just read the spec and write your software to support it

I wish PC hardware manufacturers would do that. You don’t even need too old hardware or software to find compatibility bugs, here’s an example: https://superuser.com/q/807871/31483

> MBR partition table (from 1893)

I still think it's quite an impressive feat to support that much data before computers even existed though lol

Maybe not, Babbage might disagree: https://en.wikipedia.org/wiki/Analytical_Engine

It is a showcase of how little many device manufacturers care after they have your money.

But... doesn't every microSDXC slot support up to the maximum available addressing limit? If it takes a 64GB card, it should also take a 512GB card, though the manufacturer probably doesn't want to guarantee anything for cards that don't exist yet.

Also A2 performance requires operating system level support, and all of these tips go out the window if you're using the card for a Raspberry Pi.



> MicroSDXC cards use the exFAT system by default. Windows has supported it for over a decade, but macOS only since version 10.6.5 (Snow Leopard).

Snow Leopard was released a decade ago[0], so their choice of wording seems to indicate a bias there.

[0] Per Wikipedia, 10.6.5 was actually released in Nov 2010, so almost 9 years. I still feel my point remains.

Worth adding that exFAT has aspects that Microsoft holds patents on. Yes, they do license them for a flat fee, though I'm not aware that Apple holds such a license. Also, it wasn't until circa 2009 that the SD Association adopted exFAT as a standard, and it wasn't until 2013 that a GPL Linux driver was available.

So yes, it's very unfair to bash Apple over this; you raise valid concerns.


> Worth adding that exFAT has aspects that Microsoft hold patents for. Yes they do license for a flat fee...

When exFAT was released I added this to the Wikipedia entry only to have it reverted within hours. Hmm.

The article says it was "updated in" 2018, but the Wayback Machine has a copy of it from 2015 with that line included: https://web.archive.org/web/20151024052332/https://www.makeu...

> First, hardware that supports microSDXC slots won’t automatically support every size of card in this format. The Samsung Galaxy S9, for example, officially supports cards up to 400GB. There’s no guarantee that your 512GB card will work.

Does anyone have a real-life anecdote of a device that supports a given version of the standard being unable to use a card of the same version that's larger than the device officially supports? I've yet to run into it, and I always considered it a misunderstanding: when newer versions of the standard came out, the higher capacities got conflated with what the manufacturer was able to validate on release date. But maybe I've just been lucky.

Olympia NC560 cash counter: it can be updated for new bills from an SD card, and the manual states "Micro-SD memory card with a capacity of 1GB up to 8GB".

We tried a 32GB card first, and the update seemed to run fine. To finalize, the update file is deleted from the SD card. This all happened, but then the device was stuck in firmware-update mode (an undocumented state). We tried again with a 4GB SD card and the update worked fine.

We assume the update file was spread beyond the 8GB boundary.

That's a dangerous game to play when you're counting money! I seriously admire your hacker courage.

I'm scared to try an unauthorized card in my Nintendo Switch because I could lose my Zelda save file.

Game saves are stored in the console's internal memory. You couldn't put your save on an SD card if you wanted to.

The article mistakenly says "microSDXC slots" when it means "microSD" slot. Because SD, SDHC and SDXC all have the same physical form factor and slot you put them into, but vastly different electrical interfaces and protocols.

And yes, older versions like SD literally did not have the bits to even address cards of 512 GiB.

SD, SDHC, and SDXC are all electrically the same (though newer cards can also optionally support higher-speed interface variants with different signalling voltages), it's just that the original SD protocol didn't have enough bits in the protocol to support higher capacities. SDHC and SDXC are basically the same except that one uses FAT32 and the other uses exFAT as its standard format. You can often use cards above the official 32GB limit in SDHC devices if you can format them as FAT32. A lot of older SD card slots can also be made to support SDHC with a suitable firmware or driver upgrade because the protocols are really similar, but with some fields just being interpreted slightly differently.

> The article mistakenly says "microSDXC slots" when it means "microSD" slot.

No, I think it's correct. From Wikipedia [0]:

SDSC: 1 MiB to 2 GiB

SDHC: 2 GiB to 32 GiB

SDXC: 32 GiB to 2 TiB

SDUC: 2 TiB to 128 TiB

[0] https://en.m.wikipedia.org/wiki/SD_card

Yes, all four of those as "micro" go into the same "slot".

My point is that a "400 GB" card and a 512 GB card both are SDXC cards...

But the reference is to a slot and not a card.

So it's a microSDXC slot that doesn't support all microSDXC cards. Q.E.D.

Sure. The EcoBoost Ford Mustang. If you put the engine from a Mustang GT in the EcoBoost Mustang, the crankshaft will explode. You can totally mount the engine from the GT in the EcoBoost, it's the same frame, but the rest of the car is not engineered to support that much horsepower.

Not so much the crankshaft (which is inside the engine), but more the transmission and remaining driveline (driveshaft, differential, etc) become the new "weak points".

This assumes that the EcoBoost version isn't just an engine swap. It could be that Ford uses everything the same from the transmission back, and just pops different engines and badges on. I don't know if this is true or not, though.

For instance, on their old Ranger pickup trucks (basically rebadged Mazdas), the rear axle on virtually all of them was way over-engineered vs the rest of the pickup:


While there were some differences between models, most of them used one of two sizes, and they just dropped a different engine and transmission in. In this case, depending on what you had, you might end up breaking other parts before the axle gives (the Ranger axles are a popular swap upgrade from the Dana 30 axle on Jeeps, due to their robustness).

I think the parent meant specifically with SD cards. I've personally never had issues using higher capacity cards in devices that didn't officially support them...

Back a few years I bought a large CF-card for a Canon 5D camera, which was just not recognized.

Ohhh my mistake, I thought they were looking for examples of this in other fields.

Presumably the crankshaft is in the engine, so it should be fine. The transmission, driveshaft, differential, and axles may not be up to the task, however. But it will be torque that breaks them, not horsepower.

> We’ve all owned flash memory cards that have stopped working for no apparent reason.

MicroSD cards and SSDs use exactly the same technology (NAND flash memory), so it's a mystery to me why microSD cards are unreliable while SSDs are extremely reliable. Genuine SanDisk cards have failed on me for no apparent reason, but I've never had a problem with SSDs despite much greater usage.

It's the controller and the reserved/backup flash. SSDs have a powerful ARM CPU running firmware that optimizes memory cell wear and transparently replaces failing cells with the reserve on the fly.

SD cards don't have that, mostly for cost and size constraints.

Oh, but they do: https://www.bunniestudios.com/blog/?p=3554

The difference is that SD cards use the lowest-quality leftover flash memory.

I must say that I have never had an SD card fail on me.

I use two in my primary camera that are set up for mirroring, just in case, one in my pocket camera, and one in a Raspberry Pi. So maybe that's why I haven't experienced any failures. I don't know if the model of card has any relevance, but I only buy SanDisk Extreme Pro cards.

The SD Card Association seems really short-sighted when it comes to capacity. Compare to the world of hard drives, where ATA-6 introduced LBA48, which allows for capacities up to 128PiB, all the way back in 2003, while SDUC, introduced in 2018, is still limited to 128TiB.
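For the record, the gap checks out, assuming the usual 512-byte sectors for LBA48:

```shell
# LBA48: 2^48 addressable sectors of 512 bytes each = 2^57 bytes = 128 PiB.
# SDUC:  stated ceiling of 128 TiB = 128 * 2^40 bytes.
lba48_bytes=$(( (1 << 48) * 512 ))
sduc_bytes=$(( 128 * (1 << 40) ))

echo "$(( lba48_bytes / (1 << 50) )) PiB addressable under LBA48"   # 128 PiB
echo "$(( lba48_bytes / sduc_bytes ))x the SDUC ceiling"            # 1024x
```

So the 2003-era hard drive scheme addresses 1024 times more than the 2018 SD standard.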

What an absolute mess. I’m a technical nerd, and I can probably do the research necessary when I need a new card... but we expect ordinary people to figure it out? Not likely.

Which part are you having trouble with? Seems very simple and straightforward to me. Your "ordinary people" comment doesn't mean much, TBH. It's no different than buying a tire: you have your speed rating, section width, load index, maximum inflation pressure, wheel diameter, etc. Wow, so many crazy numbers! How do "ordinary people" ever buy a tire? :)

They don’t buy tires on Amazon. They buy them from someone who’s going to install them and make sure they fit.

Personally, I always thought knowing how to buy a tire is common knowledge. They even sell tires at Walmart, and it's not hard to figure out, you just match the numbers. You've made me curious, though.

Tbh, most people get it wrong. Tire codes in the US, like 205/55R17 96R, pack a ton of info into a small space. Sure, most people can match those numbers, but knowing which ones you can adjust is helpful.

205 is the approximate width of the tread in mm.

55 is the height of the sidewall, expressed as a ratio relative to the width.

R usually meant a radial-type tire, but has just become part of the standard notation, IIRC.

17 is the inner diameter in inches.

96 is the load rating, expressed as a non-linearly scaling value that needs a chart; in this case, 1,565 lbs.

R is the speed rating, which is usually, but not always, in alphabetical order due to weird grandfathered codes. In this case, it means the tire is safe at speeds up to 106 mph.

While not hard, it is complex. You can run a taller (sometimes) or shorter (usually) sidewall, or wider (often) or narrower (almost always) tread (within reason), but you cannot under-rate the load, use a different inner diameter, or use a significantly slower speed rating.
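As a toy illustration of that breakdown, here's a hypothetical bash function that pulls the fields out of a code like "205/55R17 96R". The pattern and field names just follow the explanation above; the load and speed values themselves still need a chart.

```shell
# Illustrative sketch (bash): split a US tire code into its named fields.
parse_tire_code() {
    local re='^([0-9]{3})/([0-9]{2})([A-Z])([0-9]{1,2})[[:space:]]+([0-9]{2,3})([A-Z])$'
    [[ "${1^^}" =~ $re ]] || { echo "unrecognized tire code: $1" >&2; return 1; }
    echo "width_mm=${BASH_REMATCH[1]} aspect=${BASH_REMATCH[2]}" \
         "construction=${BASH_REMATCH[3]} rim_in=${BASH_REMATCH[4]}" \
         "load_index=${BASH_REMATCH[5]} speed=${BASH_REMATCH[6]}"
}

parse_tire_code "205/55R17 96R"
# width_mm=205 aspect=55 construction=R rim_in=17 load_index=96 speed=R
```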

I’m fairly certain my mom would have no idea how to pick out her own tires.

Um, you can do the same with a memory card. Ask someone who knows.

(Also, you can buy tires online)

I don't get why they can't reuse the speed class: Class 30, Class 60, Class 90.

Reading about all this kind of shows why Apple must have decided not to include SD slots in their iDevices.

I think a more likely reason is so that they can charge $150-200 for an internal storage upgrade of 200-250GB and push their iCloud service. This wouldn't work so well for a manufacturer like Samsung, which is not as differentiated from its competitors.

I've had cards that I could fill to the designated size, but which in speed tests were slower than the manufacturer's specification. Does anyone here have these kinds of problems too?

My takeaway from the article is that SD cards are a mess information-wise and that I will continue to not buy or use them.

Something the article doesn't mention is that there are older SD readers with firmware limitations that drop their size limit even below the 2GB SD standard. An example of this is the reader built into the TC1100 tablet PC, which is limited to cards 1GB in size or less.

Any advice on where to get good cards for Raspberry Pi usage?

I mean, it seems like most I've had wear out in a few months, even with next to no use.

Also, make the root fs read-only. The DietPi distro claims to do it out of the box. Same for OpenWrt: they mount /var as tmpfs and do writes only there. For Raspbian: http://blog.pi3g.com/2014/04/make-raspbian-system-read-only/
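Concretely, that kind of setup boils down to an /etc/fstab along these lines. The devices and sizes here are assumptions to adjust for your own Pi; the point is just that routine logging and temp files land in RAM, never on the card:

```
# Hypothetical /etc/fstab fragment: root mounted read-only, write-heavy
# paths on RAM-backed tmpfs so routine writes never touch the SD card.
/dev/mmcblk0p2  /         ext4   defaults,ro,noatime    0  1
tmpfs           /var/log  tmpfs  nosuid,nodev,size=64m  0  0
tmpfs           /tmp      tmpfs  nosuid,nodev,size=64m  0  0
```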

Previous discussion suggests using cards designed for Industrial use, and/or minimizing writes (e.g. mount /var on a tmpfs) where possible.

More info in the comments here: https://news.ycombinator.com/item?id=16775768
