Raspberry Pi microSD card performance comparison (jeffgeerling.com)
270 points by geerlingguy 10 months ago | 125 comments



This was a good comment from the author --

"I've tested the ODROID-C2, Orange Pi, and am finishing up testing on the ASUS Tinker Board, and all of these boards are leagues beyond the Pi in terms of I/O performance (both networking and local storage). The problem is most of these boards are either priced the same as or more than the Pi 3 B+, and have a much worse initial onboarding experience (grabbing a disk image, flashing the card or onboard memory, first boot, then figuring out where to go next) than what you get with the Pi and it's handy, well-written tutorials."

That is the only reason the Raspberry Pi maintains its position. It isn't a great computer; it is a great ecosystem. So often when I talk to people about my efforts in that space I get the "but my computer is so much faster than that" and I just nod. When the environment is the same on all machines (like it has been in the WinTel era), it's all about the specs of the machine. But when the environments are all different, it's all about the environment.


Indeed. I can grab an RPi and accessories virtually at the local mall (or at least I keep seeing ads for them from various store chains).

ODROID? Forget about it; I'd have to order from abroad. Tinker Board, not sure. I think I have seen one specialist online store offer up the Orange Pi.


I would just be happy if my RPi would not randomly corrupt the memory card all the time. I have it on a quality adapter, hooked up to a UPS, with a quality card, yet it still corrupts itself for no reason.


I worked on a project professionally to find out why we were corrupting SD cards. It turned out that the SD card firmware is just garbage in a lot of cases. We ended up with a test program that could corrupt an SD card from a given manufacturer within a couple of hours using slow (80 KB/s) continuous reads and writes from any hardware running Linux. The manufacturer couldn't have cared less and essentially blew us off despite having a support contract.

Long story short: buy 'industrial' SD cards if you care about them not getting corrupted.


I've been in a similar boat. Management said we couldn't afford industrial cards because it would kill the margin on the embedded devices they were used in, so we had to make it work with $3 p.o.s. cards, which would corrupt after 6-12 months.

Ended up mounting /var on a tmpfs - ensuring practically no writes to the card - and fetching the device configuration from a server on the network at boot. Plenty of work, with zero (or negative) profit for the company, but at least I learned a thing or two doing it.
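
A minimal sketch of that idea (the comment above moved all of /var; mounting just the write-heavy subtrees is a softer variant, and anything your daemons expect under a tmpfs mount has to be recreated at boot). Paths and sizes are illustrative:

    # /etc/fstab - keep the write-heavy trees in RAM
    tmpfs  /var/log  tmpfs  defaults,noatime,size=32m  0  0
    tmpfs  /var/tmp  tmpfs  defaults,noatime,size=16m  0  0
    tmpfs  /tmp      tmpfs  defaults,noatime,size=64m  0  0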


Yup, been there too, way, way back.

We wrote simulation tools for typical access patterns and ran them heavily when testing out new cards, both for performance and failure rate. Luckily there was enough margin on the devices that we (R&D) could select the more expensive industrial-grade cards from good manufacturers, but I did get to see some really shitty cards that purchasing preferred (because they had gotten a great price on them).

However, in our case the corruption was due to a buggy FAT driver for the obscure RTOS we used. In the end, though, I learned a lot about FAT, which was fun (<sarcasm>and is a highly marketable skill these days</sarcasm>).


Haha, I ended up tapping all reads and writes to the card, intending to catch our crappy FAT driver in the act of corrupting it.

Turns out the card itself was shit and replaying the trace would just kill arbitrary cards.


Haha, I spent so much time debugging their crappy FAT driver. Another time the whole system would just hang when you created a new file in a directory that already contained some files. Turns out that their long-to-short filename algorithm was:

For the first 4 files with an identical prefix, use the ~N scheme, as in LongFile.txt -> LONGFI~1.TXT. For the files after that it was:

1) "LONG" + hex_str(hash(long_file_name)) + ".TXT"

2) if there's a name collision, repeat step 1.

Compound that with the fact that hash() was implemented something like:

    int h = 0;
    for (int i = 0; i < strlen(long_file_name); ++i)
    {
      /* deterministic: hash(name) never changes, so "repeat step 1"
         on a collision can only regenerate the same name */
      h = (h + long_file_name[i]) ^ 0x42424242;
    }
It's pretty easy to see where this falls apart: collisions come quickly, and since the hash is a pure function of the name, step 2 just regenerates the same colliding name over and over, forever. Hence the hang.


How does one tap all the reads and writes to a card?


Just intercept the read and write calls! On Linux you can just use 'strace $prog'; try it on a minimal program that reads and writes a file. For a custom approach, this can be done with a library (using LD_PRELOAD). There are plenty of articles about how to do this, e.g. http://www.linuxjournal.com/node/7795 (essentially the same technique works on Windows, or any other OS with shared libraries). Slightly more hardcore is to hook the kernel syscalls; not normally necessary (on Linux this is effectively hooking the other side of glibc, where it talks to the kernel), but if all you've got is a static binary it's one approach.
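
As a rough sketch of the LD_PRELOAD flavour (illustrative, not production code; the guard exists because our own logging also calls write()):

    /* build: gcc -shared -fPIC -o wtrace.so wtrace.c -ldl
       run:   LD_PRELOAD=./wtrace.so $prog */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <stdio.h>
    #include <unistd.h>

    static __thread int in_hook;  /* keep the logging from re-entering us */

    ssize_t write(int fd, const void *buf, size_t count)
    {
        static ssize_t (*real_write)(int, const void *, size_t);
        if (!real_write)
            real_write = (ssize_t (*)(int, const void *, size_t))
                         dlsym(RTLD_NEXT, "write");
        if (!in_hook) {
            in_hook = 1;
            fprintf(stderr, "write(fd=%d, count=%zu)\n", fd, count);
            in_hook = 0;
        }
        return real_write(fd, buf, count);
    }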

Note that all these techniques only show you the read and write calls made to the library/OS and - importantly - not what actually happens to the card. To see that, the next level down is to instrument the card driver to track the actual I/O operations (i.e. so you see what the card is really being asked to do, sans all the caching and buffering).

Note that's not the end of the story; there's what the hardware controller decides to do and when the hardware actually reads/writes the flash array. That's the level where the quality of the firmware in the controller(s) matters.


I suspected the filesystem driver itself, so I had to modify the driver rather than strace (also this was WinCE quite unfortunately).


Doesn't that mean you then have to write an event to disk for every write-to-disk event?


I would log by RPC or whatever to a remote computer.


Modify the SD card driver to open a socket to a computer in the lab, and just log its commands and responses over TCP with timestamps. The network was orders of magnitude faster than the crappy SDIO interface, so it was easy this time. I've had to do a similar thing with SAS, and there you basically have to pay tens of thousands for a special-purpose protocol analyzer, or build your own with an FPGA.


And write what you need to write to disk using an FEC so you can read it back off when it is corrupt. Effectively do what the card should be doing in the first place.

Image the system, confirm the checksums on first boot, then never re-write any of them.


Sounds like a good description of all those "It runs Linux because here is the kernel error" images that bounce around the web. Most of them are from signage solutions or similar that stay on for ages at a time, and it invariably comes down to the storage device going belly up.


This is the story of my life (I run a signage business).

The RPi 2 seemed to be the worst for fs corruption; I've never had a Pi 1 fail on me. The jury is still out on the Pi 3.


I have a similar situation. It would be helpful (and I'd be grateful) if you would expand on what you learned to minimize writes.


Yeah, industrial-grade SD cards are the way to go. Here is an interesting graphic comparing different types of NAND memory:

https://www.cactus-tech.com/resources/blog/details/slc-pslc-...


You'll pay for it, though. Truly industrial-grade SD cards (i.e. SLC flash, industrial temperature range, specified terabytes written/MTBF) apparently cost over $10/GB unless you're buying in huge quantities. I guess that might be a good motivation to slim down your root filesystem.


A lot of the newer industrial cards are 'aMLC', which basically means you take MLC NAND but only store 11 or 00 in a cell, treating it as SLC. You can get them for close to $1/GB at sane quantities.


I bought a SwissBit card off eBay for my Raspberry Pi, because I wanted the longevity of SLC media.

The prices seem to be trending up for SwissBit.


Are you allowed to share the results of your experiment? I have a similar issue with a very limited market (SD brands/models); it would be great if you could provide more info :)


I'd be hesitant to name any names due to NDAs and what have you.

I'll just say that pretty much all consumer-level cards don't really care about your data, even the good brands.


I think I speak for everyone when I say we really appreciate what you did share here.

But for those of us ready to act on it, can you throw us a bone beyond:

>buy 'industrial' SD cards

For example can you name a specific card? Or give us enough clues that we can do so?

I for one take your advice very seriously, but I don't know how to act on what you've just shared. I've seen ATM-style Pi-based kiosks with corrupted SD cards that wouldn't boot. It looked expensive.


Take a look at ATP, specifically the AF4GUD3A and AF8GUD3A, for 4GB and 8GB respectively.

Digikey[1] and Arrow[2] generally sell the 4GB for $15 and the 8GB for $25. They make larger versions too, if you need them.

The 'A' stands for aMLC: they're using normal MLC flash (which most consumer SD cards no longer use, but which is generally much more reliable than TLC), but in 1-bit-per-cell mode like SLC. They make traditional SLC cards as well, but the price skyrockets.

The aMLC cards have very good endurance ratings, but they're still cheaper than SLC cards. The firmware and controller are designed to prevent sudden power loss issues, which is apparently the root cause of a lot of SD card corruption on the Pi.

They're also supposed to have lifetime (i.e. SMART) monitoring, but it's a vendor specific command set rather than something smartmontools can read. ATP has a tool for it that probably only runs on Windows.

I've been using those aMLC cards in a bunch of Pi 3 and Pi Zero W devices for months, and I've never seen them become corrupted or fail to boot even once, despite being pretty hard on them: compiling stuff, yanking the power, etc.

For comparison, a Samsung Ultra+ card became corrupted after a single power loss. The device was running Windows 10 IoT Core at the time; it never booted again and had to be re-flashed.

[1] https://www.digikey.com/product-detail/en/atp-electronics-in...

[2] https://www.arrow.com/en/products/af4gud3a-waaxx/atp-electro...


Could a lot of this be mitigated by having better file systems?


A small amount of it, yes, but most of the time it's the card itself causing the corruption; it happens even on devices where the filesystem is read-only. The controller will write and move things around for maintenance purposes even if the host hasn't issued a write command.

The cheap SD cards just aren't designed for anything except being used in consumer devices with batteries, where sudden power loss is rare and losing data isn't going to cause a plane to crash or result in someone not receiving a dose of insulin.

So when they suddenly lose power, they aren't always capable of ensuring that whatever task they were carrying out at the time is actually completed and did not accidentally destroy data.

And apparently the consumer SD card controllers are really there to manage and remap parts of the flash that were defective before ever leaving the factory.

It's probably cheaper to build over-provisioned cards with a simple controller that can deal with manufacturing defects in the field, than to do QA on 200 million thumbnail sized NAND die every month and still try to profit while selling them for a fraction of a penny each.


It seems entirely possible to fix data corruption from a broken flash translation layer on SD cards with yet another translation layer that uses erasure coding. Whatever the controller does or doesn't do, it can still only write in blocks; we just have to make sure the erasure codes are at least a block size apart from the data.
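
As a toy illustration of the idea (plain XOR parity, so it only survives a single lost block per group; a real layer would want proper erasure codes plus remapping):

    /* One parity block protects a group of n data blocks, stored at
       least a block apart so one physical failure can't take out two. */
    #include <stddef.h>
    #include <stdint.h>

    #define BLOCK 512  /* assumed block size; see the discussion below */

    void parity_compute(const uint8_t data[][BLOCK], size_t n,
                        uint8_t parity[BLOCK])
    {
        for (size_t i = 0; i < BLOCK; i++) {
            uint8_t x = 0;
            for (size_t b = 0; b < n; b++)
                x ^= data[b][i];
            parity[i] = x;
        }
    }

    /* Rebuild block `lost` by XOR-ing the parity with the survivors. */
    void parity_rebuild(const uint8_t data[][BLOCK], size_t n, size_t lost,
                        const uint8_t parity[BLOCK], uint8_t out[BLOCK])
    {
        for (size_t i = 0; i < BLOCK; i++) {
            uint8_t x = parity[i];
            for (size_t b = 0; b < n; b++)
                if (b != lost)
                    x ^= data[b][i];
            out[i] = x;
        }
    }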


At least for me, the failure mode in most cases was the card simply erroring out on accesses (even writes!) to particular sectors. Other sectors would still accept writes, so it wasn't that the card had run out of write lifetime. When you're not given anything to work with, even corrupted data, extra ECC isn't going to do you any good. My conjecture (with no data) is that the card's FTL was itself corrupted.

EDIT: and in no case I saw did the cards give me truly 'corrupt' data; just error codes, stale data, or occasionally data from another sector entirely. They've got metric shittonnes of ECC internally (to make up for the crappy NAND), and will do a better job than you can at detecting errors.


Right. So if we use virtual blocks the size of the physical blocks, then even if the parity block or one of the data blocks is destroyed, it's still possible to recover the data. And for writes, if a write fails, our layer can just try writing a bit farther along until it succeeds.


In general you don't have access to the physical block size. And it can change even within a given lot.


Knowing the exact block size is not necessary; a big, safe choice is fine too. But I suspect it's trivial to detect with benchmarks, since writing even a byte past a block boundary requires overwriting two physical blocks instead of one, which is much slower.
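
A crude probe along those lines (a sketch; /dev/mmcblk0 is illustrative, and this destroys whatever is on it). Arnd Bergmann's flashbench tool does a much smarter version of the same trick, timing accesses that straddle suspected boundaries:

    # throughput typically keeps improving until the write size
    # reaches the erase-block size, giving a safe upper bound
    for bs in 4k 16k 64k 256k 1M 4M; do
        printf '%s: ' $bs
        dd if=/dev/zero of=/dev/mmcblk0 bs=$bs count=64 oflag=direct 2>&1 | tail -n 1
    done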


Btrfs will detect these corruptions and return EIO, so at least corrupt data doesn't propagate beyond the filesystem. You can set metadata and data to DUP and thereby get automatic recovery from corruption, but it could still fail if the card colocates the two copies in the same page or erase block. So if the corruption cause can affect multiple parts of a page or block, recovery may still not be possible. You're also doing double the writes, so half the write speed and half the lifetime. Another option is Btrfs raid1, if you can stick two cards into the device.
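
For reference, the DUP profile can be set at mkfs time or converted later with a balance (device and mount point illustrative):

    mkfs.btrfs -m dup -d dup /dev/mmcblk0p2
    # or convert an existing filesystem in place:
    btrfs balance start -mconvert=dup -dconvert=dup /mnt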

XFS has metadata checksums, enabled by default with xfsprogs 3.2.3+, but data is a much bigger footprint, so you can still get hit with silent data corruption. And ext4 is, any day now, going to start defaulting to metadata checksums as well.

For those file systems, you can use dm-integrity or dm-verity. https://gitlab.com/cryptsetup/cryptsetup/wikis/DMIntegrity


Depending on what the errors are, you could design a filesystem where the apparent block size is a bit smaller than the underlying disk's block size, giving you room to add some sort of error correction code to the bits you write. Then a single-bit error (even if the controller moves a block around within the disk without telling you) can be corrected by the filesystem driver, and the correct data re-written.

(I also want a filesystem that does this so that you have room for a proper authenticated encryption mode for your full-disk encryption - if your apparent block size is the same as your physical disk block size, either you have no room for an authentication tag and you're using a pretty fragile scheme for making your ciphertext tamper-resistant, or you kill performance because you need to read the authentication tag from another block. Current disk encryption software tends to choose the former.)


> Then a single-bit error (even if the controller moves a block around within the disk without telling you) can be corrected by the filesystem driver, and the correct data re-written.

In my experience, flash corruption of the type found in SD cards shows up as completely blank (00) or erased (FF) blocks, not single-bit errors. Remember that SD already has a layer of error correction to handle those from the raw flash.


ATP Inc. has treated us very well, both their products and their support.

But in general 'industrial' is the keyword to get the good shit from manufacturers who'll treat you like an adult.


Search for SLC type memory cards (in the Technology filter):

https://www.digikey.com/products/en/memory-cards-modules/mem...


Industrial is more than that. A big one for me was that the manufacturer will work with you and notify you when they change their BoM, letting you reverify and track internally.


Why not name the manufacturer so we can avoid them?


NDAs.


Wow, I had no idea to even look for different NAND geometries in SD cards! More expensive, obviously, but probably well worth it.


Any idea where industrial micro SD cards can be purchased in Europe? A casual search didn't yield much info.


I've bought Swissbit cards from Mouser/Farnell.


RS Components has them


Yes, SD cards are garbage. eMMC is far, far superior.


Any recommendations?


I've had a number of Pis (currently 6) in continuous operation around my house for over two years now. One is currently running http://www.drupalpi.com/ (uptime was 300+ days until last night when I re-flashed it). I only use SanDisk Extreme and Samsung Evo+ cards now, and have not had any corruption on any of these actively-used devices (most are either storing photos (e.g. time-lapse) and then dumping them to the cloud every week or month, or storing time-series data, so there are frequent writes).

I wonder if you could share what type of card you're using, and the type and brand of power supply? Besides the cheaper cards (basically any brand besides SanDisk and Samsung, it seems) being flaky in my experience, a flaky power supply (usually a cheap 500 mA supply that came free with some device) is the only other thing that _ever_ caused corrupt data for me.


I have similar experience with SanDisk and Samsung EVO+: 9 Pis (not Raspberry) running continuously for ~2 years with no kernel panics or PostgreSQL failures. I run with data checksumming enabled, and I disabled the options in PostgreSQL that cause periodic, unnecessary write activity.

Frequent regular updates, periodic database activity (web scraping, environment monitoring, ...). Running on F2FS.

I have logical replication setup in PostgreSQL, because I don't trust the SD cards anyway, but I have had no issues so far.


I came here to suggest f2fs. Glad to see you're already doing that.


It works surprisingly well and it makes many operations faster than ext4. I noticed it especially when doing system updates.

And I just read yesterday that F2FS in Linux 4.17 will have further optimizations for low-end systems. Yay.


I recommend netbooting the Pi off a server. Not only is it easy to back up and restore, the performance on small reads and writes is probably better than you can expect from any mSD card, if the specs here are to be believed. Plus you don't burn out SD cards that cost at least half what the Pi does.


Absolutely the way to go. We run loads of Pis like this; older ones with a read-only SD card holding the boot files (so they never get corrupted), later ones using the built-in PXE loader. Excellent for development as well; never be scared of doing a hard reset or Magic SysRq (this goes for PCs as well as Pis).

It's unfortunate that the built-in watchdog doesn't work during boot and shutdown, so a hang at these points won't be recovered without cycling the power. This can be addressed with an actual hardware watchdog connected to the P6 header (so if it's not being poked every so often, it does a cold boot).

The main issue for most home users is DHCP: normally the router provides it, and a lot of routers are not compliant with the spec (making it tricky to set up a second DHCP server just for netbooting). The solutions I know of are using a separate network, or running your own DHCP server for everything (my preferred solution).


Are there any good guides on how to do this? I never once got it working. Also, where do you persist data? NFS share?


Yeah, it's unfortunately not the easiest thing in the world. I suggest cloning a working Raspbian installation to your server and pointing tftpd at /boot (with the "--secure" option set so as to allow relative paths). You can then set up a rw NFS share of / for the Pi to boot from (and lock it down with your firewall).
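
For the NFS-root half, the kernel command line served from the TFTP share ends up looking roughly like this (server IP and export path are illustrative; assumes NFSv3):

    # cmdline.txt
    console=tty1 root=/dev/nfs nfsroot=192.168.1.10:/srv/rpi/root,vers=3 rw ip=dhcp rootwait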

You also need to point the Pi to the server via DHCP, which you can do in dnsmasq like so:

    dhcp-option=43,Raspberry Pi Boot
    dhcp-option=66,$SERVERIP

I wrote up a more complete guide here a while back: https://adamfontenot.com/post/how_to_netboot_a_raspberry_pi_...

The other really cool thing you can do with this is install qemu user-mode emulation for ARM on the server and then use systemd-nspawn to chroot into your Raspbian installation. Then any commands that don't need access to the Pi's ports can be run directly on the server. It's really nice to do updates this way; much faster than looping them through the Pi and its slow processor.
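
On a Debian-ish server that can look roughly like this (paths illustrative; older setups may also need /usr/bin/qemu-arm-static copied into the tree before binfmt_misc will run the ARM binaries):

    sudo apt install qemu-user-static systemd-container
    sudo systemd-nspawn -D /srv/rpi/root              # shell inside the Pi root
    sudo systemd-nspawn -D /srv/rpi/root apt upgrade  # or a one-off command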

Love following your projects, by the way.


Ah that doesn't sound too bad, I should be able to get it up and running, thanks for the instructions!

I have a new project that's almost finished that was a lot of fun to work on, writeup coming soon! Sneak peek, it's what produced these test photos: https://m.imgur.com/a/mDR8y

Thanks again!


Thank you so much for posting these two options; the RPi kept sending TFTP requests to the machine running the DHCP server despite my attempts to specify another IP (the "official" documentation specifies pxe-service=0,"Raspberry Pi Boot", to which I added the filename & IP as per the dnsmasq man page; same thing with the dhcp-boot parameter).


I don't know if this is an issue with the RPi, but some crappy devices only support TFTP from the DHCP server itself; you can solve this with two DHCP servers or with iptables port forwarding.
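
Something along these lines may work for the forwarding variant (a sketch; 192.168.1.10 stands in for the real TFTP server, and TFTP's reply-from-a-new-port behaviour needs the conntrack helper):

    modprobe nf_nat_tftp
    iptables -t nat -A PREROUTING -p udp --dport 69 -j DNAT --to-destination 192.168.1.10
    iptables -t nat -A POSTROUTING -p udp -d 192.168.1.10 --dport 69 -j MASQUERADE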


And as a bonus the Pi 3 B+ gets ~230 Mbps over LAN (whereas all the older models maxed out at ~93 Mbps), so netboot performance should be even better than in the past.


You have to put absolute trust in the network in this scenario; not really an acceptable architecture in 2018.


I don't know if it's an issue with the RPi or not, but I know that SD cards can draw current in short spikes during writes and need a good supply with good decoupling capacitors so there are no voltage drops, as these can cause corruption. In an embedded device I've designed (with careful capacitor selection), I've very rarely had any SD card issues (3 corruption cases in ~1000 devices over a couple of years). I'm using SanDisk and Samsung class 4, 6 or 10 (depending on availability).


My first go around with the original Pi, I ended up burning out two micro SD cards in short order. The solution at the time was to mount the file systems read-only and put /tmp in RAM and do without swap (if I recall correctly). For my purposes this worked out fine, since the Pi was being used as a signage kiosk. Later I retired the original Pi and bought the series 3. I bought a much higher quality card, based on a similar performance and longevity article. Maybe something changed between versions of Pi, or maybe these cards really are better, but I haven't had the problem reoccur even with a read/write filesystem.


And that's why I moved over to eMMC-based Pi clones (specifically the Rock64); I am overly tired of SD cards mysteriously corrupting themselves. As of yet I haven't had an eMMC go upside down, but that's certainly far from guaranteed as well.


Do you know how much of that is due to the connector and signal paths? I worked on a design that had soldered-on eMMC and it was rock solid (and 8 bits wide), but I always assumed SD problems were due more to the flaky form factor. Is there a standard for removable eMMC modules?


Is it possible for the RPi to boot from a USB device? One solution would be to hook it up to a $20 32 GB SSD instead of relying on an SD card.


Depends on the model; or technically it depends on the bootcode, but which version is preloaded depends on the model.

The brand new Pi 3B+ ships with the latest bootcode and will boot from USB mass storage or PXE by default. Their PXE is a bit nonstandard but it's close enough that anyone who's PXE booted a PC will understand it.

The older Pi 3 supports USB and PXE booting, but has those modes disabled by default. There's a bit you can set in one-time programmable memory to enable it permanently. PXE booting has some quirks in this mode and doesn't get along with switches that take more than a second or so to activate ports.

Earlier models do not support anything but SD on their stock bootcode.

In all cases, an SD card containing nothing but the latest bootcode can be inserted, bringing the new features to the older models and allowing them to boot the rest of the OS from whatever you like.
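
Preparing such a card is just a FAT filesystem with bootcode.bin on it (device name illustrative; bootcode.bin comes from the boot/ directory of the raspberrypi/firmware repo):

    sudo mkfs.vfat /dev/sdX1
    sudo mount /dev/sdX1 /mnt
    sudo cp bootcode.bin /mnt/
    sudo umount /mnt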


At least for the older models you still needed the card, but only for a marker that tells the Pi to boot from USB. I've been using that setup with Kodi/OSMC with a Pi 2 since USB is faster than the SD card.


Also, losing power seems to result in a corrupt SD card 80% of the time.

Obviously it's bad to lose power while on, but that's crazy.


I had a GoPro develop a flaky voltage regulator so it would sometimes brown out, that thing killed microSD cards like a champ.


Once corrupted, it can't be re-used after formatting?


You just have to reflash it and it'll be fine; only the data is corrupt. But that means the whole OS gets wiped; any saved files, programs, configurations, etc. are gone and have to be redone.

Not the end of the world, but when it happens almost every time you unplug something (possibly accidentally), it's a huge nuisance. I use my Pi with OctoPrint, so I'm not even doing many writes (only on file upload).


Thanks for clarifying.


I built a product with an embedded RPi and dealt with this same problem. The solution was to purchase "industrial" SD cards; I ended up using ATP micro SD cards, which you can find on Digikey. They are almost double the price, but I have not had a single SD corruption since switching to them.

In my experience the Samsung SD cards would be corrupted around 30% of the time when the power was pulled.


Do you have an overclock running? Specifically sdram_freq.


My overclocked Pis and abruptly-shut-down Pis were the primary source of my corrupted SD cards.

In my experience the Toshiba Exceria M302 (built for action cameras) scores higher in longevity than the Samsung EVO. I've abused the Toshiba card trying to get it to corrupt, and I've yet to see it happen.


After how long? I've been running my Pi off the USB port of my server. Knock on wood, no issues yet.


Is the Raspberry Pi trapped in its own architecture? The Broadcom SoC chosen 5 years ago may have seemed like a good choice then, but the Raspberry Pi Foundation is now facing an uphill battle, since Broadcom doesn't seem to be providing any significant updates other than clock increases. The whole codebase is built around this one chip, and I worry that this will mean painful migrations if the Raspberry Pi ever switches to a new architecture.


Er... what? The SoC has received multiple major updates. The Pi 3+ has a quad-core, 64-bit SoC -- it's the same SoC family as the one in the original Pi, but a rather different CPU core.


Yup. The Pi has been based on the ARMv8 architecture since late 2016, the same architecture the latest Apple mobile chip (A11) is based on. There's plenty of room to grow in the Broadcom line. The only part that really needs attention is the bundled GPU, which IIRC has never been updated since the RPi was introduced.


> Yup. The Pi has been based on ARMv8 architecture since late 2016, the same architecture that the latest Apple mobile chip(A11) is based on.

That doesn't mean anything, other than that it runs arm64. In every other way it's nowhere close to the A11.


And yet they have already said there is nowhere else to go with this 40nm process. They'd need to port it to a newer node and that's a huge job which may not happen.


Okay, sure, they changed the ARM core. But the I/O, the single (internal) USB port, the VideoCore, the lack of SATA, the poor SD card speed: these all remain.


It's a hobbyist computer/learning platform whose biggest draws are price and community support. Demand and support aren't flagging, so performance and connectivity don't seem to be an issue for the target market. The biggest community wishes were answered: more USB ports and bundled WiFi. If the RPi can't keep up with what you're doing, there are plenty of other, more powerful and more expensive options out there.


The problem right now is that you have to choose between good community support or a better/faster SoC.

I wish there were a Raspberry Pi 4 with 2x USB 2.0, 1x USB 3.0, 1x USB-C 3.0, Gigabit Ethernet, and 802.11ac.


The 3+ has half of what you're looking for: 802.11ac and gigabit ethernet (although it's limited to ~300 Mbps by USB2).

That all being said, I feel like what you're looking for is a small-form-factor computer, not a development board. :)


There are plenty of more powerful and cheaper options too, it's just that the Raspberry Pi has mindshare amongst people writing about hobbyist and maker stuff. I have decidedly mixed feelings about the Raspberry Pi community as a whole; while they're certainly vocal and numerous, they used to have an unfortunate cultural tendency towards blaming any problems on the users being too stupid to use a Pi, even ones caused by major bugs in the Pi itself.


Wondering if you can link some examples of this? Genuinely curious. I've been using the Pi for some for-fun projects lately and haven't encountered limitations caused by the platform just yet so I haven't experienced this (yet? hopefully ever?).


What are some more powerful cheaper options?


Orange Pi and ODROID-C2, and there are a few slightly more expensive but vastly faster and higher specced options (like the Tinker Board). But the onboarding experience and ongoing support is usually far worse than with the Pi.


At least in the UK, the ODROID-C2 is around £50 (compared to £35 for the latest RPi). So yeah, more powerful, but ~50% more expensive.


Hopefully the RockPro64 will be released soon.

Major features:

2x A72 + 4x A53 (~4000 in Geekbench)

mini PCIe connector / PCIe x4 (supports a SATA expansion card)

USB 3.0

Prices:

RockPro64 2GB board: $59-65

RockPro64 4GB board: $79

RockPro64-AI 4GB board: $99

https://forum.pine64.org/showthread.php?tid=5614


Isn't the Pi 3+ the same processor as the Pi 3, just overclocked?


Basically, yes. CPU speed is marginally improved. The networking speeds are the major upgrades this time around.


I've got 3 Samsung EVOs (from 2015 or so) that got stuck in a kind of "read-only" mode after some months in the Pi. All seems normal for a while, but writes are not persisted. After some time Linux gets confused and panics. After rebooting, the FS is always in exactly the same state.


Same. I have a 32 GB Samsung EVO+ I use in a NUC with Linux that just went "read only" this week. 10-year warranty, and it's already set up for warranty replacement, which is sort of hilarious seeing as it cost $12 including the adapter (which itself is only warrantied for 1 year). I got 360 days out of the card, with non-continuous use. Also interesting: they want the bad card returned, and they're including a prepaid return shipping label.

The usage was just as a boot drive, no user data: EFI FAT, ext4 /boot, and Btrfs / with zstd and ssd_spread as the mount options.

FAT will mount ro or rw, but any writes fail:

    [140718.615921] print_req_error: I/O error, dev mmcblk0, sector 2048
    [140718.615998] Buffer I/O error on dev mmcblk0p1, logical block 0, lost sync page write
Ext4 is similar, but with a lot more complaints from the mmc block driver, including an SDHCI REGISTER DUMP [1] that I can't make heads or tails of; this, though, is somewhat more revealing:

    [142132.340226] f28s.local kernel: mmc0: Card stuck in wrong state! mmcblk0 card_busy_detect status: 0xf00
Btrfs also has lots of complaints [2], but mainly because it never gives up trying to write. So cancel that and mount with '-o ro,norecovery' and everything is there. One thing you'll see in the Btrfs output is "corrupt 10", which is not a current event, just a counter that I never reset. A few months ago I did have a data file (not fs metadata) become corrupt, which Btrfs caught; I was able to reinstall the RPM that provided that file. So who knows how that corruption would manifest without being caught by the file system.

The blkdiscard command succeeds without error, but also doesn't actually do anything; all the data is still there.

[1] https://drive.google.com/open?id=1Lbypuut21PreXnzHj9uC0x6lxK...

[2] https://drive.google.com/open?id=1340GQN29j8Ougtj_E0ey3A_RUJ...


Sounds too cheap, are you sure it's not a fake?


Local reputable store; f3 had no complaints and it's the claimed size. Samsung wanted a copy of the receipt and a photograph of the back of the card itself before replacing it, so presumably they're satisfied it's legit. It might've been on sale for a few bucks off, I don't recall. Newegg has it for $15. shrug


I had a cheap off-brand Chinese SD card do that a while back. Was very confusing. Surprised that reputable brands do the same thing though.


I've been the victim of 3 EVOs myself; I didn't get very far into the configuration process (db, transmission, etc.) before they were more or less rendered useless. The EVO SSDs are great, but the SD cards are simply horrible; tragically, I saw them locally bundled with Pi starter kits. I ended up buying a SanDisk, which has not had a single hiccup even after a little bit of abuse.


Thanks for finally finding the search terms I needed to discover this: https://www.digikey.com/product-detail/en/panasonic-electron...

Looks like my RPi 2 cluster is getting an upgrade!



It would also be interesting to compare power consumption. Different microSD cards range from 0.06 to 1.34+ mA at idle, which can be more than a microcontroller like an ARM Cortex-M4: https://electronics.stackexchange.com/a/123386


Has anyone found there to be a noticeable difference between a Lexar (or the like) and a Samsung Evo+ when using a Pi-Hole?


When using Pi-Hole, the difference isn't that great. It's more network-latency-constrained (and maybe CPU) than anything else. But the difference is so small in my testing with Pi-Hole that I don't think it's worth worrying about _too_ much. For many other use cases, there would be a very large performance delta!


Thanks very much — both for this reply and for all your write-ups!


Why is that significantly slower than the specs of these SD cards (80-100 MB/s read/write)?


As @written says, there's a huge limitation in the microSD card reader on the Pi itself; but also, in my testing with a UHS-II USB 3.0 card reader on my MacBook Pro, most of the cards can't sustain more than 30-40 MB/s write even if the specs say they can.

Large block read speeds are usually pretty accurate, but manufacturers take quite a bit of liberty with their performance claims. And random I/O is pretty terrible in almost every case.

Remember that these types of cards are _usually_ optimized for large file I/O since they're used in dashcams, GoPros, and the like—use cases that are vastly different from a general computing device running Linux!


The RPi is more than likely the limiting factor... which kind of makes the tests useless if you want to figure out which cards are faster.


If I'm remembering correctly, the Pi's SD reader runs at 50MHz, doing 4-bit transfers, and not using any of the UHS signaling methods, because those use 1.8V signals that the Pi isn't set up to use for SD.

> kind of makes the tests useless if you want to figure out which cards are faster

But...which card is faster in a USB3 UHS-III transfer isn't useful information for a Raspberry Pi benchmark. It would certainly tell you which cards are faster, but the info wouldn't be directly applicable to what the tests are trying to measure.


The fastest cards in these tests were also fastest when writing the entire image on my USB 3.0 UHS-II card reader on my Mac. Large file writes are where most of these cards shine, and some can do 40+ MB/s when writing larger blocks of data.


I'm sure the interface is slower than what you would find in a laptop, but I'm also sure the 4k/8k block size isn't helping on those tests.


Host side interface limitation.


Is this also applicable to Android phones?

I got a cheapie SD card, which slowed my phone noticeably. On Amazon, reviewers say (as here) that random R/W is the key metric, and (at that time) the SanDisk cards were the best.


Android phones use the SD card in a more computer-like way than a camera-like way, so random R/W is important. There's actually an "app performance" rating, introduced in the last year or two, but I think most manufacturers haven't bothered to get their cards certified yet. Some SanDisk cards now carry an "A1" or "A2" marking, which represents application performance.


Debian on the RPi should not have its swap on the SD card, clear and simple.

A solution would be to keep the filesystem in RAM and write new changes/deltas to the SD card either periodically or at shutdown (roughly what an overlay setup gives you; see the sketch below).
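
One way to get that effect is an overlayfs with a tmpfs upper layer over the read-only card; a sketch (paths illustrative; in practice this is wired up from an initramfs so the merged tree becomes the root):

    mount -o remount,ro /            # the SD card stays read-only
    mount -t tmpfs tmpfs /overlay    # all new writes land in RAM
    mkdir -p /overlay/upper /overlay/work
    mount -t overlay overlay \
        -o lowerdir=/,upperdir=/overlay/upper,workdir=/overlay/work /merged
    # persisting the delta back to the card is then an explicit copy
    # of /overlay/upper, done periodically or at shutdown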


Nice test!

Perhaps it would be nice to add a comparison to network storage/boot?


Interesting. I wonder how bad the other cards are in comparison to not warrant a test at all.


See my post from 2015: https://www.jeffgeerling.com/blogs/jeff-geerling/raspberry-p...

I re-tested a couple of the cards (notably, the Sony and Kingston cards), and they were just as painfully slow. The benchmarks took like 20 minutes (with the faster cards they only take 3-4 min).

If you use knockoff cards (no brand at all, like one that came with one of my cheap drones), the performance is so abysmally slow you might think the Pi locked up for a few hours.


Do you think that these top-tier cards are necessary for use in a Pi-Hole in an average home? Or is something like a Lexar card good enough?


Pi-Hole ends up booting, then probably mostly running in RAM, anyhow, right? I think it's just running as a DNS proxy. So you'd slow down your initial boot, but probably not hurt performance of the device much (unless I'm wrong, and it does need to read and write data often).


Yeah, same here.


Wondering if I should just mount my external spinning-platter HDD as root instead.


Looks like that NOOBS card doesn't perform well at all.


It's better than the no-name cards, and even slightly faster than some of the secondary-brand cards (Transcend, PNY, Sony, Kingston, Toshiba).

It's not a terrible choice for a starter card, but you can get the Evo+ cheaper for the same capacity, if you can stand to flash it yourself :)


This feels like an LTT video.



