"I've tested the ODROID-C2, Orange Pi, and am finishing up testing on the ASUS Tinker Board, and all of these boards are leagues beyond the Pi in terms of I/O performance (both networking and local storage). The problem is most of these boards are either priced the same as or more than the Pi 3 B+, and have a much worse initial onboarding experience (grabbing a disk image, flashing the card or onboard memory, first boot, then figuring out where to go next) than what you get with the Pi and its handy, well-written tutorials."
That is the only reason the Raspberry Pi maintains the position it does. It isn't a great computer, it's a great ecosystem. So often when I talk to people about my efforts in that space I get the "but my computer is so much faster than that" and I just nod. When the environment is the same on all machines (as it has been in the WinTel era), it's all about the specs of the machine. But when the environments are all different, it's all about the environment.
ODROID? Forget about it, you have to order from abroad. Tinker Board, I'm not sure. I think I have seen one specialist online store offer the Orange Pi.
Long story short: buy 'industrial' SD cards if you care about them not getting corrupted.
Ended up mounting /var on a tmpfs - ensuring practically no writes to the card - and fetching the device configuration from a server on the network at boot. Plenty of work, with zero (or negative) profit for the company, but at least I learned a thing or two doing it.
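For reference, the tmpfs approach can be sketched in /etc/fstab like this. The parent mounted /var wholesale; a more conservative variant only covers the write-heavy subdirectories. Mount points and sizes here are illustrative, not the exact setup described above:

```
# /etc/fstab — keep write-heavy paths in RAM so the card sees (almost) no writes.
# Everything here is lost on reboot, which is fine for logs and scratch data.
tmpfs  /var/log   tmpfs  defaults,noatime,size=32m   0  0
tmpfs  /var/tmp   tmpfs  defaults,noatime,size=16m   0  0
tmpfs  /tmp       tmpfs  defaults,noatime,size=64m   0  0
```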
We wrote simulation tools for typical access patterns and ran them heavily for testing out new cards, both for performance and failure rate. Luckily there was enough margin on the devices that we (R&D) could select the more expensive industrial-grade cards from good manufacturers, but I did get to see some really shitty cards that purchasing preferred (because they had gotten a great price on them).
However, in our case the corruption was due to a buggy FAT driver for the obscure RTOS we used. In the end, though, I learned a lot about FAT, which was fun (<sarcasm>and is a highly marketable skill these days</sarcasm>).
Turns out the card itself was shit and replaying the trace would just kill arbitrary cards.
For the first 4 files with an identical prefix, it used the ~N scheme, as in LongFile.txt -> LONGFI~1.TXT.
For the following files it was:
1) "LONG" + hex_str(hash(long_file_name)) + ".TXT"
2) if there's a name collision repeat step 1.
Compound that with the fact that hash() was implemented something like:
    int h = 0;
    /* note: strlen() is re-evaluated every iteration, and XORing the same
       constant in every round makes for a very weak hash */
    for (int i = 0; i < strlen(long_file_name); ++i)
        h = (h + long_file_name[i]) ^ 0x42424242;
Note that all these techniques only show you the read and write calls made to the library/OS and - importantly - not what actually happens to the card. To see that, the next level down is to instrument the card driver to track the actual I/O operations (i.e. so you see what the card is really being asked to do, sans all the caching and buffering).
Note that's not the end of the story; there's what the hardware controller decides to do and when the hardware actually reads/writes the flash array. That's the level where the quality of the firmware in the controller(s) matters.
Image the system, confirm the checksums on first boot, then never rewrite any of them.
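A minimal sketch of that scheme, assuming a read-only root image and a manifest stored outside the tree it covers (paths are illustrative):

```shell
# At image-build time: record a checksum for every file in the rootfs.
cd /mnt/rootfs
find . -type f -exec sha256sum {} + > /boot/manifest.sha256

# On (first) boot: verify against the manifest and flag any drift.
cd /
sha256sum -c /boot/manifest.sha256 --quiet || echo "image corrupted" >&2
```

After that first verification the manifest never changes, so any later mismatch points at the card rather than at your own writes.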
The RPi 2 seemed to be the worst for fs corruption; I've never had a Pi 1 fail on me. The jury is still out on the Pi 3.
The prices seem to be trending up for SwissBit.
I'll just say that pretty much all consumer-level cards don't really care about your data, even the good brands.
If we're to trust you on how to use what you've shared, can you throw us more of a bone than:
>buy 'industrial' SD cards
For example can you name a specific card? Or give us enough clues that we can do so?
I for one take your advice very seriously but I don't know how to use what you've just shared. I've seen ATM-style Pi-based kiosks with corrupted SD cards that wouldn't boot. It looked expensive.
Digikey and Arrow generally sell the 4GB for $15 and the 8GB for $25. They make larger versions too, if you need them.
The 'A' stands for aMLC, they're using normal MLC flash (which most consumer SD cards no longer use, but is generally much more reliable than TLC) but they use it in 1-bit per cell mode like SLC. They make traditional SLC cards as well, but the price skyrockets.
The aMLC cards have very good endurance ratings, but they're still cheaper than SLC cards. The firmware and controller are designed to prevent sudden power loss issues, which is apparently the root cause of a lot of SD card corruption on the Pi.
They're also supposed to have lifetime (i.e. SMART) monitoring, but it's a vendor specific command set rather than something smartmontools can read. ATP has a tool for it that probably only runs on Windows.
I've been using those aMLC cards in a bunch of Pi3 and Pi Zero W devices for months, I've never seen them become corrupted or fail to boot even once, despite being pretty hard on them, compiling stuff, yanking the power, etc.
For comparison, a Samsung Ultra+ card became corrupted after a single power loss. The device was running Windows 10 IoT Core at the time, it never booted again and had to be re-flashed.
The cheap SD cards just aren't designed for anything except being used in consumer devices with batteries, where sudden power loss is rare and losing data isn't going to cause a plane to crash or result in someone not receiving a dose of insulin.
So when they suddenly lose power, they aren't always capable of ensuring that whatever task they were carrying out at the time is actually completed and did not accidentally destroy data.
And apparently the consumer SD card controllers are really there to manage and remap parts of the flash that were defective before ever leaving the factory.
It's probably cheaper to build over-provisioned cards with a simple controller that can deal with manufacturing defects in the field than to do QA on 200 million thumbnail-sized NAND dies every month and still try to profit while selling them for a fraction of a penny each.
EDIT: and in no cases I saw did the cards I was testing give me truly 'corrupt' data. Just either error codes, or stale data, or occasionally data from another sector entirely. They've got metric shittonnes of ECC internally (to make up for the crappy NANDs), and will do a better job than you can at detecting errors.
XFS has metadata checksums, enabled by default with xfsprogs 3.2.3+, but data is a much bigger footprint, so you can still get hit with silent data corruption. And ext4 any day now is going to start defaulting to metadata checksums as well.
For those file systems, you can use dm-integrity or dm-verity.
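As a sketch, dm-integrity can be layered under an existing filesystem with integritysetup (shipped with cryptsetup); the device name here is illustrative, and note that `format` destroys whatever is on the partition:

```
# One-time setup: write integrity metadata to the partition (DESTROYS data).
integritysetup format /dev/mmcblk0p2

# Open it; the checksummed view appears under /dev/mapper.
integritysetup open /dev/mmcblk0p2 rootfs-verified

# Put the filesystem on top; mismatched sectors now surface as read
# errors instead of silently corrupt data.
mkfs.ext4 /dev/mapper/rootfs-verified
```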
(I also want a filesystem that does this, so that you have room for a proper authenticated encryption mode for your full-disk encryption. If your apparent block size is the same as your physical disk block size, either you have no room for an authentication tag and you're using a pretty fragile scheme for making your ciphertext tamper-resistant, or you kill performance because you need to read the authentication tag from another block. Current disk encryption software tends to choose the former.)
In my experience, flash corruption of the type found in SD cards is completely blank (00) or erased (FF) blocks, not single-bit errors. Remember that SD already has a layer of error correction to handle those from the raw flash.
But in general 'industrial' is the keyword to get the good shit from manufacturers who'll treat you like an adult.
I wonder if you could provide what type of card you're using? Also the type and brand of power supply? Besides any of the cheaper cards (basically any brand besides SanDisk and Samsung, it seems) being flaky in my experience, the power supply being flaky (usually when I used a cheap 500 mA supply that came with some device for free) is the only other thing that _ever_ caused corrupt data for me.
Frequent, regular updates, periodic database activity (web scraping, environment monitoring, ...). Running on F2FS.
I have logical replication set up in PostgreSQL, because I don't trust the SD cards anyway, but I have had no issues so far.
And I just read yesterday that F2FS in Linux 4.17 will get further optimizations for low-end systems. Yay.
It's unfortunate that the built-in watchdog doesn't work during boot and shutdown, so a hang at these points won't be recovered without cycling the power. This can be addressed with an actual hardware watchdog connected to the P6 header (so if it's not being poked every so often, it does a cold boot).
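A crude userspace "patting" loop for such an external watchdog might look like this, via the legacy sysfs GPIO interface. Pin number, polarity, and period all depend on the particular watchdog board, so this is purely illustrative:

```
# Toggle GPIO4 so the external watchdog sees a heartbeat; if this process
# (or the kernel) dies, the toggling stops and the watchdog power-cycles us.
echo 4 > /sys/class/gpio/export
echo out > /sys/class/gpio/gpio4/direction
while true; do
    echo 1 > /sys/class/gpio/gpio4/value
    sleep 0.5
    echo 0 > /sys/class/gpio/gpio4/value
    sleep 0.5
done
```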
The main issue for most home users is DHCP; normally the router provides DHCP and a lot of them are not compliant with the spec (making it tricky to set up a second DHCP just for netbooting). The solutions I know are using a separate network, or running your own DHCP for everything (my preferred solution).
You also need to point the Pi to the server via DHCP, which you can do in dnsmasq like so:
dhcp-option=43,Raspberry Pi Boot
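For context, the fuller proxy-DHCP setup usually looks something like this in dnsmasq (subnet and TFTP root are illustrative); with proxy mode the router keeps handing out addresses and dnsmasq only supplies the boot information:

```
# Proxy-DHCP: don't serve addresses (port=0 also disables DNS), just boot info
port=0
dhcp-range=192.168.1.0,proxy
pxe-service=0,"Raspberry Pi Boot"
enable-tftp
tftp-root=/tftpboot
log-dhcp
```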
I wrote up a more complete guide here a while back: https://adamfontenot.com/post/how_to_netboot_a_raspberry_pi_...
The other really cool thing you can do with this is install qemu emulation support for arm on the server and then use systemd-nspawn to chroot into your Raspbian installation. Then for any commands that don't need access to the Pi's ports, you can run them directly on the server. It's really nice to do updates this way, much faster than looping them through the Pi and its slow processor.
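Assuming a Debian-ish server and the Raspbian image mounted at /srv/raspbian (both illustrative), the setup is roughly:

```
# qemu user-mode emulation + container tools
sudo apt install -y qemu-user-static systemd-container

# Make ARM binaries inside the image runnable via binfmt_misc
# (newer qemu-user-static packages may make this copy unnecessary)
sudo cp /usr/bin/qemu-arm-static /srv/raspbian/usr/bin/

# "chroot" into the Raspbian tree and run updates at server speed
sudo systemd-nspawn -D /srv/raspbian apt update
sudo systemd-nspawn -D /srv/raspbian apt -y upgrade
```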
Love following your projects, by the way.
I have a new project that's almost finished that was a lot of fun to work on, writeup coming soon! Sneak peek, it's what produced these test photos: https://m.imgur.com/a/mDR8y
The brand new Pi 3B+ ships with the latest bootcode and will boot from USB mass storage or PXE by default. Their PXE is a bit nonstandard but it's close enough that anyone who's PXE booted a PC will understand it.
The older Pi 3 supports USB and PXE booting, but has those modes disabled by default. There's a bit you can set in one-time programmable memory to enable it permanently. PXE booting has some quirks in this mode and doesn't get along with switches that take more than a second or so to activate ports.
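The documented procedure for setting that OTP bit on a Pi 3, as I understand it (it is permanent and cannot be undone):

```
# Add the flag, reboot once; afterwards it can be removed from config.txt
echo program_usb_boot_mode=1 | sudo tee -a /boot/config.txt
sudo reboot

# After the reboot, check the OTP; 0x3020000a means USB boot is enabled
vcgencmd otp_dump | grep 17:
```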
Earlier models do not support anything but SD on their stock bootcode.
In all cases, an SD card containing nothing but the latest bootcode can be inserted, which brings the new features to the older models, allowing them to boot the rest of the OS from whatever.
Obviously it's bad to lose power while on, but that's crazy.
Not the end of the world, but if that happens almost every time you unplug something (possibly accidentally), it can be a huge nuisance. I use my Pi with OctoPrint, so I'm not even doing many writes (only on file upload).
In my experience the Samsung SD cards would be corrupted around 30% of the time when the power was pulled.
In my experience the Toshiba Exceria M302 (built for action cameras) scores higher in longevity than the Samsung EVO. I've abused the Toshiba card trying to get it corrupted, and I've yet to see it happen.
That doesn't mean anything other than that it runs arm64. In every other way it's nowhere close to the A11.
I wish there was a Raspberry Pi 4 with 2x USB 2.0, 1x USB 3.0, 1x USB-C 3.0, Gigabit Ethernet and 802.11ac.
That all being said, I feel like what you're looking for is a small-form-factor computer, not a development board. :)
2x A72 + 4x A53 (~4000 in Geekbench)
mini PCIe connector/PCIe x4 (supports SATA expansion card)
Rockpro64 2GB board, $59-65
Rockpro64 4GB board, $79
Rockpro64-AI 4GB board, $99
The usage was just as a boot drive, no user data: EFI FAT, ext4 /boot, Btrfs / with zstd and ssd_spread as the mount options.
FAT will mount ro or rw, but any writes fail
[140718.615921] print_req_error: I/O error, dev mmcblk0, sector 2048
[140718.615998] Buffer I/O error on dev mmcblk0p1, logical block 0, lost sync page write
[142132.340226] f28s.local kernel: mmc0: Card stuck in wrong state! mmcblk0 card_busy_detect status: 0xf00
The blkdiscard command succeeds without error, but also doesn't actually do anything, all data is still there.
Looks like my RPi 2 cluster is getting an upgrade!
Depending on your location.
Large block read speeds are usually pretty accurate, but manufacturers take quite a bit of liberty with their performance claims. And random I/O is pretty terrible in almost every case.
Remember that these types of cards are _usually_ optimized for large file I/O since they're used in dashcams, GoPros, and the like—use cases that are vastly different from a general computing device running Linux!
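If you want to measure that yourself, a small-random-I/O run with fio (file path, sizes, and mix are illustrative) looks something like:

```
# 75/25 read/write mix of 4k random I/O for 30s, bypassing the page cache
fio --name=sd-randrw --filename=/home/pi/fio.tmp --size=256m \
    --bs=4k --rw=randrw --rwmixread=75 \
    --ioengine=psync --direct=1 --runtime=30 --time_based
```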
> kind of makes the tests useless if you want to figure out which cards are faster
But...which card is faster in a USB3 UHS-III transfer isn't useful information for a Raspberry Pi benchmark. It would certainly tell you which cards are faster, but the info wouldn't be directly applicable to what the tests are trying to measure.
I got a cheapie SD card, which slowed my phone noticeably. On Amazon, reviewers say (as here) that random RW is the key metric, and (at that time) the SanDisk cards were the best.
A solution would be to have some filesystem in RAM, and write new changes/deltas to the SD card either periodically or at shutdown.
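One way to sketch that, assuming the writable data lives in a single directory rather than a whole filesystem (paths and interval are illustrative):

```
# At boot: seed a RAM-backed copy of the hot data from the card
mkdir -p /mnt/ramdata
mount -t tmpfs -o size=128m tmpfs /mnt/ramdata
rsync -a /sdcard/data/ /mnt/ramdata/

# Periodically (cron) and again at shutdown: flush deltas back to the card
# */10 * * * * root rsync -a --delete /mnt/ramdata/ /sdcard/data/
```

The trade-off is that anything written since the last flush is lost on power failure, so this only suits data you can afford to lose a few minutes of.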
Perhaps nice to add comparison to network storage/boot?
I re-tested a couple of the cards (notably, the Sony and Kingston cards), and they were just as painfully slow. The benchmarks took like 20 minutes (with the faster cards they only take 3-4 min).
If you use knockoff cards (no brand at all, like one that came with one of my cheap drones), the performance is so abysmally slow you might think the Pi locked up for a few hours.
It's not a terrible choice for a starter card, but you can get the Evo+ cheaper for the same capacity, if you can stand to flash it yourself :)