Hacker News

When done properly, SD cards are reliable enough. For my digital signage product, the custom OS is heavily optimized to avoid IO as much as possible. There are thousands of devices, some running for more than 4 years now, without any SD-related problems.
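Roughly, the trick is to mount the root filesystem read-only and keep anything that must be writable in RAM. A minimal sketch (illustrative /etc/fstab entries; partition names and tmpfs sizes are assumptions, not my exact setup):

```
# /etc/fstab -- root mounted read-only so the SD card sees almost no
# writes; volatile paths are redirected to RAM-backed tmpfs.
/dev/mmcblk0p2  /         ext4   defaults,ro,noatime  0  1
tmpfs           /tmp      tmpfs  defaults,size=64m    0  0
tmpfs           /var/log  tmpfs  defaults,size=32m    0  0
tmpfs           /var/tmp  tmpfs  defaults,size=16m    0  0
```

Anything that genuinely needs to persist (e.g. cached content) can then go on a separate partition that is remounted read-write only while updating.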

I also really like the fact that the Pi is pretty much stateless. So switch SD cards and you can be mostly sure everything still works. If storage is bundled, that's a lot more difficult.




For most use cases, SD cards on Raspberry Pis are not reliable enough.

I realize you may be successful running signage devices, but most people simply want to be able to use a common class 10 card (e.g. from Verbatim, Kingston, SanDisk) with a standard OS setup and leave the Pi running without it corrupting their files. In my experience, this simply isn't possible. The OpenHAB forums are a good reference for this.

The more regular the disk activity, the worse it gets.

We went this route in our office, and every 3-6 months we had corruption on each of many dozen Pis. Once we switched to USB drives (spinning or SSD), the issues were sorted out.


> but most people want to simply be able to run common class 10 card (e.g. from Verbatim, Kingston, Sandisk), using a standard OS setup and leave the Pi running without having it corrupting your file

To be fair: try the same with cellphones. I've had a shitload of micro-SD cards go bad on me, and yes, genuine SanDisk, not cheap fakes off of Amazon.

microSD cards are the cheapest crappy flash chips you can get with a microcontroller strapped in front for error correction.


I don't even put SD cards in my phones anymore due to corruption. I always bought the best SanDisk cards I could find. Luckily my Pi, which runs probably 8 hours/week, hasn't failed, but I keep the drive imaged just in case.


Cheap SD cards are going to return less than desirable results. Higher quality cards such as SanDisk Ultra Mobile or Samsung Pro series should last quite a while when used as a normal "disk" on a system like this. As an added benefit, using a card like a Samsung Pro will also get you MUCH better file system speed for both reads and writes on common Linux file systems like ext4. The controller inside a Samsung Pro series card is much more advanced than the one in run-of-the-mill cheap Kingston cards.

If you really need durability, there's also a handful of SLC (single level cell) SD cards on the market which will last an order of magnitude longer in terms of writes as compared to consumer level MLC/TLC cards. SLC cards aren't cheap but you get what you pay for.


> Higher quality cards such as SanDisk Ultra Mobile or Samsung Pro series should last quite a while when used as a normal "disk" on a system like this.

Nope, it's mainly marketing. Cards from those brands are fundamentally not great for sub-million-unit purchases. We tried to use some as the root fs for a fairly high-availability appliance. If you don't want one in twenty going out in a month (and we figured out a way to make them go out after 8 hours of writes at 8 kb/s with a specific access pattern, without power drops), then you need 'industrial' SD cards. Those vendors will treat you like adults when it comes to support, will have lot codes that actually map to internal changes, etc. Trying to sell a product built on consumer SD cards (yes, even the ones that call themselves 'Pro') is basically a futile exercise.


Sure, but I don't have the time to highly optimize a custom OS just to put up an MQTT server in my parents' house. I want to do the work once and have it Just Work after that.


Out of interest, how did you approach disk encryption? That would be my concern with removable storage. The other thing I'm keeping an eye on is Arm's LittleFS for storage longevity.


It's possible to write 256 bits of user data to the Pi's internal OTP memory. You can store a non-changeable key there. Of course, that key is then easily readable by anyone who can execute commands on the Pi.
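For anyone who wants to try this: as far as I know the firmware exposes the OTP via `vcgencmd otp_dump`, and the customer-programmable region is rows 36-43 (8 x 32 bits = 256 bits) per the Raspberry Pi documentation. A sketch of extracting those rows into a key, using a made-up sample dump so the snippet is self-contained:

```shell
# On a real Pi you'd capture the dump with:  vcgencmd otp_dump
# Here a fabricated sample stands in for that output:
sample='35:00000000
36:deadbeef
37:0badc0de
38:00c0ffee
39:12345678
40:9abcdef0
41:0f0f0f0f
42:a5a5a5a5
43:cafef00d
44:00000000'

# Concatenate the customer rows 36-43 into one 64-hex-digit (256-bit) key:
key=$(printf '%s\n' "$sample" | awk -F: '$1 >= 36 && $1 <= 43 { printf "%s", $2 }')
echo "$key"
```

Keep in mind that programming OTP is a one-way operation (bits only go from 0 to 1), so it's worth triple-checking the value before writing it.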

LittleFS (recently posted here) is indeed interesting, although from the documentation I couldn't figure out

1) if it works for larger devices, as it seems mostly intended for sizes of a few MB. I might be totally wrong about that.

2) if/how it handles corruption of complete blocks(?) of SD memory. As far as I understand, an SD card doesn't necessarily use 4K block sizes, and corruption might hit multiple blocks at once. If LittleFS stores the previous and next versions next to each other, I don't think that'll help in that case.


Oh, there's an OTP in the MPU? I didn't know that.

Yes, with 1) I'm not sure either, but there's some sporadic discussion in the GitHub issues with various attempts. From what I remember, certain scenarios currently cause all blocks to be re-scanned, which will obviously get slower with capacity.

With regard to 2), I was under the impression that each block is allocated according to the wear-levelling algo[0], so I don't think they're contiguous, if that's what you mean. Also, block size is configurable.

I tried implementing LittleFS on a 32MB SFDP flash with Mbed but couldn't get any decent performance out of it. It kept spewing out "bad block" errors and re-allocating everything. I hope it's a more viable option in the future though, as I do like the design.

[0] https://github.com/ARMmbed/littlefs/blob/master/DESIGN.md


I posted it (though I'll freely admit Hackaday was the source I discovered it through).

Personally I want to use it as an internal filesystem for various projects (like games - instead of zip/pak files, which are really read-only).

Even littlefs over HTTPS might be an interesting navel-gazing project :D


I store the data on encrypted USB flash drives and store the key in eCryptfs on the SD card.
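For comparison, here's roughly how the same idea looks with a LUKS-encrypted USB drive and a raw keyfile (device names and paths are illustrative, and the hardware-dependent steps are shown as comments since they need real hardware and root):

```shell
# Generate a 256-bit keyfile; this is the part that would live inside
# the protected area (e.g. the eCryptfs mount) on the SD card:
dd if=/dev/urandom of=/tmp/usb.key bs=32 count=1 2>/dev/null
chmod 600 /tmp/usb.key

# Then, on the Pi itself (run as root; /dev/sda1 is the USB drive):
#   cryptsetup luksFormat /dev/sda1 /tmp/usb.key
#   cryptsetup open /dev/sda1 datadrive --key-file /tmp/usb.key
#   mkfs.ext4 /dev/mapper/datadrive
#   mount /dev/mapper/datadrive /mnt/data
```

With this split, the USB drive is useless without the SD card, and the SD card alone doesn't expose the data.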



