Considerations for a long-running Raspberry Pi (dzombak.com)
510 points by ilikepi 11 months ago | 341 comments



I started buying Lenovo mini PCs instead, 18cm x 18cm x 3cm so it's still really small.

And you can get them dirt cheap nowadays; they have proper casing, cooling, etc. https://psref.lenovo.com/syspool/Sys/PDF/ThinkCentre/ThinkCe...

I have one right next to me, i5-8500T, 32GB RAM, 2x SSDs and currently 5W at idle with powertop auto-tune https://wiki.archlinux.org/title/powertop
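If you want those powertop tweaks to survive a reboot, a one-shot systemd unit is a common approach. This is a sketch; the binary path (/usr/sbin/powertop here) varies by distro, so check `which powertop` first.

```ini
# /etc/systemd/system/powertop.service (sketch; adjust ExecStart path to your distro)
[Unit]
Description=Apply powertop --auto-tune at boot

[Service]
Type=oneshot
ExecStart=/usr/sbin/powertop --auto-tune

[Install]
WantedBy=multi-user.target
```

Enable it with `sudo systemctl enable powertop.service`.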


No GPIOs, no I2C, no SPI. If you are purely in search of a small, light server, this is a suitable choice, but Raspis are popular also for their more unusual (for consumer/office devices) I/O.

Granted, you could add a serial connection to a microcontroller etc., but then the solution won't be as elegant.


Is it elegant to use GPIO on an RPi?


Sorry I don't really understand the question, but:

- if you ask whether the way of using GPIOs on a Raspi is elegant, I would say, yeah quite. Python and gpiozero are a good elegant way to go that route

- if you ask whether it is elegant to use GPIO on a Raspi in principle, I can't answer your question because that depends on what you use it for. There are extremely elegant ways and there are ugly hacks that shouldn't exist.

That being said, if you quickly need analog input you can just add an ADC to the GPIO and you have it. That is more elegant than having to add a USB peripheral.


Last year I migrated my Kodi media player from a RasPi 4 to an N3350 mini PC and didn't look back; more recently I also moved my NAS (N5105) and services (3215U) machines to mini PCs and unlocked Chromebooks I got used on eBay or at flea markets. They're cheap and the computing power is on another planet compared to the Pi, while still maintaining low power consumption.


I do the same, but with Dell 7060s.

There are thousands of them coming off lease for sale here in Australia.

8500T or 8700T chips are fantastic. They can officially run Windows 11 and have hardware transcoding built in, so they're great for a Plex server!

I’ve upgraded one of mine with 2x 2TB SSDs and 64GB RAM.

Never any issues, and it runs 24/7.


> There are thousands of them coming off lease for sale here in Australia.

Where would I find one and what do they go for?

As a Plex server, do you know if they can handle hardware transcoding?

Thanks!


I have a Dell Optiplex 7050 I'm using as a video streaming server, and even that generation has hardware transcoding capabilities for all modern formats except AV1, fast enough for basic streaming too: six simultaneous 1080p video streams in personal testing before any lag or stutter begins to creep in. And I know the Intel integrated GPUs of the generation after (so what would be in the 7060s, I believe) got something like a 4x boost in video transcoding performance as Intel figured things out and optimized, so yeah, a 7060 would probably be really good.


Same. Bought a Dell refurb and I’ve had so many fewer hassles. The SD card was perhaps the biggest source of issues and after factoring the cost of outfitting a RPi with something like an SSD, it just wasn’t cost effective anymore.


This I never understood. I have 3 different RPis that have been running since they first became available to buy: a 2, a 3 and a 4. All with the OS on microSD and no optimisations like writing log files to tmpfs or similar. It's plain Ubuntu Server for ARM.

And in all these years, I had maybe one microSD going read-only on me. Cloned that to a fresh one, did an fsck and the Pi was up and running again. That’s it. No other issues despite various sudden power losses etc.

I don’t understand where these rumours about bad reliability of SD cards in Pis come from.


Pi 1 A/B were really fragile, but since the Pi 2 this problem is mostly solved. Still, if I need to write a lot I would use an external drive. We have >300 Pis running 24/7 on 52 sites all over the world. Even though we use no-name-brand microSDs, we had only 2 failures from a broken SD card in 4 years. A partner company was using our hardware as a basis for something; after a month they asked us what SD card brand we were using, because they had a high failure rate. Our secret: we use a self-made distro with minimal writes, while they used default Raspbian and were writing to the SD card constantly.


Maybe you're lucky or buying the right SD cards?

I've always used noatime for the filesystem, journald set to RAM logging, etc. Usually everything seems fine until a reboot triggers an fsck. Most were ext3/4 filesystems.
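For reference, the journald-to-RAM part is a small config change (the size cap below is an arbitrary example); noatime goes in the options column of the root entry in /etc/fstab.

```ini
# /etc/systemd/journald.conf: keep the journal in RAM only, capped at 32 MB
[Journal]
Storage=volatile
RuntimeMaxUse=32M
```

Restart journald (or reboot) for it to take effect.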


The fact that you can run just about any linux distro is also nice when compared to ARM hardware.


Yup, this is the reason I refuse to buy any more ARM SBCs.

You're always dependent on someone building a bespoke Linux image for your particular device into perpetuity, which rarely happens.


That's a good point, but the array of devices supported by the DietPi team is extensive: https://dietpi.com/


Correct me if I'm wrong, but I believe DietPi isn't maintaining the necessary kernel forks many of the devices require, and is dependent on upstream kernel fork changes from manufacturers/vendors.

Most of those boards can run mainline Linux, but certain features and integrations only exist in external forks that weren't put into vanilla Linux.

The end result is either that you're still dependent on someone maintaining bespoke Linux kernel forks in perpetuity, or you ship relatively up-to-date kernels that lack features, drivers, etc that only exist in the forks.


Similarly, I saw some YouTube videos just the other day talking about Intel's (newish?) N100 chip. You can pick up a small mini PC for under $150 new, it runs Windows or Linux, and it's on par speed-wise with the 8500, IIRC. Plenty fast for small server use cases, although you do give up the raw GPIO pins.

Also of note, it's x86, which has me interested as a field-portable ham radio computer. Most software runs on rpi (my current setup), but x86 is universally supported by stuff. Just need to figure out the monitor situation.... maybe vnc from an ipad? :)


I did a portable NUC for a while, just powered off some no-name li-ion backup battery (a slightly bigger one because you need the 12-20v out).

There are plenty of decent portable monitors out there. I ended up with an MSI one, but I think any laptop brand can make a monitor (I also had some no-name one which had a nicer resolution but ended up breaking pretty quickly).

Anyway, I eventually concluded that I was wasting time re-inventing the laptop. It was nice to be able to put the unnecessary extra stuff, like the monitor and battery away, when using it in desktop mode (saving desk space). And the ability to pick a proper keyboard is nice.

But buying part-by-part has some downsides. You pay extra because this is all niche stuff, it doesn’t slip as nicely into a backpack (otoh if I only needed to bring the NUC somewhere, it could fit in my cargo shorts or bike under-saddle bag which was neat). And random integration things—you need to bring all the right cables, the battery probably doesn’t talk to your OS so you have to check it manually.

I also did SSH and VNC from an iPad, it works fine, you’ll probably want a keyboard and end up with something like a laptop, but extra latency and your SSH/VNC client will waste some pixels. I also did SSH and VNC from my iPhone, got a little stand and everything. That combo actually got interested questions in a coffee shop, so nerdy niche computing mission accomplished, haha!

So, I hate to ask the boring question because it was a really fun project and I got years of use out of it, but: Why not a laptop?


The best solution would probably be one of those micro "laptops" like a GPD Pocket. I think it has the ports needed... a few usbs really.

My current setup is VNC from ipad to the rpi for ham radio stuff. It works well enough, with the ipad being keyboard & screen in a small space. And lets me take an ipad with me for other uses. But the rpi4 is so darn slow at a full desktop experience, especially when then feeding it over a network connection.


I got an N100 fanless computer for about $100 with the right AliExpress deals, with 4x Intel i226-V (2.5GbE) NICs. It just needed RAM and an M.2 drive. The total power draw is also quite low; I've seen people measure 7W total from the wall. At about $0.32/kWh, this is saving me a lot over an eBay computer.

The CPU is enough for my needs, currently running Proxmox with OPNsense. And x86 was a major draw for me over the RPi.
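To put the electricity claim in perspective, a quick back-of-envelope; the 30 W figure for a typical used desktop is my assumption for comparison, not the poster's:

```python
# Rough yearly electricity cost at $0.32/kWh.
# 7 W is the measured idle draw quoted above; 30 W is an assumed
# idle draw for a used eBay desktop.
RATE = 0.32              # $ per kWh
HOURS_PER_YEAR = 24 * 365

def yearly_cost(watts: float) -> float:
    return watts / 1000 * HOURS_PER_YEAR * RATE

print(f"${yearly_cost(7):.2f}")   # $19.62
print(f"${yearly_cost(30):.2f}")  # $84.10
```

So at those rates the gap is on the order of $65 a year, which pays for the mini PC fairly quickly.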


You can get a USB-powered portable 10-inch HDMI monitor on AliExpress for less than $100.


Does anyone know of a low power, mini PC that supports 2 or even 4 SATA SSDs (both connector and physical space wise)?

Most, incl. the Lenovo above, seem to support at best 1x M.2 + 1x SATA.

The best choice I have found is using a N100DC-ITX mainboard with a generic ITX case, and those are huge.

I am looking to replace my Raspi / USB SSDs combination.


I recently migrated my NAS from a Mini-ITX mainboard to a mini PC. Although there are M.2 cards with multiple SATA ports on board, they would require either keeping the PC open or mounting it into a bigger case with multiple disk bays. I chose instead a more costly but potentially more stable solution with an IcyBox IB-3780-C31 enclosure (8x SATA to USB 3.1), which is well supported by both Linux and FreeBSD; that matters, as I use XigmaNAS. The mini PC I used was the only one I found with a USB 3.1 port on board; its CPU is an N5105, which is more than enough for 4 ZFS mirrors. Feel free to ask if you need more information.


I've lost data by having my home directory on an external drive connected via USB (with an ext4 file system). Then I found a comment on this site saying that Linux's USB driver is known to be unreliable when layering a file system (terminology?) over it.


These types of enclosures are going to be using UAS (USB Attached SCSI), which is relatively new. It should deliver a more reliable and manageable experience for external drive enclosures in general (e.g. proper addressing of which drive you want to talk to). If nothing else, it's a different code path.

I've been using a multi-drive UAS enclosure over USB3 as the storage for a VMS for some time, it feels a little weird given USB's history but it has been very reliable and easily saturates the drives from a bandwidth perspective. I'll probably go the same route for bulk storage in the future, since it's a lot easier to get small machines with USB than small machines with 4+ SATA channels.


Which UAS enclosure are you using? Would you recommend it? Any quirks to know about?


I use https://www.startech.com/en-us/hdd/s352bu313r

The internal RAID is neat and I used it for a while, but lately I've had it configured as a JBOD and I let the VMS manage which drive it puts things on. But probably the best thing to do would be to configure it as JBOD (it has DIP switches to choose the mode) and use it with whatever software volume manager you prefer, lvm or btrfs or whatever.


In theory, USB could be avoided with DIY JBOD:

  - Thunderbolt to NVME M.2 enclosure
  - M.2 to SATA breakout
Or:

  - Thunderbolt to NVME M.2 enclosure
  - M.2 to PCIe slot
  - LSI HBA PCIe card to SATA


What I do is I keep using the same (Samsung T7) external drive I lost the data on, but now I use it only to transfer data from one computer's internal drive to another computer's.


The Dell Optiplex 7050 I have can take two SATA SSDs or one SATA HDD and one SATA SSD (my current config). Its processing power is pretty great (esp compared to an RPI), plus it's reasonably sized, sturdy, and it also has a low TDP. I use it as a server for several services (A discord clone, Nextcloud, my blog, video streaming). Got it for $20 at a surplus store, plus $80 for the HDD and $60 for the SSD (I went for very high quality ones) lol.


There are M.2 adapters that break out to 4-6 SATA ports.

Ex: https://a.aliexpress.com/_Ey8htlr


What is your use case? You can fit a few terabytes in the internal slots; if the additional storage were in the form of USB drives, would that be a problem?


Some of the Lenovo models, e.g. the m720q, have a PCIe slot.

One option would be to use a riser and a PCIe card that supports multiple M.2 drives.


There's a wonderful world of Mini-PCs and homelabs out there. The new N100 based ones are amazing for under $200.


The Ryzen ones are pretty amazing. I have one with an 8-core, 16-thread “Cezanne” chip that performs all kinds of tasks (game servers, Jellyfin, etc.).


What's your definition of dirt cheap? These don't seem cheap by any means.


Certainly not compared to the Pi.


Great advice. I also use Lenovo Mini PCs in place of SBC's on some projects and they can be very capable.


Thanks. That is super helpful info on a post about Raspberry Pis.

The number of posts on this site that people want to turn into “show & tell” is very high.

Just make a Show HN post about whatever you want to show off and let the voters decide the fate of the post.


Another option in this vein is a used Google Chromebox. They're rated for 24/7 operation in dusty, high-vibration environments. You can change the bootloader and install any distro. Low idle power, standard SODIMM and NVMe drives, super cheap used, USB-C and Ethernet.


Any mini PCs with ECC RAM yet?

Would love to make a small all-flash NAS, but the only one I know about does not have it: https://www.asustor.com/en/product?p_id=80


$150 PC Engines APU2 (RIP) has 4GB ECC soldered RAM, SATA, mSATA and mPCIE.

$650 QNAP TS435XeU 1U NAS supports 4GB-32GB ECC SODIMM, 4xSATA, 2xNVME, based on Marvell/Armada CN9130 Arm SoC. Debian for Arm64 can be installed via serial console.

Some 4x4 Ryzen Embedded V1000 mini PCs support ECC SODIMM, e.g. https://www.sapphiretech.com/en/commercial/fs-fp5#Specificat..., possibly ASRock.


I have a 32GB DDR5 ECC in a max spec https://www.solid-run.com/industrial-computers/bedrock-v3000... - it's headless.

You can go to 96 GB.

With displays: https://www.solid-run.com/industrial-computers/bedrock-r7000...


Mini PCs (or laptop motherboards) also have normal SSD storage, and are thus much more reliable than microSD MLC flash cards.

There are ways to make Pi's more reliable, but it usually requires a lot of extra parts.

The ARM SoCs do not have an IME though... ;)


> The ARM SoC do not have an IME though...

RPi 4 boots from GPU/VPU running closed firmware (incl. Microsoft ThreadX RTOS), which retains full control over the OS application processor, https://www.fsf.org/resources/hw/single-board-computers

  Boards based on the Broadcom VideoCore 4 family, such as the Raspberry Pi, require non-free software to startup, although signature checks are not enforced. A free proof-of-concept replacement firmware has been developed, but it is not in a usable state, and development has halted.

  By default, the GPU requires a blob running in this same startup firmware. However, Broadcom also supplies an "experimental" free software stack, which could run without blobs, if the startup firmware were free.


A boot-loader or GPU driver blob is different from an entire copy of Minix OS running on the Intel CPU that will boot-loop the hardware every 20 minutes if wiped.

RISC-V also offers something unique over Pi ARM chips, but there's little economic incentive to stop the shenanigans. ;)


Comment by the developer who attempted to create open firmware for RPi, https://github.com/christinaa/rpi-open-firmware/issues/37

> a lot of corners were cut to save time leading to what I believe is poor ARMv7+ Cortex IP integration (GIC, TrustZone, etc). So I stopped working on it. If those things were not the case (GIC working, "TZPCs" working, security working as intended, instead of NS forced to high on bridge, at least in my understanding) I would still work on it ...

ARM isn't a second class citizen on this platform, it's a third class citizen since BCM2709 (again this is an opinion) ... the features I wanted to tinker with the most are absent by design (cutting corners) and I'm not willing to resort to SW emulation of them through clever uses of the VPU.

Hopefully RPi 5 silicon offers a better foundation for open firmware.


Here in Europe I don't see any of them for less than $100. And for a small server, something like an Orange Pi 3B (with NVMe etc.) would be reasonable at half the price and half the power.


Interesting, I was always in the Mac mini camp.

It'd be cool to do a comparison of a new Lenovo mini vs a used M1 Mac mini; they should be the same price.


I imagine it comes down to whether you want macOS or not. Seems like a niche choice when comparing to something like the Pi.


When I switched from an Intel Mac to the M2, I was easily able to get my Linux and Windows VMs running again.

As well as my Intel apps. It was pretty smooth, and the speed and power efficiency gains are obvious. My MacBook Air draws about 5W total while the DAW is cool as a cucumber.


Are the SSD and RAM replaceable on these?


Not OP, but I am in the process of upgrading my Lenovo ThinkCentre M93p. It uses laptop RAM, and the hard drive and RAM are simple to replace. I am far from a hardware guy.


Yes to both

2x DDR4 SO-DIMM (notebook) RAM slots. No ECC though (at least on this model).

And you can have 2 storage drives: one 2.5" SATA and one M.2 NVMe.


Yes, almost everything is replaceable https://download.lenovo.com/pccbbs/thinkcentre_pdf/m910q_ug_...

Can even add HDMI, serial ports etc.


How do you monitor the power consumption of your pc?



The article's first advice is to enable journaling mode on the FS.

The first advice must be to mount the FS in read-only mode, mount /var in memory, and forward all logs to one node (which may not be an RPi, but something with a proper UPS and nut running). Power loss becomes absolutely benign if your FSes are read-only or temporary.
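A minimal /etc/fstab sketch of that layout; the device name and tmpfs sizes are assumptions, so adjust them to your system:

```
/dev/mmcblk0p2  /     ext4   ro,noatime          0  1
tmpfs           /var  tmpfs  defaults,size=256m  0  0
tmpfs           /tmp  tmpfs  defaults,size=64m   0  0
```

Note that /var on tmpfs means package databases and the like reset on every reboot, which is fine for appliance-style images but not for a general-purpose install.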

It is overkill if you have one RPi, but the author says he uses multiple RPis all around the house.

Also, it's a good idea to have A/B system partitions and upgrade the system with a full partition rewrite, then switch the active one. This way your system will always have one good system partition, even if the new version has fatal bugs, and recovery becomes trivial.

I've been using several small/single-board PCs in different roles this way for 20+ years with great success.


The author links to read only advice a couple of lines down.

https://www.dzombak.com/blog/2021/11/Reducing-SD-Card-Wear-o...


In addition to /var, tmpfs should also be used for /tmp and similar. That should lengthen the SD card's lifetime immensely.
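On systems that don't default to it, a single fstab line is enough; the size cap below is an arbitrary example:

```
tmpfs  /tmp  tmpfs  rw,nosuid,nodev,size=256m  0  0
```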


Isn't this (tmpfs for /tmp) the default setup for most OSes, and surely for Raspberry Pi's homegrown OS too?


If I had to guess, I would assume they chose not to use tmpfs because the earlier Pis had very limited RAM. With a 4GB or 8GB pi 4 or 5 this should not be a problem.


It seems it is not the default for Raspbian.


TIL! One would think if any distribution had it by default, it would have been Raspbian.


Maybe left off since some of the models have very little memory to spare for a ramdisk?


I think all the usual desktop distros don't use tmpfs for either by default. I don't see the benefit in this with modern hardware.


Fedora uses tmpfs for /tmp. I think it still makes a lot of sense to use tmpfs for a heavily written-to transient file system.


Arch uses tmpfs for /tmp.


Arch has you set up your own fstab so pushes the choice to the user?


Actually the Arch installation guide has you run `genfstab -U /mnt >> /mnt/etc/fstab`, which basically copies over whatever was mounted in the live environment (minus the /mnt prefix). I'm pretty sure tmpfs for /tmp was there by default last time I installed.


During installation the guide tells you to run a program, which generates an fstab based on current mounts. So by default it will configure /tmp the same way it’s configured in the live cd.


I ran out of space on NixOS with tmpfs; it runs all builds in /tmp, so my swap ran out.


They should use /var/tmp instead.


Surprisingly often not!

Be careful to check inside your docker containers also. It can end up different.

Here's a good article, btw, with someone advocating for moving /tmp to tmpfs: https://ubuntu.com/blog/data-driven-analysis-tmp-on-tmpfs


A long time ago, it used to be. It doesn't seem to be now, and I have no idea why the distros changed. (Maybe it's due to the mv semantics? But I thought people considered creating a file in /tmp and then renaming it to be bad practice anyway.)

In fact, I didn't notice the change until now.


Copying or moving through tmpfs silently loses some file metadata, e.g. extended attributes or high-resolution timestamps.

Copying through /tmp is a frequently used method for transferring files between users on a multi-user computer.

It is said that the Linux kernel will soon include an improved version of tmpfs that allows user extended attributes, within certain limitations.

When that happens, one of the most annoying misfeatures of tmpfs, one that has persisted for far too many years, will finally be gone.


Not the default in Debian, because of low-memory devices such as the RPi.


Anything transient and writable must be in memfs/tmpfs/whatever it is named in your OS, of course.

Logs go to a log server; if the system needs non-volatile writable storage, use NFS or a second storage device, depending on requirements.

Of course, it is too much hassle for a single such system (though I started using it from the very beginning out of curiosity), but if you have many single-task small devices it is very convenient.


> have A/B system partitions and upgrade system with full partition rewrite and changing active one

What’s your upgrade process like? How do you make the new disk image? Do you log into each device to upgrade it, or do you have automation?


I have one "beefy" server/NAS (now an EPYC2 on a SuperMicro platform, because they are cheap used on AliExpress; before that it was Intel E3-12xx-based systems, two or three generations old, always bought used). It is not my router, but the NAS for all my data, plus NFS and build server.

I'm using FreeBSD, and it has a script for preparing such installations, named NanoBSD. To be honest, it is nothing special: build the system with a provided config (to strip it down; for example, a full FreeBSD installation includes the toolchain, which is a waste of space on "embedded" systems), mount a file as a loop device, create the FS, install the system into this FS by standard system means, and add the needed packages.

I build the system once, make several images (as each device needs its own set of packages, of course), log in to each device via ssh, and run a simple sh script which detects the current active partition (by simply looking at the output of the mount command, i.e. which device root is mounted from), then runs «dd if=/net/images/$hostname.img of=/dev/da0p$otherpart bs=128k», sets this updated partition as "bootonce" in the boot manager and reboots. The last startup script in the boot sequence checks network availability and liveness of sshd, and if these simple checks are OK, sets this partition as "alwaysboot" (these are mostly UEFI features). If something goes wrong, one power cycle and the device boots from the previous partition.
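The slot-selection step can be sketched like this. The partition names are hypothetical (A/B roots on p2 and p3 of the same disk), and the quoted script derives the current root from `mount` output rather than taking an argument:

```shell
# Given the currently mounted root partition, print the inactive A/B slot.
other_part() {
  case "$1" in
    *p2) echo "${1%p2}p3" ;;
    *p3) echo "${1%p3}p2" ;;
    *)   return 1 ;;       # unexpected layout: refuse rather than guess
  esac
}

other_part /dev/da0p2   # prints /dev/da0p3
```

The real script would then dd the new image onto the printed partition and mark it bootonce.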

I don't have enough devices to automate the "login and call script" part :-)


> First advice must be to mount FS in read only mode

How then do you update the system or install new software?


Remount in read-write mode when you specifically want to make changes.

  sudo mount -o remount,rw .....


As you'd update any embedded device: prepare a new system image ("firmware") and "re-flash" it.

Of course, devices don't build their own systems.


There is no need to mount read only if you forward the logs somewhere else.


Maybe. Maybe not. Years and years ago I was getting corrupt file systems even if the system was always shutdown correctly.


Note that mounting /var in memory might exceed your device's memory if you're using something like Docker. You might have to move /var/lib/docker to secondary storage.


> have A/B system partitions and upgrade system with full partition rewrite and changing active one

Are there any solutions available for this?


There appears to be new support for A/B partitions in the bootloader that might help: https://www.raspberrypi.com/documentation/computers/raspberr... and https://www.raspberrypi.com/documentation/computers/config_t...


I'm using FreeBSD, not Linux, for all my headless systems, and FreeBSD has had the NanoBSD script for such installations practically forever.


Abroot and OSTree


Might be worth considering a different journaling fs, like nilfs2 for filesystems you need r/w.


Back in 2011 I made a commercial product that ran on the earliest plug computers from Global Scale Technologies. I only sold 20 of 'em, and every single one was being returned with SD card corruption problems. I had to quickly pivot to keeping the rootfs read-only. I've been a fan ever since.

Incidentally, that early commercial product was a home security product with a very small amount of home automation. I released it into open source under a new name in 2021, and it now runs on the Jetson series SBCs (https://github.com/hcfman/sbts-install), now including high-end YOLO models as triggers.

Because it was intended to be a standalone product, it supported HTTPS with a GUI wrapper around all of the certificate operations. This still exists in my open source version, making it easy to use self-signed certificates for intra-device REST calls.

But I've kept and expanded upon the multi-partition memory-overlayFS approach, and the installation of this system first asks you to install the sbts-base system, which sets up the multi-partition memory overlayFS so that others can use it as the base for their own systems.


I had a similar experience when I hacked a $5 Pentium 2 PC into a fanless (whether it liked it or not) and noiseless workstation. Replaced the HDD with a CF card. After a while, the system started stalling for 1-2s on disk writes, and that was a pain.


I urge everyone who wants to do this to first see whether they can do what they need with a small board like an ESP32. Their energy usage is a small fraction, they cost ones of dollars, and they're sufficient for a whole lot. If you're of the Python persuasion, many boards support both MicroPython and CircuitPython.

It's worth looking into for the cost savings on initial purchase and ongoing power draw.


I hear the nay-sayers regarding the time cost and complexity of embedded programming, but as a hobbyist I think this is a great recommendation to at least consider.

The projects that I've been able to accomplish on a microcontroller have been more reliable (over decades) than my Pi-based projects, and I don't have to worry as much about them being part of some botnet because I forgot to change the default ssh configuration (wasn't it `pi:raspberry`?).

Beyond micropython, the no_std rust support for ESP32C3 is getting better every month. For people doing little home automation projects because they are fun, the additional constraints can make things a lot of fun and very rewarding.

But yes, for those who are already handy with Linux, a Pi is generally going to be much easier, though IME it will be at least 10x as expensive, and the additional setup to get it in the same ballpark for reliability (SSD booting vs network booting vs ro-rootfs, watchdog setup, etc.) and the increased power usage (esp. for a Pi 5) should at least be a small part of the decision making.


Can I run a media server on an ESP32? No.

Can I run my password manager server on an ESP32? No

PiHole? No. Unifi controller? No

I think people making this comment are envisioning people use Pis as garage door controllers etc., but reflexively suggesting ESP32s as Pi replacements isn't helpful.


So the straightforward answer is no, an ESP doesn't work for your situation.

They only asked that people consider it; no need to get snooty because it doesn't work for you, when it's very obviously inappropriate for your use case.


I don't disagree with what you're saying at all, but in my defense, the blog entry posted literally starts with the sentence:

>I use Raspberry Pis around my home as everything from low-power FM transmitters to UPS energy monitors.


Why would you buy an expensive raspi for these scenarios? Wasn't it meant to be a computer learning platform for people with no access to powerful computers? Your applications don't need GPIO pins, so you could just use any small PC.


It’s unclear whether the usages are with a Pi or a Pi Zero, although the advice applies to both. I have found more use for the Pi Zero W than the full Pi. Some stuff certainly could be done using an ESP, but it's faster for me to get it done on a Pi.


> Why would you buy an expensive raspi for these scenarios?

I know, right? I had to mortgage my family's ancestral home for the $35 to buy a raspberry pi 4 at microcenter. Luckily they had SD cards on sale, so I was able to buy the power supply and storage for the cost of a kidney. And now I can manage my home automation devices!


Pis are expensive? A Pi 4 and all accessories is about $100. Where am I going to do better? Every time someone says this I go look at NUCs and other minis and I don't see anything cheaper.


For me, when I consider the cost of my time, the Raspi has in my experience been much more expensive. I've enjoyed my experiment with the Raspi, but honestly I would have saved so much time and hassle if I'd just gone with a cheap x86 machine. My first Raspi will also be my last.


What were you doing that wasted time?

They are so widely used and well documented that for typical uses, they are no more complicated to do than an x86, in my experience.


What small PCs would be suitable RPi alternatives? For many/most use cases they'd have to fit the same category -- so similar power consumption, similar physical size, passive cooling.


This is a fair point with how Pi prices have gone, and I don't think I'll be buying a pi5.


Good! You don’t need a Pi 5. You can still buy Pi 3b, which I use for many things successfully.


Even for the garage door opener use case, one nice thing about Pi is you can run Cloudflare tunnel on it – you can then access it from the internet without messing with port forwarding or TLS certificates.
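For reference, a named-tunnel config for cloudflared looks roughly like this; the tunnel UUID, paths, and hostname are placeholders:

```yaml
# ~/.cloudflared/config.yml (sketch)
tunnel: <tunnel-uuid>
credentials-file: /home/pi/.cloudflared/<tunnel-uuid>.json
ingress:
  - hostname: garage.example.com
    service: http://localhost:8080
  - service: http_status:404   # catch-all rule, must come last
```

Cloudflare terminates TLS at its edge, which is what spares you from port forwarding and certificate management on the Pi itself.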


Your point about the PI is spot on: a full linux stack means you can get up to "shenanigans" with all sorts of tooling!

A Pi with an MQTT server and ESP32s as clients is a match made in heaven! For 30 bucks you go from nothing to a Pi Zero and a handful (literally) of ESP32 devices. It's a fun stack to play with!


Not an exact substitute, but mentioning it because people might be interested: esphome added support for WireGuard somewhat recently.


For one device it works well, but when you get a bunch, I find it simpler to use one Pi (actually now a Lenovo m920q; it could be a Pi, I just needed a bit more power for other stuff) with the tunnels and make it talk to all the IoT stuff. This has a few advantages:

- Updating the more security-sensitive parts is a lot easier (only one machine can talk to the internet).

- It lets me use ultra-low-power, one-coin-cell-a-year devices.

- It integrates everything in a single point, so coordinating stuff is very easy (like a single action to close the blinds, turn on the projector and set the AC a bit cooler).


Tailscale also works excellently on even an original Pi Zero W if you want it on a VPN easily.


How about a camera server?

Why yes, an ESP32 can do that, and you get to play with another cheap stack.

How about blinking LEDs, a display, or an LED strip? WLED and artwix have you covered!

A project like ESPresense evolved from a Pi Zero project.

Let's not forget that some of the pi's appeal is that it has those GPIO headers!


> How about a camera server?

I love ESP32s and use them both for fun and professionally but let’s be careful about overselling them. Can you get single photos from a specific set of imaging sensors? Yup. Can you encode 1080p h.264 video and stream it with eg RTSP? Definitely not.


Also simple power switches. It would be crazy to run Linux for every device I want to smartly switch off and on, IMO.


> Can I run a media server on an ESP32? No.

You can run a web server, though. Now I'm curious whether I can push enough data to stream audio. I know the chip can handle 48k stereo samples, so presumably I could stream that much over WiFi. I may have to play with that this week. The bigger problem is attaching storage: many ESP32 SoCs don't have USB peripherals, and many SD card implementations can't do much better than audio throughput (SD has some insane licensing costs for small dev shops), so I'm not sure of the best solution for large amounts of data.


A useful deciding factor is whether your application benefits from an operating system and its utilities, or not.


How about PhotoPrism? ;-)

No, honestly, I am impressed at the performance with which my RasPi 4 is running a PhotoPrism instance with about 120,000 photos. This might also be thanks to the application software, but it's still quite a feat.


Can I run a media server on an RPi? No -- its interface is slow and the bus speed is shared. No redundancy either.

Can I run my password manager server on an RPi? No -- do you want to lose all your data because the SD card failed?

PiHole? Unifi controller? Maybe


RaspberryPi has PCIe, you can attach NVMe SSDs with an adapter. You are not limited to the SD card.


If you need an OS and a cheap device, use the old netbook or computer you have lying around instead of turning it into e-waste. You can do it all with an RPi Zero if you insist on buying another device.


Agree. Old laptops have a built-in UPS, screen, keyboard, and active cooling, and can idle at very low power usage. Some machines even let you set a battery charge limit (say 50%) so the battery lasts a long time without degradation. There are Linux scripts around for this.
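On machines where the driver exposes the limit through sysfs (thinkpad_acpi does, among others), the "script" can be tiny. A hedged sketch; the `BAT0` name and the 50% value are assumptions to check against your own `/sys/class/power_supply/`:

```shell
# Sketch: cap battery charging on laptops whose driver exposes
# charge_control_end_threshold in sysfs. BAT0 is an assumption --
# list /sys/class/power_supply/ to find your battery's name.
set_charge_limit() {
    limit="$1"
    knob="${2:-/sys/class/power_supply/BAT0/charge_control_end_threshold}"
    if [ ! -w "$knob" ]; then
        echo "charge limit knob not available: $knob" >&2
        return 1
    fi
    echo "$limit" > "$knob"
}

# Usage (as root): set_charge_limit 50
```

Run it as root at boot (e.g. from a systemd unit), since the setting doesn't survive a power cycle on most machines.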


ESP32 and related are pretty cool, but it's a whole different mindset; if something doesn't work, you can't just connect an HDMI display/keyboard to debug live with all the regular utilities one might know, which come for free with any mainline Linux.


There are definitely limitations, but if it's a thing that you're going to leave untouched for years, it's worth looking at a device that will use 1/10th (or less) the power.


Just took a look at the power consumption of Pi Zero. It will use $0.70 worth of electricity on my local grid, in a YEAR.

I bet me writing this comment used more while keeping my laptop alive.

I love ESP32s; I built a long-running device for my car, where the power draw is important because I don't want to deplete the car battery. It uses almost no power. However, if it were running connected to the power grid, I wouldn't care about the electricity cost.


Maybe, but if it is going to take you drastically longer to write the software because there's a smaller ecosystem / the stuff you need doesn't have a readymade library / you've never done C/MicroPython / there aren't the Linux tools that will help you debug or do a simple crontab to run your script on a schedule... The ROI might not be there even with the lower power consumption.


That's true, and an important consideration. I'm not an ESP fanboy, but I really do appreciate it as an engineer for how simple and cheap the development boards are, especially considering things like ESPhome exist. It's kind of ridiculous how many projects can be built by just writing a yaml file and a little soldering - https://esphome.io/guides/diy.html
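To give a sense of how little "code" an ESPHome project can be, here's a hypothetical config for a relay node. The board, pin, and names are assumptions to adapt to your hardware, and depending on your ESPHome version the `ota:` section may need a `platform: esphome` entry:

```yaml
# Hypothetical ESPHome node: one relay on GPIO5 exposed as a switch.
esphome:
  name: relay-node

esp32:
  board: esp32dev      # assumption -- set your actual board

wifi:
  ssid: !secret wifi_ssid
  password: !secret wifi_password

api:                   # Home Assistant integration
ota:                   # over-the-air updates

switch:
  - platform: gpio
    pin: GPIO5         # assumption -- whichever pin drives your relay
    name: "Garage Relay"
```

Flash once over USB, and from then on the node shows up in Home Assistant and takes updates over the air.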


My time is worth more than the increased power draw


They are more stable in the long run since there is no OS. Honestly people make their lives harder using a Pi when using a microcontroller would be much easier and stable in the long run.


I ran a rpi for 2 years with no reboot.


Sure they are capable of running for years but they can fail at inopportune times. Every time one has died on me, it always seemed to be the microSD card that died. And that's with using reputable industrial cards and log2ram.

Also there's more complexity and overhead on a pi for simple tasks. And you potentially have to worry about updating the system and packages and other maintenance.

Not saying microcontrollers like the ESP32 are completely invulnerable to failure but it certainly is less likely.

And for another anecdote on uptime, I have a few Arduino-powered devices operating 24/7 for well over a decade now.


One of my RPis died somewhere in the mobo. I can turn it on and mount it over USB and read the SD card but it doesn't boot.

Another one lost the ethernet card. It was before they had Wi-Fi, or I didn't care to use Wi-Fi, I don't remember.

A Pi zero died, full stop.

A number of SD cards eventually died.

Not a very reliable platform compared to my laptop. On the other side my laptop costs almost a couple of orders of magnitude more.

I still use one RPi 3B+ with a TV hat and it has been on for about two years. It doesn't do anything else.

I switched to Odroid for everything else that could run on a small server, because it was impossible to find Raspberries. They work well, I'm happy with them. The only problem: I have to pin the kernel to their own version with their own drivers. The distro is Raspbian and I upgraded it a couple of weeks ago.


I never had an sd card in an rpi die, in years of continuous uptime, except for reboots.

I had one rpi become useless when its network stopped working.


I run servers for longer without reboot. But that doesn't mean the OS wouldn't run better with some cleanup reboot from time to time.


Or, for that matter, a well-designed or industrial SBC (of which RPis are neither).


If something goes wrong I just plug USB into my regular computer with all my normal tooling and can see the serial console, edit the filesystem, etc.

Or if I'm using something like Circuit python I'll just connect the web console.


A Pi Zero will spend about a dollar a year in electricity.


I've recently taken an interest in embedded programming and this is something I can't look past. Unless you're producing something at scale, I don't think there is much of a difference economically between writing code in a low level language and running it on a bare metal ESP32, vs running a Python script on a lightweight Linux on a Pi Zero. The cost difference is going to be a few dollars over the life of the product.


Battery-powered devices are a whole segment where the RPi can't even get a foot in the door.

do hobbyists care? usually not


Just to clarify, does RP2040 count as rpi in this context?


In the context of an argument over single-purpose microcontrollers vs. general-purposes computers with modern featureful operating systems: No, it doesn't count. Nothing counts at all. ESP32 doesn't count, ATMega doesn't count, RP2040 doesn't count, Pi0/1/2/3/4/5/eleventy doesn't count.

The whole argument is reprehensibly incoherent at its very core and there is no aspect of it that has any meaningful value.

Microcontrollers and general-purpose computers are both very useful things.

And while there is some overlap in how they are used, they are also very different things, with very different costs (to purchase, and to implement).

It's a tired old argument that has been happening for as long as we've had both affordable computers, and also affordable microcontrollers (several decades, by my count). It has never been resolved, and it cannot ever be resolved.

Both things can co-exist. This isn't like Highlander or the Superbowl: There can be more than one. It's OK.


i am specifically calling out the broadcom sbcs, the rp2040 is a lot more reasonable (considering you can buy the standalone chip to begin with, unlike the raspberry pi)


Electricity cost is not a problem; the inability to run off batteries (because of the larger power consumption) is, in some applications.


N100 costs about $1.50 a year in electricity yet is like 30x faster.


I assume you are referring to an Intel N100 and not the OnePlus N100 smartphone...

Still, a whole system is not just the CPU, so suggesting a CPU as an RPi replacement seems a little odd.


The N100 CPU itself costs only about $1.10/year to run; the rest is a minimal platform. Ofc if you add more/larger/faster RAM, many drives, NVMe, or PCIe, the platform's electricity cost skyrockets, but the same goes for an RPi5, with fewer options.


For my latest (a bit more complex) setup I am using a Pi Zero and several ESP8266/32s communicating with it, simply over HTTP and WiFi. It's also my first time using MicroPython instead of Arduino.

Absolutely love it, and to my surprise it's super stable. Something about WiFi power states often messed up my projects in the long run, but no issues with the MicroPython setup so far.


Just want to shout out to tinygo, for this old Go programmer, it makes working with ESP and friends loads of fun.

Admittedly, reverse engineering a single digit 7-segment LED display wasn’t the best use of my time, but by crikey it was fun.

https://github.com/doctor-eval/clocky


I had a look into it, but unfortunately there seems to be no TinyGo support for ESP WiFi/BT.


Do these devices allow for effortless OTA updating?


I run ssh and an https server (with some dynamic pages written in Python); there's a 3 TB disk attached for serving files, and I download torrents.

The other ones have touchscreens and speakers attached; I use them to listen to internet radio, watch cartoons, and control relays.

Your cheaper solution doesn't allow for any of this to happen.


They simply asked people to consider it as an option; clearly your use case wouldn't run on an ESP very well.


> This feels like a hack, but based on hours of reading online discussions, most people seem to settle on a script that periodically checks whether the WiFi connection is good, and restarts the WiFi interface or the whole Pi if it’s not.

It's not a hack, it's best practice! Just like important servers in a data center should have some kind of out-of-band connectivity (IPMI, remote-controllable rPDU outlets, etc.), important servers in remote, difficult-to-reach locations should have some kind of watchdog script. The script should of course be tuned to the specific use case, considering the impact of a reboot vs. downtime until reboot. At the very least it could log adverse events for later investigation.

A simple bash watchdog script was the very first thing I set up when I deployed a remote RPi. Not just for Wi-Fi issues, but for any of the dozens of things that could break and be fixed with a reboot.
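A minimal sketch of such a script. The interface name, the check host, and the thresholds are all assumptions; tune them to your tolerance for downtime vs. the cost of a reboot:

```shell
#!/bin/sh
# Sketch of a connectivity watchdog. Assumptions: wlan0 is the
# interface, 1.1.1.1 is a reasonable check host.
CHECK_HOST="${CHECK_HOST:-1.1.1.1}"
IFACE="${IFACE:-wlan0}"
MAX_FAILS="${MAX_FAILS:-3}"
SLEEP="${SLEEP:-60}"

net_ok() {
    ping -c 1 -W 5 "$CHECK_HOST" >/dev/null 2>&1
}

bounce_iface() {
    ip link set "$IFACE" down && ip link set "$IFACE" up
}

# One pass: return 0 as soon as the network looks fine; otherwise
# bounce the interface, and reboot only if it is still down after that.
watchdog_pass() {
    fails=0
    while [ "$fails" -lt "$MAX_FAILS" ]; do
        if net_ok; then
            return 0
        fi
        fails=$((fails + 1))
        sleep "$SLEEP"
    done
    logger "watchdog: $CHECK_HOST unreachable, bouncing $IFACE"
    bounce_iface
    sleep "$SLEEP"
    net_ok || reboot
}

# watchdog_pass   # uncomment when deploying; run a pass per invocation,
#                 # e.g. every 5 minutes from cron or a systemd timer
```

The `logger` calls mean each intervention leaves a trail in syslog for later investigation, which matters more than the reboot itself.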


Nowadays this watchdog is init/PID1, on most distributions - systemd.

If init can't be relied on to manage services, what guarantees do you have for the system to provide them?

Sure, one could reinvent this in scripts, but we've moved past this. I mention systemd a lot, but that's not to cast favor - there are alternatives.

Most services don't make appropriate use of the environment they exist in. I assume they expect some site customization, i.e. declaring that your web server needs these mounts.

A commonly overlooked directive is 'PartOf='. You can tie restarts of one service/resource to another.

Heck, more simply, I think NetworkManager offers a way to customize the WiFi/portal checking. You may not have to go completely heavy-handed.
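A sketch of the `PartOf=` idea with hypothetical unit names: restarting `wpa_supplicant.service` then automatically restarts the dependent service, so a Wi-Fi bounce recycles whatever relies on it.

```ini
# /etc/systemd/system/sensor-logger.service (hypothetical service)
[Unit]
Description=Sensor logger
# Stop/restart this unit whenever wpa_supplicant.service does
PartOf=wpa_supplicant.service
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/sensor-logger
Restart=on-failure
RestartSec=10s

[Install]
WantedBy=multi-user.target
```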


Just an FYI if you're going down that path: the hardware watchdog is disabled by default in most distros.

Enable it using RuntimeWatchdogSec.

Second, if you are running an important service, see if it can support the systemd notification socket and, even better, the software watchdog protocol with systemd. The service just has to send a heartbeat every X seconds, otherwise systemd will restart it.

This ties the hardware watchdog to systemd, and systemd watches your service, ensuring both the hardware and the software are running; otherwise the system will restart.

Lots of details here: http://0pointer.de/blog/projects/watchdog.html

None of this is raspberry pi specific, it works on every system with a supported hardware watchdog, which includes raspberry pis.
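Roughly, the two pieces look like this. The intervals are examples, the unit name is hypothetical, and the service-side `WatchdogSec=` only helps if the program actually sends `WATCHDOG=1` heartbeats via `sd_notify`:

```ini
# /etc/systemd/system.conf -- hand the hardware watchdog to systemd;
# if PID 1 stops petting it for 30s, the hardware resets the board.
[Manager]
RuntimeWatchdogSec=30s

# In the unit file of your important service (hypothetical name):
# /etc/systemd/system/my-service.service
[Service]
ExecStart=/usr/local/bin/my-service
Type=notify
# systemd restarts the service if it misses its heartbeat
WatchdogSec=10s
Restart=on-failure
```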


> Nowadays this watchdog is init/PID1, on most distributions - systemd. If init can't be relied on to manage services, what guarantees do you have for the system to provide them?

Does that monitor actual (not just apparent link up) network connectivity or the ability to access remote networks? Not all failure modes are local to OS processes and sometimes rebooting fixes things like this.

Process-specific monitoring can be quite useful, but so is a generic "I can't reach my server, I just want it to reboot now" with minimal complexity/dependencies. I can't imagine all of the failure modes where a simple bash watchdog script would help - and that's the point!

Now, if you have OOB access to the server (like in a data center) then you might not need or even want a watchdog script. But for remote and/or difficult to access servers (I refer to these as "mars probes"), they can be a life saver.


Similarly, I use an ESP8266 to supervise my wifi router and cable modem. Any trouble, reboot!

For the router, it just tries to connect to the appropriate SSID, and then tries to ping the router, and if either of those fails, it swaps to the other router. I have two identical routers with identical configs, with their power connected to the NO and NC contacts of an SPDT relay. If one fails, it just toggles the relay state to switch to the other.

If the router is up, the watchdog tries to load the cable modem's status page, and tries to ping any of three different IPs I've identified within my ISP's network which seem to be either the CMTS or closely associated hardware, and should indicate aliveness of the HFC plant -- I don't want to bother rebooting if the failure isn't something that a reboot could solve. Sadly I haven't figured out how to have two cable modems with the same MAC such that I could swap between them too, and the ISP won't let me have two modems on the same account, so my only resort if the CM fails is to reboot it and hope for the best.

This, plus a rack of batteries that'll keep the router and modem running for 30-plus hours in a utility outage, has kept me online nearly-continuously since May of 2020 when I built it. The code is an absolute horror-show (I'm much better with a soldering iron than an arduino), but in practice it's been rock solid.


Totally agree! Watchdog timers are essential for microcontrollers and even computers running software "forever". Things happen that even perfect code and design can't prevent, and a watchdog timer will break out of an infinite loop and reset. Things such as cosmic rays flipping a bit, or even brownouts… on a Raspberry Pi you also have to worry about SD card corruption…

Edit: Raspberry Pis have built-in hardware watchdog timers, I believe. I know Arduinos do!


It's best practice and a hack. It shouldn't be required, but bugs exist so it is.


We’ve been running thousands of Pis in production for about a decade now. We’re beginning to shift to x86. The price/performance isn’t what it once was for the Pi. I gave a talk about our experience recently at State of Open Con here (https://youtu.be/vX-qK9mxKZI).


Fellow Raspberry Pi digital signage CEO here :). Surprised you didn’t mention the secure boot support, available since the Pi4, in your talk. While our service doesn’t use it (yet?) it sounds quite solid on paper and allows you to protect the data on disk/SD.

We’re still pretty happy with the Pi, and the move to more open source APIs (Mesa/DRM/KMS/FFmpeg) is, now that they are finally in a state where they feel usable, really promising. As our main use case is still digital signage, the raw processing power isn’t really that relevant, as the expensive part (video decoding) is obviously accelerated, and the backwards compatibility that’s possible with the Pi is awesome. We still have customers running Pi 1B+ devices continuously for almost 10 years with the latest OS release we provide.


Hello, I am just curious: what do different digital signage companies offer? From the outside, I can't think of any innovations or differentiation that are possible. Perhaps it's all a matter of cutting down cost?


Agree on the adoption of open standards. That’s a step in the right direction. But even with secure boot on the pi, it’s still missing a TPM for other cryptographic operations. We use a lot of Zero Trust stuff on x86 and you can’t do that on the Pi.

The Pi is fine for videos/images (less proper storage), but chokes on a lot of modern web assets.


True, but now I’m curious what kind of cryptographic operation you’re doing that would need to be protected from local root. Because that should be the only case a TPM is helpful (compared to the Pi secure boot option) and in that case the device is compromised anyway and can show anything on the screen and have all local processes taken over.

Agreed on the web stuff. But I’d say the web sucks, not the Pi. :-)


We consider anything that you can extract from the drive by removing it security theater. Root or not doesn’t matter.

You’re right that physical access is game over for content in general. This is more about extracting sensitive data (like credentials/tokens to 3rd party sites).

All backend communication is done using mTLS, where the private key never leaves the TPM (on x86).

Moreover, we’re encrypting all sensitive data we send to the device using the corresponding public key. Thus even if you rip the drive out of the device, you won’t have much luck.


Sounds reasonable, but the secure boot mechanism of the Pi not only allows verifying the boot chain but also enables you to implement disk encryption with keys stored in the hardware itself that you can then only access from the running OS. Stealing the Pi or just taking out the SD card will not allow access to the non-OS parts. I'm not sure if the secure boot stuff of the Pi has ever been thoroughly verified or exposed to serious attacks, but in theory that's all possible.


You need to factor in usage: if you are idling a lot, ARM > X86.

And you need to look at longevity, there I suspect ARM will outlive X86 too.

For modularity ARM > X86 too, because it's cheaper to have many small ones.

But for scalability (= business in the current economy) X86 > ARM.

Also all graphs should be per watt, that 2 -> 4 is more performant is not news, that it is more performant per watt is!

And if you did that you would see that Raspberry 5 is not getting as much per watt performance increase as it should.

We have peaked permanently for the eternity of mankind.

Let that sink in.

Last but not least, the ONLY hope for any progress (openness not performance) at this point is the JH7110, but they are lagging behind in 3D support.


> if you are idling a lot ARM > X86.

Don't take this for a given. RPis are notoriously bad at idle power consumption. The x86 replacement I bought for my home server ended up having about half the idle power consumption of the RPi it was replacing.


I think maybe you’re overestimating the idle power consumption of modern-ish x86 (especially the last 5-6 gens of Intel)? I've got an i5-9600T system that's drawing less than a Pi at idle. And it cost me the same to buy.

https://docs.google.com/spreadsheets/u/1/d/1LHvT2fRp7I6Hf18L...


Our customers don’t really care much about power usage, as they are all connected to a TV which draws an order of magnitude more power. For some use cases power usage is key, but not for ours.


is this for running displays or is this being used for production assembly machines? (eg gpio usage)


Currently for displays, but with Edge Apps you will be able to run workloads that interface with sensors etc. over GPIO (and USB/serial).


thanks for responding!

I feel like the Raspberry Pi has a leg up regarding GPIO.

I have never tried to do GPIO from Windows (e.g. on a Beelink).

I am sure it is possible with USB adapters, but then you are dealing with changing COM ports etc.


Yeah, the built-in GPIO on the Raspberry Pi is nice indeed. You can get PCs with GPIO too, though. If you look for 'industrial gateways' you'll find a number of them, but they are a bit more pricey. Alternatively, you can use GPIO over USB, but that comes with its own set of challenges.


What do you mean shift to x86?


Maybe Alder Lake-N? Those cores are very fast (Skylake-level) and <10W full load. RPi5 is at 8.5W full load and much slower.


I did exactly zero of any of those things and have had some Pis run for multiple years without any issues, until being replaced by a newer model (my HomeKit/Zigbee gateway and data logger is now a Pi 4). I guess it all boils down to good SD cards and stable power supplies.


> I guess it all boils down to good SD cards and stable power supplies.

Agree. I've also been running a number of Pis, and when they broke, it was because of failing SD cards.

I found PiBenchmarks to be a good source of info --> https://pibenchmarks.com/

Definitely compare SD cards before buying.


  > I guess it all boils down to good SD cards and stable power supplies.
I would say your sample size has a stronger effect on your experience. With a large enough pool of devices, everything that can go wrong will go wrong. And there will also be new failure modes that you have never even dreamed of.


6 years of the same Orange Pi PCs/PC2s, often on the same cheap SanDisk SD cards, some with PostgreSQL databases on them logging environment data every 5 seconds (though batched to 15 minutes), and frequent large system updates (Arch Linux ARM is a bit more churny). And not many issues - nothing to make me consider doing something annoying like other people suggest here, with A/B updates and read-only setups that prevent comfy normal use of a general-purpose Linux OS.

I don't run a completely storage-oblivious setup: I use f2fs, which has much better write patterns than ext4, and I disable stats collection in PostgreSQL, which otherwise causes constant churn. Logging and other stuff has negligible effects, so I don't do anything special about it and leave the default OS configuration.

I have had probably 4-5 uSD card failures over 60 years of total runtime across the 15 SBCs that I run 24/7. So that's 1 failure per 12 years. Nothing that can't be dealt with using backups. (I didn't even need them that much yet, because all uSD cards so far failed by turning read-only, which for f2fs means you get the last consistent snapshot on the card, which you can just copy to a new card and continue. 10-15 minutes of recovery time.)

All that complexity and limitations that people talk about seem way overkill for a home setup.

And I think another reason why I don't get many uSD card failures is that I run most cards in HS25 mode, not SDR104 or whatnot. 3-4x the frequency really makes the cards heat up a LOT during activity; that can't be great for the flash chips. 2 of those failures were in SDR104-enabled hosts. Copying data to a uSD card using an SDR104-capable USB 3.0 adapter makes the card scary hot in just a few seconds.


In the grand scheme of things a sample size of 15 SBC is the same as OP's: 1.

While I definitely agree that for a home setup it is overkill, in my dayjob I work on industrial embedded gadgets, sold 10-100k pcs/design, expected to run without reboot for years, sometimes for decades. And most of the weird issues end up usually on my desk for investigation. While I admit that this might not have been completely obvious, but when I referred to the sample size, I was referring to such numbers, taken from this experience.


Sorry, but when discussing the article, I assume the context of the article.

Context of the article is "I use Raspberry Pis around my home as everything". I don't really care about industrial anything in that context, nor 10k+ unit scale. And nobody sane does/should. At home/hobby I'm optimizing for very different things.

It's interesting if someone adds to the discussion that things predictably fail at 10k+ scale, but it's completely irrelevant here. It's like people who read Backblaze stats and then buy 2 disks from the same batch to put in RAID 1. That those disk models fail at a 2x lower rate at scale than other models is completely irrelevant to their data safety.


Older Raspbian OS, around 2016 and below, had atime enabled, so any file read caused a write - which is probably why there were so many reports of corrupted cards back then.


Same here - I've been running a couple of Pi3 as Cups servers for years of uptime (the only time that uptime gets reset is when there's a power outage - and that's very rare indeed). Did nothing more than install Raspbian on a micro SD card, set up Cups, connect USB to printer (for one of them - the other manages a networked printer). And left them alone after that.


It’s very hit and miss. I’ve had endless problems with some and others seem fine for extended timeframes. No clear pattern in sight

I’ve switched to SSDs for most of them now. It’s too much of a dice roll otherwise


My Raspberry Pi 2 ran for a while, used as a Pi-hole, a VLC-hacked media center, and a weather station with simple sensors, until it didn't :(

In ~2022 I started to get random errors, then found out the SD card was failing. I never took the time to fix it; it had lots of little things set up locally that I don't feel like doing again.


It may seem like overkill, but I learnt how to use ansible to manage the software on my pi. I've actually just upgraded my rpi4 to use an ssd so am choosing to start from scratch. I'm hoping my ansible playbooks are written well.


Haha, yeah, that’s a good point. I may still have some ansible playbook from a while ago but I’m pretty sure it wouldn’t be up to date. I guess it would be nice to have a system like CoreOS was, where you just provide a setup script, systemd unit files, and at runtime most of the file system is expected to be read-only. That way you’re confident you keep all your setup in a git repo and on reboot the whole thing is reset (outside of data stores).

It’s just so tempting to quickly ssh into the machine to hack something around, and then you forget about it because it ~works.

But a rpi4 can run containers, so that may also be an alternative.


I've been trying (and failing) to get my ansible playbooks to work for about 2 hours - so definitely not a perfect solution by any stretch. This reminds me why I don't daily drive a linux machine! I don't know why I'm encountering so many errors around installing docker-compose, pip, apt packages etc.

I suspect ansible may be a better solution for when you're using it to build machines from scratch very frequently, and you'd have more of a chance to maintain and ensure they're working over time vs once every couple of years.

The other annoying thing with ansible is that while it may install the software, it can't log me in. So still having to manually sign in to syncthing/tailscale etc even after the software is installed.


Get a new SD card and replace it?


I'm shocked the SD card bit isn't first, and more surprised that the post doesn't suggest USB boot (I have one pi that's been on ~24/7 for years now, and I attribute its lack of problems to 1. using Alpine configured to barely touch disk, and 2. not having an SD card to corrupt - I don't know why USB would be more reliable, but anecdotally it is)


My Argon Raspberry setup with an SSD is also stable.

The only reason it fails is when the electricity goes out. A battery that could keep it running for 10 minutes would be completely enough.


It's kind of a shame that laptops are the equivalent of a Server + UPS, but with none of the enterprise features.


Thinking of it, there are small UPS devices which have USB output; maybe I should buy one.


> 1. using Alpine configured to barely touch disk

I'd like to learn some tips on how to do that... do you mean something like https://wiki.alpinelinux.org/wiki/Installation#Diskless_Mode ?


Yeah, exactly - as best as I can remember, and https://wiki.alpinelinux.org/wiki/Raspberry_Pi seems to agree, Alpine on the Pi defaults to a mode where it mostly loads itself into a RAM disk on boot and just stays there. Which means that you're constrained on "disk" space (your root filesystem is no larger than RAM), but if your workload fits in that, then you have the advantage that after boot the system pretty much doesn't touch disk.


I've had a bunch of Pi cards, running on SD without problems. But a single one suddenly developed a super-hot SD card, this was a brand new Pi which I was just setting up. Got the card out, and that one and the next Pi got a USB SSD and are now using those. That was a bit scary. But as mentioned I've also been running Pi with micro SD as Cups servers for years, with no problems at all.


Would there be any reliability problems with using USB flash drives instead of USB SSDs?


Sample size of 1, but I ran an RPi4 just fine on a USB thumb drive for about a year.

Still upgraded to an SSD later because I wanted even more storage space (and SSD seemed to have better random IOPS than a thumb-drive), but I'd say go for it.


I don't really know. I assume they would be slower than SSDs, though, but I have never measured USB flash. The SSDs get some 300 MB/sec on my Pi 4 boards.


I agree with this completely. All my rpi failures were because of SD cards. I have 2 rpis, both boot and run from usb, both for several years now.


Same. I use an M.2 HAT to boot from SSD, and it works great.


I have 2 Pis running basically non-stop (2-3 power outages) with the same SD card since 2017 (DNS/print server and Kodi, media is on external NFS). The only thing I did was to disable all logs. Never had a single problem.

They both have SanDisk 2 GB cards in them. I vaguely remember naively thinking along the lines of "less space => less bit density => better reliability".


I've been using log2ram (github/azlux/log2ram) and been happy with the results.

It mounts a ramdisk on /var/log and only occasionally copies the logs from the ramdisk to the SD card. That way I can still see all the logging without hammering the SD card quite so badly.
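The behavior is driven by /etc/log2ram.conf; a couple of its knobs, with example values (not defaults to copy blindly -- the ramdisk must be big enough to hold your busiest day of logs):

```
# /etc/log2ram.conf (example values)
SIZE=128M          # size of the ramdisk mounted over /var/log
USE_RSYNC=true     # sync only changed files back to the SD card
```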


I'm still running Kodi on a Pi 1, approaching 10 years of runtime on the original SD card. It's powered on most of the time, but sometimes I accidentally power off the USB power supply it's connected to. It's a little 5-port one that's about just as old, with a power button that's easy to press accidentally in my specific setup.


I have a RPi that has been running non-stop continuously since 2014 on the same SD card, serving up a weather website.

I basically mounted all logging and webpages on tmpfs, and the DB resides on the SD card being written to every 5 minutes.


Are there instructions on how to upgrade my pihole to this?


Start here. https://ostechnix.com/how-to-mount-a-temporary-partition-in-...

Just understand that anything that is written to tmpfs will be lost if you have a reboot. It might make troubleshooting difficult if you need to preserve them for whatever reason.
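For the common case of keeping logs off the card, a single fstab line is enough (the size is an example; as noted above, the contents vanish on reboot):

```
# /etc/fstab -- keep /var/log in RAM
tmpfs  /var/log  tmpfs  defaults,noatime,nosuid,size=64m  0  0
```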


I still have my very first Pi, it's a 1b I think.

It has seen many installs over the years, but it's now a backup DNS server. Looking at the filesystem, this one has been a PiHole since 2018; it has essentially run 24/7, bar some reboots and me moving between places.

I don't write anything to the SD, it all goes to RAM at /dev/shm and the PiHole will simply have to redownload the lists the few times it goes down. It would download them anyway daily.


Same here. I have 2 Raspberry Pi 3 models. They've been running PiHole for ad blocking since 2019. Later I started using them for local DNS and also as Tailscale nodes. There have been times when I haven't rebooted them for several months, the longest being about 11 months of uptime. They've been rock solid, although it does help if you have them plugged into a UPS.


Amen!

Exact same kodi-setup and sandisk cards, and has been happily running for years without problems. Disable all logs, media on smb/nfs, and off you go.


I didn't do anything special at all. My two pis have been running 24/7/365 for 5 years with no problems. I often completely forget about them. One is a pi hole and the other our print server.


"Keeping a Raspberry Pi online and working with zero intervention for weeks, months, or years is somewhat of an art form."

I just boot NetBSD kernel with embedded filesystem, e.g., INSTALL kernel or custom kernel. SDCard can be removed immediately after boot. Optionally chroot to attached storage. This runs for weeks, months or years. Have not experienced any of the issues cited by the blog author. Only issue I find is with the power connector when using a case; the connection can be brittle, e.g., if using a replacement cable. Perhaps this has improved on more recent Pis. (But I could say the same about most computers. The cables and connectors are usually fragile. It's always cheap stuff.) If power is interrupted because of movement, then the Pi reboots automatically.


  a) Cable ethernet
  b) SSD (via USB3.0 adapter on my RPI4)
  c) Ubuntu Server LTS 22.04. 
  d) cheap UPS.
Mine runs Yggdrasil network, HAproxy, Caddy server, a couple of webservers in containers, and a TMUX instance that I log into almost daily to write code (slow computer reveals bad code much better). Since I put it (and my router) on the UPS, in the last 2 years it has literally never gone offline other than a couple of times I rebooted it for firmware upgrades.


This is mostly what I do for “critical” Pi things. The cable Ethernet and the SSD are major. Do you have any recommendations for a cheap UPS? What are your considerations here?


I went with a CyberPower BR1200, which was about 150 pounds (circa 200 USD), but that's overkill because it will keep my pi+router going for several hours, whereas power-downs typically last less than 15 minutes where I live and I haven't seen > 1 hour in 10 years. It's usually "human error" related: someone plugs the vacuum in or some appliance trips a circuit. Keeping the router on the UPS (and fiber box in my case) is important though, because then everything is completely uninterrupted from a serving perspective. I also give the standard RPI power adapter enough headroom by not overclocking the pi, because in the past this caused problems when an SSD was attached with lots of writing and a "big" (for a pi) compute load at the same time. Since turning off the overclocking, zero problems. Probably an RPI5-class power supply would be even better. No clue if this matters, but on a Jetson Nano (which was notoriously power-spikey), Crucial (Micron) or Samsung SSDs tended to be better than budget alternatives.


I use some Pis for various things in my house including Zeroes through CM4s and 4Bs.

The Zeroes run Raspbian configured with the read-only filesystem option. I have found it necessary to uninstall `unattended-upgrades` because the overlayfs employed for read-only root caches disk writes in RAM and the update/upgrade process exhausts RAM. For the same reason I disable swap. It makes no sense to swap to RAM on a 512MB system.

Upgrades are tedious since they require disabling overlayfs, rebooting, upgrading, rebooting, and enabling overlayfs. I wrote Ansible playbooks to perform these tasks. (https://github.com/HankB/Ansible/tree/main/Pi)

I have a Pi 4B performing as a file server and running Debian (not Raspbian). It boots from an SD card so that the entire HDDs can be used for a ZFS pool. To reduce wear and tear on the SD card I have mounted `/var` on a ZFS filesystem. I should probably use `tmpfs` for `/tmp`.

I use a Pi CM4 to run HomeAssistant and that boots and runs from an NVME SSD where durability is less an issue.


As someone using many, many Pi's at home and many times that at work, the preferred approach is to boot them all diskless (and for those that actually need an SD card, boot them off a read-only SD card and get everything else off the home server). This is so much easier than having lots of different SD cards/Pi versions etc, and makes them trivial to replace in the event of failure.


I have several of them, sounds ideal. can you perhaps link me to a good/useful writeup on how to accomplish this?


Look up diskless booting; it's a very general thing on Linux (i.e. not Pi-specific, although there are plenty of Pi-specific tutorials on the 'net).

As a minimal first step, install an NFS server (which can be X86, Pi, other) on your LAN and make sure you can mount it from the pi ("mount $SERVER:/some/dir /mnt/tmp"). Then copy the contents of a Pi SD card to the server, make it exportable (see '/etc/exports') then edit '/etc/fstab' on the Pi to mount the (now remote) copy of the SD card instead of the usual root. That should get you started - beyond that, with some of the Pi's you don't even need to have an SD card installed (however you'll then need to set up things like DHCP and TFTP on your server).
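A hedged sketch of those two pieces (the server IP, subnet, and paths below are placeholders, not from the comment):

```
# Server side: /etc/exports (apply with `exportfs -ra`)
/srv/pi-root  192.168.1.0/24(rw,sync,no_root_squash,no_subtree_check)

# On the Pi, verify it mounts before touching /etc/fstab:
#   mount 192.168.1.10:/srv/pi-root /mnt/tmp
```

`no_root_squash` matters for a root filesystem export, since the Pi's root user must be able to own its own files over NFS.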


There are also some pi-specific bootloader config options that can make life easier, they are documented at https://www.raspberrypi.com/documentation/computers/raspberr... and from https://www.raspberrypi.com/documentation/computers/raspberr... onwards

There's also a tutorial at https://www.raspberrypi.com/documentation/computers/remote-a...


Thanks for sharing!


Just read the data sheet of the SanDisk Max Endurance and, oh boy, what fsck'ing marketing BS language.

They state the endurance in thousands of hours of FHD video, but what assumptions do they make about bitrate etc.?

Can't they state total TB written, or drive writes per day, or something sensible?
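As a hedged back-of-envelope, here is what an "hours of FHD video" claim could translate to in TB written; both numbers below are assumptions, not vendor figures:

```python
# Hedged back-of-envelope: convert "N thousand hours of FHD video"
# into TB written, under an assumed recording bitrate.
MBIT_PER_S = 26     # assumed FHD recording bitrate (Mbit/s)
HOURS = 30_000      # assumed marketing claim

bytes_written = MBIT_PER_S * 1e6 / 8 * HOURS * 3600
print(f"~{bytes_written / 1e12:.0f} TB written")  # -> ~351 TB written
```

The point of the exercise: the implied TBW swings wildly with the assumed bitrate, which is exactly why the spec is meaningless without it.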


Flash memory manufacturers are notoriously secretive about the actual endurance of their products, likely because it's now embarrassingly low.

Specifying endurance in "thousands of hours of FHD video" implies large sequential writes, or in other words the best-case for write amplification.


They also substitute parts all the time. Even the “best” brands have been known to do this. From different controllers to swapping TLC for QLC flash.


Better manufacturers do give you a TBW endurance rating, and document what type of flash is being used (SLC, MLC, etc.).

I used one of these Micron cards in my RPi4, it has high write endurance and also an A2 performance rating so it can support the IOPS needs of a boot drive. https://www.mouser.com/new/micron-technology/micron-i400-mic...


Switching to gokrazy[0] was the best thing I did for my Raspberry Pi uptimes. I think a lot of that is because it defaults to using read-only partitions so the common issue of SD cards falling over when you run apt upgrade no longer happens.

But I also think that gokrazy's simplicity and design helps it be just a solid, reliable foundation to build on top of.

[0]: https://gokrazy.org/


I used dietpi [1] for similar reasons: a slim version of Debian, and with the defaults set to push all the logging into ram to minimize writes. Dietpi has opinionated defaults, for sure, but it's easy to choose something else (e.g. Dropbear is the default ssh server, but bumping to OpenSSH is a matter of changing a setting in the handy config tool).

I've been running an RPi3 on dietpi on an SD card as my secondary PiHole instance on it for at least a year with no issues.

[1] https://dietpi.com/


Looks interesting! But it's not clear to me who is behind the project. Do you know if it's a commercial sponsorship, the Golang team / Google, or purely community-based open source?


There's no company behind it. It's a personal project by Michael Stapelberg.


The longest-running RPi I have has run continuously for over 5 years. The big secret was not to use the SD card at all[1]; I mount all file systems over the network to a NAS device (TrueNAS from iXsystems). It has a "UPS" in the form of a USB battery pack that is both charging from mains power and powering the Pi. When power goes out the battery pack takes over; it has about a day of "hold up" time depending on Pi power usage. It is hard-wired to the local network (it doesn't use WiFi).

I got there from having SD cards (nearly[2]) always be the failure point. Everything else has been pretty reliable when used within tolerances.

[1] It does "boot" from the SD card but that acts kind of like a third stage bootloader which loads and boots the "real" OS (FreeBSD) from the NAS device.

[2] I have had one fairly spectacular-looking "melt down" of a no-name USB wall-wart PSU, which appears to have also put something like 12V directly across the USB power pins (my best guess at what the secondary winding of the transformer in the wall wart was putting out on the 'low' side)


This covers the readonly filesystem, but doesn't cover the write protect flag that you can set on the microSD card itself[1]. The flag will configure the card's controller to drop any writes, and is thought to resist the corruption issues that can still occur even when the filesystem is readonly.

Also, creating a readonly root out of an existing distro is a bit of a pain; my preference is to use a distro (like TinyCore) that already has a readonly root.

https://github.com/BertoldVdb/sdtool


Half of the post is about SD cards, wear, data loss and reliability. Just use an SSD?


Before I opened the article I was thinking "Don't use an SD card. Don't use an SD card. Don't use an SD card."

People don't understand that 90% of SD card problems are power related (only relevant if you don't use the official power supply) and 10% are simply due to poor SD card quality.

People haven't gotten the message that a charger has lower QC standards since an interruption of the charging process does not shut the device down. A bug that leads to a few milliseconds of power loss will pass QC, but also corrupt your SD card.


Penny drops. Yes, this jibes much better with the observed data -- that RPis break SD cards orders of magnitude more often than anything else that uses SD cards, no matter how crappy said cards are.

Problem could have been solved by adding a decent sized capacitor on the 5v supply rail.
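A rough sizing sketch for that capacitor idea, from C = I·Δt/ΔV (the load, dropout duration, and allowed droop are all assumptions, not measurements):

```python
# Hedged sizing sketch: a capacitor big enough to ride through a brief
# brownout on the 5 V rail. All three numbers are assumptions.
I_LOAD = 1.0   # A — typical Pi draw
DT = 0.010     # s — interruption to survive (10 ms)
DV = 0.25      # V — acceptable droop before the SoC browns out

C = I_LOAD * DT / DV          # farads, from C = I*dt/dV
print(f"{C * 1e6:.0f} uF")    # -> 40000 uF
```

Forty thousand microfarads is a physically large part, which hints at why this is usually solved with a better supply or a small UPS rather than a capacitor alone.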


> that Rpi break SD cards orders of magnitude more than anything else that uses SD cards

I think if you take into account how often they're powered on you'd find cameras destroy flash memory cards comparably to Raspberry Pis.

> Problem could have been solved by adding a decent sized capacitor on the 5v supply rail.

Or you could just buy the Raspberry Pi branded power supply.


Are SSDs less prone to corruption than SD cards when power drops for those milliseconds?


I was expecting that to be tip #1 indeed. And it’s why I went to a Nuc eventually. Choose a nice ssd and it will really run for years and years.


Nucs also tend to win on performance, software compatibility/upkeep (yay commodity x86), price per performance, and often price outright (pis aren't really $35 if you get a newer/better model and once you buy storage/power/case). AFAICT pis only consistently win on power consumption, and that only just (x86 can get well under 10W, you just have to get the right model)


Do you have recommendations for a NUC to replace a raspberry pi that draws around 10W when idle?


Have a look at the Beelink SER5; it idles around 5W and it's quite affordable.

https://www.notebookcheck.net/Ryzen-5-5560U-performance-debu...


The Intel N95 and N100 NUCs are amazingly efficient.


And possibly one as silent as a fanless Pi 4?


Thin clients are also an option.

Similar power consumption, but a power supply and case/cooling/storage are included.

Depending on what you choose you get eMMC or SATA or M.2 SATA or M.2 NVMe for storage.

For anyone that is interested: https://www.parkytowers.me.uk/thin/


You can also use a Raspberry Pi compute module in an appropriate carrier. I use the CM4 for work and consequently it’s all I use at home too. Lovely versatile device that fits in whatever carrier you want (including one that mimics a regular Pi) and it has industrial-grade flash.


Do you have any recommendations for a carrier?


I use this minimal one. Check out what Waveshare has to offer too.

https://sourcekit.cc/#/


Thanks… my ideal would be a similar sized carrier with PoE and M2 support

Think I really want something like the Uptime Labs Blade https://pipci.jeffgeerling.com/boards_cm


What's a carrier in that context?


An IO board that adds the IO ports that the e.g. CM4 lacks. There's a decent range of them from different suppliers, but Raspberry Pi have one themselves called "Raspberry Pi Compute Module 4 IO Board".


Thanks !


Maybe professional bias but I prefer to follow a cattle vs pets approach here and treat them as disposable, meaning: let the sd card break and optimize for quick replacement.


NVMe SSDs cost the same as SD cards nowadays. You can treat the SSDs as disposable if you want, but you are going to quickly realize that this mindset isn't exactly actionable, because there is nothing that needs disposing of.


Current Pi 4s & 5s support booting from USB out of the box with no configuration (took them long enough), so I don't think it's worth the downtime and wasted SD cards.


I've been running a bunch of Pi's for years now, and the biggest problem I've had is the Pi itself dying: 24/7 usage is hard on a small device. I've also found that stable power is essential, and to that end I've always used 5v 3a branded power cubes, plugged into a pure sine wave UPS. Choice of micro-SDHC cards is important and I ended up getting ATP industrial cards (https://www.atpinc.com/products/industrial-sd-cards) - expensive but really long-lived. Finally, using RPi-clone (https://github.com/billw2/rpi-clone) on a regular basis has been a life-saver. I clone to Sandisk Extreme micro-SDHCs and can recover from an outage in minutes.


It also depends on how frequently you write to the SD card. My Pi 3B+ has been running for 3 years. Haven't considered upgrading since my needs are small.


It's worth noting that SD card firmware is usually optimised for the FAT32 filesystem, which has a very predictable access (specifically write) pattern, and using filesystems that have a more "free-form" layout can lead to lower performance and higher write amplification:

https://lwn.net/Articles/428584/


Good note. I wonder if wear leveling depends on that, since SD cards do local wear leveling compared to an SSD's global wear leveling. SDXC requires exFAT, so presumably the free-space bitmap is taken into account.


I have an original Pi 2012 running buildroot. It's a perfect fit. Need just enough to run a Linux kernel and ser2net for doing RS485 stuff with solar inverters. I think the image size was around 100MB and no volatile filesystem whatsoever.

Buildroot was surprisingly easy to use. Use a menuconfig to pick what you need and a burnable image for your SD card comes out the other side. Think I only spent an hour on the whole project.


I have a cluster of 7 SBCs: 1 Pi 3B and 6 Tinkerboards of various models. The Pi I got in 2016/17 ran GitLab without issue until 2022, when something on the SD card got corrupted and Linux was no longer able to boot; I was able to salvage all of the data and continued with a new SD card. The Tinkerboards, which have various tasks as VPN, Nextcloud, and staging servers using Docker, have never corrupted an SD card.

I think the Tinkerboards are better than Pis, especially the ones that come with 16GB of onboard flash storage. However you don’t get all the niceness of PiOS but have to use TinkerOS (which was less than barebones when it came out) or Armbian, which is nice but not built specifically for the Tinkerboard.

I have a few friends who complained about Pi’s corrupting SD cards and it also happened to my only long running Pi so there is something going on.


I had an older rpi model without wifi where the network adapter got broken.

Other than that, I've been running 3 rpi at home for home automation they have been working with no issue. But it turns out that they are a bit underpowered to be used as a minetest server.

I never had a corrupted SD card on those. But I had an Android Nokia phone that corrupted like 3 SD cards before I gave up and stopped putting in new ones.


My sound localizing Raspberry Pi installs a resilient base system as part of its install.

https://github.com/hcfman/sbts-aru

https://hackaday.com/2023/12/30/localizing-fireworks-launche...

With one command, for all Pis, for both Raspbian and Bookworm, it:

* Shrinks the file system (Gee, how does it do that with just one disk ? ;-) )

* Creates new partitions

* Installs a memory overlayFS

* Installs and configures the system as an audio recorder with microsecond time accuracy

* Updates /etc/rc to do a forced repair of the data and config partitions, in case they were damaged. This avoids system hangs waiting for human interaction with fsck

For the partitioning scheme it creates a swap partition, not as a wow but as an enabler if you really need it to install some large software.

It creates a small config partition. The idea here is that you keep it read-only and remount it read-write if you need to change config, then remount it read-only again.

And finally a data partition, which in this projects case is where the audio files are written.

I maintain a version of an overlayFS boot for the Pi but it needs revisiting for Bookworm. The easiest way to do this is to install sbts-aru and then just not use it. Then everything is done for you in one command. And that version works for all Pis.

I also do this for the Jetson SBCs. But I need to revisit this for the Orin series. I have it working here for myself and friends but need to update the installer. Note, due to kernel behavior changes with Orin the older Pi like overlayFS code will not work. But I solved this and will release it when I release the Orin release of sbts-install soon.

I’ve been using memory overlayFS like installs for years for long running Pi systems.


I'd consider enabling the hardware watchdog as well.

While one could argue that you should figure out why your device freezes in the first place,

nothing is worse than having to ask someone to power cycle your Raspberry Pi while you're away.
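On a systemd-based distro like Raspberry Pi OS, a minimal sketch is to have systemd service the SoC's hardware watchdog (the timeout below is illustrative; the Pi's watchdog maxes out at around 15 s):

```
# /etc/systemd/system.conf — hedged sketch: hardware reset if userspace hangs
[Manager]
RuntimeWatchdogSec=14
```

With this set, systemd pets /dev/watchdog while running, and the hardware forcibly resets the board if it stops.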


Yes. And I monitor my network link and automatically power-cycle the modem, if the worst happens. (Which is rare, but the network link has been the source of most of the few problems I've had.)

And use a wired network connection for anything important!


I've been running a Pi 4 as a home server for a few years. When boot-from-USB became available, I moved to that, with a good-quality USB thumb drive for boot/root (I've had SD card issues in the past, and also wanted speed, though I doubt it makes much practical difference). A couple of weeks ago I started getting intermittent disk errors. I thought it was the USB drive, so I cloned it (it still worked well enough on my laptop) to another drive. Same thing happened. So everything points to the USB controller glitching out. Have gone back to an SD card, and everything seems fine.


Have a look at your power supply. I had intermittent file system errors with cheap ones. I threw them all out if only for the fire hazard.


Using the official pi 4 power supply.


It is the time for mini PCs, as many have noted. I am getting one for one of my offices, instead of building or buying a desktop.

I used to build my machines and loved it. Some are still chugging along after over 10 years. They were worthwhile before the mini PCs reaching comparable capabilities at lower cost.

One thing I still cannot seem to find is a good site that compares various vendors in similar fashion as "PC builder" sites do of build components.

Any suggestions?


I have been running a Raspberry 2 cluster for 10 years.

A few weeks back the first SD card to fail got so corrupted it failed to reboot!

My key learning: use oversized cards, because then wear leveling spreads writes across more cells and each cell wears more slowly!

I'm going from 32GB to 256/512/1024!

That said "High Endurance" cards are a scam, they fail way quicker than regular cards!

All SD cards except SanDisk have latency problems. There is no competition.

If you can get SLC SD cards use them for workload instances = no db or file storage.


I'm still using a Sandisk 8GB microSD from 2008 running 24/7: smartphone expansion 6 years -> Orange Pi as a router for 3 years, Pi 4 running Frigate for 3 years.

I'm guessing it's SLC/MLC.

Had a Transcend 32GB in 2016 die after a year.

The biggest issue with set-and-forget setups is software upgrades, for security or other reasons; jumping major versions ends up breaking things. Compare that to the cloud, which is (usually) regularly updated.


All my SD cards are Sandisk in all my Pis. (2x B+ and a 4, and a Zero). Never a single issue in over 10 years of running at least 1 Pi 24/7.

Only buy Sandisk I guess!


For the life of me I don't really get it why Raspberry Pi Foundation does not include onboard eMMC or SSD storage in their non-compute modules products.

Yes with the new PCIe expansion in the latest RPi 5 you can have external SSD for example, but if you decided to use it for other purposes as well like extra Ethernet port expansion then you cannot use it for booting anymore.


> For the life of me I don't really get it why Raspberry Pi Foundation does not include onboard eMMC or SSD storage in their non-compute modules products.

Cost. It's always cost.

If the eMMC or SSD storage is not enough to hold a general-purpose OS for all users, then only some users get the value. And if it is big enough, it puts the cost of the machine up above the point at which they feel happiest, when an SD card is perfectly fine for the majority of their target users.

Eben Upton is regularly on record talking about how the cost-per-component/users-per-component tradeoff leads them to avoid adding a component, and has motivated removing some (composite video, for example).


It's obviously about cost.

It's curious that there are people who need this, are aware of the compute module, and still complain about it. Is it the availability of the compute module that's the problem?


And here is me, who has been running a Pi 1 in my cellar for 10 years straight. It logs all my temperature sensors over 433MHz and triggers my door openers via a physical relay over WiFi, without my doing anything special. Only after some years did I connect it to a UPS, after the SD card filesystem died following some power outages...


Going to plug my own side project from the past here: https://cattlepi.com/ https://github.com/cattlepi/cattlepi/blob/main/README.md

Been running pis (mostly 3b+) for eons with this solution and at this point i can say it's bulletproof.

The key is to minimize SD card wear and tear (it uses an overlay filesystem with squashfs as the base and tmpfs as the writable top layer) and to keep zero state on the device. You can build the image starting from normal Raspbian. You can also update it over the network.

As far as usage in the wild, the largest "deployment" (that I know about) is at around 1000 Pis.


Buy a decent SD card and overprovision the space, and wear leveling will take care of the rest. I have a Pi with a 128GB SD card and a 32GB filesystem on it, running for 6 years straight without problems now. No need to disable logs; just disable debug logging so you won't generate gigabytes a day.


Also: consider using overlayfs to make root fs read-only


They mentioned making the root fs read-only which (if they are using the config tools) does use overlayfs:

https://raspberrypi.stackexchange.com/questions/124628/raspb...


A few people have mentioned achieving long uptime. What is often overlooked is that it is possible for uptime to be too long.

It is quite possible for updates to not break a running system but make it so that it will break on the next reboot. E.g., a dynamic library gets updated in a way that breaks a server process. It doesn't affect the running server because it still has the old library loaded.

Next time you boot your server process doesn't start.

These kinds of problems can be annoying to deal with, especially when your system has an uptime of years and, for all you know, whatever change broke it could have been in any one of dozens of updates you've applied over that time.
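One hedged way to spot this before a reboot is to look for processes still mapping shared libraries that have since been deleted or replaced on disk; Linux marks such mappings "(deleted)" in /proc/*/maps. A minimal sketch (Linux-only, works best as root):

```python
# Hedged sketch: find processes still mapping deleted .so files —
# i.e. services running against a library an update has replaced.
# These are the ones that might not come back after the next reboot.
import glob

def stale_lib_pids():
    """Return sorted PIDs whose memory maps reference deleted .so files."""
    pids = []
    for maps in glob.glob("/proc/[0-9]*/maps"):
        try:
            with open(maps) as f:
                if any(".so" in line and "(deleted)" in line for line in f):
                    pids.append(int(maps.split("/")[2]))
        except OSError:
            pass  # process exited, or we lack permission to read its maps
    return sorted(pids)

if __name__ == "__main__":
    print(stale_lib_pids())
```

Tools like Debian's `needrestart` do a more thorough version of the same check.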


How do you deal with that?

Probably one should have a canary system that is rebooted every day. In a home setup we usually don't have either the spare machine or the spare time to deal with it, or both.


The easiest way is to reboot after updates. Most updates are not urgent so unless you are having some problem that you hope the update will address wait until a time when it is OK to reboot. Then install the update and reboot.

If there is an urgent update that you need to apply at a time when rebooting would be an issue, apply the update and leave yourself a reminder that the system is running a configuration that you don't know is bootable. When a reasonable time for a reboot comes around, take it.


Me personally, I always reboot after every update. I'd rather deal with the break now while I'm working on it, rather than at some random point. Then I'll know exactly what changed.

I take it a step further. I usually reboot before updates, too. Make sure I won't chase updates as an issue if something else broke.


> This is unlikely to do anything unless you’re hitting some unusual bug, but it’s worth noting that IPv6 has, in the past, led to all sorts of strange behaviors in different networking contexts.

This makes me sad. I don't doubt that there are scenarios in which having IPv6 connectivity makes things worse, but these days, the opposite is more common, so I don't think "disable IPv6 just in case" is a good blanket recommendation to make anymore.

"If disabling IPv6 fixes your issue consistently, consider disabling it" would achieve the same outcome, without potentially causing problems/inefficiencies down the road.


I agree with this. While I’ve had various issues with IPv6 a decade or so ago, nowadays I’ve never found disabling IPv6 to be a solution.


Step 1 is to question why you're using a Raspberry Pi. It's almost never the correct answer.

If there's a really really good reason, step 2 is to get rid of the SD card. Personally, none of mine even have an SD card inserted (ok, one does). I use network boot/NFS for everything. Some people attach other kinds of SSDs.

The one I lied about is a reverse telnet server that has been quietly doing its thing for 4 years without a hitch, apart from the time I had to replace the (PiJuice) backup battery because it was looking a little swole. But I should take the time to at least have a backup of the card ready to swap out when it fails.


There are three uses for a Pi. One is as a cheap computer, but the Pi is never good at that, and mini PCs have gotten cheap. Two is IoT; an ESP32 is a good choice for simple things and a Pi Zero for bigger things. Three is small servers and dedicated computers. The Pi is perfect for that: it is cheaper than mini PCs, smaller, and uses less power.

Lots of people in this post have mentioned uses like that. I want to make an outdoor ADS-B receiver, and a Pi will fit in an enclosure and be powered by PoE. I want to make a GPS time server, and the Pi has a PPS input on the header. I want to make a portable ham radio box, and the Pi can be powered by 12V DC.


I’m trying to make a smart speaker where you push a button to talk and when you release it sends the audio to a server for transcription and response. It also has to do some basic logic with LEDs. I also want it to be always on and available as long as it’s plugged in. Do you have any advice on what might be a better alternative to a Pi? This is basically my first foray into hardware so I’m trying to learn as much as possible!


Another nice thing you can do, if you use the multi-partitioned memory overlayFS approach I mentioned earlier, is make /var/lib/docker a symlink to your read-write data partition. Obviously you are going to have problems using Docker with the standard memory overlayFS approach.

Also nice: you can make your data or other partitions encrypted. I've done this before. On the Pi 5 you can use the standard encryption as there's hardware support. On earlier Pis you can use the encryption used for Android. This does mean there's a manual step in the startup for you to enter your encryption password.


Their suggestion for log2ram does help for the most part but depending on what the pi is running, even that doesn't completely solve it. I've burnt out a number of microSD cards running a Pi-hole instance. I finally gave up and moved it to a tiny x86 with a SSD.

Yet I've been feeding ADSB-Exchange and FlightAware from a Pi Zero for years and never had SD card problems.

I really like Alpine Linux for the Pi, running in its own read-only mode where changes must be committed to disk. But unfortunately, Pi-hole isn't compatible with Alpine (at least last time I checked).


I use a USB connected SSD as boot. Been running Home Assistant and Pihole for years with zero issues.

EDIT: Also, make sure the power supply is sufficient. I was using a cheap adapter and had random errors and reboots.


Cheap adapters are a killer. I switched to a good 50W USB charger which powers a couple of Raspberries. Zero problems since then.


I have a Raspberry PI 1 model B running almost non-stop for over 8 years (except during power outages) serving as my front gate controller. Custom python code uses the GPIO pins to trigger an RF remote to open or close the gate, and does not write to SD for logs/etc (to prevent SD wear)

I use a USB network card, and a high-quality SD card. Other than that, no other special configs (except for another spare SD card with a full system image clone).

Rock solid performance and uptime over 8 years.


Also no mentioning of the power connector? I have too little experience with USB-C, but the micro USB connector used on early Raspberry Pi's is just asking for trouble. That might be (barely) good enough for charging, but a computing device w/o battery-backup won't take lightly the power interruptions when jiggling the cable a little. I finally got around to replace it with an old-fashioned (time tested!) barrel connector. Easy way to improve the robustness significantly.


As with my DSLRs/mirrorless cameras (particularly the ones that use the wretched USB mini) when tethering I have ended up with a right-angle adapter cable secured with a cable tie (or the super-strong Tethertools Jerkstopper), and a USB micro cord into that.

USB micro is designed to snap at the connector, not the board. And the connectors will indeed snap.


Anything long running shouldn't happen over that piece of crap that WiFi is. Data centers aren't built over WiFi. Neither is my network at home.

I've been running RPi for years (including a VoIP server on a RPi 1!). The two tricks if you want a really long running Pi are: SD cards mounted read-only and ethernet.

FWIW I've got a Pi running the unbound DNS resolver and it just works. It is not an art. There's a reason businesses are scooping up millions of Pis: they just work.

P.S.: I've got an army of NUCs too.


The ethernet card of my 1st rpi broke :D


Around early 2019 I set up a raspberry pi 3 running Raspbian. I made the /var/log partition a ramdisk. Haven't touched it since. It goes down for power outages but probably has been out for a couple minutes total over five years (aside from power outages). Most of its job is to translate analog audio to a USB speaker system. Whole house audio for about $150. Anyway, I never touch it, it just works, all the time.


Do people really have such a neurotic aversion to the sight of cables that they'd rather struggle with workarounds to an always-inferior wifi than just using an ethernet cable? It seems unhealthy to get that stressed out over the sight of a 3cm wide cable.

I've had an RPi4 running continually for close to a year and never had a single network connectivity issue between the RPi4 and my router, connected via a 30m cat5 cable in my 75m² condo.


I got one of those super low profile USB thumb drives and then set the Pi to boot from that. It ran automation here in my house for 2.5 years without a blip.


I have a few long-running Pis - and have been keeping them up for a decade now. No SD card corruption ever, and I got close to 1000 days uptime on one.

The biggest problem is loss of wifi: after a few months one will lose wifi but keep working. It's constantly recording data, so a reboot is not a good idea. I'd prefer a solution where I could just reset the wifi, but all my attempts to script that reliably have so far failed.
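One hedged sketch of scripting a wifi reset without a reboot, assuming Linux's /sys/class/net interface, an interface named wlan0, and the `ip` tool being available (run as root, e.g. from cron):

```python
# wifi_watchdog.py — sketch only; interface name and paths are assumptions.
import subprocess

def link_is_up(operstate: str) -> bool:
    """True if the kernel's operstate string reports the link as up."""
    return operstate.strip() == "up"

def check_and_restart(iface: str = "wlan0") -> None:
    with open(f"/sys/class/net/{iface}/operstate") as f:
        state = f.read()
    if not link_is_up(state):
        # Bounce the interface instead of rebooting the whole Pi.
        subprocess.run(["ip", "link", "set", iface, "down"], check=True)
        subprocess.run(["ip", "link", "set", iface, "up"], check=True)
```

Scheduled every few minutes from root's crontab, this only touches the interface, so in-flight data recording on the Pi keeps running.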


I've got a few in factories (doing non-critical stuff) at work. They auto reboot regularly because otherwise they'll lose whatever's on USB (reading those USB metering devices being the whole point of the Pis).


I can appreciate the effort and time put into this, but seems completely overkill. I’ve been running 4x Pi 4B’s in a Kubernetes cluster for over 3 years 24/7/365. Only thing I did is disable swap (which you should do for Kubernetes anyway) and used SanDisk Extreme 128GB cards[1].

[1] https://a.co/d/ipehodH


having used an original model b (rev 2) since 2013 with running 24/7 for about 80% of the time, i feel that a lot of this depends on how much you want to push your pi.

for context, my workload was bursty in nature and nowadays it is mostly idle. but i never had any issues with the SD card, and i only upgraded to a new one for getting more capacity all this time (only twice).

keeping the dust out and managing the temperatures would go most of the way. i have served files directly from the sd card but it is always better to mount an external drive for this, while providing enough power to the board to power usb devices. limiting debug logs for stable applications can also help avoid write cycles, but using sd card on a pi has been a similar workload to using one in older smartphones for storing media.
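Limiting log writes, as suggested above, can also be done at the journald level; a minimal sketch assuming systemd-journald (the values are only examples):

```
# /etc/systemd/journald.conf — keep the journal in RAM only
[Journal]
Storage=volatile
RuntimeMaxUse=32M
```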


When I worked with that, we had the SD card entirely read-only, and a USB stick mounted for writing. I think the RPi would detect a broken file system on boot and reformat the USB stick. It also made it easy to pull the USB stick and format it in any Windows PC to reset it.


I’ve got 2 RPis that have been running for 4+ years each. Only thing I do is update them once in a while. Both have workloads running 24/7, no issues. I have however experienced issues in the past, but those were due to a faulty power adapter not keeping voltage within spec.


Haven’t followed any of these tips, yet I have barely stumbled upon any issues with my long-running Pi.


All the SD problems go away if you get a RPi Compute Module instead. Well worth the additional cost.


How do the problems go away? Is the lifespan of the built in eMMC longer compared to a high quality wear leveling sd-card? I am a layman, but isn’t the problem the flash which should be basically the same?


An SD card uses contact pins to make the connection, which is susceptible to dirt and vibration. eMMC is soldered to the board, so you no longer need to worry about environmental and mechanical disturbances.

The eMMC has a known supply chain and is guaranteed for 1 million write cycles that are then wear leveled. The SD card could be coming from anywhere, or a counterfeit with who knows how many write cycles and wear leveling.

The CMs are built for industrial use and I've shipped a few products based on them, shipping thousands of units. They have had zero flash failures.


Thanks for elaborating, valuable advice.


Not even NOR SLC flashes are regularly specified for 1 million erase cycles. And common MLC/TLC eMMCs certainly are not. What are you talking about? :)


Admittedly, I only have one long-running Raspberry Pi, but it's currently sitting at a few months uptime. And that was an intentional reboot. I've never had to take any measures like these in the four or so years I've had it up.


i very much enjoyed these articles even if i don’t use raspberry pi anymore. it has a very "research notes cleaned up for sharing" feel and reminds me of some of my team's internal articles, particularly on research the author wasn’t sure on but wanted to put out there regardless. it was very easy to get their decisions and the limits of their research and knowledge, so i had a pretty clear idea what i'd still need to check on my own and what i might adopt if i were to use rpi devices.

i don’t really get what to use a rpi for but i guess not important, it was just a nice series of articles


I can't do without my https://en.wikipedia.org/wiki/LibreELEC RPI.


As far as I know (and my knowledge only goes up to version 4), the RasPi does not support different power states or sleep modes. It would be interesting how it compares in power consumption to other solutions.


I wonder if Raspberry Pi OS comes configured not to swap on the SD by default …


Anything long running should have ECC RAM, which the Pi doesn't have.


This is only a concern for workloads where the loss of contents of memory is a significant problem beyond the need to reboot, no?


All my home Pis network boot, so there is no card to fail. You can also change what OS they boot into by just renaming a symlink on the server and rebooting them. Very convenient.
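The symlink switch described above can be sketched in a few lines; everything here (the TFTP root layout and the per-serial link names) is an assumption about one possible setup, not necessarily the commenter's actual one:

```python
# set_boot_os.py — hypothetical sketch: each Pi netboots from
# <tftp_root>/<serial>, a symlink pointing at one of several OS dirs.
import os

def set_boot_os(tftp_root: str, serial: str, os_dir: str) -> str:
    """Repoint the Pi's boot symlink at os_dir; returns the new target."""
    link = os.path.join(tftp_root, serial)
    target = os.path.join(tftp_root, os_dir)
    tmp = link + ".tmp"
    os.symlink(target, tmp)   # build the new link next to the old one
    os.replace(tmp, link)     # atomic swap; then reboot the Pi
    return os.readlink(link)
```

The rename-over-symlink step is atomic, so a Pi that happens to boot mid-switch sees either the old OS or the new one, never a missing path.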


I'm curious what you're using all these Pis for when you've got a server on the network to provide network boot. Can you not just use the device that's providing network boot to provide whatever you're running on the Pis?


No, because the server is in the wrong place. Some Pis are connected to televisions and play videos, or emulated games. One is connected to my hi-fi. One has a GPS receiver on it for NTP. One is a photo frame. Basically, the Pis exist to interface with the real world.


I've been plagued with the wifi problem since changing routers. Devices on the local network will randomly lose the ability to connect to it, but everything else is fine.


I added a cron job to one Pi that checks if wifi is up and tries to restart it if not.

I also had an LG monitor that had so much feedback it would disable the WiFi interface completely. So I would check your monitor, if the pi is connected to one.


I don't think I took many special precautions and a Pi I had running as a VPN server survived for about 10 years, aside from a power supply failure or two.


I don't understand what's hard about adding like 256MB of flash soldered to that thing, I'd pay $5 more if it did.

Every time there's an RPi I cry about it.


Judging from a sampling of your comment history, every time someone points out that there's a Raspberry Pi compute module option that comes with flash memory.


No mentioning of flashybrid? I'd thought that's the obvious solution to SD wear (or rather the danger of SD corruption on sudden power loss).


Too late to edit, but I meant to refer to the Debian package of that name, not an actual flash/spinning rust hybrid storage device.


My experience with long-running Raspberry Pis is that there are USB problems. One sometimes loses the connection to these devices.


Not a single mention of netbooting the Pi instead of relying on crappy SD cards which wear faster than I can type this.


> Your SD card can wear out or completely fill up

Why is the author not considering using an SSD instead of an SD-Card here?


Not sure about the SD cards he recommends. Use Swissbit SD cards. But they will cost you.

Mouser is your friend.


I run my RPi's 24/7 from initramfs. I can even remove card after boot.


Would you send them out to customers relying on them?


Yes. They use them like that.


I use OpenWrt whenever it's possible, to avoid SD card wear completely.


Alpine in diskless mode and an ethernet cable solve 2/3 of that.


Does that have a hardware stall timer?


eventually you'll think about power outages. i had a pihat battery nearly explode on me. beware!


> Keeping a Raspberry Pi online and working with zero intervention for weeks, months, or years is somewhat of an art form.

What are you talking about?

It is literally zero effort. Just set up crontab reboots in case of power outage. That's it.

I've had a pi running as a BLE gateway / security cam for over 4 years with zero intervention.
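A hedged sketch of that crontab setup (the schedule, paths, and the service name are all hypothetical examples):

```
# /etc/cron.d/pi-maintenance — adjust schedule and paths to taste
# Reboot nightly at 04:00 to clear any wedged wifi/USB state:
0 4 * * *  root  /sbin/shutdown -r now
# Restart the workload after any (re)boot, e.g. following a power outage:
@reboot    root  systemctl start ble-gateway.service
```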




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: