Hacker News
I finally found a use case for my Raspberry Pi Model B+ (ounapuu.ee)
140 points by hddherman on Nov 1, 2022 | 150 comments



I turned my original RPi (I think it was the one that launched in 2012?) into a noise machine in my bedroom, thanks to scripts from a friend. It dutifully starts rain noises in the late evening and stops them in the morning. The original one my friend made does a better job at fading the audio over time (something is wrong with my ALSA config), and also switches to birds chirping in the morning to wake him up.

It's a great use-case for it as it's not very demanding, and essentially doesn't require networking aside from NTP to keep the clock up to date.


I made a nice brown(ish) noise generator for my bedroom. I use it at night and sometimes during the day when I want to meditate, or something. It uses an ATtiny85 to generate white noise, which I filter through a resistor-capacitor (RC) low-pass filter. I put it in a mounting I made and used varnished balsa wood to make a nice box for it.

ATtiny85s are dinky little chips to use.

It's instant-on and off, and I don't have to worry about corrupting anything, or some upgrade quirk like I do with the RPi. I was still using my original RPi on a permanent basis until recently. RPis are not obsolete if your requirements are modest. Try running X-Windows and Firefox and you're in for trouble, though.
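The comment doesn't say how the ATtiny85 generates its white noise, but the classic cheap technique on small microcontrollers is a linear-feedback shift register (LFSR). A minimal Python simulation of that technique, as a sketch:

```python
def lfsr_noise(n, seed=0xACE1):
    """Generate n pseudo-random bits with a 16-bit Galois LFSR
    (taps 16, 14, 13, 11 -> feedback mask 0xB400), a common cheap
    white-noise source on tiny microcontrollers."""
    state = seed
    out = []
    for _ in range(n):
        bit = state & 1          # output bit is the LSB
        state >>= 1
        if bit:
            state ^= 0xB400      # apply feedback taps
        out.append(bit)
    return out

bits = lfsr_noise(10000)
# A maximal-length 16-bit LFSR walks through 65535 states, so the
# bit stream is close to balanced between ones and zeros.
ones = sum(bits)
```

On the real chip you would clock this in a timer interrupt and put the output bit on a pin, then let the RC filter smooth it into audio.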


Thanks for giving me ideas for my own pile of ATtiny85's I have lying around :)


You're welcome. I did a write-up some time ago: https://mcturra2000.wordpress.com/2019/09/03/brown-noise-sle... AFAIK the schematic is accurate. Use the bottom one, because I do like the filtering. White noise tends to be too harsh for my liking. Although it is one of my earlier projects, it has proved to be a good and enduring solution (for me, anyway).

It has since acquired a snazzier encasement using balsa wood. I bought some wire mesh for the front of the speaker, to give it protection.

If you want to do day/night timing (although I personally think that's overkill), you could think about adding a DS3231 clock. I recommend steering clear of those really cheap Chinese modules (DS1307, if memory serves). They're a complete waste of time.
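For reference, the RC low-pass filtering mentioned above has a -3 dB cutoff of f_c = 1/(2πRC). The component values below are illustrative guesses, not taken from the linked schematic:

```python
import math

def rc_cutoff_hz(r_ohms: float, c_farads: float) -> float:
    """Cutoff (-3 dB) frequency of a simple first-order RC low-pass."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

# Hypothetical values: 10 kOhm and 100 nF give a cutoff near 159 Hz,
# which rolls the harsh hiss off and leaves a browner rumble.
fc = rc_cutoff_hz(10e3, 100e-9)
```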

Have fun!


Excellent, thanks again!! This may just be the inspiration I've needed to start playing around with hardware again. I've mostly got all the parts I need to get going, just need to find the time :)


Can you discuss the setup more?

Are you using speakers or headphones? If headphones, wired or wireless? If speakers, where do you place them and is strange sound localization an issue?

I recently tried sleeping with inexpensive Bluetooth headband headphones, which was a great experience. I liked the feel around my head, but am curious about the speaker option.


I'm using a cheap "can" bluetooth speaker, nothing fancy. Because it's mostly just noise from the rain, and the speaker is cheap, there aren't any sound localization issues I'm aware of.

My friend was using a much nicer stereo setup for much more realistic rain sounds.


I also still have an OG one for similar purposes (shifts room lights to orange in the evening, brightens them up blue in the morning, plays music and radio on a schedule). It stays up uninterrupted for months at a time; the SD card got corrupted maybe once or twice in over a decade.


If anyone else is just looking for the noise generator, there's one built into iOS now, so you can use your iPad or iPhone. It's under Accessibility > Audio/Visual > Background Sounds. It's ostensibly for tinnitus, but as with all good accessibility features, others can use it too.

I might use it, but instead my wife and I just turn the air purifier to maximum to get a lot of brown noise.


Whaaaaat, this is awesome, I had no idea!


Do you run it through a stereo or a small speaker?


Every time I've tried to use a Pi as a long-lived server, the SD card ends up getting corrupted. I switched to a Chromebox running Kodi for my media server, running 24/7, five or six years ago, and haven't had a single problem with it.


SD card corruption is a big problem with the Pi, but it's easily solved: enable the read-only overlay for your boot/root filesystems. This is built into Raspberry Pi OS: start raspi-config and go into Performance Options.

Every month or two, disable the overlay (again using raspi-config), reboot, apply updates, re-enable the overlay, and you are good to go.

Also, if you are using a Pi 4, switch to booting from a USB stick and just forget about the SD card.


They also support netbooting now, I believe. I've been meaning to try it out so I can fully host my RetroPie on my NAS.


Thanks! This is handy to know. All the guides I've seen earlier involved a lot of manual config.


When did that option appear? That is great info.

(I've been running openwrt on a pi just because it had an overlay filesystem - I could run raspbian now)


I had trouble with that on the original models, even with a good (or at least official) power supply (many reported the issue was worse with cheaper USB power sources).

Never had trouble with the 3 that was a Kodi box and latterly has been used for some electronics experiments, the 4 that is my current Kodi box, or the 400 that has been playing as router+firewall+VPN since early this year (at the time getting a 3 or 4 for the job would have been either expensive or near impossible, and I didn't want to try a potentially less well supported option, but I spotted a nice offer on a 400).


You (or the default config) are probably doing "something stupid", like atime updates, or something that writes continuously to the filesystem, like logs, etc.

(I'm not blaming you; it's hard to have a purely read-only FS, which would help a lot - and a standalone RPi doesn't need a lot of the stuff that comes by default.)


In addition to the other helpful suggestions you've received, look into using log2ram. It does what it sounds like: it keeps log writes in RAM and then writes them to disk on a slower cadence that doesn't work your SD card as much.

https://github.com/azlux/log2ram


I have an RPi 4, installed almost 3 years ago on a 64GB SanDisk SD card. I also have external storage attached. Still alive with tons of Docker services.

I have /var/log and /var/tmp on tmpfs, though.
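For anyone wanting to do the same, the tmpfs mounts are a couple of fstab lines; the sizes here are illustrative, not the commenter's actual values:

```
# /etc/fstab -- keep high-churn paths in RAM to spare the SD card
tmpfs  /var/log  tmpfs  defaults,noatime,nosuid,size=64m  0  0
tmpfs  /var/tmp  tmpfs  defaults,noatime,nosuid,size=32m  0  0
```

The trade-off is that logs vanish on reboot, which is usually fine for an appliance-style Pi.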


Yeah, I won't buy it until it has a bit of flash memory on it.

And I don't need much: just 32MB would be enough. I would just boot Linux using a read-only SD card.

It's a major flaw, and I don't understand why they haven't fixed it yet.


There are Pi CM4 SKUs that have onboard eMMC flash - I have one that I use on a carrier board that also has a SATA controller and two SSDs connected.


If you have a workload that writes data quite frequently, then moving that data off to an external SSD is also a good way to extend the lifetime of the SD card.


You can boot directly off the SSD; I don't even have an SD card installed.


That's true, I forgot that was a feature they supported with the newer Pis. It doesn't work for the Pi 1: you can move the root partition to the SSD, but anything needed for the initial boot still has to live on the SD card.


I started using a USB 3 NVMe adapter instead of an SD card for my always-on Pi 4, and haven't had any issues since.

It boots in about 10s now too.


Try a BeagleBone Black.


I also recently found a use for my RPi 3B. I bought a $30 USB SDR receiver [0] and started collecting ADS-B [1] data and uploading it to ADS-B Exchange [2]. I have been collecting flight data for aircraft flying overhead for a few weeks now and it is super interesting to see what kinds of aircraft are flying around me. There are a lot more military planes than I would've expected.

[0] https://www.adafruit.com/product/1497

[1] https://www.faa.gov/air_traffic/technology/adsb

[2] https://www.adsbexchange.com/
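ADS-B identification frames are simple enough to decode by hand, which is part of why this hobby works so well on a Pi. The sketch below pulls the downlink format, ICAO address, and callsign out of one raw hex frame; the sample message is a widely circulated example frame, and real feeders would use a library such as pyModeS (plus CRC checking) instead:

```python
# 6-bit character set used by ADS-B aircraft-identification messages
# ('#' marks unused codes, '_' stands in for the padding character).
CHARSET = ("#ABCDEFGHIJKLMNOPQRSTUVWXYZ" "#####" "_"
           "###############" "0123456789" "######")

def decode_ident(msg_hex: str):
    """Decode DF, ICAO address, type code, and callsign from a 112-bit
    ADS-B identification frame given as 28 hex characters."""
    df = int(msg_hex[:2], 16) >> 3        # downlink format: first 5 bits
    icao = msg_hex[2:8].upper()           # 24-bit transponder address
    me = int(msg_hex[8:22], 16)           # 56-bit extended squitter payload
    tc = me >> 51                         # type code: top 5 bits of ME
    chars = [(me >> (48 - 6 * (i + 1))) & 0x3F for i in range(8)]
    callsign = "".join(CHARSET[c] for c in chars).strip("_#")
    return df, icao, tc, callsign

# Well-known sample frame: a KLM flight broadcasting its identity.
df, icao, tc, callsign = decode_ident("8D4840D6202CC371C32CE0576098")
```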


We did this for a while in our house, but with Bittorrent/Resilio Sync on later Pi models, and it was a pretty miserable experience. I/O is terrible on most models (which is somewhat important for storage). IIRC there is no acceleration of crypto primitives, so AES encryption used by Resilio Sync and Syncthing is slow. Finally, we found that even with a proper power supply, the Pi does not really provide the stability you'd want from a storage node.

We ended up replacing the Pi(s) with an Intel NUC, which was just over 100 Euro in 2018 without memory and storage. For relatively low cost, you get SATA speeds (or NVMe if you are willing to spend a bit more) and an AES-NI capable CPU. Our current NUC has been humming along for 4 years.

Ps. Syncthing is not backup, even with a read-only node. You are just one Syncthing bug or sysadmin failure away from erasing your data. Having the files on btrfs on a single SSD only makes it worse.

Just get some redundant block storage somewhere that supports object lock (to prevent accidental deletion or ransomware encrypting all your files) and do incremental backups with something like Arq or Restic. There are some good services where 100GB storage costs $1 monthly.


This isn't the only backup I have of the data, it's just another copy of the data. BTRFS snapshots should be enough protection against the ransomware use case as well, assuming I discover it fast enough. And if I don't, I still have offline backups.


With a cloud solution, how are you not one bug or sysadmin failure away from data-loss or leaks?


You avoid backup software bugs by using object lock. Even if your backup software messes up, it cannot remove older backups. On the cloud-side, sure bad things can happen, but a data loss event through a sysadmin or hardware failure is far less likely to happen on e.g. Amazon S3 or Backblaze B2 than with a home Raspberry Pi with a consumer-grade SSD taped on top of it. And you are probably backing up to at least two different destinations (at different companies) if you care about your data.


> Even if your backup software messes up, it cannot remove older backups.

But what if that has a bug?

Multiple write-once DVDs are clearly the only option per your logic.

You also can't just add on backups indefinitely or your costs will also approach infinity given enough time. There has to be a mechanism for deleting things, be it on DVD or on object lock cloud hype.

Personally, I think a backup is just that: a reserve copy. It should be reliable, but so long as you test your backups regularly, you can be confident that there won't suddenly be a bug when the primary copy fails. I, too, like to have two independent backups instead of one (it happened to me before that, due to a niche mechanical failure called "little brother", an external backup drive failed very soon after the primary), but saying one shouldn't use normal sync software "because it's one bug/misclick away from erasure" is silly. There can always be bugs and misclicks. They even mentioned using a read-only mode. It's a matter of how certain you want to be, and most people don't have any (automated) backups in the first place. Syncthing or similar software wouldn't be (isn't) my choice of backup software either, but I wouldn't dismiss a simple solution that works fine for them.


> You also can't just add on backups indefinitely or your costs will also approach infinity given enough time. There has to be a mechanism for deleting things, be it on DVD or on object lock cloud hype.

Object locks have a configurable expiration date.

> But what if that has a bug?

Again, this is yet another typical HN discussion. We are now comparing a consumer-grade SSD taped to a Raspberry Pi without ECC memory to a theoretical bug that might be in S3's or B2's object lock implementation. They have stored petabytes of data and there are virtually no reports of data loss ever, nor has anyone bypassed object lock, even though it's a high-value target.


Depends on your cloud solution, but OneDrive has version history so you can just roll it back.


Unless OneDrive silently corrupts your files upon upload: https://github.com/OneDrive/onedrive-api-docs/issues/1577


Btrfs has snapshots too. That doesn't rule out bugs in it.

Making something someone else's problem doesn't magically make the problem go away.


Are we really having a debate on whether cloud storage with proper redundancy, reliable hardware with ECC memory and dedicated sysadmins is as vulnerable to mishaps as a consumer-grade SSD taped to a Raspberry Pi?

I hope we can at least agree that it is 3-4 orders of magnitude more likely for an SSD with btrfs and no RAID attached to a Pi to lose data within a given time period than, say, S3?


Hetzner's storage boxes provide native borgbackup support, I've found it very useful as an offsite backup store.


Can you share an example of what might be a good service?

Re ransomware, how do you prevent it from deleting remote backups?


E.g. Amazon S3 and Backblaze B2 support this.

> Re ransomware, how do you prevent it from deleting remote backups?

It can't, locks on objects cannot be removed in compliance mode. They can only expire.

https://www.backblaze.com/b2/docs/file_lock.html

https://docs.aws.amazon.com/AmazonS3/latest/userguide/object...
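For concreteness, an object-lock upload boils down to two extra parameters on an S3 PutObject call. The helper below only assembles those parameters (bucket name, key, and retention period are made-up placeholders); the parameter names match boto3's `put_object`:

```python
from datetime import datetime, timedelta, timezone

def object_lock_params(bucket: str, key: str, retain_days: int) -> dict:
    """Build the extra arguments that make an S3 object version immutable.

    In COMPLIANCE mode, nobody -- not even the account root user --
    can delete or overwrite the locked version before the date passes.
    """
    return {
        "Bucket": bucket,
        "Key": key,
        "ObjectLockMode": "COMPLIANCE",
        "ObjectLockRetainUntilDate":
            datetime.now(timezone.utc) + timedelta(days=retain_days),
    }

params = object_lock_params("my-backups", "2022-11-01/archive.tar.zst", 90)
# Then something like: boto3.client("s3").put_object(Body=data, **params)
# (the bucket must have been created with object lock and versioning enabled)
```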


Write-Once-Read-Many and retention times, for example.


There can still be bugs such that you can write a second time. Much of the time, the write-once behavior is just a software driver that won't write a second time, so if you can write new firmware of your own design, you can erase. Even if the media is physically not erasable, you can still write all ones over the top of what is there, corrupting the data, which is the same result.

Of course, finding the bugs in firmware and writing a custom replacement is not trivial. It is conceptually possible, though.

The only safe answer is a device in a vault without power. Of course once you retrieve it from the vault you risk whatever erased your data in the first place returning to get this too.

Good luck, you get to decide how paranoid you want to be.


We are talking about object lock as supported by something like Amazon S3 or Backblaze B2. If you find a way to override them, you probably want to apply for a bug bounty, since that'd be a huge security issue.

It works really nicely, I can't even remove my own data with my own credentials. It's really a nice additional security layer.


> If you find a way to override them, you probably want to apply for a bug bounty, since that'd be a huge security issue.

Big security issue, and honest people will just apply for a bug bounty or otherwise do responsible disclosure. Most honest people are not actively looking for such bugs, though, so evil people are more likely to find them. An evil person can make even more money from a successful ransom. (There are also honest people looking, but many wrote the code, so they may be too close to the problem to see it.)


This single core is more than enough to be used as a PIC/EEPROM programmer, or for controlling any other I2C/SPI/UART appliances, like switches, thermometers, particle detectors...


This is the winning answer. For most other applications it's either "It can do <thing> _slowly_" or "It can do <easy thing> but is overkill".

Many people forget that the Pis have a wealth of peripherals accessible on the GPIO headers.

I suppose those interested in EEPROM programming and IIC, SPI, and UART probably know this though, and probably (like myself) have dedicated devices for that task.

Still, I have about 10-15 Pis from B+ to 4B sitting in drawers; the Pi Zero (2)s are the devices I find the most use for. I'd love to bring some back into commission, but OP's solution isn't it for me; I have powerful hardware with a plethora of storage and redundancy for that.

Once upon a time, with my OG Pi, I had a GSM "HAT" (before they were called that), and wrote a little API in C/C++ that would text me, so when my UPS detected power down/power restored I'd get a text, followed by another with my new IP address (because it wasn't static). I still have the Pi and "HAT" but don't really need the SMS part anymore.


> "It can do <easy thing> but is overkill".

"Overkill" tends to disappear when you consider the cost of your time. You are not penalized for not using all, or even most, of a platform's capability. Silicon is cheap, programmers are expensive.

The Pi shines at doing complex tasks that involve physical I/O when the device is manufactured in low unit volumes. E.g., a few years ago, I built a machine to do a proof of concept for a physician. It was based on a Pi 2B solely because he wanted a touchscreen. The entire thing could have been built with an Arduino, but the hardware cost to do it on a Pi was only about $40 more than on an Arduino. That difference was more than an order of magnitude less than I would have had to charge to do the touchscreen software on an Arduino.

When I was working for an engineering services company, there were many applications where we could have put a Pi or a Pi Compute Module in, even at larger volumes, but for one reason or another the company would suggest a ground-up CPU board design to the customer...


+65535. For me, the killer use case for my Raspberry Pi (even the original one, or really any single-board computer) is always an emergency chip programmer. When working with embedded systems, from time to time you need to program firmware into a variety of chips - parallel EEPROMs, SPI EEPROMs, I2C EEPROMs, AVR microcontrollers, PIC microcontrollers, STM32 microcontrollers... Traditionally, many such tasks require vendor-specific tools. It's incredibly frustrating when you want a chip programmed today but don't have the right programmer at hand. Thanks to the Raspberry Pi's GPIO, you can get bitbanging software programmers for a large number of chip protocols.

The parallel and serial ports on a PC used to serve the same purpose, but many modern systems don't have them. You can use USB converters, but latency is usually terrible due to USB overhead. The 3.3 V logic on the Raspberry Pi is also easier to use than 5 V TTL levels or 12 V RS-232 logic levels.


I'm using my old Pi B+ as a Pi-hole. It's been ticking along quietly for a year now. We only remember it's there when we use the internet outside of the house and realise ads are still a thing.


I have a Pi 4 Compute Module, for which I have:

- Installed a 2TB NVMe M.2 SSD (with a PCIe to M.2 adapter)
- Configured Samba and use it as a poor man's NAS, which is actually not so bad
- Installed Kodi and hooked it up to the TV via HDMI 0, another poor man's media center (as a side effect, the YouTube plugin for Kodi doesn't have ads)
- Installed Syncthing, which serves as my major private backup system (I know, Syncthing is not really a backup solution per se, but I just enjoy the Dropbox-like convenience so much, and it has some basic versioning capability, good enough for my needs)
- Installed Pi-hole (actually on another Pi 3 which I've had for years and am reluctant to throw away, as it's still working), so my kid won't be annoyed by ads while playing with the iPad (I'm digressing, but some apps still show a "watch videos for in-game items" button even after I purchased the "remove ads" package, which I really hate for a kid to waste time on, not to mention many App Store games/apps are heavily ad-ridden nowadays)
- Occasionally run the free Mathematica that comes with the system for some very basic math calculations/demos
- Set up nginx and configured DDNS with Let's Encrypt; it serves some very basic stuff that I'd rather keep private
- Used to run some cron jobs for some minor tasks

In short, it's a low-power tiny Linux server which I'm satisfied to tinker with, and I assume other good SBCs like Banana Pi/ODROID/etc. will serve the same purpose well too.


Sounds almost exactly like the use case that I bought my Pi 4 for, over a year ago. Sadly it's getting replaced today with a MeLE Quieter3Q. There were too many things that didn't work properly on the Pi: Wine (because ARM), YouTube crashes in Chromium, and most videos have no sound in Firefox. Boot issues if you have a USB drive plugged in. No power save. The MeLE is an SBC with a Jasper Lake CPU, 8GB memory and 128GB of built-in eMMC storage. And an M.2 slot. And a case. And a power supply. And they're actually in stock. It feels more like a 'real computer' than the Pi. It's a lot less work and probably costs the same when you factor in all the crap you have to buy for the Pi.


I think I could find a lot of use cases; unfortunately, what I cannot find is a Raspberry Pi itself... :(


Everyone thinks that before purchasing, that's why "everyone" got one or there wouldn't be so many jokes about it gathering dust while unused in a drawer somewhere :D


But you can rely on a community for ease of use, lots of tutorials, and SW support. /s

Looks like popularity also has its costs.


I use an old Eee PC 901 with a broken screen and two 4TB USB drives plugged into it for my remote backups. Built-in UPS. Though I measured it last night and was slightly surprised to find it using 19W while idle.


16W of that could be the drives if they are not dropping into a low power state, depending on which drives you have.


Yeah, I was thinking that. However a quick google says that the average laptop drive uses around 4W when accessing and 1.5W when idle. I'm also measuring at the mains point, and I'm not sure how efficient the transformer is - they're usually only about 75% efficient.

I'm loath to have the drives regularly spin down. That sounds like a good way to make them wear out quicker. Though spinning up twice a day might not be too bad.


Yeah, I wouldn't expect >10W from a pair of 2.5" drives.


Could it be that it is slow because of thermal throttling? That foam is certainly trapping heat.


I know performance is very much off the table, but wouldn't it be better to face the CPU side of the board out so you have less thermal throttling?


Yup that foam "case" made me cringe. Not just poor airflow, but actually wrapping the heat generating bits in insulation.


It conjured up images of suffocating someone with a pillow while they run a marathon in the desert. Pis get pretty warm; I haven't checked on my B+ for a while, but I don't think it was a heck of a lot cooler than the more modern devices.


The recent models do get quite warm, especially Pi 4, but the Pi 1 with an overclock has not gone over 50C in this setup, and that's at a constant 100% CPU usage.


That's a very reasonable temperature for 100% usage, I'd say. When I was running a Pi-hole (I don't recall if it was on a 3B or 3B+), it would idle at ~55C.

This gives me an idea; I might see if I can run some thermal testing on all of the Pis I have, using my FLIR to get some thermal images, and put together a little write-up.

I've done a bit of experimentation previously, when I built a little 1U server case with 3 Pi 4Bs in it and added a custom fan controller (using an ESP32) to stick in my network rack. It was running Kubernetes, but I didn't keep it long.


Someone has already done it for you: https://www.hackster.io/news/raspberry-pi-4-firmware-updates...

For the RPi 4, a firmware update is required to improve performance and decrease temperatures.


Nice; my experimentation was actually before this (and I posted it to the RPi forums), the firmware updates _did_ make a measurable difference but I decommissioned the setup shortly after testing it.

EDIT: What that post is missing is a comparison of all generations of the Pi. We know the Pi 4 is hot, but I have every previous generation sitting in a drawer; it might be interesting (for me at least) to do some comparisons.


My RPi 4 sits in a closet. 67C max temp, usually 57C. I got the heatsink enclosure. And I haven't applied the firmware update myself - I may still have the bad one.


I would like to see a nice Pi case with an external (USB?) drive mount and a built-in power supply. I want to add 5TB of storage for a media streamer without taping a drive to the PCB.


It sounds like the NESPi case might work for you?

https://retroflag.com/nespi-case-plus.html


Does a Pi that old support thermal throttling?


Got a similar setup with the same board: 3 disks, one with important stuff (photos, documents), one is a backup with rsync, the third is for downloaded media (not backed up, I don't care if it gets corrupted).

Services: Syncthing to sync photos and videos on my phone, SMB, NFS, FTP (accessible from the internet), a web file explorer (in Docker, exposed to the internet with basic auth), a torrent client (in Docker, exposed with basic auth), and Caddy (for reverse proxying with HTTPS).

It's not the fastest you can have, but it works ok for what I need.

I have a separate pi4 box with Kodi from which I can view photos and films on the telly.


> let’s build the slowest damn Syncthing backup endpoint imaginable

I used to run Syncthing on a B+ for a few years; it was quite OK.


I tried Syncthing on a Pi for a few days in 2014 or so. It would take something like 2 hours before the GUI would even be accessible for me. I bought an Intel Bay Trail micro-ATX system without any moving parts, and it's still chugging along without any problems.


My beloved B+ is living out its retirement as my backup local DNS server. It's not the fastest, but it's cheap insurance for when the primary one goes offline for some reason.


Similarly, I have a B+ doing Pi-hole duties. It also runs a few scripts looking for unexpected new devices on the network and mails me when it first sees them (which has never actually caught anything bad going on, but does remind me of its existence when I get a new phone or give a friend the wifi password).


I used to do something similar with hook scripts from dnsmasq, sending notifications to my phone. The next step was to allow me to click a button in the notification and kick that device from the network, but I never did get around to figuring out how to do that; I expect it wouldn't be done in dnsmasq, but I don't know which service _is_ responsible for kicking/blocking devices (by MAC, presumably)?
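For the detection half, dnsmasq keeps its leases in a plain text file (commonly /var/lib/misc/dnsmasq.leases), one lease per line: expiry timestamp, MAC, IP, hostname, client-id. A minimal sketch of spotting unknown MACs, with made-up sample data:

```python
def parse_leases(text):
    """Parse dnsmasq lease lines: '<expiry> <mac> <ip> <hostname> <client-id>'."""
    devices = {}
    for line in text.splitlines():
        fields = line.split()
        if len(fields) >= 4:
            _expiry, mac, ip, hostname = fields[:4]
            devices[mac.lower()] = (ip, hostname)
    return devices

def new_devices(leases_text, known_macs):
    """Return {mac: (ip, hostname)} for MACs not in the known set."""
    seen = parse_leases(leases_text)
    return {mac: info for mac, info in seen.items() if mac not in known_macs}

SAMPLE = """\
1667328000 aa:bb:cc:dd:ee:ff 192.168.1.50 laptop 01:aa:bb:cc:dd:ee:ff
1667331600 11:22:33:44:55:66 192.168.1.51 new-phone *
"""
fresh = new_devices(SAMPLE, known_macs={"aa:bb:cc:dd:ee:ff"})
```

In practice dnsmasq can also invoke a script directly on each new lease (its dhcp-script hook), which avoids polling the file.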


I also have a B+ running Pi-hole. It's perfectly usable, and even the web interface is genuinely very fast and usable, even on this single-core Pi with the limited memory it has. It's very impressive. The only thing that broke in over 5 years is the power supply (a phone adapter).


I've been running Pi-hole on a Pi Zero with an Ethernet HAT, i.e. the whole network stack goes through the GPIO pins, and it works fine. So there should be no issues running this on basically anything.


Same, but I don't have an Ethernet HAT, so I just do WiFi on the Pi Zero. It works...

Recently I found a strange issue. I could access RustDesk on a device connected at home which goes through the Pi-hole, yet the PC said it had no internet connection - triangle on the network icon (Win10) - and Firefox didn't work. Restarting didn't help either.

It was troubling because I could use RustDesk just fine; the only problem was the machine thought it could not access the internet.

I checked the Pi-hole address and it was not responding. I sent remote hands to unplug and replug the Pi Zero into the router's USB port itself, and everything worked.


Hm, great idea. I am setting up a local toy system to monitor public Predator domains, since the Greek gov has been using Predator to monitor literally everyone alive these days.


If these "predator" domains are of a serious concern, would it not be advantageous to your fellow Greek to add them to a public block list so that all can benefit? I assume they serve no legitimate purpose other than to track? If that's the case, a blanket block list for all people who run PiHole/AdGuard/uBlock et al could be useful!


Good idea, I hadn't thought about this.


What is a predator domain?


A Predator domain is a domain name used by the Predator mobile phone malware. There was a publicly available list of such domains released a while back in the Greek media. Of course, it's safe to assume that these domains are not used anymore, but given that the Greek gov did everything to avoid real investigations, it might be worth monitoring these domains anyhow. By the way, it appears that although using Predator is expensive, it has been used more extensively than initially thought, to target journalists, politicians, and businessmen.


Would you mind sharing the detection script?


Probably not much use to you; it just logs in to the admin web pages on my TP-Link router and scrapes the HTML of the connected-devices list with a little bit of Perl and regexes. Full of the usual PoC ("I wonder if I can make this work?") hard-coded web admin and email passwords and everything, and it's been just working like that for about 5 years now.

(Yeah yeah, I know, you can't parse HTML with regexes. For a super tightly controlled web page like this one, it works just fine.)
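For a page whose markup never changes, the regex approach really is only a few lines. A sketch in Python, against hypothetical markup loosely resembling a router's device table (a real scraper would fetch this HTML after logging in):

```python
import re

# Hypothetical, fixed markup as a router admin page might emit.
HTML = """
<tr><td>laptop</td><td>192.168.1.50</td><td>AA-BB-CC-DD-EE-FF</td></tr>
<tr><td>phone</td><td>192.168.1.51</td><td>11-22-33-44-55-66</td></tr>
"""

# Works only because the page layout is rigidly machine-generated.
ROW = re.compile(r"<tr><td>(.*?)</td><td>([\d.]+)</td><td>([0-9A-F-]+)</td></tr>")

devices = {mac: (name, ip) for name, ip, mac in ROW.findall(HTML)}
```

Comparing `devices` against the previous run's snapshot is all the "new device" detection needed.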


I get you - that's given me an idea to do the same, but I'd probably use a HTML scraping library like AngleSharp. Thanks for the explanation!


I tossed mine (in a case) into a tiny, unventilated drawer where it’s happily running Pi Hole and Home Assistant (~15 ZigBee switches, lights and sensors). I don’t think a Pi 4 would thermally enjoy that, plus it’s impossible to get one, so no retirement for my Pi.

edit: it’s a 2 B, the first one wouldn’t be able to run Home Assistant, I think


I would prefer to have 1GB of quality flash memory and a slow CPU rather than use SD cards.

Why can't they even add 64MB of flash memory to store a bit of user data? That can't be expensive, or maybe I don't know how electronics work?

I can disable writing on a SD card to improve its lifespan, but honestly the RPi is pointless if there is no durable way to store data on it.

I love the concept of the RPi, but this is a major flaw and it's still not fixed.


You might enjoy an operating system that operates fully in RAM. I've used piCore (a modified Tiny Core Linux) and it's nice. It lets you choose when to write to the SD card. Everything else happens quickly.

I think there are some more distros with larger ecosystems that use just RAM, but picore has met my needs and is a piece of cake to set up.


Easy OS installation and experimentation by swapping SD cards is a neat feature made possible by this design choice. You can improve durability by underprovisioning - SD card of 64 GB capacity costs a dozen euros or so. Judging by the popularity, RPi isn't generally seen as pointless but certainly it's not the best option for every application.


> I would prefer to have 1GB of quality flash memory and a slow CPU rather than use SD cards.

> Why can't they even add 64MB of flash memory to store a bit of user data?

What you're describing is a pretty niche requirement, as 1GB isn't enough to store much of anything these days (and 64MB?). They do make the Compute Module, which is available with 8, 16, or 32 GB of onboard storage.

> I can disable writing on a SD card to improve its lifespan, but honestly the RPi is pointless if there is no durable way to store data on it.

A quality portable SSD attached via USB (and not relying on a SATA dongle) works great. A good one will cost you at least as much as the Pi board itself, which sometimes rubs people the wrong way.


I log to tmpfs, use a sane filesystem that was designed for flash storage (i.e. one that tries to reduce write amplification effects), and most of my uSD cards already clock >5 years with frequently updated Arch Linux ARM running on them the whole time, plus the services they are meant for.

Looking at stats, my SBCs take between 9-30 GiB of writes a month. So that's already 1-2 TiB of total writes. That doesn't sound like much - like 30-60 total overwrites of the SD card. I'd expect the card to take 200-500 complete overwrites.

Basically infinite lifetime with this amount of write load. PSU will probably stop working before the SD cards will.
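The back-of-the-envelope math above is easy to redo for your own card. The write volumes and endurance figures below are the comment's own numbers, with an assumed 32 GiB card size:

```python
def sd_lifetime_years(writes_gib_per_month, card_gib, endurance_overwrites):
    """Years until the card's rated full-card overwrites are consumed,
    assuming a steady write rate and even wear leveling."""
    total_writes_gib = endurance_overwrites * card_gib
    months = total_writes_gib / writes_gib_per_month
    return months / 12

# Worst case from the comment: 30 GiB/month against 200 overwrites.
worst = sd_lifetime_years(30, 32, 200)
# Best case: 9 GiB/month against 500 overwrites.
best = sd_lifetime_years(9, 32, 500)
```

Even the pessimistic end lands well past a decade, which matches the "basically infinite" conclusion for this load.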


They make the compute module for people like you. For everyone else there's the regular pis.


Syncthing isn't a real backup, it's a live copy with delayed synchronisation.


I use Syncthing as a transitory step in my (mobile photos and videos) backup process.

I also set it as "receive only", so any accidental deletes don't cascade.


Receive only will still receive deletes sent from the other devices. It just won’t propagate changes from the “receive only” device to other devices.


The key phrase is "backup process".

I have a similar set up. I have a NextCloud server set up in my house, running as a docker container. A cron job shuts it down in the middle of the night and runs a full, true backup process up to AWS, and restarts it when done. The convenience of file sync techs with the safety of backup.

(The main utility of docker to me in this case is just being able to assert with absolute certainty, "here are all the directories NextCloud is storing data in". There's nowhere for unbacked-up state to be hiding.)

If I were running this at scale, I'd have some things to worry about, but at this scale, it's fine.
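That nightly job is essentially a crontab entry plus a short script. A minimal sketch, assuming docker compose and the AWS CLI; the compose directory, data path, and bucket name are placeholders:

```shell
#!/bin/sh
# /usr/local/bin/nextcloud-backup.sh - run from root's crontab, e.g.:
#   30 3 * * * /usr/local/bin/nextcloud-backup.sh
set -eu

cd /opt/nextcloud          # directory containing docker-compose.yml
docker compose stop        # quiesce the app so the files are consistent
aws s3 sync /opt/nextcloud/data s3://my-nextcloud-backup/data
docker compose start
```

Pairing this with S3 bucket versioning keeps accidentally deleted or overwritten files recoverable, which is what makes it a backup rather than just a mirror.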


Read the whole article.

The author describes how he takes immutable filesystem snapshots.


To me a live copy with delayed sync sounds like a backup. What defines a backup for you?


Consider the ransomware case - while all the files are being encrypted on your work machine, those encrypted versions are also syncing across live and overwriting your "backups". Or if it synchronizes deletes, accidentally deleting the file on your work machine also deletes the file from your "backup" once sync has completed.

File versioning / version history would help, if you have sufficient disk space for all the versions. But you can be more confident in the backup integrity if it is taken offline once completed - eg cloning a drive to an external drive, and then unplugging that external drive and putting it in storage until needed.


There is very little software that is ransom-safe. People talk of cloud object locking, but that's not worth anything if the attacker just cancels the account with the vendor or goes into the config and turns the lock off. For the versioning you mention, wouldn't it be possible to just cancel whatever storage you use for these versions? And after how long do you delete the data? Can't an attacker encrypt all the files you haven't touched in a year (so you don't notice right away), wait for all the old backups to be gone, and then hold your old pictures, tax documents you might still need, etc. for ransom?

A pi is actually a great solution because it's quiet and tiny, so you can place it at a friend's place and use physical access whenever you need to work on it. No need for the backed-up (potentially ransomed) system to have any access to it, ever, beyond the append-only encryption/authentication key for adding new backup data.


Something that doesn't delete a file that I accidentally deleted at source.


I have my syncthing configured to not do that.


What if the file is wrongly modified?


You can configure Syncthing to keep X versions of a file for N amount of days, etc.

I don't disagree that Syncthing is not ideal as a backup solution but it can be a pretty decent one depending on your use-case.
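For reference, versioning in Syncthing is configured per folder - in the GUI under the folder's settings, or as a `<versioning>` element in `config.xml`. A staggered-versioning sketch; the folder id and path are placeholders:

```xml
<folder id="photos" path="/storage/photos" type="receiveonly">
    <versioning type="staggered">
        <!-- seconds between cleanup runs -->
        <param key="cleanInterval" val="3600"/>
        <!-- keep old versions for up to one year (in seconds) -->
        <param key="maxAge" val="31536000"/>
    </versioning>
</folder>
```

With this, a file that is deleted or overwritten on the sending side still leaves a timestamped copy in the folder's `.stversions` directory on the receiver.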


That sponge between RPi and backup is going to take a lot of heat. It may catch fire.


Not sure about fire (especially on this low power of a board)... but it's definitely not ideal. Just a thin layer of rubber (like a mat for kitchen cabinets cut to size) would be better if going for this sandwich style layout.


The CPU reports a max temperature of 49C during my testing. I don't think it's that bad, but I'll take your comment into consideration, a cheap case to cover it is probably better than taking that risk.


Slightly OT because not "Pi", but same use case and similar power.

Once upon a time, Summer 2019 I got this https://www.friendlyelec.com/index.php?route=product/product...

and something between their NanoPi Neo2, but with 1 GB RAM,

and NanoPi Neo2 Black, but without the eMMC the Black has.

I've put Armbian on it, and it works since then, slow but stable.

It even "speaks" https://en.wikipedia.org/wiki/USB_Attached_SCSI so not even that slow :-)

2 TB internal, 4 TB 3.5" external with its own power supply, attached only for backups.

Runs between 400 MHz and 1 GHz on demand. Does only storage.

No firewalling, routing, adblocking. I've got other gadgets for those.

I can saturate about half of my feeble 45 Mbit/s uplink with it, which is enough for my needs. So far...


Is there any encrypted peer backup system that could be used? Aka sharing rpi's among friends and relatives for p2p backup?


Syncthing quite recently added support for this type of use case: https://docs.syncthing.net/users/untrusted.html

They do warn that this feature is in beta/testing, so I wouldn't trust it with super critical data.


I would love and pay for this as well. I actually wanted to make this as a teenager but then it turns out I never finish projects. Probably also because restic or similar didn't exist yet so I started to write everything from scratch.

Some software existed, like duplicity with pgp, but it had major downsides that I had ideas about how to do better. I was pretty proud of the chunking algorithm that I had adapted from rsync, and I got really good performance by using python for the logic and C for the code that actually operated on the files, but that's also where the interest started to wane.

Restic with rest-server in append-only mode is a nice solution, but I'm not aware of anyone packaging this in a pre-configured image you can just flash. You'd also have to balance who backs up to whom. If that could somehow dynamically balance, that would absolutely be a dream.
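The append-only setup mentioned is just two pieces: `rest-server` running on the Pi and `restic` on each client. A sketch; the hostname, repository names, and paths are placeholders:

```shell
# On the Pi: serve repositories, rejecting deletes and overwrites
# coming from clients (so ransomware on a client can't destroy history).
rest-server --path /storage/restic --listen :8000 --append-only

# On each client: initialise the repository once, then back up on a schedule.
restic -r rest:http://backup-pi:8000/laptop init
restic -r rest:http://backup-pi:8000/laptop backup ~/documents
```

Pruning old snapshots then has to happen locally on the Pi, where the append-only restriction doesn't apply - which is exactly the property you want against a compromised client.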


> It’s a nice little affordable single-board computer with a huge community using it for all sorts of projects.

I always wished they would offer a less affordable, substantially more powerful model (enough to handle YouTube and similarly heavy websites without so much pain, and perhaps with more PCIe/USB3 lanes). I wouldn't even mind if it was overpriced, i.e. an extra premium billed to subsidize the budget version or other charitable projects. Unfortunately, third-party clones are not a perfect solution because they don't have the said community and are not perfectly compatible. Or is there already a clone capable of running the original Raspberry Pi OS?


If you're looking for a more generally useful and powerful SBC, that space is taken by mini PCs (Intel NUCs and their clones). AFAICT SBCs are a strange middle ground between microcontrollers (Arduinos, ESP32s, etc.) and the computers that come out of the myriad of mobile chipset options; more generalizable than microcontrollers, not as extensible as a mini PC.

Odroid seems to have the most powerful hardware in the SBC space currently, support varies by project.


> If you're looking for a more generally useful and powerful SBC

I just want the "modern web" to work okay (YouTube and similar websites feel slow - I mean not the actual video playback but the page itself) while keeping it a Raspberry Pi in all other aspects (except the price - thank G-d I can afford it costing more).

For example Raspberry Pi comes with community-standard GPIO also usable for extension "hats", uniform format (so a whole chassis market emerged for it), free Mathematica, well-supported Kodi and Lakka packages, numerous alternative distributions treating it as a first-class target.

By the way, the latter seemed especially intriguing to me. I imagined (before the supply chain apparently broke) Raspberry Pi becoming a standard hardware platform for all sorts of alternative OSes, potentially letting projects like Haiku, ReactOS, Serenity etc. out of their virtual boxes.


SBCs are becoming more popular because one of the prevailing patterns in designing complex embedded systems is to do the real-time control on a microcontroller running on bare metal or maybe an RTOS, and the GUI/database/web connectivity on an SBC running Linux. It's been happening for at least the last 20 years and becoming more popular because the hardware keeps getting cheaper and cheaper.

If your system cost can absorb it, this gives you a huge amount of flexibility and capability.


You might be interested in Project TinyMiniMicro series that ServeTheHome has covered: https://www.servethehome.com/introducing-project-tinyminimic...

x86-based mini-PCs that are abundant on the used market, pack quite a punch, and generally use around 10-12W idling.

And with the current Raspberry Pi 4 pricing, they are actually a better deal.


Odroid H3+ might be for you. No big community as you say, but since it's x64, that probably matters less.

https://www.hardkernel.com/shop/odroid-h3-plus/


Isn't the 4 exactly that?


Not really, many websites still feel painfully slow. If I were a dictator I would force all front-end developers and UI/UX designers to use actual Raspberry Pi units as their main workstations so they would intuitively avoid making code/designs so heavy :-)


Yes please, Slack was unusable on my otherwise good 7-year-old laptop, because presumably all the Slack developers use new Macs.


I have an SDR (software defined radio) running on one that is dedicated to ADS-B for the FlightRadar24 network. You can order hardware to do it, or if you have stuff lying around you can DIY it. They have pre-assembled images, too. It's pretty awesome and ends up giving you a free business plan.

- diy: https://www.flightradar24.com/build-your-own

- preassembled receiver: https://www.flightradar24.com/apply-for-receiver/


I just set up one to run https://github.com/dmunozv04/iSponsorBlockTV, which is my new favorite thing ever! All the youtube videos playing out of my Apple TV automatically skip the sponsored segments now!

It also runs HomeBridge which also runs rock solid. I can now control my mini-split and pool equipment straight from Apple Home, which is way faster and easier than the crappy apps that they came with.


I did something similar to this 9 years ago using BTSync, it's a great use case!

https://reustle.org/btsync-pi


Is 'btrfs' used only for 'btrbk'? Does it prevent data loss due to SSD failure, or just user errors?

What are the recommended data storage file systems nowadays?


I use btrfs because of the snapshotting support. btrbk is the tool that makes taking and pruning snapshots easier.

In this setup with a single SSD, it has no role in preventing data loss if the SSD itself fails, but it should prevent data loss in case of user error.

If possible, use ZFS, and in case that isn't an option, use BTRFS if you need the data integrity guarantees and at least some protection against faulty hardware in mirror/RAID-like scenarios. That's my two cents.
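For context, btrbk is driven by a small config file. A minimal sketch of one for this kind of single-SSD layout; the subvolume names and retention values are made up for illustration:

```shell
# /etc/btrbk/btrbk.conf - snapshot /storage/data into /storage/snapshots,
# run periodically via `btrbk run` from cron or a systemd timer.
volume /storage
  snapshot_dir snapshots
  snapshot_preserve_min 2d
  snapshot_preserve     14d 8w 6m
  subvolume data
```

Because btrfs snapshots are read-only copies of the subvolume at a point in time, an accidental `rm` in `/storage/data` can be undone by copying the file back out of the latest snapshot.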


I don't think you can run ZFS on any Pi older than a 4 (even then only with non-base-model RAM). I also like and use ZFS, but I run BTRFS on all my Pis and it's been solid for years on end, whether SD card or SSD.

Benefits in this context would include checksumming, compression, snapshot + incremental send, built-in kernel support (which is why I use it as the root filesystem even on machines that use ZFS for data integrity), very flexible RAID1 setups, and the option for DUP data on an SD card (still unclear to me if this would improve the odds of recovery in case of error or just increase the odds of corruption in the first place).

ZFS may be king in many / most scenarios, but I've been very happy with my Pi on BTRFS root that I've used with Ubuntu server, Raspbian, and now NixOS[0], on Pi 2/3/4.

[0]: https://github.com/n8henrie/nixos-btrfs-pi


I use mine for network latency tests https://smokeping.dabase.com/


  The fact that Pi is so underpowered that it cannot even make full use of the SSD is probably a contributing factor to the overall stability.


I have been running my model B as a VJ tool. It runs glsl shaders in SD resolution quite nicely. Has composite output straight in the board.


I got an OLED and programmed a digital clock which is dark at night with slowly changing colors (reddish in the evening, purple before midnight, blue after midnight, greenish in the wee hours, and yellow in the fifteen minutes before getting up) and bright orange during the day.

It's absolutely mine and that's what I love.
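The colour schedule described boils down to one pure function from time of day to colour. A sketch - the exact hour boundaries and the 07:00 wake-up time are assumptions, not from the comment:

```python
def clock_color(hour, minute=0, wake_hour=7):
    """Map time of day to a display colour (hypothetical boundaries)."""
    # Fifteen minutes before getting up: yellow.
    if hour == wake_hour - 1 and minute >= 45:
        return "yellow"
    if wake_hour <= hour < 20:   # daytime: bright orange
        return "orange"
    if 20 <= hour < 22:          # evening: reddish
        return "red"
    if 22 <= hour:               # before midnight: purple
        return "purple"
    if hour < 3:                 # after midnight: blue
        return "blue"
    return "green"               # wee hours: greenish

print(clock_color(23), clock_color(6, 50))  # purple yellow
```

The main loop then just redraws the OLED once a minute with the colour for the current time.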


Did you build in any detection/reset for wifi hardware issues? I'm jaded and assume this will be a thing with everything prior to the rpi4. (Mostly my experience here is specifically the rpi3b)

Of course, it could be that the USB wifi is more reliable.


Haven't built any detection in so far, but if I do notice instability, then taking a look at kernel logs should reveal those pretty well with USB device reset messages popping up.

I have never used the WiFi on newer Pis that shipped with it, so I cannot say how reliable that is. The USB WiFi adapter seems to work quite well, assuming you have the firmware packages present (included out of the box on Raspberry Pi OS).
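A quick way to check for those resets after the fact, assuming a systemd-based distro with journalctl:

```shell
# Show kernel messages about USB device resets since the last boot.
journalctl -k -b | grep -i "usb.*reset"
```

If that turns up regular resets, a watchdog script that pings the gateway and reloads the WiFi driver on failure is the usual next step.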


I am one of the three people in the comments who own the original RPi. For me, it is the only RPi I own.

Like most people, I ended up using it as a "server" because I stopped using its GPIO ports. Currently using it for sync and as an exit node.

Most people should just get an old PC, or repurpose an existing old PC or phone, for most of their needs rather than contribute to the RPi supply issues. Linux works everywhere*, remember?


I wouldn't try to make a server out of a Raspberry Pi; they are not reliable at all: unreliable power, overheating.


I've found the Pis to be surprisingly resilient, heat-wise. I built a handful of inline monitoring servers running suricata, pulling yml configs from puppet, and forming an ad-hoc network with one another, with devices in a reasonably harsh manufacturing environment for six months' testing (near welding robots, and in network cabinets that average 180F during "non-load" hours) and didn't see a single system failure, crash, or reboot during that time. Literally the only thing I had problems with was maintaining consistency for the ad-hoc network, and that was largely owed to the amount of interference on the manufacturing floor, combined with greater-than-suggested distances between the devices.


Is that foam and those straps ESD-safe?


Works great as a cups AirPrint server or as a Homebridge server! Or both.


a pi can make a good standalone NTP server, and the older ones use less energy and can probably use a battery backup to continue during power glitches.
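A standalone NTP server on a Pi can be a few lines of chrony configuration; the subnet below is a placeholder:

```shell
# /etc/chrony/chrony.conf - serve time to the local network
pool pool.ntp.org iburst
allow 192.168.1.0/24
# Optional: keep serving a recently learned time even if upstream
# is unreachable (e.g. during an internet outage).
local stratum 10
```

For better holdover across power glitches, a GPS or RTC hat is the usual upgrade, since the Pi has no battery-backed clock of its own.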


Still using mine as a SSH jump host


> The 1TB SSD is formatted as a btrfs file system and mounted to /storage.

That makes me cringe. If a server's purpose involves serving files, those files should go in /srv.

https://tldp.org/LDP/Linux-Filesystem-Hierarchy/html/srv.htm...


Would /srv be for files that are served up mostly unmodified? Or would you also put the database for a web application under, for example /srv/db? In a sense, the database contains objects that get served up but they get transformed by the web app, so it seems they should go into something under /var (like /var/lib/data/[db_vendor_name]).


> This main purpose of specifying this is so that users may find the location of the data files for particular service,...

If I read the docs right, if a database needs to keep data permanently, /srv seems to be an ideal place. It's not restricted to static files being served on a network, just where data (regardless of format or transformations) should be stored.

More abstractly, if any daemon, service, or file share needs to store any data that is read/written to/from a network, that data should be in /srv. Probably.


[dead]


Meanwhile, your $HOME's dotfiles want a word about cleanliness...

And your /usr/bin wants to argue with /usr/local/bin and both of those cringe when they see /bin.

No, mentioning and sticking to standards is a good thing and I think is very on-topic.


The good thing about standards is that there are so many of them to choose from.

Besides, if a man wants to have /storage on his owned machine, who are we to tell him otherwise?


> if a man wants to have /storage on his owned machine, who are we to tell him otherwise?

Nothing wrong with a man putting /storage on his owned machine.

But if I were using my own machine and I put /storage there (actually, I used /opt/<site>), it would be because I didn't know there was a standard place for things. Then someone told me about /srv and its purpose. So now I use /srv even on my own machine so that I'm familiar with /srv in a professional environment.


I'm all for filesystem standards but "serving" is such an ephemeral concept that I don't think it makes sense to mandate that "files" that are "served" must be mounted under /srv. That's just being excessively pedantic.


Yup, and while I'm not likely to ever again build a Linux fileserver, TIL.




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: