Hacker News
How I use my Raspberry Pis to help me work on my side projects (techcoil.com)
150 points by bjoko 32 days ago | 95 comments

It's not clear exactly how this helps him work compared to just using hosted versions of the services.

It looks like there could be a lot of maintenance involved in keeping multiple individual servers up and running, and that is probably a side project in and of itself.

I was well on my way to having a forest of RPis. The boxes are cheap, and "they only use 5W"... until you have 10 of them.

I replaced each and every one of them with an Intel NUC (6th-gen Celeron) running FreeBSD, and created a jail for each of the services the individual RPis had been running.

It uses 5-9W at idle, which is how it spends about 80% of the time. It has real gigabit Ethernet, as well as a SATA SSD, and that alone is enough to provide much higher performance (for me).

The only RPis i have left running are the individual nodes in my video/climate surveillance.

Chick-fil-A uses a small cabinet of NUCs and deploys an arbitrary number of services via Kubernetes to all their restaurants. It's a pretty neat setup.


I'm sure I've seen this as a talk too. It was pretty humble, IIRC; they seemed surprised at how practical it turned out to be.

Proxmox runs beautifully on a gen-8 Intel NUC. It costs a lot more than 5-6 Raspberry Pis, but it's much more powerful.

Could you maybe post some more specs / setup details?

Yeah, my story of using one to write my startup's MVP after my ThinkPad finally gave up the ghost is maybe more compelling, because I actually used it for everything until we got our first customer and I scored a Mac for a more standard professional desktop environment. In a pinch it has enough functionality to build and launch a web application. Obviously not ideal, but it actually worked in an emergency for me.

Now that's something I'd love to read about! My only comparable experience was during my most recent co-op where I used an Asus C300 Chromebook running Gallium OS 2.1, and it served me quite well for the work I was doing.

Although using modern websites when you only have 2 GB of RAM and a mobile Celeron chip is absolutely painful...


Hosting websites at your house? Slower load times for your users.

Hosting git at your house? If your house burns, you might lose all copies of the code.

Hosting a manual WordPress installation? That's making work for yourself IMO.

    Hosting websites at your house? Slower load times for your users.
    Hosting git at your house? If your house burns, you might lose all copies of the code.
I'm guilty of both, although my home connection is 300/300 Mbit fiber with a static IP. I make versioned backups locally and remotely every day, just as I would from any VPS host.

The only reason I'm considering moving all the internet-facing stuff to a VPS is to let me "relax" a bit, by minimizing the number of public services/ports on my home network that could have exploitable holes. I've tried to minimize the damage an attacker can do by using FreeBSD jails.

I already segregate the network traffic into LAN, DMZ, SERVERS, KIDS, IOT and GUESTS VLANs, with the only route between VLANs going through the firewall, and traffic from certain VLANs onto others disallowed entirely. I monitor logs religiously.

I'm guessing a VPS would probably let me sleep a bit easier at night, though I'd still have the same amount of maintenance work.

> If your house burns, you might lose all copies of the code.

This is what off-site backups are for. You have them, for your hosted services as well as your home ones, right?! OK so with a hosted service you have the advantage of them managing uptime and recovery matters to an extent, but I still prefer to have my own backups on another host, so I have the data in my hands if the whole service goes offline for a time.

I'd agree with your other points, but I don't think the "what about a site-down situation?" matter differs between self-hosted and remote-hosted options (unless, perhaps, you are paying a pile for nice remuneration-backed SLA promises).

Backups would be great and would solve the issue I mentioned. I assumed that of all the people tinkering on the Pi, only a minority would set up automated backups.

The downside of decentralisation is that it's often just centralisation in your hands.

Could also be an upside :)

It does seem to be "for the sake of it". Why would you want a web based IDE when you could have a desktop app for way less effort?

Hehe, yeah. I mean, it's cool, but this was basically "the Pi is a server and you can run software on servers". I was hoping for something that used some extra electrical components and circuitry to accomplish something more than just running software.

I'd actually argue it's easy to get set up on the cloud-hosted versions of this stuff! Sure, it helps to roll your own everything (git, Jira, IDE...), and I appreciate the work and the write-up. Plus those RPis do look pretty fuckin sweet; it makes my nerd senses tingle.

Having exposed GPIO pins is nice. I'm running a log display on a RPi 1, but there's also a 7-segment status display for the team, showing build+prod data. Not sure how I'd do that on a PC...serial port, probably.

On a PC: an Arduino flashed with Firmata. It lets you write the behavior in whatever language you want. It also keeps the serial connection open at all times, so you can send commands as you go (e.g. from a Python REPL).
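For the curious, a minimal sketch of this pattern: the pyFirmata library speaks Firmata over the serial port, and a plain Python function decides which segments of a 7-segment display to light. The serial port and pin numbers here are placeholders for illustration, not from the comment above.

```python
# Drive a 7-segment status display from a PC via an Arduino running Firmata.

SEGMENT_MAP = {  # segments a-g packed into bits 0-6
    0: 0b0111111, 1: 0b0000110, 2: 0b1011011, 3: 0b1001111,
    4: 0b1100110, 5: 0b1101101, 6: 0b1111101, 7: 0b0000111,
    8: 0b1111111, 9: 0b1101111,
}

def segment_bits(digit):
    """Return seven booleans (segments a-g) for a decimal digit."""
    bits = SEGMENT_MAP[digit]
    return [bool(bits >> i & 1) for i in range(7)]

def show_digit(digit, port="/dev/ttyUSB0"):
    """Light up one digit on the display (requires the actual hardware)."""
    from pyfirmata import Arduino  # pip install pyfirmata
    board = Arduino(port)                       # placeholder serial port
    for offset, on in enumerate(segment_bits(digit)):
        board.digital[2 + offset].write(on)     # segments a-g on pins 2-8
```

Because Firmata keeps the serial link open, you can call `show_digit` repeatedly from a REPL as build or prod status changes.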

USB-to-serial still exists, but support for it is kind of crimped now due to security concerns (in Windows, at least), and you can also damage the PC if the external circuit isn't thought out correctly.

Not really sure if the USB adapter manages to isolate enough of the risk though, I have only used Firmata...

USB-to-serial has, first and foremost, horrible jitter issues.

I was confused about it, too. It made more sense when I thought of his blog as a series of tutorials for people who want to learn how to use an RPi.

My thoughts exactly. I was hoping to find something more useful than the same benefits I'd get from a free Azure subscription.

Something about the RPi seems to motivate new hobbyists in ways that hosted VMs just don't.

It's quite difficult to put a sticker on a VM.

Closer to the truth than one might think. The Pi can be tinkered with in the same way one could tinker with an Amiga or an Atari back in the day. You can, of course, tinker around in a Docker container or a VM as well, but they're not physical products and therefore not "real" in the same sense.

The current generation of skilled hobbyists grew up fancying "physical" objects. An RPi is a physical object manipulating your data; a hosted VM (even one hosted on your own personal machine) doesn't have the same "tactility".

I've got a Raspberry Pi + Alfa card + 50K mAh power bank as my router. It creates 3 APs: one through Tor, one through my work VPN, and one on my home connection.

All of them use Cloudflare DNS + Pi-hole to block ads.

I use another PI as a media server and for remote torrent downloading.

Is a power bank an efficient and safe UPS for an RPi? I was at some point told power banks are not a good solution for Pis, but I cannot remember the reasoning.

I have been using a Mi Power Bank 2i for my RPis for the last 9 months or so and have had zero issues with it. The advantages of using the 2i are:

1. It's dirt cheap compared to any other solution I found ($15; I use the 10K mAh version)

2. I can plug the RPi into it while a charger is plugged into the 2i. When mains power cuts off, it instantly switches over to battery with zero downtime. (I think this is called passthrough charging.)

3. It can provide 2.5 or 3.5 A (not sure about the exact value) of output

4. It has two ports

> 2. I can plug the RPi into it while a charger is plugged into the 2i. When mains power cuts off, it instantly switches over to battery with zero downtime. (I think this is called passthrough charging.)

This alone is a killer feature that’s far less common than I expected before I started searching for it. Most power banks cut off output while they’re charging, and many require physically re-plugging the device to start powering it again.

I ended up acquiring a PiJuice for this functionality alone.


How many hours does it last?

I have never measured it. At worst the mains power has gone down for 2-3 hours, and it lasted at least that long. I don't have a USB power meter to measure the power drawn by the RPi, so I can't give you an exact number, but I would guess at least 5-10 hours, if not more.

Edit: Someone linked the PiJuice project. I went to their website, and according to them, their 10K mAh version lasts 24+ hours:

> Onboard 1820 mAh off the shelf Lipo / LiIon battery for ~4 to 6 hours in constant use! (with support for larger Lipo Battery of 5000 or 10,000 mAH+ to last up to 24 hrs +)


So I would say the 2i would probably last about the same?

It's not - it's a hack. Power banks aren't designed for random surges in power draw, which is what a Raspberry Pi requires.

They're usually used for charging other batteries.

That depends on the powerbank. Many powerbanks with Quick Charge 3.0 support will supply over 4A at 5V without power delivery negotiation. It's a hack, but so is the Raspberry Pi - I wouldn't trust a powerbank for anything critical, but I wouldn't trust a Raspberry Pi either.

Can you clarify why? Surely they are just normal batteries with a charging circuit? I'd have thought it was more a problem with "passthrough charging", where the battery needs to discharge (into the RPi) and charge at the same time.

Power banks are designed for a continuous load or a trickle charge.

An RPi can draw a continuously changing power input.

> Power banks aren't designed for

What particularly makes them unsuitable for this?

Many things 'aren't designed' for something but work just fine in that capacity..

I have a 3B+ set up on my desk with a BitScope USB oscilloscope and the Arduino IDE running on it for my hardware projects, a 2B in a closet running Gitea and Docker as a "build box" for my containers (it's speedy enough when you move the Docker storage to an external USB HDD), and another 3B to run my 3D printer.

I'm not very keen on running ordinary services on them (my home automation setup runs on an ODROID, with Node-RED and homebridge Docker containers, which benefit from eMMC storage and twice the RAM of a Pi), but I've found the Pi to be a great platform for small self-contained tasks that really benefit from having a dedicated machine with a stable configuration.

Also, I've found them to be quite reliable from the 2B onwards (I even ran a Swarm cluster on a set of mine for a couple of years). I have all the setup info for most of these on my GitHub profile at https://GitHub.com/rcarmo

Random question: how is the BitScope USB o-scope?

I'm actually looking for a new o-scope (since I am losing access to a lab), and I'm looking for something small and compact.

It's not ideal for high frequencies, but as a simple logic analyser it's ok. I have used it only for tweaking small oscillators for now.

> Undeniably, Raspberry Pi has revolutionised the way we use computing technology in our lives

??? Shouldn't that be "Smartphones" instead? Even if the Raspberry Pi is popular among a certain segment of people, it is nowhere close to being as mainstream as smartphones which have revolutionized the way we use technology.

Also, most of my developer friends who have one booted theirs once or twice before it disappeared into the closet. We used one as a home server but replaced it with a NUC, since I/O on the Pi is too constrained.

The Micro:Bit at least actually got a foothold in UK education.

The Pi is an awkward inbetweener - it's too power-hungry to substitute for a microcontroller, but too basic to replace an x86 computer in most applications.

To be fair, the Micro:Bit exists because of the lessons learned from the failure of the Raspberry Pi in education. In a strange sense, it's the 2.0 version of what the Pi was supposed to be; the Pi lives on because of the hobbyist community, but it failed fairly comprehensively at its intended purpose.

I think the Raspberry Pi had a good use though: it served as a stepping stone for people who wanted to try out home servers but did not want to spend a lot. After getting one's hands dirty with the Pi, it's fairly easy to move on to better hardware and feel confident doing so.

Yeah, IO constraints have made me dial back my use of Pi-alikes and have put a bunch of NUCs (or Gigabyte BRICs) in their place. I do still use my ODroid XU4 as a gimmicky little cluster for testing stuff, but nothing serious.

I do have one project idea for using my stack of RPis, though (a badging/console login system for fighting game tournaments) but that is waaaay on the back burner.

??? Shouldn't that be "electricity" instead? Even if smartphones are popular among a certain segment of people, it is nowhere close to being as mainstream as electricity which has revolutionized the way we use technology!

We are close to the point where more people have phones than access to electricity. See https://www.cnet.com/news/by-2020-more-people-will-own-a-pho...

Why can't both statements be true?

can't quickly deploy a service on my smartphone

The main point of using a Raspberry Pi is for access to the GPIO pins and the built-in camera interface. As others have said, there are better alternatives if all you need is a server.

Another good use case is learning clustering on a budget. Sure, you can accomplish something similar with VMs, but using 3-4 RPis can be more intuitive for people used to physical computers. It also introduces additional constraints (ARM, low memory) that a hobbyist may find challenging.

Can you name the alternatives?

Laptops in general are okay alternatives if you want a "UPS" (the battery) and so forth. The only problem is that laptops have hit-or-miss Linux support.

If Linux is a must, I'd argue that building a Desktop is easier, since you can verify which hardware works for Linux. (Pick motherboards that others have already tested).

Something like a Ryzen 5 2600 build (6 cores / 12 threads) would be far better than a Rasp. Pi for any home-server use. You get 4 DIMM slots for easy 32GB support and tons of virtual servers. You have hard drives, SSDs, AVX2 (for high-speed H.264 encoding), PCIe (GPU compute) and more. The standard desktop is a beast these days.


For "clusters", a typical GPU like a Vega 64 or RTX 2070 will get you far more performance at a far lower price. You can even toss two GPUs into a typical desktop, maybe four if you go HEDT (Threadripper or i9 Extreme). That's all a modern supercomputer node is these days: just a normal CPU with 4x GPUs on it.

High-performance SIMD programming is a bit difficult at first, but programming CUDA / OpenCL / AMD HCC is actually straightforward. It's the "high performance" part that gets difficult (but that's true of any programming).

Your "cluster of GPUs" is the architecture of modern supercomputers. You can build one for under $1500 (2x RTX 2060 or RX 590 GPUs plus a desktop with a bit beefier power supply than usual). That's heterogeneous compute with 3 nodes (2x GPU + 1x CPU) right there, with plenty of room for experimentation and learning about communication.


IMO, the Rasp. Pi's main benefit is the GPIO and I2C pins, as described by the other poster. The Rasp. Pi works extremely well as a host for an Arduino (since the Arduino has better latency characteristics on its GPIO). Effectively, the Rasp. Pi is easier to program than an Arduino, but the Arduino is easier to interface with from an electronics point of view (lower latency, no OS in the way, etc.).

But from a "serious compute" point of view, like servers or HPC, you probably should just get a normal desktop + graphics card. That's pretty much what a supercomputer is these days anyway.

Rent a VPS (DigitalOcean, Linode...) or buy a desktop computer. My Mac broke recently, and while it was in the shop I bought a cheap new desktop for less than 200€. It's running Linux and would happily run all this software. It comes out cheaper than buying lots of Raspberry Pis, cases and SD cards, and it's easier to maintain.

But if one's doing it for the fun of it, as the original author appears to have done, then I have no moral complaints to present. :)

Why is an old crappy desktop computer universally better than a Raspberry Pi? I use two Raspberry Pis to run internal DNS and a couple of other small things. They are silent, take up no space, use almost no power and are almost certainly cheaper.

I have been experimenting with RPi for the past few months, and when I first got back into it I lost a couple hours diagnosing strange issues that ultimately all led to faulty SD cards. These are decent Samsung cards that work fine in an action camera, but the Pi was frequently dropping SSH sessions and even disconnecting from the network intermittently.

Eventually I tried other cards with better results. Since then I started using ATP aMLC “industrial” Flash, which has been very reliable.

Were you using a WiFi adapter?

Intermittent drops often come from undervoltage. You may need a 3.5v power adapter.

Also, slow ssh can come from dns troubles: http://jrs-s.net/2017/07/01/slow-ssh-logins/
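If DNS does turn out to be the culprit, the usual server-side mitigations are a couple of `sshd_config` options. This is a general sketch, not taken from the linked post:

```
# /etc/ssh/sshd_config
UseDNS no                  # skip the reverse-DNS lookup of connecting clients
GSSAPIAuthentication no    # skip Kerberos/GSSAPI negotiation attempts
```

Restart sshd afterwards for the change to take effect.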

I was experiencing these issues with a known-good PSU. The same one has been working fine since I switched SD cards.

Undervoltage is a common issue but easy to identify, since it shows up in the logs and displays a lightning-bolt icon in the top-right corner of the screen if a display is connected.
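As an aside, the undervoltage state can also be read programmatically: `vcgencmd get_throttled` reports a bitmask whose documented bits cover current and past undervoltage and throttling. A small decoder sketch (running `vcgencmd` itself obviously only works on a Pi):

```python
# Decode the bitmask reported by `vcgencmd get_throttled` on a Raspberry Pi.
import subprocess

FLAGS = {  # bit meanings per the Raspberry Pi firmware documentation
    0: "under-voltage now",
    1: "ARM frequency capped now",
    2: "throttled now",
    16: "under-voltage has occurred",
    17: "ARM frequency capping has occurred",
    18: "throttling has occurred",
}

def decode_throttled(raw):
    """Turn a string like 'throttled=0x50005' into human-readable flags."""
    value = int(raw.strip().split("=")[1], 16)
    return [msg for bit, msg in FLAGS.items() if value >> bit & 1]

def current_flags():
    """Query the firmware on an actual Pi (requires vcgencmd)."""
    out = subprocess.run(["vcgencmd", "get_throttled"],
                         capture_output=True, text=True).stdout
    return decode_throttled(out)
```

A result like `0x50005` means the Pi is undervolted right now and has been throttled in the past, which is a strong hint to replace the PSU or cable.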

> You may need a 3.5v power adapter

Do you mean a 3.5A power adapter? The Pi runs at 5v.

I have one running on a 2.0A adapter and it's been running for a few years now. Though it uses ethernet so there's less power consumption compared to one using WiFi.

>> Do you mean a 3.5 power adapter?

Yeah- sorry. Trying to do this from memory. :)

So basically, it's a tiny Linux server, and can run various server stacks/webapps, and is relatively accessible to beginners.

Except it's not x86, therefore fewer packages are available.

It felt like a clickbait title after reading the content... Hosting phpMyAdmin? I was hoping to see some creative ideas...

I made an Elixir app[1] that worked on RPi and read various sensors, adjusted lighting and was driving some pumps with PWM. I was trying to grow some exotic plants and wanted to automate as much as possible. It was a fun project, the distributed nature of the language/OTP was really neat and when I bought the second Pi for monitoring the inside of a case it was very easy to hook it up with the former Pi and make it into one system.

Unfortunately, all the plants died after a while - mostly because I didn't finish all the features I wanted to write fast enough... - and I abandoned it. I've been considering doing a write up on it since then, but I'm not sure if that would be interesting to anyone other than me.

I wonder, would that count as creative? Maybe I should do the writeup after all (seeing as simply running some PHP on a few Pis got to the frontpage...)

[1] https://github.com/piotrklibert/planties

I'd be very interested; I'm learning Elixir at a new job, and I've got a Raspberry Pi that's been sitting around since two Christmases ago.

With this many applications, I would look at getting one higher-powered server and running Docker on it. It would be more power-efficient, since you only need one OS and one PSU, and it would likely be much cheaper as well.

Why docker? Seems like overkill for not much gain.

Docker has a multitude of benefits for single machine multitenant deployment strategy, namely resource and file system isolation.

It also has the benefit of being self documenting of dependencies via layers, which is a nice thing to have. To be honest, running just about anything self-hosted can be deemed overkill when there are so many free hosted solutions. But that’s why we’re engineers because sometimes it just scratches an itch for us :)

Docker is not overkill at all; having 20 servers to run a few very-low-utilization services is. When you run a whole bunch of stuff on the same server, it becomes a lot simpler than installing everything the traditional way. A lot of self-hosted server software is not in the repos, so you end up adding extra repos, and after you do that a few times, I find that something always ends up breaking and ruining the whole OS. It's much simpler to keep the host OS basically stock Debian with Docker installed; then if anything goes bad, it's contained.
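For what it's worth, the pattern described above usually ends up as one compose file per service on the stock host. A hypothetical sketch using the Gitea image (ports and paths are illustrative):

```yaml
# docker-compose.yml -- self-hosted git, isolated from the host OS
version: "3"
services:
  gitea:
    image: gitea/gitea:latest
    ports:
      - "3000:3000"   # web UI
      - "2222:22"     # git-over-ssh, kept off the host's port 22
    volumes:
      - ./gitea-data:/data   # all state lives here; easy to back up
    restart: unless-stopped
```

If the container breaks, `docker compose down && docker compose up -d` rebuilds it without touching the host, and the data directory is the only thing you need to back up.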

But then you have to learn Docker, and how Docker does networking; that's the overhead they're talking about.

I don't disagree, but I think that sharpens the learning curve.

> Setup a Raspberry Pi git server

For the love of $DEITY, please at least use an external USB disk, in RAID 1 if possible. (Perhaps some multi-RPi git mirroring scheme could work as well? Anyone ever done something like that?)

Do not EVER trust your valuable data on SD-cards on a Raspberry Pi!

If the data is valuable then one should have an off-site backup anyway.

I use an old laptop in pretty much the same way the author uses his Pis. While its fairly new SSD is much more reliable than the Pi's SD card, I would never store anything important on it unless I'm certain it's backed up somewhere else as well.

The 2B and newer boards are much more reliable, especially when running Ubuntu ARM.

I think it often depends on the power supply quality. I think many of the cards were corrupted due to bad power.

Regardless, TLC SD cards (3 bits per cell; or QLC, 4 bits per cell, ugh) are not made for repeated writes, like git or database workloads. They will get corrupted even in the best conditions.

You could use industrial SD-cards, but I think it's easier just to plug in a USB drive.

At first I used a Raspberry Pi as an infrared-sending device so I could control my air conditioner and lights remotely. Now I've built that functionality into its own PCB, but as I'm developing that product I need an MQTT broker to connect to. So the Pi now functions as a local MQTT broker until I have a hosted one I'm satisfied with.

Next thing I want to do with it is run a little web app that displays local train and bus times, garbage collection days, weather forecasts, and that sort of thing. They're great for prototyping hardware ideas and running various one-off web services locally!

I prefer to think of and use my RPis as home network appliances, things that can run hassle-free.

Examples from my house:

Pi-hole - ad blocking for all Wi-Fi devices

Shairport - AirPlay audio target

Plex - media server

My family members and I simply use these services without even knowing/remembering they're hosted on RPis. If it requires tinkering or isn't permanent, it goes on a VPS.

Many of the examples listed in the post are things that I would simply host in Docker, on a VPS or in a VM.

I have 3 RPis at home.

I have a Pi 2 that acts as a dashboard for real-time data on my train to work, my wife's bus route, and the weather (temperature and rain probability). I am using a Sense HAT to present this information through its pixels.

I have a Pi3 running home automation (hass.io) which I am new to and find amazing in most part.

And the final one, in my daughter's room, is a Pi Zero with an Enviro pHAT monitoring temperature.

All side projects, and I am not a developer.

What dashboard software are you using?

I designed my own. You can read about it here.


It has become an integral part of our daily routine.

I've been running a completely silent mini-ITX J1900 build for a long time now. It's x86 with a 4-core CPU and Intel VT-x support, so you can run as many KVM instances as your RAM allows. I don't really see how a dozen Raspberry Pis could be better, but to each his own. I always recommend silent x86 SoC builds for home dev use instead of an RPi.

Your requirements aren't necessarily the same as everyone else's. For people who want something silent and inexpensive to run a few containers, the RPi can be hard to beat: extremely cheap, extremely low power, and very little physical space.

I always used to use various dusty old netbooks for this kind of stuff, and would run many applications on the same one.

A dedicated Raspberry Pi in a nice little case for each is certainly modern and cool, but it doesn't give me quite the same nostalgia as an ancient laptop shoved in a corner...

The article has 9 subchapters. It could have 2 or 3 by just saying "install GitLab"; but maybe GitLab can't run on a Raspberry Pi? If you're not restricted to a Pi, try GitLab and you're done with all the steps described in this article.

It should be noted that most single-board computers out there can be used to do the same things; a lot of them are more convenient than the Raspberries.

This is content spam, correct?

He seems to be using a content spam technique. He's effectively pointing to a verbose table of contents for his blog, with a slew of shameless plugs of tangential affiliate links.

so one of the things not really covered here is how to deal with SD card failure.

Any application that requires any level of writing to disk will almost certainly die in short order.

I have a boatload of Pis doing a bunch of things, and one of the annoying things is having cards die on me. Everything is in Ansible, so it's not that much effort to rebuild.

However, having any important data live on a Pi's SD card is a no-no.

> so one of the things not really covered here is how to deal with SD card failure.

I recommend a program like win32diskimager (https://sourceforge.net/projects/win32diskimager/) so you can make images of your SD cards in their working state and, if they get corrupted, quickly get them back to working order.

> However, having any important data live on a Pi's SD card is a no-no

That is why I wrote a script using the Fabric Python module to log in to my rpi3 and download important files. Ideally the script would run automatically every day but I haven't quite gotten there yet.
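A rough sketch of that kind of script using the Fabric v2 API; the hostname and the file list below are placeholders, not the commenter's actual setup:

```python
# Pull important files off a Pi over SSH using Fabric (pip install fabric).
import os

def local_path(remote_path, backup_root="backups"):
    """Map a remote absolute path to a destination under the backup root."""
    return os.path.join(backup_root, remote_path.lstrip("/"))

def fetch_backups(host="pi@raspberrypi.local",
                  paths=("/home/pi/projects.db", "/etc/nginx/nginx.conf")):
    """Download each remote file into a mirrored local directory tree."""
    from fabric import Connection
    with Connection(host) as c:
        for path in paths:
            dest = local_path(path)
            os.makedirs(os.path.dirname(dest), exist_ok=True)
            c.get(path, dest)  # SFTP the file down
```

Dropping a call to `fetch_backups()` into a daily cron job (or a systemd timer) would cover the "run automatically every day" part.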

I have always been quite paranoid about port forwarding anything into my home network.

Likewise, I preferred to set up a VPN (well, that still needs DNAT, but...) with the benefit of my devices always going through my trusted home network when I'm out and about. It's super simple with WireGuard, and the iOS client app [0] is great when you're away from home.

[0] https://itunes.apple.com/us/app/wireguard/id1441195209

Thanks for that - I'd never heard of WireGuard -> http://jonathanhamberg.com/post/2018-10-17-wireguard-behind-... That post was quite interesting; the way it works is quite different:

  With WireGuard there is not necessarily a central server.
  There are many peers, and any peer can connect to any other
  peer, assuming they have the correct authentication
  credentials. Every peer has a private and public key used
  to identify itself.
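The peer model quoted above boils down to a small config file per peer. A minimal sketch for a roaming device; every key, address and hostname here is a placeholder, not from the linked post:

```
# /etc/wireguard/wg0.conf on the roaming device
[Interface]
PrivateKey = <this peer's private key>
Address    = 10.0.0.2/24

[Peer]
PublicKey  = <home peer's public key>
Endpoint   = home.example.org:51820
AllowedIPs = 10.0.0.0/24
PersistentKeepalive = 25   # keeps NAT mappings alive behind home routers
```

`wg-quick up wg0` brings the tunnel up; the home peer carries a mirror-image `[Peer]` section with this device's public key.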

I've found ngrok [1] useful for hosting stuff without opening ports. The only problem is you need to pay to get a fixed domain name.

1. https://ngrok.com/

Ah very good. Yip, I've used ngrok before, and it's a beautiful piece of software.. nice!

I control my Pi through Telegram (it polls a bot), which works nicely for the things I do want to control.

I think for me, I'll just stick with my little linode VPS for hosting side projects :). But, I like the idea of ngrok!

This is effectively exactly the same as opening ports from a security perspective.

I'm using an overengineered solution: I run a Tor hidden service with the option to only allow authorized clients, as a way to SSH into my home computer from my phone.
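A minimal torrc sketch of that setup. This uses the pre-v3 `HiddenServiceAuthorizeClient` option (v3 onion services moved client authorization to key files in the service directory), and the client name is a placeholder:

```
# /etc/tor/torrc -- hidden SSH service restricted to authorized clients
HiddenServiceDir /var/lib/tor/ssh_hidden/
HiddenServicePort 22 127.0.0.1:22
HiddenServiceAuthorizeClient stealth myphone
```

Tor then writes the onion address and the client's auth cookie into the `hostname` file in that directory; the phone's Tor client supplies the cookie via a matching `HidServAuth` line.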
