Ask HN: What does your self hosted stack look like?
52 points by LorenDB 10 months ago | 41 comments
I'm curious to hear what others have found useful. Whether it's your OS, the services you are hosting, or even general tips on things like buying hardware on a budget or how you implemented a cheap/free DDNS, I (and probably the rest of HN) would love to see what's popular among self hosters.

(Disclaimer: I am in the process of setting up a self hosted server, so your answers may help me with my setup.)




At home I have a little Intel NUC running Ubuntu, hooked up to a 4-bay Synology NAS.

I'm running several web apps, a git server, Plex, Pi-hole, a private CA, and Keycloak, all on top of microk8s. It's overkill, but I appreciated the opportunity to fiddle around and learn k8s without the stress of external obligations.

I have two ingresses, one internal and one external-facing. The external one is exposed via Cloudflare and a micro-VM (for multi-level subdomains that Cloudflare doesn't support for free).

DynDNS is handled by the router: it updates a pseudorandom hostname, and Cloudflare references that hostname by CNAME.
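
In other words, a one-time CNAME record pointing the public name at whatever hostname the router keeps updated. Via Cloudflare's v4 API, creating that record looks roughly like this sketch (the API token, zone ID, and hostnames are all hypothetical):

    # hedged sketch: create a CNAME that follows the router-updated DDNS name
    import requests

    ZONE_ID = "023e105f4ecef8ad9ca31a8372d0c353"  # hypothetical zone ID
    resp = requests.post(
        f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/dns_records",
        headers={"Authorization": "Bearer MY_API_TOKEN"},  # hypothetical token
        json={"type": "CNAME", "name": "apps.example.com",
              "content": "a1b2c3d4.router-ddns.example.net", "proxied": True},
    )
    print(resp.json()["success"])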

It doesn't get any significant amount of external traffic, but it's good enough for family to use for the web apps, plus a yearly March Madness pool (which can't be hosted on Yahoo/ESPN/etc. due to a custom family rule set).


How do you connect your NUC to the Synology?


SMB/NFS?


NFS


Dyed-in-the-wool Penguinista here, running Debian on most of my stuff. (Some smaller Raspberry Pis and a Pi CM4 desktop run Raspberry Pi OS.) My home server runs off an old Supermicro board that predates UEFI, with 16GB of ECC RAM; it cost me about US$100 used. It now boots from two 500GB SSDs in a ZFS mirror (previously from a pair of old 500GB laptop HDDs). I used to back up with rsync, but now my laptop and desktop are also on ZFS, so I use ZFS (sanoid/syncoid) for backups instead. In addition to serving files, I run Gitea on the server (in a Docker container) and a few other odds and ends to support Home Assistant. It backs up to a remote server (a Dell R420, free from where my son works) at my son's house.

Home Assistant is now running on a headless Pi CM4, having recently migrated from a Pi 3B+. It ran fine on the 3B+ except when memory demand was high, being constrained by the 1GB RAM. The CM4 has 2GB RAM.

I have a Pi 4B with 4GB RAM and a pair of 6TB HDDs in a USB dock (the dock also powers the 4B); it runs Gitea in Docker as well. It's sort of experimental but has been pretty solid.

One of my uses for the servers is to host an MkDocs site where I keep all of my notes. It's served by a very simple "python -m http.server" and meets my needs. MkDocs lets me format my notes using Markdown, which is very convenient.
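
If you want that one-liner as a script instead, a minimal sketch (where "site" assumes MkDocs' default build output directory):

    # serve MkDocs' default "site/" output directory on port 8000
    from functools import partial
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    handler = partial(SimpleHTTPRequestHandler, directory="site")
    HTTPServer(("0.0.0.0", 8000), handler).serve_forever()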

None of this is exposed to the Internet; it's all local, due to security concerns. I have a separate VLAN for IoT devices, mostly Raspberry Pis and some smart bulbs and outlets. They don't get access to my primary LAN. The VLAN and firewall are handled by a very small PC (a Zotac - like a NUC) running pfSense.

HTH. If you have any questions, go ahead and ask and I'll eventually get back to you.


I bought a used HP MicroServer Gen8 last year. Cost maybe 100 euros. Dual core, 8 gigs of RAM, RAID controller, dual Ethernet, and iLO (which is better than I expected). It only uses 10 to 25 watts depending on load. I run Arch on it (yes...) because it's a distro I've used since 2006 and I know my way around. I dislike GUI tools.

On top of Arch I have a QEMU VM that serves Postgres and Grafana. There are some Python scripts that gather data from my solar array and other personal stuff.
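
Not the actual scripts, but the pattern is roughly this sketch (the inverter endpoint and table schema are hypothetical):

    # hedged sketch: poll an inverter and store readings for Grafana to chart
    import time

    import psycopg2
    import requests

    conn = psycopg2.connect("dbname=solar user=grafana host=localhost")  # hypothetical DSN

    while True:
        watts = requests.get("http://inverter.local/api/power").json()["watts"]  # hypothetical API
        with conn, conn.cursor() as cur:  # "with conn" commits the transaction on exit
            cur.execute("INSERT INTO readings (ts, watts) VALUES (now(), %s)", (watts,))
        time.sleep(60)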

Oh, and I have a Home Assistant Docker instance. I'm not sure how I feel about it yet. It's a little fiddly.

This machine also has a VPN connection out to the web that I occasionally use from other machines via SSH.


Other similar threads from this year, in case anyone is interested in reading more comments:

* "Ask HN: What's on your home server?" (456 comments) https://news.ycombinator.com/item?id=34271167

* "Ask HN: What hardware are you running for your home server?" (95 comments) https://news.ycombinator.com/item?id=36791697


Colocation:

2x 1U servers in two different DCs, including 1Gbit uplinks and DDoS protection.

Cisco UCS C220 M5, 2x Dell R630, 1x R610

(I need a front-plane Cisco NVMe cable if anyone has one to sell. It's EOL and I can't find one anywhere - email in profile.)

System:

FreeBSD with ZFS

bhyve virtual machines nested in jails to create zones.

System reporting scripted in Tcl, with Tk for the graphical interface.

NaviServer for the web server and web Tcl; Hiawatha for anything else light/misc.

Network:

Mikrotik Cloud Router

IPsec tunnel for the VPN to my apartment.


I bought a used Dell Poweredge R620 on eBay for $150 and brought the RAM up to 128GB for another $50.

It runs Proxmox bare metal, with all kinds of fun containers. I have Wireshark, Gitea, Jellyfin, Nextcloud, and a Debian VM that acts as a mirror for all of my Steam games with in-home download sharing or whatever it's called.

It doesn't have a GPU, but yesterday I started playing with llama.cpp. It has two 16-core Xeon processors, which give me almost acceptable performance with the Wizard 13B q4 model.
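
For anyone curious, CPU-only inference via the llama-cpp-python bindings looks roughly like this sketch (the model path and thread count are hypothetical):

    # hedged sketch: CPU inference with llama-cpp-python
    from llama_cpp import Llama

    llm = Llama(model_path="./wizard-13b.q4_0.bin", n_threads=32)  # hypothetical quantized model
    out = llm("Q: Why self-host? A:", max_tokens=128)
    print(out["choices"][0]["text"])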

Basically, any time I have the whim to play with something, I can just spin up a new container and delete it when I'm done.

This server isn't great, it's fairly old and rather inefficient. But it's definitely been an interesting experience to play around with some real enterprise hardware.

I keep it in the basement, and you can still hear it howling through the floor when llama.cpp kicks in.


Ahh ... memories of racks of Dell servers screaming away like banshees! Still regret not taking ear plugs when working in those server rooms.


I've definitely learned some lessons from this machine. It was probably a mistake to buy a 1U machine. I think at some point I'll invest in a 2U or 3U machine with proper 3.5" drives and normal fans that I can replace with something quiet.

But hey, for $200 total, this wasn't a bad learning experience.


I host all my new projects this way:

- Hetzner 41 EUR bare metal server (AX41-NVMe)

- k0s for a single-node Kubernetes cluster

- I push all my apps to that same cluster; I have pre-made Helm charts, so it's super easy to get a new app up and running

- if scaling requirements increase, I will move to a fully-fledged Kubernetes cluster.

- Backups to DigitalOcean Spaces (S3-like storage)

I have a separate VPS with DigitalOcean where I run a self-hosted GitLab instance (again, backups to S3). It's quite expensive, but I haven't found the time to migrate away yet. There are a dozen more things I tried self-hosting (Supabase, Sentry, PostHog, ...) but stopped, because the free tiers they offer are enough and the hassle isn't worth it.

Tip: Kubernetes is a massive headache to get started (so don't), but once you get past that stage (expect to spend a few months) it becomes quite nice.


K3s on Hetzner: 3 bare-metal servers and 3 control-plane cloud VMs, on a Tailscale network, so I can migrate to any provider easily.

Also push all apps to same cluster (except status page, which is hosted on fly.io free tier)

Also Helm charts for all apps. Easy to deploy/upgrade. All stored in GitHub.

GH Actions for CI/CD, running helm upgrade directly against the cluster, so my team can just push and changes are live in 5 minutes or less.

Agree on k8s pain, took a few tries, but I now gain a lot from it.


I am currently saving up for a Raspberry Pi (if they're even available; otherwise I'm thinking of using my Wii, since projects like the Wii-Linux kernel exist now), so for now I'm using my Chromebook (yes, I know, bad choice) with Fedora 38 installed.

Specs are what you'd expect from a Chromebook: 4 GB of RAM, 32 GB of storage, and some Intel CPU. I use it to occasionally host my bots. I only host them for testing, as the main hosting is handled by my friend, and if I leave a bot running for too long, Black Box would crash, along with Firefox.

If I ever get a Raspberry Pi, I would install Ubuntu Server on it, mainly because of popularity and stability.


I'm currently working on a home server to build a homemade streaming service: something like a Plex server, but worse, because I'm the one making it. I'm using a Raspberry Pi 3A+. It was $25, and the MicroSD card I'm using with it was $50. I've got it to the point where it can stream video to my Android TV without issue. I put a big heatsink on it out of fear of it overheating, but I don't think it was necessary.

I installed 64-bit Raspbian (headless) on the Raspberry Pi. It took a bit of trial and error to set up because I didn't know what I was doing, but ever since getting it running and being able to SSH into it, it's been really easy to work with, because Raspbian is just Linux.

The server is uvicorn hosting an API made with FastAPI in Python. The impetus for my project was to learn FastAPI, and it was incredibly easy to learn. I already knew Android development, so I was also able to write a client app for my TV that downloads and plays videos hosted on the server. The Android TV app was a little harder to make, though, just because of how quickly Android development practices evolve; online resources (and my own experience) become outdated.
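
The core of a server like this is tiny; a hedged sketch of the FastAPI side (the paths are hypothetical, and proper streaming would eventually want HTTP range support):

    # minimal sketch: list and serve video files from a directory
    from pathlib import Path

    from fastapi import FastAPI
    from fastapi.responses import FileResponse

    app = FastAPI()
    MEDIA_DIR = Path("/home/pi/videos")  # hypothetical media directory

    @app.get("/videos")
    def list_videos():
        return sorted(p.name for p in MEDIA_DIR.glob("*.mp4"))

    @app.get("/videos/{name}")
    def get_video(name: str):
        return FileResponse(MEDIA_DIR / name, media_type="video/mp4")

Run with something like "uvicorn main:app --host 0.0.0.0".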

Raspberry Pi was a great choice for me. Super cheap, surprisingly powerful for such a small device, and I learned a ton of useful info in a very short amount of time.

EDIT: Removed some details that were irrelevant


How is this self-hosted if you're hosting on GitHub?


Yeah that part wasn't relevant, I shouldn't have mentioned it


Got it, sounds like a fun, simple setup though. I was exploring FastAPI recently and was a bit surprised that async SQLAlchemy wasn't the out-of-the-box experience, unless I misread the docs and wasn't following along closely enough. The result was that even with all of the async goodness in FastAPI, throughput was completely shot once I started querying a DB. It looks like there is a way to do async SQLAlchemy though, with AsyncSession. Do you have any experience with this?


Thanks, it's been a fun project so far. It just parses filenames in a directory without a DB right now while I'm still figuring out all that I want to do with this project. I was planning to use SQLAlchemy so good to know I should expect that.


Yeah, it may not be an issue for you either; I was just curious if you had any experience. I was considering FastAPI for a big production app, and that was one concern/issue I ran into.
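
For reference, the AsyncSession wiring is roughly this sketch (assuming SQLAlchemy 2.x with the asyncpg driver; the DSN is hypothetical):

    # hedged sketch: one async SQLAlchemy session per request in FastAPI
    from fastapi import Depends, FastAPI
    from sqlalchemy import text
    from sqlalchemy.ext.asyncio import (AsyncSession, async_sessionmaker,
                                        create_async_engine)

    engine = create_async_engine("postgresql+asyncpg://user:pass@localhost/app")  # hypothetical
    SessionLocal = async_sessionmaker(engine, expire_on_commit=False)

    app = FastAPI()

    async def get_session():
        async with SessionLocal() as session:
            yield session

    @app.get("/ping")
    async def ping(session: AsyncSession = Depends(get_session)):
        result = await session.execute(text("SELECT 1"))
        return {"db": result.scalar()}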


I run fewer and fewer services myself. "A person's primary task should not be computing, but being human."[0] I find that self-hosting gets in the way of that.

Nonetheless, I still host my media server[1] and my timeline thing[2]. The former has been rock solid, so it does not need replacement. The latter did not exist, so I had to create it. I do plan to rewrite it far more simply though. After toying with static site generators, I don't want to host super-heavy Docker setups anymore.

The hardware is some generic small-form-factor PC. It's overkill for what it currently does, but it's very fast at media conversion. It runs some flavour of Linux, but I don't remember which.

[0] https://calmtech.com/

[1] https://nicolasbouliane.com/projects/home-server

[2] https://nicolasbouliane.com/projects/timeline


To answer my own question, I am working on setting up a server with openSUSE Leap as the underlying OS. It's using BTRFS on top of hardware RAID 5 (with 8x 4 TB disks, giving me 28 TB total - ample storage for anything I'll ever throw at it). I'm trying to figure out DDNS with Porkbun, and I plan to use nginx-proxy-manager to handle mapping apps to subdomains.
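
On the Porkbun DDNS front, their v3 JSON API is commonly scripted for exactly this; a hedged sketch (the endpoints are from their public docs as best I recall, so verify them; the keys and domain are hypothetical):

    # hedged sketch: Porkbun DDNS updater, run from cron
    import requests

    AUTH = {"apikey": "pk1_...", "secretapikey": "sk1_..."}  # hypothetical keys
    DOMAIN, SUB = "example.com", "home"  # hypothetical domain/subdomain

    # Porkbun's ping endpoint echoes back your public IP
    ip = requests.post("https://api.porkbun.com/api/json/v3/ping", json=AUTH).json()["yourIp"]

    # point the A record for home.example.com at the current IP
    resp = requests.post(
        f"https://api.porkbun.com/api/json/v3/dns/editByNameType/{DOMAIN}/A/{SUB}",
        json={**AUTH, "content": ip, "ttl": "300"},
    )
    print(resp.json())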

My planned software deployment is currently up in the air; I know I'll be deploying Nextcloud, but I am also considering installing the following (some are suggestions from my friends who are in this with me rather than my own):

- Stable Diffusion (I'll need a GPU to do this)

- Self-hosted LLM with Llama 2 (I have yet to figure out what is a good self-hosted web UI for this)

- Minecraft/Minetest server

- Matrix

- Bitwarden/Vaultwarden

- AnonAddy

There are probably other apps that I'll eventually install on the server that I haven't discovered yet :)

I also am hoping to get a backup solution in place, but plans for that are currently pretty nebulous. I do have a really cool idea that I need to look into though.


The only thing I'm going to say here is that IMO using RAID5 on disks that size is basically begging to lose the entire array.

I had this happen... twice, I think, back when 1TB drives were "big" for desktop use:

A single drive suffers a fault of some kind - enough to kick the raid controller into thinking it needs to rebuild.

The controller waits for you to replace the failed drive, and then diligently begins reading the data from the other drives as fast as it can, putting them under increased load, until a second drive fails...

The larger the disk, the longer the rebuild time, the greater the load on the remaining disks, and the greater the chance of a subsequent failure.
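
To put rough numbers on it (hedged, using the commonly quoted consumer URE rate of 1 in 10^14 bits, i.e. about one unrecoverable read error per 12.5 TB read): rebuilding the 8x4TB RAID5 above means reading ~28 TB from the seven surviving disks, so you'd expect on the order of 28/12.5 ≈ 2 UREs during the rebuild. Hitting at least one is more likely than not.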

If you're insistent on using traditional RAID, I'd recommend either RAID10, or at a push RAID6.

But ideally I'd actually suggest ZFS, adding pairs of disks as mirror vdevs to a pool.


Good advice. You threw me back into the memory bank with this!


How long are you thinking of running this? I don't mean to spread FUD or anything, but it doesn't look like Leap will get more updates after the next one.


Hmm, can you provide a source for that? Anyway, as far as I know, it should be a fairly simple process to convert a Leap installation to Tumbleweed, so I'm not too concerned.


There is not too much cause for concern. As you say you can convert to Tumbleweed. I think a group of volunteers are planning to keep releasing a stable distro like Leap so things will probably turn out fine in the end.

I haven't watched it yet but this video seems to be laying out the future plans for Leap. https://youtu.be/QZuIJbiC2lk


I wrote about this pretty recently, but the short version is a mix of NixOS and a k3s-based Kubernetes cluster using Tailscale to connect everything together.

https://gabrielsimmer.com/blog/infrastructure-megaupdate


## Data:

I have a Synology for data accessible via NFS, SMB, and a couple iSCSI mounts.

## Compute:

I have an off-the-shelf ThinkServer running Ubuntu + Cockpit for VM management via the web. I have 3 VMs running on the ThinkServer.

1) A single-node microk8s instance. (This is mostly so I can stay familiar with k8s. Most of it could also be in a docker-compose file.) This runs most stuff: Portainer, Immich (photos), Jellyfin, Plex, Kavita (ebooks), Nextcloud, OwnTracks (location tracking), paperless-ngx, Node-RED, and Fasten (health tracking).

Most apps are deployed in portainer from a GitHub file, so a git commit is all it takes to update an app.

2) A VM for downloading things from YouTube and other places, categorizing/renaming them, and putting them on the NAS (see the sketch after this list).

3) A VM dedicated to Home Assistant, with a pass-through USB to their SkyConnect device to control ZigBee/Thread home IoT stuff.
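
For item 2, assuming a yt-dlp-style downloader, the download-and-rename step can be as small as this sketch (the NAS path and URL are placeholders):

    # hedged sketch: fetch a video and file it on the NAS with yt-dlp
    from yt_dlp import YoutubeDL

    opts = {"outtmpl": "/mnt/nas/videos/%(uploader)s/%(title)s.%(ext)s"}  # hypothetical NAS mount
    with YoutubeDL(opts) as ydl:
        ydl.download(["https://www.youtube.com/watch?v=EXAMPLE"])  # placeholder URL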

Each VM runs a cronjob to regularly put my GitHub pubkeys in my authorized_keys file for break-glass SSH access. I detail my normal access below.
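
GitHub serves each user's public keys at https://github.com/<user>.keys, so the cronjob can be as small as this sketch (the username is hypothetical; it skips the write on an empty response so a bad fetch can't wipe break-glass access):

    # hedged sketch: refresh authorized_keys from GitHub
    import pathlib
    import urllib.request

    USER = "example-user"  # hypothetical GitHub username
    keys = urllib.request.urlopen(f"https://github.com/{USER}.keys").read().decode()
    if keys.strip():  # don't clobber the file on an empty/failed response
        path = pathlib.Path.home() / ".ssh" / "authorized_keys"
        path.write_text(keys)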

## Network/Security:

I use a Unifi Dream Router (UDR) with VLANs for most devices, one for IoT that's locked down, and one for my personal laptop with certain SMB traffic allowed for backups.

I don't allow incoming traffic (well one exception for Plex). I leverage Cloudflare pretty heavily for DNS, ingress, and access control.

I have 4 Cloudflare Tunnel pods on the k8s node, and a tunnel agent on each of the VMs, the Synology, the hypervisor, and the UDR. I use Cloudflare's ephemeral certs for SSH in tandem with cloudflared on my clients, so I can SSH in a normal terminal (as opposed to the web SSH client).

Ingress traffic comes in via the tunnels, which takes care of dynamic DNS. HTTP traffic runs through Cloudflare Access connected to my Google account, which requires FIDO2 to log in. Non-HTTP traffic (or non-browser traffic) is authorized via Cloudflare WARP + Gateway (mostly applicable to native Android apps).

## Internal DNS:

I have the cloudflared agent running on the UDR to allow incoming traffic for management, but it also creates a local DNS proxy that sends encrypted DNS requests to my Cloudflare Gateway, allowing me to collect metrics and do filtering for any DNS on my home network.

## Backups:

I use rclone to back up an encrypted blob to Hetzner storage.


Deliberately minimal! OpenBSD with httpd, sshd, and smtpd. Hosts a bunch of stuff - git, web, files mostly. Uses next to no resources and is trivial to maintain.

Hosted on TransIP

I also have a FreeBSD host with ZFS for files but I’ll be dropping that, having migrated all the files to rsync.net.


In the past I’ve found good hardware deals at the ServeTheHome “Deals” forum. They would link to the best eBay deals and provide hints for what price to offer.

I haven't bought used hardware in a while though, so I'm not sure if it's still a reliable source of deals.


Wow, that looks like an invaluable resource! I will have to browse through there and see if there is anything that looks useful.


DokuWiki for my personal site on the cheapest VPS I could find.

KaithemAutomation (my own FOSS project) at home for watching my Amcrest cameras, and Syncthing for keeping track of my media library. At the moment I just leave my laptop running; it's got object-detection recording, so it doesn't fill up the disk or use too much CPU.

Eventually I expect to want a NAS, I will probably get a commercial prepackaged one that comes with a free cloud access solution and a nice app, and preferably one that can be flashed with something else if that goes away.


At home I have a collection of various Asus Tinker Boards and a Raspberry Pi. They all run Linux, and I use Docker to run my projects. I also have Nextcloud with an external SSD on one of the boards and OpenVPN on the other.

I mostly use them as dev/staging servers for my personal projects.

The VPN lets me demo my projects while not on my network and work on them while away from home.

It's not amazing, but it serves me well.

I should give a shout-out to Tinker Boards: I have some that have been running for years and the SD card never got corrupted, unlike the only Pi I have.


* A RockPro64 NAS at home running DokuWiki, Jellyfin, files, backup. NixOS. Accessible via Wireguard.

* My external sites are all static or CGI (typically Golang), managed by my email/web provider.


PCPartPicker.com to find the best server-grade components with the biggest bang for your buck.

eBay for refurbished Dell PowerEdge servers that can take a beating for years and only cost you $1,000 or less.

eBay and Amazon for anything Ubiquiti networking.

Once upon a time, LTT and others for keeping up on everything IT related lol.

Crosstalk Communications and Lawrence Systems on YouTube these days for keeping up with IT. But be warned: even these two great channels have pitched some janky products over the years.


I worked at a data center operations company many, many years ago, and at that time I built a server, stuck it in the data center, and got a really great deal on it ($50/month). The company has been acquired since then, but they still bill me the same rate for the box!

I share it with a few friends, and we are running our personal websites on Apache and also hosting our own DNS (BIND) and email (Postfix) with spam filtering (SpamAssassin). I SSH in to read my email in neomutt.

We run those base services as system services, but recently we've been running some stuff via docker-compose files. Here's my personal docker-compose stack, using Traefik to proxy to some personal infra, e.g. my own Syncthing and my blog:

https://github.com/igor47/services


Got a second-hand HP desktop running Ubuntu Server (I want to move to Debian at some point); all services etc. are Docker Compose behind a Caddy proxy, connecting to a 2TB NAS. Running Jellyfin, FreshRSS, Nextcloud, Invidious, Libreddit, Linkding, Firefox Sync, and Vikunja.


Would be great if people could share their provisioning and bootstrap code repos (assuming they keep the private data separated).


- Dual AMD EPYC 7402 with 96 threads total (I considered upgrading via eBaying random newer used parts, but the value isn't there)

- 512 GiB of Samsung ECC DDR4

- Lights-out management via Supermicro IPMI

- (2) 4x NVMe-to-PCIe adapters with many Samsung, Seagate, and Intel Optane SSDs on the motherboard

- Dual SAS3 controllers to a Supermicro 847E1C-R1K23JBOD with dual SAS3 expanders and 37x 14 TB HGST helium drives. Not using ZFS because of past unrecoverable problems and poor community support; mdadm and XFS are rock solid. I built an md-array and SMART drive-monitoring tool that tracks the few parameters that are pre-failure signals, and also monitors helium levels (see the sketch after this list).

- Intel QuickAssist 9870 - cryptographic accelerator because "Why not?"

- Intel 10 GbE NIC optical to switch

- vSphere Enterprise Plus currently on 7.x (technically unsupported), considering 8.x

- APC SMX2000 and SMX1500 with multiple expansion batteries, NMC3 lights-out, and temp+humidity monitoring

- WiFi: Ubiquiti 6E locally managed without cloud bullshit

- Offbrand POE+ switch firewalled at the router to avoid cloud, telemetry, and backdoor bullshit

- Deciso OPNsense 740 firewall with 10 GbE links to the ISP and to the switch, running business OPNsense. Wireguard works great with a freebie dyndns to permit remote network access via "VPN". Had to set up a cron with a curl dyndns updater. It's good enough to do remote Steam from mom's internet on an iPad Pro with the Wireguard app VPN in automatic mode.

- 2 Gbps Google Fiber

- VMs standardized on CentOS Stream 9. A couple of Ubuntu and Windows, along with *BSD and other OS build-worker boxes. While the RHEL-compatible CentOS userland needs more work than most, the important bit of RHEL/CentOS is scalable stability, reliability, and security. Make RPMs for whatever you need and vendor your dependencies.

- Kubernetes (k8s) and Docker CE on containerd

- Plex Media Server

- Samba set up and enabled as a Time Machine network disk

- NFS for home directories

- PKI (CA and intermediate CA) for certs on all devices and apps managed by OPNsense (pk not saved)

- SSH: Yubikey GPG-agent backed authorized keys + TOTP + Yubikey FIDO2. (Service accounts are done differently)

- Bitwarden (yes yes, configured correctly and local sync server)

- Obsidian for runbooks and primary documentation

- Also: AWS CloudFront/S3 for YUM RPM repo and EC2 for one-off jobs. Staying below the free tier and using minimal resources is how to be cost efficient.

- Offsite storage: Mega 2TB plan + Apple plan + Google Drive plan (2 is 1, 1 is none)

- Working on: kata-containers and Istio

- Thinking about: ARM server and an HSM
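
A hedged sketch of the SMART-monitoring idea mentioned in the storage item above, using smartctl's JSON output (smartmontools 7+; the device path is hypothetical, and the attribute IDs are the usual pre-failure suspects):

    # hedged sketch: flag the classic pre-failure SMART attributes on one drive
    import json
    import subprocess

    PREFAILURE_IDS = {5: "Reallocated_Sector_Ct",
                      187: "Reported_Uncorrect",
                      197: "Current_Pending_Sector",
                      198: "Offline_Uncorrectable"}

    out = subprocess.run(["smartctl", "-A", "-j", "/dev/sda"],  # hypothetical device
                         capture_output=True, text=True)
    table = json.loads(out.stdout)["ata_smart_attributes"]["table"]
    for attr in table:
        if attr["id"] in PREFAILURE_IDS and attr["raw"]["value"] > 0:
            print(f"warn: {attr['name']} raw value = {attr['raw']['value']}")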

PS: Buy as much as you can used because buying new is usually throwing money away. UPSes must be refurbed with new quality batteries and then refurbed periodically (say, every 7-10 years for the broke home gamer). Schneider Electric bastards (owners of APC) went to a subscription model for NMC3 firmware updates that is cost prohibitive.


> Offbrand POE+ switch firewalled at the router to avoid cloud, telemetry, and backdoor bullshit

How do you handle firmware updates for the devices connected to this vlan?



