
Ask HN: What do you self-host? - aeleos
I know this has been posted before, but that was a few years ago, so I wanted to restart the discussion, as I love hearing about what people host at home.

I am currently running an Unraid server with some docker containers, here are a few of them: Plex, Radarr, Sonarr, Ombi, NZBGet, Bitwarden, Storj, Hydra, Nextcloud, NginxProxyManager, Unifi, Pihole, OpenVPN, InfluxDB, Grafana.
======
mavidser
I reworked my servers a while ago to host literally everything through docker,
managed via terraform.

All web-services are reverse-proxied through traefik
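For anyone curious what that wiring looks like: with the docker provider, routing a container through Traefik is mostly a matter of labels. A minimal sketch, assuming Traefik v2; the service, hostname, and certificate resolver names are placeholders:

```yaml
# docker-compose.yml fragment: Traefik discovers this container via its labels
services:
  freshrss:
    image: freshrss/freshrss
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.freshrss.rule=Host(`rss.example.com`)"
      - "traefik.http.routers.freshrss.entrypoints=websecure"
      - "traefik.http.routers.freshrss.tls.certresolver=letsencrypt"
```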

At home:

    
    
        loki + cadvisor + node-exporter + grafana + prometheus
        syncthing
        tinc vpn server
        jackett + radarr + sonarr + transmission
        jellyfin
        samba server
        calibre server
    

On a remote server:

    
    
        loki + cadvisor + node-exporter + grafana + prometheus
        syncthing
        tinc vpn server
        dokuwiki
        firefox-sync
        firefox-send
        vscode server
        bitwarden
        freshrss
        znc bouncer + lounge irc client + bitlbee
        an httptunnel server (like ngrok)
        firefly iii
        monicahq
        kanboard
        radicale
        wallabag
        tmate-server

~~~
dmos62
I see you're using Bitwarden.

Does anyone have recommendations for password+sensitive-data management?

I'm currently using Keepass and git, but I have one big qualm. You cannot
choose to not version-control that one big encrypted (un-diff-able) file.

~~~
johntash
You might like Pass [0] or GoPass [1], which had more features the last time I
looked at it.

They both store passwords/data in gpg-encrypted files in a git repo. I'm not
sure what the state of GUIs/browser plugins are for it, but I'm pretty sure
there are some out there.

You can also set up your git config to be able to diff encrypted .gpg files so
that the files are diff-able even though they're encrypted.
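For example, via a textconv diff driver (this assumes a gpg agent that can decrypt non-interactively):

```shell
# .gitattributes in the repo: route .gpg files through a custom diff driver
echo '*.gpg diff=gpg' >> .gitattributes

# tell git how to turn a .gpg blob into comparable plaintext for diffing
git config diff.gpg.textconv 'gpg --quiet --decrypt'
```

After that, `git diff` and `git log -p` show plaintext differences even though the stored blobs stay encrypted.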

[0]: [https://www.passwordstore.org/](https://www.passwordstore.org/)

[1]: [https://github.com/gopasspw/gopass](https://github.com/gopasspw/gopass)

~~~
dmos62
Yeah, I like Pass the most in this space, but it doesn't encrypt the index of
logins/items that you're keeping. I.e. it's a folder tree of encrypted files,
so anyone with access to the store can see the sites, logins, and other things
I'm using. That's kind of a deal breaker for me, though I'm pondering whether
I'm being practical or just overly cautious.

------
teddyh
“Self-host” is such a weird word. Hosting your own stuff yourself should be the
_default_, should it not? I mean, you don’t “self-drive” your car, nor “self-
work” your job. The corresponding words instead exist for the opposites: you
can have a chauffeur, and you can outsource your job.

I think the problem is entirely caused by the US having absolutely abysmal
private internet speeds and capacity. Since you can’t then have your own
server at home, you are forced to have it elsewhere with sensible internet
connections.

It’s as if, in an alternate reality, no private residences had parking space
for cars; no garages, no street parking. Everyone would be forced to either
use public transport, taxis and chauffeur services to get anywhere. Having a
private vehicle would be an expensive hobby for the rich and/or enthusiasts,
just like having a personal server is in our world.

~~~
maxerickson
If you take a broader lens, having a private vehicle _is_ an expensive hobby
for the rich.

And most people actually do outsource their jobs. They are employees rather
than working for themselves…

~~~
avl999
> If you take a broader lens, having a private vehicle is an expensive hobby
> for the rich.

That might be true if you are in SF, NY, Toronto, London, or some other major
metropolis with a good public transportation network. However, for a large
number of places in North America, including metro areas like LA, San Diego,
Minneapolis, and Dallas, having a car is practically a necessity, as it is the
only way to get around the city without spending half a day in public transit.

------
cyphar
I self-host the following at home. Everything is running under LXD (and I have
all of the scripts to set it up here[1]):

    
    
      * nginx to reverse-proxy each of the services.
      * NextCloud.
      * Matrix Homeserver (synapse).
      * My website (dumb Flask webapp).
      * Tor (non-exit) relay.
      * Tor onion service for my website.
      * Wireguard VPN (not running in a container, obviously).
    

All running on an openSUSE Leap box, with ZFS as the filesystem for my drives
(simple stripe over 2-way mirrors of 4TB drives).
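That layout (a stripe over 2-way mirrors, i.e. RAID10-style) is set up at pool-creation time; ZFS stripes across all top-level vdevs automatically. A sketch, with the pool name and device paths as placeholders:

```shell
# two 2-way mirrors as top-level vdevs; writes are striped across both
zpool create tank \
  mirror /dev/disk/by-id/ata-disk1 /dev/disk/by-id/ata-disk2 \
  mirror /dev/disk/by-id/ata-disk3 /dev/disk/by-id/ata-disk4
```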

It also acts as an NFS server for my media center (Kodi -- though I really am
not a huge fan of LibreELEC) to pull videos, music, and audiobooks from.
Backups are done using restic (and ZFS snapshots to ensure they're atomic) and
are pushed to BackBlaze B2.
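The snapshot-then-backup dance is roughly the following (dataset name and repo are placeholders; restic expects B2 credentials in the environment, and ZFS snapshots are reachable under the hidden `.zfs/snapshot` directory without an explicit mount):

```shell
# take an atomic snapshot, back it up with restic, then drop the snapshot
zfs snapshot tank/data@restic
restic -r b2:my-bucket:backups backup /tank/data/.zfs/snapshot/restic
zfs destroy tank/data@restic
```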

I used to run an IRC bouncer but Matrix fills that need these days. I might
end up running my own Gitea (or gitweb) server one day though -- I don't
really like that I host everything on GitHub. I have considered hosting my own
email server, but since this is all done from a home ISP connection that
probably isn't such a brilliant idea. I just use Mailbox.org.

[1]:
[https://github.com/cyphar/cyphar.com/tree/master/srv](https://github.com/cyphar/cyphar.com/tree/master/srv)

~~~
douglascoding
> * Wireguard VPN (not running in a container, obviously).

I plan to use Wireguard too, so I shouldn't run it in a container? Can you
elaborate on that?

~~~
BrandoElFollito
From the small amount of research I did, I think you need the WireGuard module
in the host's kernel to do that.

I run it on the host.
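My understanding (worth verifying for your distro): WireGuard lives in the kernel, so the module has to be present on the host; a container can then only configure the interface, given the right capabilities. Roughly (the image name is illustrative):

```shell
# the host needs the module (built into the kernel since 5.6, DKMS before that)
sudo modprobe wireguard

# a container can then manage the interface if granted NET_ADMIN
docker run -d --cap-add=NET_ADMIN \
  -v /lib/modules:/lib/modules:ro \
  -p 51820:51820/udp \
  linuxserver/wireguard
```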

------
sdan
I host a bunch of docker containers plus Traefik to route everything. It runs
on a cheap GCP instance (more on this here:
[https://sdan.xyz/sd2](https://sdan.xyz/sd2))

Overleaf: [https://sdan.xyz/latex](https://sdan.xyz/latex)

A URL Shortener: [https://sdan.xyz](https://sdan.xyz)

All my websites ([https://sdan.xyz/drf](https://sdan.xyz/drf),
[https://sdan.xyz/surya](https://sdan.xyz/surya), etc.)

My blog(s) ([https://sdan.xyz/blog](https://sdan.xyz/blog),
[https://sdan.xyz/essays](https://sdan.xyz/essays))

Commento commenting server (I don't like disqus)

Monitoring ([https://sdan.xyz/monitoring](https://sdan.xyz/monitoring), etc.)

Analytics (using Fathom Analytics) and some more stuff!

~~~
djsumdog
I run netdata too, but I keep that behind my VPN. I'd suggest the same for
you. No reason to have that exposed to the entire world.

I wrote this to set up my web server, mail server, and VPN server, and auto-
generate all my VPN keys.

[https://github.com/sumdog/bee2](https://github.com/sumdog/bee2)

~~~
rovr138
Any reason to have it behind a VPN?

~~~
AdamGibbins
It reduces your attack surface; you never know when a 0-day is going to be
found. Exposing monitoring/metrics is particularly interesting because it gives
an attacker a lot of information, e.g. if they're trying to starve your machine
of a resource or whatever.

~~~
sdan
Exactly. They have direct access to your vitals and can probe to see how your
system responds, refining their attack until they pull off whatever they
intended to do.

I'm probably going to change how publicly accessible my monitoring view is
soon, but for now, it seems pretty cool for everyone to see.

~~~
lma21
Indeed it was cool.

Would love to get a link to a screenshot of your system's resource monitoring.
The description of each panel & each metric was quite useful!

~~~
pm7
It's still public as of now.

------
whalesalad
Bums me out when I see people putting so many resources into running/building
elaborate piracy machines. Plex, radarr, sonarr, etc... (you note some of
these services but /r/homelab is notorious for this)

Here’s my home lab: [https://imgur.com/a/aOAmGq8](https://imgur.com/a/aOAmGq8)

I don’t self host anything of value. It’s not cost effective and network
performance isn’t the best. Google handles my mail. GitHub can’t be beat. I
use Trello and Notion for tracking knowledge and work, whether personal or
professional. Anything else is on AWS. I do have a VPN though so I can access
all of this when I’m not home.

The NAS is for backing up critical data. R720 was bought to experiment with
Amazon Firecracker. It’s usually off at this point. Was running ESXI, now
running Windows Server evaluation.

The desktop on the left is the new toy. I’m learning AD and immersing myself
100% in the Microsoft stack. Currently getting an idiomatic hybrid
local/azure/o365 setup going. The worst part about planning a MS deployment is
having to account for software licensing that is done on a per-cpu-core basis.

~~~
saagarjha
Plex can be, and often is, used for hosting content that you own the rights to.

~~~
whalesalad
Sure, in the same way that BitTorrent can be used to download Linux ISOs :)

~~~
reificator
I pull in about two batches of 5-30 torrents every month or two for content I
_paid for_ on Humble Bundle.

People who use bittorrent legally do exist, or at least there's one of us.

~~~
cannonedhamster
There's tons of people who use BitTorrent legally. Some companies use it on
their servers to keep them in sync.

------
tbyehl
In colo:

    
    
      nginx
      Plex
      Radarr / Sonarr / SABnzbd / qBittorrent / ZeroTier -> online.net server
      FreeNAS x2
      Active Directory
    

At home:

    
    
      nginx
      vCenter
      urbackup
      UniFi SDN, Protect
      Portainer / unms / Bitwarden
      Wordpress (isolated)
      Guacamole
      PiHole
      InfluxDB / grafana
      Active Directory
      Windows 10 VM for Java things
      L2TP on my router
    

Everything I expose to the world goes through CloudFlare and nginx with
Authenticated Origin Pulls [0], firewalled to CF's IPs [1], and forced SSL
using CF's self-signed certs. I'm invisible to Shodan / port scans.
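The nginx side of Authenticated Origin Pulls is essentially a client-certificate check against Cloudflare's CA (the origin-pull CA cert comes from [0]; file paths here are placeholders):

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    # Cloudflare origin certificate (forced SSL between CF and the origin)
    ssl_certificate     /etc/nginx/ssl/cf-origin.pem;
    ssl_certificate_key /etc/nginx/ssl/cf-origin.key;

    # Authenticated Origin Pulls: require a client cert signed by Cloudflare's CA,
    # so requests that don't come through Cloudflare are rejected
    ssl_client_certificate /etc/nginx/ssl/origin-pull-ca.pem;
    ssl_verify_client on;
}
```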

Have been meaning to move more to colo, especially my Wordpress install and
some Wordpress.com-hosted sites, but inertia.

[0] [https://support.cloudflare.com/hc/en-
us/articles/204899617-A...](https://support.cloudflare.com/hc/en-
us/articles/204899617-Authenticated-Origin-Pulls)

[1] [https://www.cloudflare.com/ips/](https://www.cloudflare.com/ips/)

~~~
Pmop
Wait. What? Windows VM for Java?

~~~
throwaway8941
It's most likely client-side stuff. Probably some crappy banking client, or an
authentication client for some government websites, or something like that.

I use one for the sites below. It is written in Java/Kotlin, but barely works
anywhere except Windows.

[https://egov.kz/cms/en](https://egov.kz/cms/en)

[https://cabinet.salyk.kz/](https://cabinet.salyk.kz/)

...

------
zelly
SSH: for git and tunneling literally everything: VNC, sftp, Emacs server,
tmux, ....

Docker running random stuff

Used to run Pihole until I got an Android and rooted it. Used to mess with
WebDAV and CalDAV. Nextcloud is a mess; plain SFTP fuse mounts work better for
me. My approach has gone from trying to replicate cloud services to straight
up remoting over SSH (VNC or terminal/mosh depending on connectivity) to my home
computer when I want to do something. It's simple and near unexploitable.
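The SFTP fuse mount in question is a one-liner with sshfs (host and paths are examples):

```shell
# mount a remote directory locally over SFTP; reconnect survives flaky links
sshfs user@home:/home/user ~/mnt/home -o reconnect,ServerAliveInterval=15

# and to unmount:
fusermount -u ~/mnt/home
```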

This is the way it should always have been done from the start of the
internet. When you want to edit your calendar, for example, you should be able
to do it on your phone/laptop/whatever as a proxy to your home computer,
actually locking the file on your home computer. Instead we got the
proliferation of cloud SaaSes to compensate for this. For every program on
your computer, you now need >1 analogous but incompatible program for every
other device you use. Your watch needs a different calendar program than your
gaming PC than your smart fridge, but you want a calendar on all of them. M×N
programs where you could have just N, those on your home computer, if you
could remote easily. (Really it's one dimension more than M×N when you
consider all the backend services behind every SaaS app. What a waste of human
effort and compute.)

~~~
dmos62
I sympathize. My meditations about this led me to thinking about waste as
well.

Why a computer at home, though? For someone who moves around a lot and doesn't
invest in "a home", this would be bothersome. Not to mention it's more
expensive, in terms of energy and money. I think third-party data centers are
fine for self-hosting.

~~~
zelly
There's really no difference. Mainly I use a machine at home instead of a data
center VM because that's just one less bill to pay. I have two GPUs in there
which would be very expensive on public cloud.

I guess one reason people might gravitate to home hosting is owning your own
disks, the tinfoil hat perspective. You can encrypt volumes on public cloud as
well, but it's still on someone else's machine. They could take a snapshot of
the heap memory and know everything you are doing.

~~~
dmos62
I speculate that in the future the trust aspect of using third-party hardware
might be solved technologically. And I agree that today the tinfoil sentiment
is not baseless.

------
ricardbejarano
On my home server (refurbished ThinkPad X201 with a Core i5-520M, 8GB of
memory, 1TB internal SSD sync'd nightly to an external 1TB HDD) I run a
single-node Kubernetes cluster with the following stuff:

* MinIO: for access to my storage over the S3 API, I use it with restic for device backups and to share files with friends and family

* CoreDNS: DNS cache with blacklisted domains (like Pihole), gives DNS-over-TLS to the home network and to my phone when I'm outside

* A backup of my S3-hosted sites, just in case (bejarano.io, blog.bejarano.io, mta-sts.bejarano.io and prefers-color-scheme.bejarano.io)

* [https://ideas.bejarano.io](https://ideas.bejarano.io), a simple "pick-one-at-random" site for 20,000 startup ideas ([https://news.ycombinator.com/item?id=21112345](https://news.ycombinator.com/item?id=21112345))

* MediaWiki instance for systems administration stuff

* An internal (only accessible from my home network) picture gallery for family pictures

* TeamSpeak server

* Cron jobs: dynamic DNS, updating the domain blacklist nightly, recursively checking my websites for broken links, keeping an eye on any new release of a bunch of software packages I use

* Prometheus stack + a bunch of exporters for all the stuff above

* IPsec/L2TP VPN for remote access to internal services (picture gallery and Prometheus)

* And a bunch of internal Kubernetes stuff for monitoring and such
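For the curious, the CoreDNS side of that (DNS-over-TLS plus a Pihole-style blocklist) can be sketched with a Corefile like this; the cert paths, blocklist path, and upstream resolvers are placeholders, and the blocklist is a file in hosts(5) format:

```
# serve DoT on 853: answer from the blocklist first, forward everything else
tls://.:853 {
    tls /etc/coredns/cert.pem /etc/coredns/key.pem
    hosts /etc/coredns/blocklist.hosts {
        fallthrough
    }
    forward . 1.1.1.1 9.9.9.9
    cache
}
```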

I still have to figure out log aggregation (I'll probably use fluentd), and I
want to add some web-based automation framework like NodeRED or n8n.io for
random stuff. I'd also like to host a password manager, but I still have to
study that.

I also plan on rewriting wormhol.org to support any S3 backend, so that I can
bind its storage with MinIO.

And finally, I'd like to move off single-disk storage and get a decent RAID
solution to provide NFS for my cluster, as well as a couple more nodes to add
redundancy and more compute.

Edit: formatting.

~~~
bluegreyred
Nice to see somebody using a ThinkPad as a home server.

I remember comparing low-power home servers, consumer NAS boxes, and a refurb
ThinkPad, and the latter won on price/performance and idle power consumption
(<5W). You also get a built-in screen & keyboard for debugging and an
efficient DC-UPS if you're brave enough to leave the batteries in. That's of
course assuming you don't need multiple terabytes of storage or run programs
that load the CPU 24/7, which I don't. These days a rPi 4 would probably
suffice for my needs, but I still think the refurb ThinkPad is a smart idea.

~~~
ricardbejarano
I don't overload the CPU and my storage requirements are low. 95% of my used
storage is stuff I wouldn't care if it got lost, but just nice to have around.
I only have around 2GB of data I don't want to lose.

I do leave the batteries in. Is it dangerous? I read some time ago that it is
not dangerous, but that the capacity of the battery drops significantly. I
don't care about capacity, and safe shutdowns are important to me.

In the past I used an HP DL380 Gen. 7 (which I still own, and wouldn't mind
selling as I don't use it), but I had to find a solution for the noise, and
power consumption came to around 18EUR a month at my EUR/kWh rate.

Cramming down what ran on 12 cores and 48GiB of RAM on a 2-core, 4GiB (I only
upgraded the memory 2 months ago) machine was a real challenge.

The ThinkPad cost me 90EUR (IBM refurbished), we bought two of them, the other
one burnt. The recent upgrades (8GiB kit + Samsung Evo 1TB) cost me around
150EUR. Overall a really nice value both in compute per EUR spent and in
compute per Wh spent. Really happy with it, I just feel it is not very
reliable as it is old.

~~~
bluegreyred
>I do leave the batteries in. Is it dangerous? I read some time ago that it is
not dangerous, but that the capacity of the battery drops significantly. I
don't care about capacity, and safe shutdowns are important to me.

It's not necessarily dangerous but lithium batteries have a chance to fail and
in very rare cases even explode, making them a potential fire hazard. I'm not
an expert, maybe someone else can expand on this. If I were to run an old
laptop of unknown provenance with a LiIon battery 24/7 completely unattended
I'd at least want to make sure that it is on a non-flammable surface without
any flammable items nearby.

>In the past I used an HP DL380 Gen. 7 (which I still own, and wouldn't mind
selling as I don't use it), but I had to find a solution for the noise, and
power consumption came to around 18EUR a month at my EUR/kWh rate.

Yes, I am surprised how many people leave power consumption out of the
equation. These days you can rent a decent VPS for the power cost of an old
refurb server alone.

~~~
ricardbejarano
> It's not necessarily dangerous but lithium batteries have a chance to fail
> and in very rare cases even explode, making them a potential fire hazard.

Well, I'm removing the battery and the pseudo-UPS logic right now. The battery
looks fine, but I'm not taking any risks, since it's on top of the DL380 but
under a wooden TV stand.

Thanks for the heads up! You might have prevented a fire.

------
kstenerud
Everything I run is in a deterministic, rebuildable LXC container or KVM
virtual machine.

I have around 10 desktops that run in containers in various places for various
common tasks I do. Each one has a backed up homedir, and then I have a ZFS-
backed fileserver for centralized data. I connect to them using chrome remote
desktop or x2go. I've had my work machine die one time too many, so with these
scripts I can go from a blank work machine to exactly where I left off before
the old one died, in a little over an hour. None of my files are stuck to a
particular machine, so I can run on a home server, and then when I need to
travel, transfer the desktop to a laptop, then transfer it back again when I
get home. Takes about 10 minutes to transfer it.

[https://github.com/kstenerud/virtual-
builders](https://github.com/kstenerud/virtual-builders)

I also run most of my server apps this way:

[https://github.com/kstenerud/virtual-
builders/tree/master/ma...](https://github.com/kstenerud/virtual-
builders/tree/master/machine-builders)

------
itm
I eat my own dog food:

[https://github.com/epoupon/lms](https://github.com/epoupon/lms) for music

[https://github.com/epoupon/fileshelter](https://github.com/epoupon/fileshelter)
to share files

Everything is packaged for Debian buster (amd64 and armhf) and runs behind a
reverse proxy.

~~~
djsumdog
Huh, interesting. I usually have full copies of my music collection where I
need them (512gb microsd in my phone and on the work laptop) but it would be
nice to just have a web interface if I'm at someone's house or so they can
play off their phone. I think I was using subsonic until they changed all
their licensing.

One UI question: is there a reason you left off volume controls? That's
something that still annoys me about Bandcamp, and I had submitted a patch to
Mastodon to create a volume control for their video component.

~~~
itm
This is a good question. On mobile, I just use the phone buttons to control
volume. On desktop, I just use the media keys, since I don't listen to
anything else. But since you are not the first to ask, I can add a volume
slider on large devices.

------
letstrynvm
I have a cheap dedicated server with outgoing Postfix mail forwarding with
sasl auth, nsd for the domains, a few web services over tls. Git server via
gitolite + git-daemon. Mailman.

Incoming mail points directly to an RPi at home on dsl... Postfix + Dovecot
IMAP. It's externally accessible, my dedicated server does the dynamic dns to
point to the RPi; the domain MX points to that. Outgoing mail forwards through
the dedicated server, which has an IP with good reputation and DKIM.

This gets me a nice result that my current and historical email is delivered
directly to, and stays at, home, and my outgoing mail is still universally
accepted. There's no dependency on google or github. There's no
virtualization, no docker, no containers, just Linux on the server and on the
rpi to keep up to date. It uses OS packages for everything so it stays up to
date with security updates.
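The outgoing half of that split is just a relayhost with SASL auth in the home machine's Postfix, roughly like this (the relay hostname and map path are placeholders):

```
# /etc/postfix/main.cf on the home machine: relay all outbound mail
# through the dedicated server's submission port
relayhost = [relay.example.com]:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_tls_security_level = encrypt
```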

------
rolleiflex
I’m pretty vanilla compared to most people here, I just host my own email and
I have a Synology box that provides a few utilities like an Evernote
replacement.

I also host Aether P2P ([https://getaether.net](https://getaether.net)) on a
Raspberry Pi-like device, so it helps the P2P network. But I’m biased on that
last one, it’s my own software.

------
thegeekbin
I changed my hardware around recently. I used to have a 5U colo that I've now
downsized for financial reasons; I migrated everything into one box called
Poof. On Poof I'm running:

    
    
        - matrix home server
        - xmpp server
        - websites for wife and I (Cloudlinux, Plesk, Imunify360)
        - nextcloud
        - jellyfin + jackett + sonarr + radarr
        - rutorrent
        - CDN origin server (bunnycdn pulls files from this)
        - znc bouncer
        - freeipa server
        - Portainer with pihole, Prometheus, grafana and some microservices on them
        - Gitea server
        - spare web server I use as staging environment
    

All of this is behind a firewall. I've been fortunate enough to have a /27
assigned to me, so there are more than enough IP addresses available; I'm
using all but about 5 or 6 of them, but plan to change that soon. I'm going to
be assigning dedicated IPs to every site I host (3 total), and putting my XMPP
server on its own VM instead of sharing it with Matrix, giving it its own IP.

I blog about this stuff if anyone’s interested:
[https://thegeekbin.com/](https://thegeekbin.com/)

------
Youden
I used to have things in a colo, but now I have fiber at home, so just about
everything is on a single giant machine, complete with a graphics card for a
gaming VM:

    
    
      VM management: libvirt (used to host gaming PC and financial applications)
      Container management: Docker (used to be k8s but gave up)
      Photo gallery: Koken, Piwigo, Lychee
      Media acquisition: Radarr, Sonarr, NZBGet, NZBHydra
      Media access: Plex
      Monitoring: InfluxDB, Grafana, cAdvisor, Piwik, SmartD, SmokePing, Prometheus
      Remote data access: Nextcloud
      Local data access: Samba, NFS
      Data sync: Syncthing
      WireGuard
      Unifi server
      IRC: irssi, WeeChat, Glowing Bear, Sopel (runs a few bots)
      Finance: beancount-import, fava
      Chat: Riot, Synapse (both Matrix)
      Databases: Postgres, MariaDB, Redis
      Speed test: IPerf3
    

I also have a seedbox for high-bandwidth applications.

------
h1d
Just a piece of advice: I suggest you host publicly facing services and
privately hosted services on different instances.

You don't want a less-tested web app to expose some security hole that lets
someone start snooping on your traffic toward Bitwarden after SSL termination.

If you don't want an extra box at home, you can always get a $5/mo cloud
instance for public stuff, where you don't have to worry about an increased
electricity bill from a DDoS spiking your CPU, or about it choking your home
network.

------
DominoTree
I self-host a fairly big Plex and several personal websites along with a
NextCloud instance to sync calendars/reminders/etc across devices. Pretty much
everything forward-facing is behind CloudFlare.

On the front end I have two 1Gbit circuits (AT&T and Google) going into an
OPNSense instance doing load-balancing and IPS running on a Dell R320 with a
12-thread Xeon and 24GB of RAM

Services are hosted on a Dell R520 with 48GB RAM and two 12-thread Xeons
running Ubuntu and an up-to-date ZFS on Linux build.

Media storage handled by two Dell PowerVault 1200 SAS arrays.

Back-end is handled by a Cisco 5548UP and my whole apartment is plumbed for
10Gbit.

~~~
Havoc
>On the front end I have two 1Gbit circuits (AT&T and Google)

Holy hell. How did that come about?

------
menssen
Nothing. I self-host nothing. My entire home networking infrastructure
consists of a more powerful WiFi router than the one built into the modem that
the cable company provides so that it reaches to the back of my apartment. I
pay money for GitHub, Dropbox, iCloud, Apple Music, Netflix, Hulu, HBO, Amazon
Prime, a VPN to spoof my location occasionally, and Google Apps (or I would,
if I were not grandfathered into the free tier). When I want to spin up a
personal project, I do it on Heroku.

I live in a stable first-world democracy. Or, since it seems to be getting
less stable recently, maybe a better way to put it is: I participate in a
stable global economy. If "the cloud" catastrophically fails to the point
where I lose all of the above without warning, I will likely have bigger
problems than never being able to watch a favorite tv show again.

I wonder if this exposes two kinds of people: those who value mobility, and
are more comfortable limiting the things that are important to them to a
laptop and a bug-out bag, and those who value stability, and are inclined to
build self-sufficient infrastructure in their castles.

~~~
elagost
There's a third kind of person - one who doesn't want their personal data
beholden to a bunch of faceless for-profit companies who have proven they care
less about security and privacy than they do about money.

I don't self-host a lot of services (and the ones I do could go away tomorrow
without hurting me much), but I only have one cloud resource - email.
It kind of has to be that way for various reasons; I'd self host if I could
reasonably do so. I also think I value my $75/mo more than I value an endless
stream of entertainment.

(edit: just wanted to say, thanks for posting this. It is a valuable
discussion point.)

~~~
CarelessExpert
Not to mention a fourth kind of person - one who just wants services that work
better than what the cloud offers.

By definition, self-hosting means the service is under _my_ control, doing
what _I_ need, customized for _my_ use cases. And because I use only open
source stacks, I can (and have) even modify the code to customize even
further.

And that's ignoring the fact that free, self-hosted options can often provide
features that third-party services cannot for legal, technical, or support
reasons.

For example, my TT-RSS feed setup uses a scraper to pull full article content
right into the feed. A service would probably land in legal trouble if they
did this. And while it works incredibly well, like, 90% of the time (thank you
Henry Wang, author of mercury-parser-api!), if it was a service, that 10%
could result in thousands of support emails or an exodus of subscribers.

~~~
acolumb
Could you elaborate on how you got TTRSS to scrape?

~~~
CarelessExpert
I installed the Mercury parser plugin:

[https://github.com/HenryQW/mercury_fulltext](https://github.com/HenryQW/mercury_fulltext)

The directions there are pretty clear. You've gotta set up the mercury parser
API service (I used docker) and then enable the plugin for the feeds you want
to apply it to.

Alternatively you could use the Readability plugin that ships with tt-rss, but
I have no idea how effective it is as I never tried it.

Finally, you could stand up the RSS full text proxy:

[https://github.com/Kombustor/rss-fulltext-
proxy](https://github.com/Kombustor/rss-fulltext-proxy)

That service stands between your RSS feed reader of choice and the RSS feed
supplier and does the scraping and embedding.

------
olalonde
I recently installed docker-simple-mail-forwarder[0] to use a custom domain
name email with Gmail. It's a one line install.

[0] [https://github.com/huan/docker-simple-mail-
forwarder](https://github.com/huan/docker-simple-mail-forwarder)

------
folkhack
I'll bite because I love threads like this! I run an Intel NUC with an i5, 8GB
RAM, an OS SSD, and an external USB3 4TB RAID attached. OS is Debian 9. I've
always run a "general utility" Debian server at home due to projects and SSH
tunneling (yeah yeah, I should set up a proper VPN, I know, ha).

* It's a target for my rsync backups for all my client systems (most critical use)

* Docker TIG stack (Telegraf, InfluxDB, Grafana), which monitors my rackmount APC UPS, my Ubiquiti network hardware, Docker, and just general system stats

* Docker Plex

* Docker Transmission w/VPN

* Docker Unifi

* A custom network monitor I built that just pings/netcats certain internal and external hosts (not used too seriously, but it comes in handy)

* And finally, a neglected Minecraft server

I went for low power consumption (and a fanless design) since it's an
always-on device and power comes at a premium here. I highly suggest the NUC,
as it's a highly capable device with plenty of power if upgraded a bit!

~~~
ekianjo
I guess this is a recent i5? In case you want a low-consumption alternative,
an older Celeron-based NUC is also a very capable machine (much better than a
Raspberry Pi 4 for about the same price used nowadays) and idles at a few
dozen watts.

~~~
folkhack
Not sure on how recent but I picked it up about a year ago. Looked into the
Celeron ones and although they're impressive I decided to go with something
that's a bit beefier due to how many containers I planned to run/experiment
with =)

------
kissgyorgy
Wallabag! It changed my reading habits:
[https://wallabag.org/en](https://wallabag.org/en)

~~~
mackrevinack
I've been eyeing that up for quite a while now. Being able to run it on an
e-ink reader seems like a great idea. I set up the email-an-article-to-your-
kindle thing a few years ago, but it's still too much effort.

------
yankcrime
In colo (a former nuclear bunker, no less!) I have a small OpenStack 'cloud'
deployment cobbled together from spare hardware, pieced together in
partnership with a friend of mine. I wrote a bit about it here if anyone's
interested:

[https://dischord.org/2019/07/23/inside-the-sausage-
factory/](https://dischord.org/2019/07/23/inside-the-sausage-factory/)

At home I have:

    
    
      A Synology DS412+ with 4 x 4TB drives
      An ancient HP Microserver N36L with 16GB RAM and 4 x 4TB drives running FreeBSD
      Ubiquiti UniFi SG + CloudKey + AP
      An OG Pi running PiHole
    

The DS412+ is my main network storage device, with various things backed up to
the Microserver. Aside from the OEM services it also runs Minio (I use this
for local backups from Arq), nzbget, and Syncthing in Docker containers.

------
Mister_Snuggles
At home I have:

FreeBSD server running various things:

* Home Assistant, Node-RED, and some other home automation utilities running in a FreeBSD Jail.

* UniFi controller in a Debian VM.

* Pi-Hole in a CentOS VM.

* StrongSwan in a FreeBSD VM.

* ElasticSearch, Kibana, Logstash, and Grafana running in a Debian VM.

* PostgreSQL on bare metal.

* Nginx on bare metal, this acts as a front-end to all of my applications.

I also have:

* Blue Iris on a dedicated Windows box. This was a refurbished business desktop and works well, but my needs are starting to outgrow it.

* A QNAP NAS for general storage needs.

Future plans are always interesting, so in that vein here are my future plans:

Short term:

* Move my home automation stuff out of the FreeBSD Jail into a Linux VM. The entire Home Assistant ecosystem is fairly Linux-centric and even though it works on FreeBSD, it's more pain than I'd really like. Managing VMs is also somewhat easier than managing Jails, though I'm sure part of this is that I'm using ezjail instead of something more modern like iocage.

* Get Mayan-EDMS up and running. I hate paper files, this will be a good way to wrangle all of them. I've used it before, but didn't get too deep into it. This time I'm going all-in.

Medium term:

* Replace my older cameras with newer models.

* Possibly upgrade my Blue Iris machine to a more powerful refurbished one.

* Create a 'container VM', which will basically be a Linux VM used for me to learn about containers.

Long term:

* Replace my FreeBSD server with new hardware running a proper hypervisor (e.g., Proxmox, VMware ESXi). This plan is nebulous as what I have meets my needs, this is more about learning new tools and ways of doing things.

------
boredpenguin
Currently not much. On the home server:

• Apache: hosting a few websites and a personal (private) wiki.

• Transmission: well, as an always-on torrent client. Usually I add a torrent
here, wait for it to download and then transfer it via SFTP to my laptop.

• Gitea: mostly to mirror third party repos I need or find useful.

• Wireguard: as a VPN server for all my devices and VPS, mostly so I don't
need to expose SSH to the internet. Was really easy to setup and it's been
painless so far.

------
0x0aff374668
Why are so many folks here running media servers? Are you really streaming
your own video/audio libraries, or is there something else they're useful for?
I'd be rather shocked to learn people still store media locally.

~~~
AYBABTME
I use it on my boat (off-grid) and for travel, so we're not stuck behind
regional content filters, like being midway through a series when we arrive in
a new country and it's not on Netflix there.

~~~
0x0aff374668
This is the coolest response. :)

(You didn't by any chance sail around Cape Horn in 2016? I met this really
cool older couple in Central America who had been living at sea for 17 years.)

Reading all of the replies I realize that sometime between 2007 and 2012 I
just gave up entirely on storing media locally. I don't watch movies (e.g. no
cable or netflix), but I've been using spotify for a decade maybe? One
response makes a good point: it is a waste of overall bandwidth to stream
content.

------
stiray
I am a bit old school but this fills all my needs.

\- httpd

\- nextcloud (mostly for Android syncing; for normal file operations I prefer
sftp). Nextcloud is great, but the whole JS/HTML/browser stack is clumsy.

\- roundcube (again, mostly IMAP, but just to have an alternative when my
phone isn't available - I haven't used it for ages)

\- postfix

\- dovecot

\- squid on a separate fib with a paid VPN (MITM-ing all the traffic, removing
all the internet "junk" from my connections; all my devices, including
Android, use it over an ssh tunnel)

\- transmission, donating my bandwidth to some OSS projects

\- gitolite, all my code goes there

I think this is it.

Everything is running on a mini-ITX board with 16GB of RAM, 3x 3TB Toshiba
HDDs in raidz and an additional 10TB Hitachi disk. FreeBSD. 33 watts.

------
jasonkester
S3stat, Twiddla, Unwaffle, the Expat site, and a dozen other old projects all
still run on a single box in a Colo.

It costs about $800/month for the half cage and all the hardware in it, when
you amortise it out. And there's plenty of performance overhead for when one
project gets a lot of attention or I want to add something new.

Pretty much the only thing I use cloud computing for is the nightly job for
S3stat, because it fits the workload pattern that EC2 was designed for.
Namely, it needs to run 70 odd hours of computing every day, and gets 3 hours
to do it in.

For SaaS sized web stuff, self hosting still makes the most sense.

------
kemenaran
I like self-hosting, but I want it to work without having to do sysadmin
work. Especially the upgrades: most hosting providers have one-click tools to
install self-hosted instances of something, but very few have working upgrade
scripts to keep up with new versions.

So I set up Yunohost [0] on a small box, and now I install self-hosted
services whenever I need them. Installing a new service is a breeze, but more
importantly, upgrading them is a breeze too.

For now I self host Mattermost, Nextcloud, Transmission.

[0] [https://yunohost.org](https://yunohost.org)

~~~
simplehuman
For a paid alternative look into cloudron or unraid

------
hendry
FreeNAS + voidlinux nuc running grafana + prometheus.

Tbh I run hot and cold about self-hosting, since after work I really, really
want to be able to relax at home.

Not wonder why the hell my NUC hasn't come up after a reboot. Or why it's so
hard to increase the disk space on my FreeNAS:
[https://www.ixsystems.com/community/threads/upgrading-storage.79357/#post-551026](https://www.ixsystems.com/community/threads/upgrading-storage.79357/#post-551026)

------
dcchambers
A household wiki. Contains all kinds of information about our house and lives.
We used to track stuff like this in a google doc but it was getting unwieldy.

I wasn't happy with any of the free wiki-hosting solutions available, so I
ended up self-hosting a MediaWiki site. It's been...challenging...to convince
my wife and family to adopt and use wiki markup.

I've been considering switching to something that uses standard markdown
instead since it's easier to write with.

~~~
slavox
I also had issues with the mediawiki/wiki editors and their clumsy nature.

For me I'm just after a simple pure text knowledge-base.

Currently I use vuepress
[https://vuepress.vuejs.org/](https://vuepress.vuejs.org/)

The positives with vuepress for me were:

* Plain Markdown (With a little bit of metadata)

* Auto generated search (Just titles by default)

* Auto Generated sidebar menus

The negatives:

* No automatic site-wide table of contents; I mostly use the search to move around docs

* Search is exact not fuzzy

* The menu settings are in a hidden folder

------
Jaruzel
Hyper-V host:

    
    
        Active Directory (x2)
        Exchange Server 2013
        MS SQL 
        Various Single Purpose VMs providing automation
        Debian for SpamAssassin
        Debian for my web domains
        Custom SMTP MTA that's in front of SpamAssassin and Exchange
    

Raspberry Pis:

    
    
        TVHeadEnd
        Remote Cameras
    

Plus a Windows Server hosting all my files/media.

I used to self-host a lot more, but have been paring back recently.

------
canada_dry
Calendar: ([https://radicale.org](https://radicale.org))

Home automation/security system + 'Alexa': completely home grown using python
+ android + arduino + rpi + esp32

------
dnate
I self host a flask app on my raspberry pi, soldered to the garage door
opener.

I have hosted media folders/streaming applications for friends and family, but
this has been by far my most used and most useful hack.

------
Macha
So far I have a home server with:

* Unbound for dns-over-tls and single point of config hostnames for my home network

* Syncthing for file sync

* offlineimap to backup my email accounts

* Samba for a home media library

* cron jobs to backup my shares

* Unifi controller

On my todo list:

* Scheduled offsite backup (borg + rsync.net being the top contender currently)

* Something a bit more dedicated to media streaming than smb. some clients like vlc handle it fine, others do not.

* Pull logs for my various websites locally

------
vermilingua
On a bit of a tangent, hopefully not an inappropriate question:

What do you all spend on this sort of thing? Whether hosting remotely or on
local hardware, what would you say is the rough monthly/annual cost to move
your Netflix/Spotify/etc equiv to a self-hosted setup (excluding own labor)?

~~~
Havoc
Home server - nothing recurring. Repurposed an old gaming laptop. Only cost
was some USB3 HDD bays. Plus probably extra electricity since I run BOINC on
it.

Websites - nothing. Using a GCP free server. About to move to Oracle's free
VMs, though, thanks to GCP's IPv4 shenanigans and Oracle's free offering being
better (higher IO, and you get two VMs).

------
chrissnell
At home, I run:

\- A weather station that lives on a pole in the yard. Powered by GopherWX
[https://github.com/chrissnell/gopherwx](https://github.com/chrissnell/gopherwx)

\- InfluxDB for weather station

\- Heatermeter Barbecue controller

\- oauth2_proxy, fronted by Okta, to securely access the BBQ controller while
I'm away. This proxy is something that everyone with applications hosted on
their home network should look into. Combined with Okta, it's much easier than
running VPN.

In the public cloud, I host nginx, which runs a gRPC proxy to the gopherwx at
home. I wrote an app to stream live weather from my home station to my
desktops and laptops and show it in a toolbar.

nginx in the cloud also hosts a public website displaying my live weather,
pulled as JSON over HTTPS from gopherwx at home.

------
ohiovr
I'm testing some self hosted apps including Nginx reverse proxy with
letsencrypt, nextcloud with either onlyoffice document server or collabora,
onlyoffice community server with mail, gitea, lychee, osclass, guacamole,
wireguard vpn, searx, and a few others.

------
dmclamb
Boring and predictable, but openvpn and pihole on a raspberry pi.

I have a second raspberry pi running a version of Kali Linux. I only hack my
own stuff for learning.

Once upon a time I ran a public facing website and quake server, and published
player stats. No time these days for much play.

------
zzo38computer
On my computer I host HTTP (with Apache), SMTP (with Exim), NNTP (with
sqlnetnews), QOTD (TCP only, no UDP), and Gopher. I might add others later,
too (e.g. IRC, Viewdata, Telnet, Finger, etc). And on the HTTP server I host
several Fossil repositories.

~~~
vageli
> On my computer I host HTTP (with Apache), SMTP (with Exim), NNTP (with
> sqlnetnews), QOTD (TCP only, no UDP), and Gopher. I might add others later,
> too (e.g. IRC, Viewdata, Telnet, Finger, etc). And on the HTTP server I host
> several Fossil repositories.

Man, at my last job in a large enterprise, I WISH they were running fingerd.
Would have made for some pretty cool, lightweight integrations.

------
geek_at
Open Trashmail so I can use throwaway emails with my own (sub)domains and keep
my data private

[https://github.com/HaschekSolutions/opentrashmail](https://github.com/HaschekSolutions/opentrashmail)

------
hanklazard
\- raspi running pi-hole

\- synology NAS for storage and backups; it runs an Ubuntu VM for a wireguard
vpn server

\- for music, Volumio on a raspi as a server with snapcast; 4 other amped
raspi’s with speakers in other parts of the house as clients, synced up via
snapcast (check out the hifiberry amp if you’re interested in this sort of
thing)

\- an older Mac mini now running an Ubuntu server with hassio virtualized.
Lights, hvac, music controls, etc, controlled through the hassio front end

\- print server on a pi zero

(I guess these may not really be “self-hosted” since I don’t make them
publicly accessible through ports ... just vpn in to my home network)

------
yogsototh
On a scaleway (about 20€/month):

\- my websites with nginx

\- IRC (ngircd)

\- ZNC

\- espial for bookmarks and notes

\- node-red to automate RSS -> twitter and espial -> pinboard

\- transmission

\- some reddit bots manager I’ve written in Haskell+Purescript.

\- some private file upload system mostly to share images in IRC in our team

\- goaccess to self host privacy respecting analytics

At home, Plex.

------
moutansos
Raspberry Pi 3: OpenVPN

Dell PowerEdge R720 running VMware ESXi with:

\- Ubuntu Docker Host

\- - Plex

\- - Blog Site

\- - TeamCity

\- - Minecraft Servers (Java and Bedrock)

\- - Gitlab

\- - ElasticSearch

\- - Kibana

\- - Resilio Sync

\- - PostgreSql

\- Manjaro Linux VM

\- Windows Server 2019 VM

\- 3 Node Kubernetes Cluster

\- - Couple of Side Projects Running on It

Basically all the stuff I don't want to pay a cloud provider to host.

Overall, the R720 with 48GB of RAM has been one of my best buys, hands down.
Down the road I plan on grabbing a second server and a proper NAS or unraid
setup.

------
nilsandrey
\- Syncthing (folders across devices)

\- docker (just dev env with a lot of images, almost everything I can is
tested in there, and maybe used there too. Just on VM if is a desktop gadget
or app)

    
    
      - generic web
    
      - some stacks: Rails, nodejs, php
    
      - ...
    

\- Calibre

\- Windows Media share feature for remote videos on devices and the TV (don't
really like it; it messes with subtitles, so I'll look for a Dockerized OSS
alternative)

Wish list:

\- wallabag

\- firefox-sync (still stuck on Chrome; haven't found an alternative for this)

\- email sync

It's not so great for now. Looking through this thread for contacts and
calendar options (currently used from the classic cloud providers).

------
ehnto
Edit: sorry, I misunderstood the question. The below is referring to software
development.

Everything. I keep infrastructure simple because I found that, as a developer,
infrastructure configuration, dependency issues, and updates took an
extraordinary amount of time while providing zero benefit for products of a
small to medium size. I do have a plan in place should I need to scale, but it
is not worth maintaining an entirely different stack full of dependencies on
the off chance I get a burst in traffic I can't handle.

------
notinventedhear
# 2GB linode instance ($10/month)

    
    
      nginx
      mailinabox (email, nextcloud)
      gogs
      6 static websites
      3 (dumb) little personal web-projects
      selfoss
      mumble
      openvpn
    

# rpi-3 at home

    
    
      osmc (kodi) + 8TB of raided HDDs
      nginx
      chorus-2 in kodi publicly available (behind htpasswd) updated w/ dynamic DNS
      a nightly cron job rsyncs from the linode instance
    

# another rpi-3 in garden shed

    
    
      8TB of raided HDDs
      nightly cron of the other rpi-3

~~~
mxuribe
Curious, why the other rpi-3 in the garden shed? Is that for "off-site"
backups?

~~~
anderspitman
Probably in case the house burns down.

------
k_sze
On a Linode instance (OS being Ubuntu Server 18.04):

\- mail server in Docker container

\- ZNC in Docker container

\- Shadowsocks server

\- Wekan as a Snap

\- My blog, statically generated using Pelican, served from nginx

At home, I only have a Synology NAS that is exposed to the internet.

------
munmaek
On my FreeNAS server: gitea, plex, openvpn (w/ ExpressVpn), Mayan EDMS

I am unhappy with the complexity of Mayan EDMS. I'm debating moving to
Paperless. All I want is a digital file system that 1) looks at directories
and automatically handles files 2) has user permissions/personal files so I
can let my family use it 3) has a web form for uploads.

I am planning to change gitea to sourcehut - the git service as well as builds.

Any ideas for things a raspberry pi 3 & 4 could be useful for?

~~~
cannonedhamster
Do you have a link for paperless? I've been looking for years for something
like this.

~~~
munmaek
[https://github.com/the-paperless-project/paperless](https://github.com/the-
paperless-project/paperless)

------
Fiahil
Like most folks here, I'm running the pihole/media/torrent suite. Hardware is
a Rock64 soon to be colocated with a few Raspberry pi 4. Everything is
dockerized and scheduled on k3s. Using kubernetes is a real life changer. I
can unplug one of the SBCs and things are automatically balanced and
rescheduled. It also makes the whole setup completely portable.

I use NFS on the NAS for the storage unit. It's the only thing I need to
backup.

------
bob1029
Nothing right now, but I am looking at spinning my own stack back up either
"on-prem" (aka at home), and/or in some bare-metal hosting provider.

Relying on streaming providers, cloud email services, etc., has left me in a
very foul mood lately and I feel like I need to take back control. My biggest
trigger was when I purchased an actual physical audio CD (this year; because
NONE of the popular streaming providers offer the album), ripped it to FLAC,
and then realized I had no reliable/convenient way to expose this to my
personal devices. I used to have a very elaborate setup with subsonic doing
music hosting duty, and all of my personal devices were looped in on it. This
was vastly superior to Spotify et al., but the time it took to maintain the
collection and services was perceived to be not worth it. From where I'm
sitting now, it's looking like it's worth it again.

How long until media we used to enjoy is squeezed completely out of existence
because a handful of incumbent providers feel it's no longer "appropriate" for
whatever money-grabbing reasons?

~~~
thegagne
It may rub you the wrong way but Google Music allows you to upload your own
music.

------
kixiQu
I selfhost stuff for fun! Which I'm counting my EC2 instance as.

* Pleroma/Mastodon - I had been using Pleroma, but I'm not happy about a few things, so I bit the bullet to upgrade to a t3.small and am now running Mastodon. I love all the concepts of the fediverse, though the social norms are still being ironed out.

* Write Freely ([https://writefreely.org/](https://writefreely.org/)) at [https://lesser.occult.institute](https://lesser.occult.institute) for my blog (right now mostly holds hidden drafts)

* Matrix (Synapse) and the Riot.im frontend for a group chat. I'm a little conflicted, because right now the experience around enabling E2EE is very alarming for low-tech users and a pain for anyone who signs in from many places, and if it isn't enabled I have better security just messaging my friends with LINE. That said, I really want to write some bots for it. Group chats are the future of social networking, they all say...

------
greenyouse
I'm just starting out with building a virtual workstation system for myself
with Eclipse Che. My home desktop has always been much more powerful than my
laptop so I've always thought it would be ideal to have mainframe style
development. I learned about Che 7 this week and figured that it was worth a
shot. Using containers for everything sounds like an interesting idea to try
out too!

Surprisingly (at least to me), there are some really big companies like
Microsoft, IBM/RedHat, and others pushing this workflow. The editor is
supposed to basically be VSCode in browser and compatible with most
extensions.

I'm using my RPi as a jump box and have some commands to turn on my home
desktop + mount the file system and that kind of stuff when connecting. I've
used it in the past and it's worked nicely.

I got k8s running but got blocked by some bugs when installing Che. Looks neat
though. It would be cool to have a 2007 macbook with the computing power of a
2990WX workstation :).

------
winrid
I wrote my own orchestrator that I deploy my personal projects (pixmap,
watch.ly, etc) with.

The orchestrator can now deploy itself! All declarative service configuration
with autoscaling etc. It manages the infra and service deployment for me.
Thinking about open sourcing.

Nginx/nchan, NodeJS, static sites (vanilla/angular/react deployments), nfs,
MongoDB, Redis

------
pjc50
I used to host email and a blog. I even had a server in a rack on which I let
people have shell accounts.

I still have the email domain, because it's easier to run it forever than to
migrate all the things you signed up for. But actually running my own email is
too much of an obligation, needing to keep up on all the anti-spam measures.

------
holri
Freedombox, Apache, Exim4, prosody (XMPP), rsync, rsnapshot, ssh all on 2
identical, redundant, interchangeable Olimex A20 Mini Server with ssd (1 at
home, 1 colo) and one more powerful x86 separate X2Go (Desktop usage) & File
(sftp) Server at home. Everything on pure, plain Debian stable and unattended-
upgrades.

------
bluedino
I bought a used Lenovo P50 for $450 and added another SSD; it has 48GB and an
i7, so it's overkill.

VMware ESXi, with VMs for Squid, DNS, MySQL, Nginx, Apache, a basic file
server, Gitlab, and one that's basically for IRSSI.

Strongly considering just moving everything to Debian with containers; easier
to manage than VMs.

------
minimaul
As much as I can, currently:

On colo’d hardware:

\- off-site backup server (Borg backup on top of zfs) - this is a dedicated
box

\- a mix of VMs and docker containers - mostly custom web apps

\- email (it’s easier than you think)

At home:

\- file server using zfs

\- Nextcloud

\- more custom web apps

\- tvheadend

\- VPN for remote access (IKEv2)

\- gitlab

\- gitlab ci

Also run an IPSec mesh between sites for secure remote access to servers etc

While my workplace uses AWS a massive amount, I still prefer to run my own
hardware and software. Cloud services are not for me.

------
fractalf
Gitea for an easy gui git access to repos with personal/sensitive data and
Resilio for backing up my phone

------
mmcnl
Besides some self-hosted applications, this is some stuff that is very useful
to me:

* Nextcloud - your own Dropbox! Amazing stuff.

* VPN - simple Docker service that is super reliable and easy to set up (docker-ipsec-vpn-server)

* Ghost - a very nice lean and mean blogging CMS

* MQTT broker for temperature sensors

* Samba server

* Deluge - Torrent client for local use

* Sabnzbd - NZB client

* Gitea - my own Git server

* Mail forwarder - very handy if you just want to be able to receive email on certain addresses without setting up a mailbox

* Pihole - DNS ad-blocking

* Jellyfin - self-hosted Netflix

It's become sort of my hobby to self-host these kinds of things. I use all of
these services almost daily, and it's very rewarding to be able to fully
self-host them. I also really love Docker; self-hosting truly entered a new
era thanks to readily available Docker images that make it very easy to
experiment and run things in production without having to worry about breaking
stuff.

------
harlanji
I built a setup called TinyDataCenter on a RasPi and run it hybrid with AWS
and S3FS for unlimited media storage. On it I built iSpooge Live to host and
syndicate my livestreams to YouTube and Twitch, and built some ffmpeg scripts
to turn videos into HLS with adaptive-rate playback via Video.js. Also on it
is my portfolio site, and in progress are imported copies of all my social
media archives, like Twitter and IG. Auth happens via JWTs from Auth0, but
I've an email magic-link system to bolt in soon. There's an xmpp server that
isn't integrated yet. Email is hosted 3rd party, but I may try Mail-in-a-Box.
The theme is decentralized with syndication. This has been going on and
livestreamed regularly for about 2 years. All my scripts are open source, same
username on GitHub.

------
conradfr
I actually self-host a silly Phoenix LiveView game at work on my MacBook Pro;
I'm not sure you can self-host more than that ;) As an anecdote, the devops
tried to DDoS it, but the app kept working as if it wasn't being flooded with
requests.

Of course you can't even tell macOS not to suspend wifi (or whatever) if you
close the lid while on battery, so now I'm trying to move it to a Raspberry Pi
4, but I've got an obscure SSL error with OTP 22 on it while querying an API,
so I'm trying to debug that instead ... oh the joy.

All my side projects and some clients are hosted old-school style on dedicated
servers. I do overpay because it's been the same price and machine since 2013,
and yet it's still way cheaper than any cloud offering, especially because of
hosted-database pricing.

------
CarelessExpert
At home:

TT-RSS + mercury-parser + rss-bridge + Wallabag to replace Feedly and Pocket.

Syncthing + restic + rclone and some home grown scripting for backups.

Motion + MotionEye for home security.

Deluge + flexget + OpenVPN + Transdroid.

Huginn + Gotify for automation and push notifications.

Apache for hosting content and reverse proxying.

Running on a NUC using a mix of qemu/kvm and docker containers.

~~~
shostack
What sort of things do you use Huginn and Gotify for?

~~~
CarelessExpert
I'm using Gotify for receiving push notifications on my phone for things
where, in the bad old days, I might've used email. So things like: when my
offsite backups complete, if my VPN goes down, on torrent add/complete events,
and when motion is detected on my security cameras.

Huginn came into being because I wanted a way to republish some of my emails
as an RSS feed that I could subscribe to with TT-RSS (e.g. Matt Levine's
newsletter), and for that purpose alone it's justified its existence.

I've also used it as the plumbing that connects my various services to Gotify
(Huginn makes a Webhook available and the event gets routed to Gotify). This
is, admittedly, entirely unnecessary; I could just hit Gotify directly. But
putting Huginn in the middle could give me some flexibility later... and it's
there, so, why not use it? :)
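
For anyone curious what the Gotify side of this looks like: pushing a
notification is a single authenticated POST to the server's /message endpoint.
A minimal sketch using only the standard library (the server URL and app token
below are placeholders; you'd create an app token in your own Gotify
instance's UI):

```python
import json
import urllib.request

GOTIFY_URL = "https://gotify.example.com"  # placeholder: your Gotify server
APP_TOKEN = "A1b2C3d4..."                  # placeholder: per-app token from the UI

def build_message(title, message, priority=5):
    # The POST /message body Gotify expects: title, message, integer priority.
    return {"title": title, "message": message, "priority": priority}

def notify(title, message, priority=5):
    # Authenticate with the X-Gotify-Key header and send the JSON payload.
    req = urllib.request.Request(
        f"{GOTIFY_URL}/message",
        data=json.dumps(build_message(title, message, priority)).encode(),
        headers={"Content-Type": "application/json", "X-Gotify-Key": APP_TOKEN},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

A backup script would then just call `notify("Backup", "offsite backup
complete")` on success, which is roughly what routing through Huginn's webhook
does with an extra hop in the middle.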

------
ekianjo
many things:

\- Nginx

\- Nextcloud (with Calendar/Contacts on it)

\- IRC client (thelounge)

\- IRC server

\- DLNA server

\- Ampache server

\- video and photo library thru NFS (locally only)

\- OpenVPN

\- Shiori for bookmarks

\- Gitea for private projects

\- Syncthing (to keep a folder synchronized across my devices)

\- Jenkins

~~~
kuzimoto
Glad to see another Ampache user!

~~~
ekianjo
Yeah after trying many other options it is the one that works the best so far!

~~~
kuzimoto
Agreed! There's not much it can't do!

------
Zash

      * Email (postfix + dovecot)
      * XMPP (prosody + biboumi for IRC gateway)
      * Static websites
      * Mercurial code hosting (mercurial-server + hgweb)
      * File storage (sftp, mostly accessed via sshfs)
    

Some on a HP microserver somewhere, some on a VPS.

------
Spivak
Plex, Bitwarden, Nextcloud, Unifi, Pihole, OpenVPN, IPSec VPN, Gitea,
OpenLDAP, Portainer, My Personal Site, Cloud Torrent, TTRSS, Grafana, Loki,
FreeRADIUS, Kanboard, Dokuwiki, SMTP, Gotify, php*Admin, Container Registry,
Python registry, Matomo, PXE Server.

~~~
h1d
Is your IPSec VPN an IKEv2 setup on either LibreSwan or StrongSwan, connected
to FreeRADIUS for authentication? What clients do you connect from? Is it
stable?

------
gargron
I don't host anything at home, but I think it still counts as self-hosting if
you run an independent service. In that sense, I self-host Mastodon:
[https://mastodon.social](https://mastodon.social)

------
platz
For my bookmarks, I self-host Espial, an open-source, web-based bookmarking
server.
[https://github.com/jonschoning/espial](https://github.com/jonschoning/espial)

------
theshrike79
Fastmail handles my mail, Newsblur for RSS, iCloud for calendar. My blog is
hosted on Netlify.

The only things I host are either just hobbies or non-essentials:

At home:

\- Node-red for home automation

\- PiHole for ad filtering on the local network

\- Plex on my NAS for videos

\- A Raspi for reading my Ruuvitags and pushing the info to MQTT

On Upcloud and DigitalOcean and a third place:

\- Unifi NVR (remote storage for security cameras)

\- Flexget + Deluge for torrents

\- InfluxDB + Grafana for visualizing all kinds of stuff I measure

\- Mosquitto for MQTT

------
iuguy
Online:

\- Nextcloud

\- Mailu.io

\- Huginn

\- Gotify

\- Airsonic

\- Gitea

All on a dedicated box. Planning to add password sync, wallabag, syncthing, a
VPN, and a few other features. Other boxes I have run various things, from DNS
to backup MXes and a WriteFreely instance on OpenBSD.

Internally I host a ton of stuff, mostly linked to a Plex instance.

------
algaeontoast
File server and plex, that’s about it. I have another server I’ll occasionally
run a Kubernetes cluster on, otherwise I don’t really bother with self hosting
- I hate dev ops shit for a reason...

~~~
shantly
> I hate dev ops shit for a reason...

I notice I was a lot more keen on hosting a bunch of crap myself before I knew
how to do it "right", and before devops, orchestration ("you mean running
scripts in remote shells?"), cloud, or containers or any of that were things.
And yet it all worked just fine back then—time spent fixing problems from my
naïve "apt-get install" or "emerge" set-up process wasn't _actually_ that bad,
compared with the up-front cost of doing it all "right" these days. A couple
lightly-customized "pet" servers were fine, in practice. Hm.

~~~
shostack
As a beginner programmer this is something I wonder about. Having worked with
many amazing engineers, I have some sense of the effort that goes into "doing
it right" and the fear of god put into me for the consequences of not doing it
right.

So then look at home projects and I wonder if I know enough to self host
things, or host them on GCP in a manner that won't just invite getting hacked,
running up a ridiculous bill, or leaking my private sensitive data out.

Any guidance to offer?

~~~
shantly
1) Just pay a flat fee for a VPS, unless you're _trying_ to learn how to use a
"true" cloud provider. Their web interfaces usually make recovery from the
worst failure modes ("I can't even ping the box...") trivial and they'll cut
you off if usage goes too high (which is what you want if you're trying to
avoid insane bills). They may also have DNS and such in one place, again in an
easy pointy-clicky interface, which is nice.

2) A lot of what people do is chasing nines that you don't need (and a lot of
the time they don't either, but "best practices" don't you know, and no-one
wants to have _not_ been following best practices, even if doing so was more
expense and complexity than it was worth for the company & project, right?) so
just forget about failover load balancers and rolling deploys and clustered
databases and crap like that. _All_ of that stuff can be ignored if you just
accept that you may have trouble achieving more than three nines.

3) If it's just for you, consider forgetting any active monitoring too. That
can _really_ kill your nines of reliability, but if it's mostly just you using
it, that may be fine, and you won't get alerts at 3:00AM because some router
somewhere got misconfigured and your site was unreachable for two minutes for
reasons beyond your control. Otherwise use the simplest thing that'll work.
You can get your servers to email you resource warnings pretty easily. A ping
test that messages you when it can't reach your service for the last X of Y
minutes (do not make it send immediately the first time it fails, the public
Internet is too unreliable for that to be a good idea) is probably the
fanciest thing you need. Maybe you can find some free tier of a monitoring
service to do that for you and forget about it, even.
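
The X-of-Y ping test described above amounts to a few lines of debouncing
logic. A sketch, assuming Linux's iputils `ping` (the `-W` timeout flag
differs on other platforms) and leaving the actual alert delivery as whatever
you prefer:

```python
import subprocess

def reachable(host: str) -> bool:
    # One ping attempt with a 2-second timeout; True when the host answers.
    return subprocess.run(
        ["ping", "-c", "1", "-W", "2", host],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    ).returncode == 0

def should_alert(history, x: int, y: int) -> bool:
    # Alert only when at least x of the last y checks failed -- a single
    # blip on the public internet shouldn't page anyone.
    recent = list(history)[-y:]
    return len(recent) == y and sum(1 for ok in recent if not ok) >= x
```

Run it from cron every minute, append each `reachable()` result to a small
state file, and only fire your notifier when `should_alert` flips to true.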

4) If you can mostly restrict yourself to official packages from a major
distro, and maybe a few static binaries, it's _really_ easy to just write a
bash script that builds your server from scratch with very high reliability.
Maybe use docker if you're already comfortable with it but otherwise, frankly,
avoid it if you can and just use official distro packages instead, as it'll
complicate things a lot (now you have a virtual network to route
to/from/among, probably need a reverse proxy, you may have a harder time
tracking down logs, and so on). Test it locally in Vagrant or just plain ol'
Virtual Box or whatever, then let it loose on a fresh VPS. If you change
anything on the VPS, put it in the script and make sure it still works. If
you're feeling _very_ fancy learn Ansible, but you'll probably be fine without
it.

5) For security, use an SSH key, not a password, and change your SSH port to
something non-default (put that in your setup script) just to cut down on
failed login noise, if you feel like it. You could add fail2ban but if you've
changed the port and are using a key it's probably overkill.
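
For reference, the relevant sshd_config lines are roughly these (the port number is an arbitrary example; keep your current session open while you restart sshd and test a second one):

```text
# /etc/ssh/sshd_config (illustrative fragment)
Port 2222
PasswordAuthentication no
PubkeyAuthentication yes
PermitRootLogin prohibit-password
```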

6) Forget centralized logging or any of that crap. If you have a single digit
count of VPSen then your logging's already centralized enough. If one becomes
unreachable _and_ can't be booted again _and_ you can't find any way at all to
read its disk, and that happens more than once, consider forwarding logs from
_just that one_ to another that's more reliable if you wanna troubleshoot it.
You can do this with basic logging packages available on any Linux distro
worth mentioning, no need to involve any SaaS crap.
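
If you ever do need that forward-just-one-host setup, rsyslog (already on most distros) does it with one config line; the hostname and port here are placeholders:

```text
# /etc/rsyslog.d/forward.conf on the unreliable box
# "@@" forwards over TCP; a single "@" would be UDP
*.* @@reliable-host.example.com:514
```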

7) Backups. The one ops-type thing you actually _have to_ do if your data's
not throwaway junk is backups. Backups and strictly-used build-the-server-
from-scratch + restore-from-backup scripts are kinda sorta all _most_ places
actually _need_ , despite all the k8s and docker chatter and such.

8) Cloudflare exists, if you have any public-facing web services.

[EDIT] mind none of this will help you get a job anymore since everyone wants
a k8s wizard AWS-certified ninja whether they need 'em or not, so don't bother
if your goal is to learn lucrative job-seeking skills, but it's entirely,
completely fine for personal hosting and... hate to burst anyone's bubble...
an awful lot of business hosting, too. Warning: if you learn how to run
servers like this you may need to invest in some sort of _eye clamp_ to
prevent unwanted eye-rolling in server-ops-related meetings at work, depending
on how silly the place you work is.

------
apple4ever
In DO:

4 Ubuntu 16.04 servers:

\- Nginx/PHP for Wordpress

\- MySQL

\- Redis

\- Mail

Planning to expand the Nginx/PHP servers to at least two, and add load
balancers. All certs are provided by an Ansible script using Lets Encrypt
(yuck).

At home:

Proxmox running on two homebuilt AMD FX 8320 servers with 32GB each, with
drives provided by FreeNAS on a homebuilt Supermicro server with about 10TB of
usable space (on both HDDs and SSDs)

Ubuntu 16.04 Servers:

\- 2x DNS

\- 2x DHCP

\- GitLab

\- Nagios

\- Grafana

\- InfluxDB

\- Redmine

\- Reposado

\- MySQL

Other:

\- Sipecs

All set up via Ansible.

Next I'll set up a Kubernetes cluster (probably as far as I'll get with
containers).

------
DrAwdeOccarim
I host everything internal where if I need the resource I VPN in from outside.
They all run on Raspberry Pis.

> Resilio Sync for iPhone pictures backups and "drop box" file access

> Transmission server

> SMB share of NAS to supply OSMC boxes on every TV

> Nighthawk N7000 running dd-wrt with a 500gb flash drive attached as storage
> for my Amcrest wifi cameras

> Edgerouter Lite running VPN server

> Hassbian for my zwave home automation stuff

> A pi with cheap speakers that I can log into and play a phone ringing sound
> so my wife will look at her phone!

------
HellfireHD
I wasn't going to post mine until I realized that I'm hosting some stuff that
I haven't seen mentioned yet.

    
    
        Appveyor
        Gitea
        Graylog + Elastic Search
        Minecraft/Pixelmon
        Nodered
        ruTorrent
        Taiga
        Tiny Tiny RSS
        Ubooquity* 
        WikiJS
        Zulip (chat/IM)
        

*I hate it, but haven't found something better

Also, kudos to those brave souls who are running Tor exit nodes!

Edit: Forgot a bunch

------
preid24
Intel NUC7i5BNK with coreos running the following in a single node docker
swarm:

    
    
      - Traefik (reverse proxy)
      - Git Annex
      - Gitea
      - Drone (CI)
      - Docker Registry
      - Clair (security scanning for docker images)
      - Selfoss (RSS reader)
      - Grafana / Prometheus / Alertmanager (overkill really)
      - A few custom applications...
    

Turris Omnia running transmission under lxc

------
lostmsu
I tried to host OwnCloud, but could not figure out how to make fully automatic
updates work (including the host OS, e.g. Ubuntu).

Now I only host my own project:
[http://billion.dev.losttech.software:2095/](http://billion.dev.losttech.software:2095/)

Also regular Windows file sharing which I use for media server and backups.

Though I'd like to expand that. Maybe a hosted GitLab.

~~~
fractalf
Try Gitea instead of GitLab if all you need is a simple online GUI for git.
It's like a lightweight version of GitHub, super sweet.

------
javitury
I run a small server with Node-RED. Right now I use it to scrape university
websites looking for paid PhD scholarships.

Also, I use it to find flats when I need to.

~~~
mosselman
That flats thing sounds cool. Do you have anything written up or available
somehow where I can read up on it? Or would you care to elaborate here?

------
absc
I have one OpenBSD VM running on vultr with:

\- Mail server (OpenSMTPD)

\- IMAP (Dovecot)

\- CVS server for my projects.

\- httpd(8) for my website.

I still need to add rspamd for spam checking. But so far, I've received just
one spam e-mail.

~~~
mdaniel
> \- CVS server for my projects.

Out of curiosity, do you genuinely prefer CVS or just haven't migrated from a
historical repo?

------
dvko
Email, using mailinabox.email. Highly recommend it.

Also NextCloud (files, contacts and calendar), few WordPress websites and
Fathom for website analytics.

------
jorijn
Synology NAS:

    
    
      Unifi controller
      Miniflux
      CouchPotato
      DSMR Reader (software that logs smart electricity meter data)
      Gitea
      Deluge
      MySQL
      PostgreSQL
      Cloud Storage mirror (for Google Drive backup)
    

Intel NUC:

    
    
      Full Bitcoin node
      Bitcoin lightning node
    

Remote (Digital Ocean):

    
    
      Trading Software
      Various PHP websites

------
frgotmylogin
Currently nothing but Hass.io on a Raspberry Pi with an assortment of Z-Wave
and Zigbee sensors and a few wifi-enabled light bulbs.

------
pnutjam
OpenSuse Leap, which acts as a NAS for my other computers.

At home:

\- Borg backups

\- Jellyfin

\- x2go

Cloud (time4vps 1TB storage node):

\- Borg

\- Calibre

\- AdGuard

The home server data drive rsyncs to an internal data drive (XFS to btrfs);
the btrfs drive takes a snapshot and unmounts when not in use, then important
stuff is rsynced to my VPS. Home drives are backed up with Borg for
encryption.

------
rukuu001
Syncthing as a Dropbox alternative

I keep looking at hosting my own mail server, but get scared off by tales of
config/maintenance dramas.

~~~
smartbit
For gmail to accept mail and not end up in spam folders, add domain
verification TXT records
[https://support.google.com/a/answer/2716802](https://support.google.com/a/answer/2716802)
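
Beyond that verification record, inbox placement mostly comes down to SPF, DKIM, and DMARC TXT records, which look roughly like the following (the domain, selector, and policy values are all placeholders):

```text
; illustrative zone-file TXT records for outbound deliverability
example.com.                      TXT "v=spf1 mx -all"
_dmarc.example.com.               TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"
selector._domainkey.example.com.  TXT "v=DKIM1; k=rsa; p=<your-DKIM-public-key>"
```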

------
Artemix
I self-host the following services:

    
    
        syncthing
        nfs server
        UPnP server, connected to my media NAS
        gitea server, for my personal projects
        droneci, linked to my gitea server, for building websites and releases I publish
        A few locally hosted services, such as DevDocs, draw.io or Asciiflow, for convenience.

------
psic4t
On cheap cloud instances at Hetzner:

    
    
      - postfix/dovecot for mailing
      - searx instance
      - synapse for matrix
      - unbound for DoT
      - nginx for my blog
      - gophernicus for old times sake
    

At home:

    
    
      - nextcloud
      - monero full node
      - unbound backup instance
      - fhem for home automation
      - restic for backup

------
p0d
I run test environments for my SaaS products, GitLab and Nextcloud on a
dual-core box, an HP dc7900, in my roofspace. The OS runs off an SSD and there
are two old spinning disks in software RAID.

All my business backups go to the same box. I have a Pi and an encrypted USB
drive copying my backups from my house to my shed.

------
sahoo
Plex on a Raspberry Pi 2 with a Deluge remote client/server. I don't think it
can handle more than that.

~~~
sahoo
I do plan to upgrade the Pi or get another one for a Pi-hole.

------
zelon88
Everything.

PiHole, HRCloud2, HRScan2, HRConvert2, my WordPress blog, a KB, and a few
other knick-knacks. Currently working on a NoSQL share tool (for auth-less
large file sharing) and then maybe this idea that's been floating around my
head for a Linux update server. Like WSUS for Linux.

------
wildduck
nodejs, nginx, apache2, postgresql, mysql, nextcloud, jvm/rhino/ringojs,
mattermost, wekan, wikimedia, nextERP, a nodejs WebRTC signaling server, a
nodejs push notification server, a STUN server, mumble, Asterisk, git, Haraka,
etherpad

------
nikisweeting
Zulip, archivebox, codimd, mailu, plex, radarr, sonarr, jackett, transmission,
matomo, kiwix, minecraft, nextcloud, unifi controller, unifi CRM, pihole,
wireguard, zfs, glusterfs, freenas, autossh, swarmpit, netdata, syncthing,
duplicati, elk stack, nomad, a bunch of static sites, a bunch of wordpress
sites, a bunch of assorted django apps (including a large consumer-facing
one), custom dyndns and tls renewal cron jobs, and many many more that have
come and gone over the years.

All on a few Vultr + Digitalocean droplets, 2 raspis + 1 atomic pi, a couple
HP i5 mini desktop machines, and a Dell r610 rack server with 24 cores and
48GB of ram (with about 36TB of assorted shucked and unshucked USB hard drives
attached in a few GlusterFS / ZFS pools). I have a home-built UPS with about
1.5 kWh worth of lead-acid batteries powering everything, and it's on cheap
Montreal power anyway so I only pay $0.06/kWh + $80/mo for Gigabit fiber.
It's a mix of stuff for work and personal because I'm CTO at our ~9 person
startup and I enjoy tinkering with devops setups to learn what works.

All organized neatly in this type of structure:
[https://docs.sweeting.me/s/an-intro-to-the-opt-directory](https://docs.sweeting.me/s/an-intro-to-the-opt-directory)

Some examples:
[https://github.com/Monadical-SAS/zervice.elk](https://github.com/Monadical-SAS/zervice.elk)
[https://github.com/Monadical-SAS/zervice.minecraft](https://github.com/Monadical-SAS/zervice.minecraft)
[https://github.com/Monadical-SAS/ubuntu.autossh](https://github.com/Monadical-SAS/ubuntu.autossh)

Ingress is all via CloudFlare Argo tunnels or nginx + wireguard via bastion
host, and it's all managed via SSH, bash, docker-compose, and supervisord
right now.

It's all built on a few well-designed "LEGO block" components that I've grown
to trust deeply over time: ZFS for local storage, GlusterFS for distributed
storage, WireGuard for networking, Nginx & CloudFlare for ingress, Supervisord
for process management, and Docker-Compose for container orchestration. It's
allowed me to be able to quickly set up, test, reconfigure, backup, and
teardown complex services in hours instead of days, and has allowed me to try
out hundreds of different pieces of self-hosted software over the last ~8
years. It's not perfect, and who knows, maybe I'll throw it all away in favor
of Kubernetes some day, but for now it works really well for me and has been
surprisingly reliable given how much I poke around with stuff.

TODOs: find a good solution for centralized config/secrets management that's
less excruciatingly painful than running Vault+Consul or using Kubernetes
secrets.

------
pasxizeis
What do people use to provision their Raspberries? Ansible or something?

------
IceWreck
On my VPS

    
    
      * My Website
      * Seafile
      * FreshRSS
      * RSSBridge for making rss feed for websites that don't have one
      * Dokuwiki
      * A Proxy
      * Multiple Telegram and Reddit bots

------
asdkhadsj
On this note, I've got a few services I'd like to setup locally. I'm curious
if I could set them up in a Docker-like fashion, where it's super easy to
manage the individual container image - and then run it on some type of home
"cloud". I debated reaching for Docker Swarm, but I'm curious:

What might be the easiest way to achieve this? Running a Kube cluster is
insane for my needs; I imagine I'd be perfectly happy with a few Pis running
various Docker containers. However, I'm unsure what the easiest way to manage
this semi-cloud environment is.

 _edit_ : Oh yea, forgot Docker Compose existed. That may be the easiest way
to manage this, though I've never used it.
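
A single docker-compose.yml per Pi is probably the least-moving-parts answer; a minimal sketch, where the images and ports are just examples:

```yaml
version: "3.8"
services:
  pihole:
    image: pihole/pihole:latest
    ports:
      - "53:53/udp"
      - "8080:80"        # web UI
    restart: unless-stopped
  syncthing:
    image: syncthing/syncthing:latest
    ports:
      - "8384:8384"
    volumes:
      - ./syncthing:/var/syncthing
    restart: unless-stopped
```

`docker-compose up -d` on each Pi gets you restart-on-boot and easy upgrades without Swarm or k8s.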

~~~
dillonmckay
You can run a single node k8s setup.

~~~
detaro
Is k8s worth it on just a single node if you aren't prototyping or learning
for a larger setup?

------
jimmcslim
For folks that are reverse proxying I have a few questions...

1) Do you identify the reverse proxy by host or by path?

e.g. <service>.yourdomain.com or yourdomain.com/<service>

2) Do you still run everything over a VPN?

~~~
CarelessExpert
Subdomain as well.

External services I need are directly accessible via a local reverse proxy
that's publicly visible over IPv6.

For IPv4-only scenarios I proxy through a linode instance (that also hosts a
few things, including my blog) which sends the traffic in over v6.

Obviously this is all fronted by a traditional firewall.

And before you ask: it's surprising how often v6 connectivity is available
these days. Mobile phone providers have moved to v6 en masse, and even
terrestrial internet providers are starting to get religion.

It's still not available in my workplace (surprise surprise), but other than
that, much to my surprise, v6 is my primary mode of connectivity.
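
The subdomain approach also avoids path-rewriting headaches entirely; a typical nginx server block looks something like this (names, ports, and cert paths are placeholders):

```nginx
server {
    listen 443 ssl;
    server_name gitea.yourdomain.com;

    ssl_certificate     /etc/letsencrypt/live/yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/yourdomain.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:3000;   # the service's local port
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```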

------
gorkemcetin
Self-hosting Balsa Knowledgebase
([https://getbalsa.com](https://getbalsa.com)), an alternative to Notion and
Evernote.

------
carc1n0gen
I host my blog on a Raspberry Pi under my desk. At some point I'll get around
to moving my Gitea instance there too, which is currently on DigitalOcean.

------
CaptainJustin
Running a few different containers in Docker at home.

\- Hand-rolled Go reverse proxy with TLS from LE.

\- Several Pg DBs for development.

\- VPN server.

\- Chisel for hosting things "from home" while running on my laptop remotely.

\- Etcd

\- Jenkins

\- Gitea

\- Pi-hole

\- A few different development projects

------
danielparks
Postfix, Dovecot, Amavis/Spamassassin, Bind, NGINX.

So, mail, DNS, and a few web sites. I’ve been running something like this for
more than 15 years now.

------
Mave83
powerdns, wireguard, gitlab, nginx, pgsql, mariadb, zabbix, nextcloud,
Grafana, graphite, prometheus, haproxy, postfix, and a lot more

~~~
zamadatix
Curious what made you choose powerdns over bind, I've never tried it out
before.

------
BigBalli
Pretty much everything I develop (excluding most databases) is hosted on my
cloud server. Best $5/mo I ever spent!

------
awat
Tiny Tiny RSS - [https://tt-rss.org/](https://tt-rss.org/)

------
vbezhenar
I have a home server hosting a Samba share for my needs, and also video files
so I can watch them on my TV.

------
KajMagnus
I self-host Talkyard, a cross between StackOverflow, Slack, and HackerNews.
[https://github.com/debiki/talkyard](https://github.com/debiki/talkyard) (I'm
developing it)

And SyncThing, [https://syncthing.net/](https://syncthing.net/)

------
johnx123-up
Restyaboard (for trello alternative), GitLab (for GitHub alternative)

------
hanniabu
Ethereum archival node

------
jtthe13
Not much: Plex server, and Pi-Hole in a docker.

------
scorown
Bitwarden, Unifi, PiHole

It all started with hosting subsonic

------
danielovichdk
Windows 2000, IIS 5, FTP, SQL Server 2000

------
nirav72
Plex, Gitea, Deluge + VPN, NZBGet, Radarr, SickChill, Jackett, Grafana,
Pi-hole, OpenVPN server, Unifi controller

------
dbeley
\- Nextcloud

\- Ampache

\- Shaarli

\- Dokuwiki

\- Deluge

\- Hugo blog

Everything running on a cheap server from kimsufi.

------
gramakri
I self-host using Cloudron (obviously). My list is:

* Gogs

* WordPress

* Wallabag

* Ghost

* Minio

* Email (yes, this is my primary and only email)

* TinyTinyRSS

* NextCloud

* Meemo

* MediaWiki

------
bribri
Calibre web

------
sharma_pradeep
Blog

------
nonamestreet
bitcoin full node

