Gentle Guide to Self-Hosting (knhash.in)
325 points by kn81198 25 days ago | 136 comments



I’m an old-timer, and I’m surprised that paying for shared hosting is now “self-hosting.” Nothing wrong with that, but it would never have been called self-hosting ten years ago.

I guess it’s like how “cooking from scratch” evolved. A cookbook from the nineteenth century might have said “1 hog” as an ingredient and instructed you to slaughter it. Now of course you buy hog pieces on foam trays.


There are other issues with the terminology as well. The self hosting community (centered at /r/selfhosted) has a very technical vibe. These people enjoy tinkering with computers. They're like kit car builders.

But there's a whole market of people who could benefit from self hosting, but shouldn't be required to understand all the details.

For example, you can get many of these benefits by using a managed service with your own domain. Things like data ownership, open source software, provider competition, etc.

I think we need a broader term. I've been using "indie hosting" lately.


This is the kind of thing I've been watching unfold with some home "NAS" boxes over the past couple of years. It started much earlier, but it's lately become more of a differentiating factor in some product lines, because the NAS side of things is basically a solved problem for 99% of people. So the manufacturers (Synology, QNAP, TerraMaster, Ugreen, etc.) have been adding support for what looks a lot like turn-key installation of things like Nextcloud, Plex, and a bunch of other services that the self-hosting community has been talking about for years.

I think one of the big drivers has been the serious increase in performance and capability of low-power embedded processors from Intel and AMD (and, in the last year or so, some ARM-based ones): supporting more than 2GB of RAM and having multiple cores that can do meaningful work even within a 15W TDP.


I am of the impression that Synology is pivoting away.

>Starting from this version, the processing of media files using HEVC (H.265), AVC (H.264), and VC-1 codecs will be transitioned from the server to end devices to reduce unnecessary resource usage on the system and enhance system efficiency.

https://www.synology.com/en-us/releaseNote/DSM

They say it's to "reduce unnecessary resource usage" and "enhance efficiency"; I say it's the start of a race to the bottom now that the market is saturated and BOM costs start weighing heavier.


If my device supports the native format of the content, I definitely want it decoded there rather than transcoding on the server. Assuming said format isn't significantly more power hungry than the transcoded codec.


Sure. But they’re not giving the user a choice in the matter. Also, it’s transcoded on the device as it’s backing up, which is the last thing I want to spend battery power on. My NAS is on and plugged into the wall 24/7 for a reason.


"Digital sovereignty" maybe?


A term I like, but it doesn't exactly roll off the tongue or fingers.


We call it home-running. My company Fractal Networks is starting with "self-hosted" game servers (on Windows) to get the ball rolling. Check us out https://fractalhome.run.


I like that. Home hosting is a term I've seen and really like, but I think it's too restrictive. There may be far more people who want to keep things in the cloud than who want to run in their house.


I call it self-hosting when it's on your server, and hosting at home, if we want to be specific that the server is at home.


What would you call it if it's hosted on someone else's server, but using open source software under your domain, and you have a complete backup of all the data so you could move it home or to another provider whenever you want?


For the last 30 years, that's been called "web hosting" [1]:

  Shared custody
  No confidentiality
  Portable domain identity
[1] https://www.webhostingtalk.com


If someone thinks to themselves: "I really don't like the ways twitter is changing. I'm leaving, but is there anything I can do to avoid the same thing happening with some other app/company?"

If they search around for an answer to that question, pretty soon someone is going to tell them to "self-host a Mastodon instance" or in the near future "self-host an ATProto instance".

My point is that the term "self-hosting" is unlikely to get them what they want, unless they happen to be interested in learning about DNS, IP addresses, ports, port forwarding, routers, firewalls, NAT, CGNAT, TLS, TCP, HTTP, web servers, Linux, updates, backups, etc, etc.

I don't think "web hosting" is going to help them much either.

What most people want is something like a Mastodon instance from masto.host[0] that integrates with a service like TakingNames[1] (which I own) to delegate DNS with OAuth2. I think we need a new term for this sort of setup. I think the term should also include self-hosting solutions, as long as those solutions focus on the outcomes (having a car to drive), not the implementation (building a kit car).

[0]: https://masto.host/

[1]: https://takingnames.io/blog/introducing-takingnames-io


I see both sides. While "self hosting" has always meant hosting yourself, and hosting on other people's systems isn't hosting something yourself, I can see how people can get confused and can call running their self-configured software on a rented VM "self hosting".

It's not as unambiguously incorrect as other silly things people say and do that are technically incorrect, but it is annoying when people don't provide enough context where it matters.

Honestly, the distinction only really matters when discussing privacy. Hosting your own stuff in a rented VM is still self hosting, but if you're talking about how you self host because you care about the security of your data, you're now definitely not talking about rented VMs.

Generally, I think we need to get used to the idea that "self hosting" now also refers to hosting software you configure on rented systems / VMs.


> don't like the ways twitter is changing. I'm leaving

Has there been work to quantify the relative network effects of Twitter vs. Mastodon, either generally or in specific communities? E.g. if person A was following N people on Twitter (e.g. in a list), what subset or superset of N could be followed on Mastodon?

If a user requested all their data from Twitter, including people being followed, is there tooling to map user identity/handles from Twitter to member names on decentralized alternatives?

> someone is going to tell them to "self-host a Mastodon instance.. from masto.host

Wouldn't that be masto-hosted rather than self-hosted?

In that scenario, Masto.host would be a trusted custodian of a social media identity, somewhat like a bank.


There are definitely plenty of people who would say that using a hosting provider doesn't count, even if you're deploying the software yourself.

The one generally accepted exception to this is network protection. You don't want to expose your home ip address to the outside world if you can help it, so a lot of people use tailscale, cloud flare tunnels, or a vps as a proxy.


A VPS that proxies traffic over Tailscale is another neat option. I use this approach to serve self-hosted services that I want to be accessible over the internet.


Why use Tailscale if you can just set up a WireGuard tunnel?


Tailscale is far, far less work to set up and maintain. Not to use a cliche, but it reminds me of Dropbox vs. rsync.

If you know Wireguard well enough to set up your own and you're willing, you'll have a lot more control and less dependency, which is a win IMHO. But if you are limited by time and/or knowledge, Tailscale is great


Aren't we talking about self-hosting, tinkering with your software for fun and hobby instead of going the SaaS way? Arguing about WG instead of TS in this context is perfectly fine.


Indeed, if you got the impression from my comment that I didn't think a debate on WG vs. TS was fine, then I apologize. I think it's a great (and important) thing to debate. My opinion is as stated. I think it's a different cost-benefit analysis for each person depending on time and/or knowledge.


Don't worry!

Staying on topic, I wonder how easy or complicated it is to self-host Headscale, the open-source implementation of the TS server.


Some people want the control without it becoming a full time hobby.

I wanted a NAS. I could do it with Linux and ZFS, rolling my own with full control. However, I didn’t want to sink that much time into it, and figured when something needed to be done, I would have forgotten so much I’d need to relearn over and over again.

Instead I went with a Synology. I get my NAS, I’m in control of my data, I can run some stuff with Docker on it… but I don’t really have to spend any time playing sys admin on my weekends.


I self-host TS (Headscale), so maybe not mutually exclusive.


Wireguard is very easy to set up imo.

Tailscale adds a lot of conveniences on top of Wireguard, though. I don't think most of their value comes from just eliminating the key management stuff from Wireguard setup.


Because they have good PR. Mesh networks are a dime a dozen, some of them have existed for decades and do not even rely on a central server (see tinc for an example).

There are more lightweight projects that rely on native kernel mode wireguard (thus giving fantastic performance) and only simplify key setup, without the need for persistent daemons that have had their own high severity CVEs. If you're asking this question, you might be better served by something like innernet (again, there are tons of alternatives).

There are more alternatives that are fully open and self hostable (including all server components), have support for the native kernel module, while having the same feature set as Tailscale (like netbird, but it's not the only one).

But TS is an HN darling because their devs have a presence here, some of them very well known and highly visible, and the company places lots of advertisements in podcasts and such.


I've worked in IT for 30 years, wrote a tiny bit of the Linux kernel, self-host plenty of things, yada yada yada.

When I discovered Tailscale it was a godsend: all the annoying, boring, moving parts are gone. This is a fantastic product that just works.

I have a backup WG link to my main servers just in case, but that's all it is: a backup.


Just ease of use mostly, Tailscale works even behind CGNAT and automatically manages things for you.


I think you're unlikely to have a very good experience with Tailscale behind CGNAT if you're doing anything high bandwidth like video streaming from a Plex/Jellyfin server.

AFAIK Tailscale only supports 2 modes of connection: direct connect or relayed over WebSockets with their DERP protocol. CGNAT is going to limit you to DERP, which is not designed for transmitting a lot of data. For one thing, that could get rather expensive for Tailscale.


Oh yeah it's not going to be very fast, but for general usage that doesn't involve large transfers it's fine.


I have a VPS configured for BGP peering, using my own ASN, tunneling an IPv4 block and a couple of IPv6 blocks back to my home network over a wireguard tunnel. These wind up on their own VLANs, exposing a few VMs directly to the Internet.

It took a bit of time to set this up (and I fortunately had the V4 block already registered from back in the 90's.) I also had experience with BGP from previous jobs at early ISPs, which helped. Proxying is easier.


In my case I am just interested in the software I'm running behind the proxy. I use CF tunnels to expose my internal services, and spend my tinkering time on the actual services, rather than (to me) wasting the time to bother with worrying about updating IPs or setting up custom auth schemes (I keep a lot of my services locked down entirely behind github SSO, so you can't even reach my e.g. Jellyfin login page without first being auth'd to github as me, which basically prevents all brute-force attempts on my services).
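
For anyone curious what that looks like in practice, a minimal cloudflared ingress config along those lines might be the following sketch. The tunnel ID, hostnames, and ports are placeholders, and the SSO policy itself lives in Cloudflare Access, not in this file:

```yaml
# ~/.cloudflared/config.yml -- illustrative only
tunnel: <TUNNEL-UUID>
credentials-file: /home/user/.cloudflared/<TUNNEL-UUID>.json

ingress:
  - hostname: jellyfin.example.com
    service: http://localhost:8096   # exposed directly
  - hostname: nextcloud.example.com
    service: http://localhost:8080   # gated behind an Access/GitHub SSO policy
  - service: http_status:404         # catch-all for unmatched hostnames
```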


Cloudflare?


I must be becoming an old-timer too; I only really consider it self-hosting if it's on my own hardware.

In case that doesn't make me an old timer, I also actually have pork and home cured bacon in the freezer from hogs we raised and processed. "An old soul living in a new world" feels pretty fitting here.


Hosting from home was always subject to home ISP ToS limits on doing that very thing. When I self-hosted in the early days, it was still paying someone to mount my system in their rack and use their network. So whether that was hardware that I rented from them, built the box myself, or using a VM they provide, it's still the same amount of work to maintain it. That's still different from using Wix/Squarespace, geocities, or using a social media platform.


> was always subject to home ISP ToS limits on doing that very thing

Every ISP prohibition on self-hosting that I have seen specifies commercial use, not just hosting services (since obviously that could technically prohibit tons of normal and authorized uses like co-op games).


I agree with you.

and so does the author, kind of...

"And so, here is a gentle introduction to self-hosting that is not "true self-hosting", but whatever. Sue me."

:)


> Sue me

Or read an HN thread on "true self-hosting", https://news.ycombinator.com/item?id=41440855#41460999


They did not build their CPU from scratch, so this is not self-hosting to me - they do not completely own the hardware.

Why argue about semantics?


The difference is more than semantics, but there will always be vendor innovation to blur boundaries.


No it is not. I self-host on my hardware and regularly consider moving some services to a VM hosted by someone else. It would still be self-hosting because I am in control of the service.

If you want a service that is truly "by you", it gets complicated: you would have to rewrite (or review) the applications and the OS, and build your own hardware from something arbitrarily defined as "scratch".

It sounds very much like the discussions of audiophiles about golden cables and what not - while others listen to the music for the pleasure of listening.


> VM hosted by someone else ... would still be self-hosting because I am in control of the service.

A hosting vendor will have detailed "Terms of Service" by jurisdiction, self-hosting will not.

> complicated to rewrite (or review) the applications ... build your own hardware from scratch

There are options between those two extreme scenarios, many discussed at length in HN self-hosting threads.


It's different degrees of the same thing I would have thought.

If you're running your own box, you still depend on network infrastructure and uplink of a service provider, whereas a cloud infrastructure provider may go the other way and negotiate direct connections themselves.

Plenty of valuable lessons await those who even just provision a virtual host inside AWS and configure the operating system and its software themselves. Other lessons await those who rack up their own servers and VNETs and install them at a data-centre provider instead of running them onsite.

There's only so much you can or should or want to do yourself, and its about finding the combination and degree that works for you and your goals.


Renting space in a colo and running EBGP on leased dark fiber to HE is real self hosting. VPSes while more convenient are definitely nothing like running metal.

For a lot of stuff that doesn't need constant public network connectivity, I choose to run a home lab.


> I’m an old-timer, I’m surprised that paying for shared hosting is now “self-hosting.” Nothing wrong with that, but that would never have been called self-hosting ten years ago.

Depends, maybe? Was the speaker talking about hardware or software 10 years ago?

Because, when I was given the 'self-hosting' option by some SaaS vendor, it meant that I could host it on whatever I want to independent of the vendor, whether that is a rack in my bedroom or a DO droplet.

When I was given the 'self-hosting' option by some computer vendor (Dell, HP, Sun, etc), it meant that I can put the unit into a rack in my bedroom.

Context was always key; in my mind, nothing has changed.


Seems to me the term "self-hosting" tends to auto-adjust its position based on the other end. So if "not self-hosting" is hosting on a shared VPS, then self-hosting is hosting on a computer at home. But "not self-hosting" has now become "hosted in cloud" so self-hosting moved to "shared VPS" instead, as the other end moved.

Kind of makes sense, but it also makes historical texts more difficult to understand. In the year 2124, who knows what "self-hosting" meant in 2054? I guess it's up to future software archeologists to figure out.


Yes, the goalposts move. When I started w/the internet 30+ years ago, self hosting meant your own 56K leased line.


If you are using a hosting provider, you are by definition not "self hosting", since you are in fact, not hosting (unless you happen to own the hosting provider company).

I actually self-host tools, and that involves having (in my case) a couple of rackmount servers in my spare bathroom, and an rPi5 with a 4x m.2 hat on my desk. Hell, even just running stuff on your own desktop/laptop is self-hosting.

But PaaS and SaaS are just as not-self-hosted as IaaS is. It's literally cloud hosting.


Yup - as they say, "the cloud" is just someone else's computer.

It's not so hard to genuinely self-host. You just need a reasonable ISP who is willing to open your connection, and to be sensible about securing your systems.


What if I rent physical server space at a colocation? Is that "self-hosted" by your definition? I'm curious where the imaginary line is drawn.


So in other words, the colo data center is hosting the server for you, rather than you hosting it yourself?

This isn't hard, but sure, just pretend words are all nebulous and ineffable and "imaginary".


Bah, humbug! Hosting at home on a server purchased from a vendor like Dell? That's not true self-hosting either. A true Scotsman self-hosts on hardware he soldered up himself. /s


I have always understood self-hosting to mean being in charge of your applications and data instead of delegating it to a company. An example might be, setting up Nextcloud instead of Dropbox. Or Taiga instead of Trello.

WHERE and HOW it is hosted, is less important to me. Because if you self-host your own tools, you can freely pick them up and move them to any hosting provider, a cloud provider, or a Raspberry Pi in your basement. Self-hosting FREES you from infra/vendor lock-in.


But isn’t using a 3rd-party web host giving up some of that control? Hopefully a reputable hosting company won’t shut down at a moment’s notice, but it could. Or if they go down, you’re stuck sitting there waiting for them to come back online with no access to your services.

Hosting from home has its own challenges, so I get why people would go to a hosting provider, but I do think some control is given up in the process.


It depends on what you want to control. As I stated, I want full control over my apps and data. I am more than happy to rent power, compute, storage and bandwidth from someone else. I ran the math and found that running my own server 24/7 at home would increase my electricity bill by more than what I currently pay for my VPSes.

I self-host my stuff on third-party VPSes and cloud providers. Partly because my residential internet is not suitable for self-hosting and partly because I trust the infra in a profit-motivated datacenter to have WAY more 9s of uptime than anything I could cobble together in my basement. This stuff helps run my life, it's not my hobby, nor something I want to spend more than the necessary amount of time managing.

If I wake up tomorrow and my providers have gone dark without any warning, I am back in action in just a few simple steps:

1. Purchase a new VPS or two

2. Run ansible playbooks

3. Restore data from backups
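
As a sketch, step 2 can be as small as a single playbook like this; the hosts, roles, and restic repo are hypothetical, not the commenter's actual setup:

```yaml
# recover.yml -- hypothetical recovery playbook
- hosts: new_vps
  become: true
  roles:
    - base_hardening   # users, SSH keys, firewall
    - docker           # container runtime + compose files
  tasks:
    - name: Restore application data from offsite backup
      command: restic -r b2:my-bucket:/srv-backups restore latest --target /srv
```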


You retain most of the control. You have actual laws protecting you from them snooping on your database. And if it goes down, then you have a backup, right? So redeploy the backup onto any other provider or at home.


Are those recent laws? It wasn’t a database, but many years ago I had a web host tell me to remove certain files from their servers or they would terminate my account. The stuff wasn’t publicly accessible, I just had it available for myself via FTP so I could get at them from a couple locations. So there was some snooping going on.


What about the event that your SSDs, HDDs, discs, home devices, etc. stop working? Fire up torrents or go back to Usenet? Just asking; but you still have online backups, and they can't check your database, right?


I'm not sure how your classification of "old-timer" compares to mine, but I would think of myself as an old-timer, as Gen X.

I feel like there's another term for what you're thinking of but I cannot come up with what it is.

Self-hosting definitely meant hosting locally on your own hardware back before hosting providers like Linode, DigitalOcean, AWS, etc. existed or were as customizable as they are now.

Even corporations "self-host" GitHub Enterprise or GitLab when they set it up on AWS. Self-host just means you're not reliant on the creator of the application to host it for you and manage the server.

There are certainly advantages and disadvantages to self-hosting on your own hardware, as there are to using a hosting provider.


You reckon you could even use Github to archive some small things? If Github or GitLab suffers, then some parts of the internet will also have problems, correct? Legitimately asking, is there any way for Github to go around searching for "no-code" content through countless private repositories?


I don't think GitHub could read your self-hosted GitHub instance unless there's some code in there that phones home or gives GitHub the ability to search code in your instance.

In the beginning, self-hosting was seen as completely local, partially because there were no good options for hosting on a server, so that's probably where it sort of became synonymous with hosting it in your home.


Indeed, nowadays killing an animal is hidden behind an industrial veil.

Unfortunately, it has also normalized and further desensitized us to the topic.

Though I'm quite happy to see that eating sentient beings is going out of fashion, at least in the developed world.


Every time this sentiment comes up I'm reminded that it's a spectrum.


On-cloud self-hosting is where you rent a VM or a full physical machine and then upload your stuff onto it. Easy peasy lemon squeezy.


Hosting software yourself is a step towards more.


Yeah, my list of requirements for self-hosting starts with:

1. battery backup

That said, I'm not zealous about it. "Perfect is the enemy of good" and I like ecosystem diversity in general. Better to have a few dozen shared hosting providers than 2 or 3 monopolies.


Love self-hosting and really got into it over the last couple of months. I run a bunch of services for my company now and also in my home lab. I use a Hetzner VPS and provision things either via ansible + docker compose files or via https://github.com/coollabsio/coolify/.

The awesome-selfhosted repository is also a great place to find projects to self-host but lacks some features for ease-of-use, which is why I've created a directory with some UX improvements on https://selfhostedworld.com. It has search, filters projects by stars, trending, date and also has a dark-mode.


Since you seem knowledgeable on this topic I'd like to ask: how risky is it to expose a computer on your network to the internet if you're somewhat tech-savvy but not very familiar with networking? Is it relatively "safe" with modern tools and VMs, or do you need to stay on top of it and (e.g.) always ensure you're updating software weekly?

I've thought of setting up and running a server for a long time and finally have a spare laptop so I'm thinking of actually running a NAS at least.


I've been doing it for about 13 years now with HTTP/s (80, 443), SSH (22), MOSH (lol idk), and IRC (6697) exposed to the internet. You don't need it, but something like fail2ban or crowdsec is a good idea. You will get spammed with attempts to break in using default passwords for commodity routers (Ubiquiti's `ubnt` is rather popular), but if you're up to date and take a few minor precautions it's not all that hard and/or dangerous. That being said, there are alternatives such as Tailscale that are strictly more secure but far less flexible. I've heard of people using Cloudflare tunnels as well, but I'd rather not rely on big players for stuff like that if I'm going through the effort to self host (and don't have any real risk of DDoS).

I would try to set up automatic updates for critical security patches, or update about weekly. I know people who self-host and do it monthly and they seem fine too. Most anything super scary vulnerability-wise is on the front page here for a while, so if you read regularly you'll probably see when a quick update is prudent. I personally use NixOS for all of my servers and have auto-updates configured to run daily.

An old laptop is exactly how I got started 13 years ago, they're great because they tend to be pretty power efficient and quiet too.
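
For reference, a minimal fail2ban jail for SSH along those lines might look like this; the thresholds are illustrative, not the commenter's actual values:

```ini
# /etc/fail2ban/jail.local -- ban an IP for an hour after 5 failed
# SSH logins within 10 minutes (time suffixes need fail2ban 0.11+)
[sshd]
enabled  = true
maxretry = 5
findtime = 10m
bantime  = 1h
```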


My stuff is always out of date and hasn't gotten hacked yet.

I don't see why you'd want to run ssh on port 22. I run it on a different port and never get login attempts. Yes, if someone targeted me specifically of course they'd find out, but I guess that hasn't happened yet.


> I don't see why you'd want to run ssh on port 22.

I run ssh on port 22 because I like wasting the time of those script kiddies. Also I like to brag about half a million "hacker attacks" on my server per month.



jwz does not like hacker news, maybe copy that link instead of clicking it...


Thanks, I had no idea I was flirting with posting abusive content!


> I've heard of people using Cloudflare tunnels as well...

As a Cloudflare Tunnels user who only recently discovered Tailscale - just go with Tailscale straight off the bat. It's magic, and smooth as butter.


Tailscale Funnel [0] is limited to TLS-based protocols (maybe even just HTTPS) which is a non-starter for many cases.

[0]: https://tailscale.com/kb/1223/funnel


Which cases? Tailscale has eliminated all the fears I had about self-hosting and I've been using it a ton. The only issue I've run into has been a single service (Withings) that uses a webhook to trigger updates for my sleep mat. Their server isn't on my tailnet, so I would need to expose at least one service to the wider internet.


I'm talking specifically about Tailscale Funnel which gives ingress access to services on the tailnet from outside (ie. on the general internet). Any case that doesn't use TLS for a transport won't work. SSH being a notable one, but I can think of several others.


Check out the selfhosted-gateway. You can do arbitrary tcp/udp port forwarding from a VPS: https://github.com/fractalnetworksco/selfhosted-gateway


I'd rather use https://tuns.sh, same idea.


How does Tailscale help with securely self-hosting from home? I have it set up to interface securely with my PCs across networks (like at my in-laws'), but I'm not sure how it helps if I were to expose something to the world.

Thanks!


On top of this, having IPv6 configured makes things harder to discover, but not impossible (as long as you don't use ${ipv6_subnet}::xxxx for your hosts). You can avoid NAT and just expose the nodes you need. Most ISPs assign a /56 or /64, which is a humongous number of IPs. It's nice if you are just using a flat virtual network in your home lab. The number of scanners I see for my subnet is nonexistent at the moment.


That's if your ISP supports IPv6. My current one does, but my last one did not.


The approach most people use is to tunnel into the server. You install a daemon on your computer which establishes a tunnel to log-into from outside your network. Cloudflare and Tailscale have solutions for this that are very popular among the self-hosted crowd.

https://developers.cloudflare.com/cloudflare-one/application...

https://tailscale.com/kb/1151/what-is-tailscale


A good option is to set up a WireGuard connection between your workstation and servers. All traffic has to go through WireGuard.

Because WireGuard is UDP and only responds to valid requests, there isn't any port visible as open from the outside. Not even SSH.
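
A minimal server-side config for that setup might look like the following sketch (keys, addresses, and the interface name are placeholders, not from the comment above):

```ini
# /etc/wireguard/wg0.conf on the server; bring up with `wg-quick up wg0`
[Interface]
Address    = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
# the workstation
PublicKey  = <workstation-public-key>
AllowedIPs = 10.8.0.2/32
```

The workstation's config mirrors this, with the server as the peer and an `Endpoint` pointing at the server's public IP and port.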


Additionally, you can use Tailscale for added convenience. Tailscale is a paid service, but for a simple home server you can get away with the free plan, and their mobile apps work rather well.

Not affiliated with Tailscale at all, just shouting them out because they make things very easy and I often recommend them to hobbyists.


I've been at it for over a decade. Home router has firewall exceptions for SSH (not port 22 though), TLS IRC, and 80/443, which are forwarded to my home server with fail2ban.

I run SSH (requires PKI outside local network), IRC, nextcloud, and ampache (though don't really use ampache anymore :( ).

Home server is encrypted RAID6 Arch Linux. If I had to do it again I'd forego rolling releases and use something more stable, like Debian.

Encrypted backups are done to backblaze once a month. I also have a backup drive that I plug in on occasion, encrypted of course.
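
A monthly encrypted-backup job of that shape can be a single crontab entry. The tool isn't named above, so restic here is an assumption, and the bucket name is made up:

```crontab
# crontab entry: 03:00 on the 1st of each month; restic encrypts
# client-side before uploading, and B2 credentials come from the environment
0 3 1 * * restic -r b2:my-backups:/server backup /srv/data && restic -r b2:my-backups:/server forget --keep-monthly 12 --prune
```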

Which reminds me my RAID6 drives are getting old now... I'm tempted to move to a VPS.


It is very service-dependent. If you are wanting to run a NAS for e.g. a media server, you may want to look into Cloudflare Tunnels or Tailscale.

I set up Jellyfin and Kavita, which are internet-exposed, but also Nextcloud, Portainer, and Calibre, which are behind GitHub SSO auth via Cloudflare. Basically, before you can hit the Nextcloud login page, you have to auth to GitHub (as me) with 2FA first, so no one can sit there and try to brute-force my Nextcloud login.


Keep things up to date and, ideally, have your public-facing servers in a DMZ/their own VLAN (a separate network from your private stuff).

Administrative things like SSH and RDP are best accessed with a VPN but you can configure SSH in particular to be key-based authentication only, which is very secure.


> Is it relatively "safe" with modern tools and VMs or do you need to stay on top and (for eg) always ensure you're updating software weekly?

First step to figure out if you actually need to be able to access it from the outside at all. If you just want a NAS, chances are you can put it on a separate VLAN/network that is only accessible within your LAN, so it wouldn't even be accessible from the outside.

If you really need it to be accessible from the outside, make sure you start with everything locked down/not accessible at all, then step by step open up exactly what you want, and nothing else. Make sure the accessible endpoints run software you keep up to date, at least weekly if not daily.


You'll want to make sure everything stays up to date in case someone finds a vulnerability in whatever software you're currently using. If you have to expose stuff to the outside world, only open the ports you need to. Only allow access to a specific user with a non-default username (or at the very least disable root ssh access), and use long passwords or ssh keys. I think that's generally the bare minimum, but there are online guides to harden your stuff further like using wireguard and fail2ban and stuff
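
The bare-minimum SSH hardening described there maps to a handful of sshd_config lines (the username is hypothetical):

```conf
# /etc/ssh/sshd_config -- key-only auth, no root logins
PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes
AllowUsers myadminuser    # non-default username; reload sshd after editing
```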


I sat on the fence for a long time wanting to do this, and finally pulled the trigger and picked up a Synology NAS last year. I've had a blast setting up a handful of handy little self-hosted services on the thing. Highly recommend giving it a go!

I haven't had any security issues yet (knock on wood). But it seems pretty low-risk if you follow basic best practices. The only thing I have exposed to the internet is a reverse proxy that proxies to a handful of docker containers.


Just add `apt-get update && apt-get -y upgrade` to root's crontab (cron already runs it as root, so `sudo` is redundant, and `-y` keeps the upgrade from waiting on a prompt no one will answer).


A better solution is probably: https://wiki.debian.org/UnattendedUpgrades
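For Debian/Ubuntu, turning it on amounts to installing the package and dropping in two apt settings (file path and option names as documented on that wiki page):

```
# /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```

`dpkg-reconfigure --priority=low unattended-upgrades` will generate the same file for you.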


Any chance to get my SaaS into "Heroku alternatives" section as well?

https://ptah.sh


Just added it!


That was quick. Means a lot for the tiny startup. Thank you. <3


This is pretty nice. I see sish and inlets. I have a lot more similar tools on my list here: https://github.com/anderspitman/awesome-tunneling

For auth, I also made a comparison of OIDC servers here: https://github.com/lastlogin-net/obligator#comparison-is-the...


Hm, is there a name for the type of software that Coolify is, where it presents a management plane for other servers, vs Dokku where it runs on the server?


Coolify and others mentioned on that website can run on the server itself as well.

Coolify happens to offer a paid option as a way to sponsor development, but it is not mandatory.


> Practically, it is foolishness, for what you save in money you lose in time and sanity.

Kubernetes gets a lot of side eye in the self-hosted community. But you could say that about all of self-hosting. So why not go all in?

I've got 3 dell r720XDs running nixos with k3s in multi master mode. It runs rook/ceph for storage, and I've got like 12 hard drives in various sizes. My favorite party trick is yoinking a random hard drive out of the cluster while streaming videos. Does not care. Plug it back in and it's like nothing happened. I've still got tons of room and I keep finding useful things to host on it.


Plenty of people use k8s or k3s for self hosting. But for most, the added complexity doesn't buy enough for the trade-off to be worth it. Keep in mind most people have a single node, so docker does everything they need.

Personally, even with a 4 node setup (of tiny desktops; the hardware you have would easily cost me $200/mo in power bills), I use docker swarm. Old and unloved, but does everything I need for multi node deployment and orchestration with only a sliver more complexity than vanilla docker.


Yeah, don't ask me about my power bill; it's definitely in the vanity realm. I have cheap power where I live, so it's not anywhere near $200. Still too high, though. One day I'll get solar to offset it.


I just use NixOS as a VM and run services as containers directly. Self-plug: I wrote a tool that makes it easy to run Docker Compose projects on NixOS [1].

This way, I get the advantages of NixOS config, while also being able to run arbitrary applications that might not be available on nixpkgs.

As far as storage goes, I just use ZFS on the hypervisor (Proxmox) and expose that over NFS locally.
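If anyone's curious, OpenZFS can handle the NFS export itself via the sharenfs property. A sketch with a hypothetical dataset name and LAN subnet:

```
# Share a dataset read-write, but only to the local subnet
zfs set sharenfs="rw=@192.168.1.0/24" tank/media
```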

[1] https://github.com/aksiksi/compose2nix


For a homelab k8s is way overkill. I do it because self-hosting is 1 part utility to 2 parts education.


It's become much easier in the last few years. Talos + Flux + Renovate can be built in a day [0], and it simplifies storage/backups/patching even for single-node clusters. There's also a great community, with services like kubesearch [1] providing templates for tons of apps.

[0] https://github.com/onedr0p/cluster-template

[1] https://kubesearch.dev


The author nails it here:

> It is 2024, and I say it is time we revisited some of the fundamental joys of setting up our own systems.

Self-hosting really is joyful. It's that combination of learning, challenge, and utility.

+1 to Actual Budget

+1 to Changedetection.io

-1 for not mentioning threat modeling / security. The author uses HTTPS but leaves their websites open to the public internet. First-timers should host LAN-only or lock their stuff way down. I guess that's tricky with shared hosting without some kind of IP restriction or tunneling, though. No idea if uberspace offers something like that.

For folks getting past the initial stages of self-hosting, I'd really recommend something like Docker to run more and more different apps side by side. Bundled dependencies FTW. Shameless plug for my book, which covers the Docker method: https://selfhostbook.com


I'm with the other old timers.

If it's not your hardware running in a space you own or rent, you're not self-hosting.

Currently I have a little Micro-ITX box. But once upon a time I had a proper server rack with 6 U worth of servers, UPS, networking, etc. (Before I was married...)


I'm a big fan of self hosting. I have learned a lot on a small hobby project.

For those who are curious about my setup: I bought a used Dell R630 on eBay for cheap. 1TB RAID 1 on SSDs, 32GB RAM, 32 cores, and I am enjoying running a few small hobby apps with docker, virsh, and minikube (yes, I learned all 3). I have a 1Gbps fiber connection. I use a 1-minute cronjob to detect if my IP changes, and I use the Linode API to change my DNS A records.
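A sketch of that cronjob idea as a small POSIX shell script. The endpoint is Linode's real v4 DNS record URL, but LINODE_TOKEN, DOMAIN_ID, and RECORD_ID are placeholders you'd fill in, and the "what is my IP" service is interchangeable:

```shell
#!/bin/sh
# Sketch of the parent's "cron + Linode API" dynamic DNS idea.
# LINODE_TOKEN, DOMAIN_ID, RECORD_ID are placeholders for your own values.

STATE="${STATE:-/var/tmp/last_ip}"   # cache of the last IP we pushed

current_ip() {
  # Any external "what is my IP" service works here
  curl -fsS https://api.ipify.org
}

ip_changed() {
  # True when $1 differs from the cached IP (or no cache exists yet)
  [ "$1" != "$(cat "$STATE" 2>/dev/null)" ]
}

update_dns() {
  # Rewrite the existing A record's target via the Linode v4 API
  curl -fsS -X PUT \
    -H "Authorization: Bearer ${LINODE_TOKEN}" \
    -H "Content-Type: application/json" \
    -d "{\"target\":\"$1\"}" \
    "https://api.linode.com/v4/domains/${DOMAIN_ID}/records/${RECORD_ID}"
}

main() {
  ip="$(current_ip)" || exit 1
  if ip_changed "$ip"; then
    update_dns "$ip" && printf '%s' "$ip" > "$STATE"
  fi
}

# crontab entry: * * * * * /usr/local/bin/update-dns.sh --run
if [ "${1:-}" = "--run" ]; then main; fi
```

The state file keeps it from hammering the API on every run when nothing has changed.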


1.5GB RAM/10GB disk? Hetzner's basic cloud VPS comes with 4GB RAM and 40GB disk for €4.51.


AFAIK nothing seems to beat Oracle Cloud: https://www.oracle.com/cloud/costestimator.html

For compute:

"Each tenancy gets the first 3,000 OCPU hours and 18,000 GB hours per month for free to create Ampere A1 Compute instances. This free-tier usage is shared across Bare Metal, Virtual Machine, and Container Instances."

For block storage:

"Block Volume service Free tier allowance is for entire tenancy, 200GB total/tenancy. If there are multiple quotes for the same tenancy, only total 200GB can be applied"

In other words: you have a 4-core ARM CPU + 24GB RAM + 200GB space for free.


Yep, I was running one of these for the longest time... until they blocked idle instances! Hah, that's the kind of usage free gets you: a lot of people hoarding it for... nothing. I mean, I could easily have thought of stuff to load the instance up slightly, but, eh.


You can also add an extra 50GB of space for around $5/month; that way you are paying, and it is still an insanely better deal than any of the other cloud providers.


The cost in this is your soul. I prefer not to make deals with the devil.


Do you have an eye on the (potential) price difference?


More the resource difference.


spoiler alert, the article isn't about self-hosting, it's about shared-hosting


But then someone whispers K8S into your ear…


That reminds me ... I have to go feed my homelab K3S cluster some updated CRDs to plan for the next upgrade window ...


k8s was born from devs' unconscious desire for another tamagotchi pet


Once you graduate from this guide, be sure to check /r/homelab and /r/homedatacenter ;)


And https://lemmy.world/c/selfhosted , and https://www.reddit.com/r/selfhosted/ . There are a few useful Matrix chat rooms related to self-hosting, too: #selfhosted:matrix.org , #self-hosted:jupiterbroadcasting.com , #steadfast:matrix.org


"Why I self host my servers and what I've recently learned", 130+ comments, https://news.ycombinator.com/item?id=41440855


I am part of a company that promotes self hosting and provides external routing for self hosting [1]

We made Cloud Seeder [2], an open source application that makes deploying and managing your self-hosted server a one-click affair!

Hope this comes in handy for someone! :-)

[1] https://ipv6.rs

[2] https://ipv6.rs/cloudseeder https://github.com/ipv6rslimited/cloudseeder


From the FAQ:

* Q: "What about IPv4?"

* A: "While IPv4 is still widely used, its necessity is diminishing as the world transitions to IPv6. (...)"

;)


I like the concept, but only 5 IPs? With IPv6 you should be offering at least a /64 per tunnel.


Great point!

We offer 5 because we're geared toward helping people host appliances as opposed to raw network setup! We also offer automatic RDNS with this as well as the Cloud Seeder appliance!

Thanks again for your comments and thoughts as well!


I loved the idea of PikaPods until I realised that even with 10 small (no, tiny) instances/services, used really, really rarely and just by me, I was looking at a cost of integral USD x 11 (or whatever the number is). Can't blame them, because it costs money to run things. But I would have preferred something that isn't that costly, or that doesn't go up in price with the number of services/apps used. I wish there was a cost-effective solution for this self/web/app hosting.


Miniflux is very good. It even has a Telegram integration which will send you a notification whenever a new article is published.


In general, it's worth noting that Telegram bots are easy (and free) to make, and messages can be sent with one cURL command. Very useful; you can even set it up to send after long terminal commands so you know to check back.
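For reference, the Bot API call is a single HTTPS endpoint. A small sketch where TOKEN (from @BotFather) and CHAT_ID are placeholders:

```shell
#!/bin/sh
# Sketch of a cURL-based Telegram notifier. TOKEN comes from @BotFather
# and CHAT_ID from your own chat; both are placeholders here.

tg_url() {
  # Build the sendMessage endpoint for a given bot token
  printf 'https://api.telegram.org/bot%s/sendMessage' "$1"
}

tg_send() {
  # Deliver one message; --data-urlencode handles spaces/punctuation
  curl -fsS "$(tg_url "$TOKEN")" \
    --data-urlencode "chat_id=${CHAT_ID}" \
    --data-urlencode "text=${1}"
}

# Usage after a long command, e.g.:
#   make -j8; tg_send "build finished with status $?"
```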


I am aware; I use it with my backup scripts. But I felt it was a cool integration.


It's a good writeup, but I do take exception to this:

"Seriously, else-hosting is the practical option, let someone else worry about the reliability, concurrency, redundancy and availability of your systems."

Spend some time trying to get through a maze of automated phone answering systems, then try to ascertain whether the human, when you finally get them, even understands the issue, then wonder how much of what they're telling you is just to get you off the phone, all the time wondering if calling even really does anything, and you'll wonder whether it's better to blindly trust a company that likely doesn't have tech people we're allowed to talk to, or to just do it ourselves.

At least when there's an issue with my things, I can address it. Although a bit of a tangent, I'd love to see a review of major hosting providers based on whether you can talk to a human, and whether said human knows anything at all about Internet stuff.


OK:

    This has not been a detailed step by step walkthrough 
    on how to do things, by design. You are meant to go and explore; 
    this is simply a way pointer to invigorate your curiosities
Sorry, but because I came looking for solutions, I found the invigoration aggravating at first, though it did end up helping focus my attention.

Scalable services and sites I can build, 10 different ways.

My enduring, blocking need is for dead-simple idiot-proof network management to safely poke a head out on public IP from home. And to make secure peer-to-peer connections. Somehow that process never converges on a solution in O(available) time.

</complaining>


> dead-simple idiot-proof

Recent thread: https://news.ycombinator.com/item?id=41440855#41460999

> network management to safely poke a head out on public IP from home

For remote access to private services, would Tailscale/Wireguard be an option? It can even use Apple TV as an exit node.

> secure peer-to-peer connections

Which protocols would you consider secure for P2P use, e.g. which solutions have you tried previously which failed to converge?


Tailscale is that.



