The author talked about this a few months ago on Tim Ferriss' podcast[0]. One of my favorite episodes.
I'm passionate[1] about the concept, but articles like this are a reminder that we need to make self hosting an order of magnitude simpler and more accessible. It shouldn't need to involve any CLI, DNS, TLS certs, port forwarding/NAT traversal, IP addresses, etc.
Self hosting shouldn't be any more difficult or less secure than installing an app on your phone. The flow should be 1) install the "self hosting app" on an old laptop or phone. 2) Go through a quick OAuth2 flow to connect your app to a tunnel that enables inbound traffic. 3) Use the self hosting app to install other apps like Jellyfin, Calendar, Nextcloud, etc. Everything should be sandboxed (containers work pretty well on Linux and Windows 10/11 via WSL2) and secure by default. Automatic backups (ideally an OAuth2 flow to your friends' self hosted installations) and auto app updates are table stakes.
There's no technical reason this can't all be done, but lots of technical challenges, and it's unclear whether anyone will pay for tunnels. I'm currently trying to figure out how to do reliable auto backups without filesystem snapshots.
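A minimal sketch of one hash-based direction for that backup problem (my own illustration, not the poster's method; `backup_changed` is a made-up helper, and it still can't guarantee consistency if a file changes mid-copy, which is exactly where snapshots would help):

```python
import hashlib
import shutil
from pathlib import Path

def backup_changed(src: Path, dst: Path, state: dict) -> list:
    """Copy files under src whose content hash changed since the last run.

    `state` maps relative paths to content hashes and should be
    persisted between runs (e.g. as JSON). Sketch only: it doesn't
    handle deletions, partial copies, or files modified mid-copy.
    """
    copied = []
    for f in sorted(src.rglob("*")):
        if not f.is_file():
            continue
        rel = str(f.relative_to(src))
        digest = hashlib.sha256(f.read_bytes()).hexdigest()
        if state.get(rel) != digest:
            target = dst / rel
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)  # copy data and mtime
            state[rel] = digest
            copied.append(rel)
    return copied
```

Running it twice in a row copies nothing the second time, which is the property that makes unattended daily runs cheap.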
Let's do this. There's literally no reason not to. It could even be a small standalone appliance that you plug in. It could be no bigger than a Mac charging brick, and could even function as one.
We have to divorce society from these abusive corporate cloud relationships. It made sense 20 years ago. It is actively poisonous today.
We can easily make a turnkey opt-in peer to peer cloud using today's consumer grade open hardware and software, much of it default off the shelf.
I think the problem is there's very little overlap between people who are interested in this stuff, and people who are interested in what typical consumers want.
People are really focused on privacy, and even opt-in integration with nonprivate hardware and services doesn't happen much.
Convenience features get completely ignored, and worst of all, a huge amount of P2P stuff has no mobile support.
Furthermore a lot of it involves a self hosted single point of failure. For me that's a complete deal breaker, it's not acceptable that a service could go down because something happened to a home server while I was away.
And then on top of that, most p2p projects for about 10 years were completely impractical blockchain things that either cost money or huge amounts of bandwidth and compute.
Self hosted, with decentralized identity not tied to a domain name, with automatic backup and redundancy to a selectable cloud provider via an open source protocol, with a very high quality mobile app, and smartwatch support, etc, would be amazing.
But there's not much interest, and it basically can't be done in a UNIXy way, since a lot of the value the clouds provide is in the tight integration of everything, with voice assistants and calendars and a million little things that are individually maybe not even worth setting up manually.
But it seems like the people building this stuff these days aren't interested in feature-rich, commercial-style, zero-maintenance apps, so I'll probably keep mostly ignoring the entire concept of self hosting until that changes.
I’ve been using Caprover (https://github.com/caprover/caprover - think stripped-down Heroku on any given Docker box) for a few years, and it’s hardly consumer-focused, but it has accomplished a good portion of what would ultimately be required for such a product. It’s always that last bit where the effort/risk/cost/[insert prohibitive factor here] becomes precipitously steeper. I think that's fairly natural, but it also has a lot to do with the fact that the last bit is not only more difficult; you're tackling it under the full weight of every technical decision you've made up to that point, which can severely limit the plausible approaches.
I’d be keen to work on a project to marry a PaaS like Caprover with networking using ZeroTier or Tailscale, packaged in such a way that it could be easily deployed onto most reasonably equipped platforms, or delivered as a service.
Don’t most home internet packages specifically forbid running home servers? Anecdotally they don’t seem to enforce it, but it's annoying that they could technically shut you down at any moment with no notice unless you buy a business plan. I’m in the US, and I’ve only had access to two different ISPs, and only ever one at a time, so no way to shop around.
One advantage of tunneling is your ISP doesn't know what the traffic is. Also I expect ISP competition to slowly improve. More people are getting access to fiber all the time.
Sandstorm is awesome but requires significant modifications to apps in order to work within the system. Also, it doesn't solve the complexity problem. You still need a programmer or sysadmin to set it up and manage security.
Sorry, it appears I introduced this bug here, when I didn't realize that the space after the <br> element was actually important in mobile screen widths.
I remember having a conversation with my daughter about this. We were discussing social media and photos on social media.
I told her that in the best case scenario, the future will be homes with a redundant hosting server where everything lives. Families will host their own email, calendar, photos for sharing. And cloud will only be used for backup in case of a disaster, or for migrating.
If you rent an apartment, it will someday come with a hosting service as part of the home address. And features will vary based on the kind of apartment or flat you're renting or buying.
> Self hosting shouldn’t be any more difficult or less secure than installing an app on your phone.
Mac OS X provided self-hosting of calendars, contacts, DNS, email, and websites with the Server app [2] starting in 2011 but it was discontinued in 2018.
I agree. I think people have just been used to the current state of affairs in managing servers. There’s no reason why they can’t be like appliances or mobile OSes.
The problem is that eventually this 'appliance' needs to connect to the public internet, and that's a problem for most residential connections because ISPs don't play nice with that sort of thing, at least in the U.S. Now you get into having to configure port forwarding, dynamic DNS, and so on.
Depending on your setup you can use dynamic DNS and save yourself the cost of the static IP. Either way it will always be cheaper per GB of storage to host at home than in 'the cloud.'
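A back-of-the-envelope version of that cost claim (all prices are illustrative assumptions, and this ignores power, bandwidth, and redundancy, which narrow the gap):

```python
# Illustrative prices only: a one-time drive purchase amortized over its
# service life, vs. a typical cloud object-storage list price.
drive_cost_usd = 100.0        # 4 TB desktop drive (assumed price)
drive_gb = 4000
service_life_months = 5 * 12  # assumed 5-year service life

home_per_gb_month = drive_cost_usd / drive_gb / service_life_months
cloud_per_gb_month = 0.02     # assumed $/GB-month

print(f"home:  ${home_per_gb_month:.5f} per GB-month")
print(f"cloud: ${cloud_per_gb_month:.5f} per GB-month")
```

Under these assumptions the raw storage is roughly 48x cheaper at home, which is why the comparison survives even after adding electricity and a second drive for redundancy.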
Yes, I believe that's correct. If any of the services that you are opening/exposing in this way contain vulnerabilities, those could be exploited to gain unauthorized access to the hosting machine. Attackers could then use the compromised machine as a staging area to launch attacks against other systems on your home network.
Putting the hosted machine in a separate VLAN (like a guest network) can mitigate that, but it means you have to do that configuration correctly.
(I'm not confident enough in my own abilities/knowledge of these vulnerabilities to try it, though it may turn out to be very straightforward. I hope to do something along those lines someday, but so far the risk has outweighed the reward for me.)
VLANs are not intended to be used like that. You want to rely on a trusted firewall you own, with separate interfaces and appropriate firewall rules. That is what provides isolation between networks.
Without that, a compromised server behind the firewall could send VLAN-tagged packets that make it through if the rules are bad, or read tagged packets arriving at it.
VLANs are useful if you want to "tag" packets with IDs going through specific interfaces, for segmentation purposes. The tag is applied from the interface's standpoint, so it gives virtual segmentation between ports of machines you always control, like between a port on your router and ports on a managed switch.
In that case the VLANs are configured on the router's interface and the switch's interfaces, but the exposed server is not aware of them and can't change them, so you can trust that the ID is right.
It's often believed that VLANs are required to isolate networks. That's wrong; you just need separate interfaces.
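As a sketch of the separate-interfaces approach, an nftables ruleset on the firewall might look like this (interface names `lan0`/`dmz0`/`wan0` are hypothetical; adapt to your setup):

```
# /etc/nftables.conf sketch -- interface names are placeholders
table inet filter {
  chain forward {
    type filter hook forward priority 0; policy drop;
    ct state established,related accept
    iifname "lan0" oifname "dmz0" accept   # LAN may reach the server
    iifname "dmz0" oifname "wan0" accept   # server may reach the internet
    iifname "lan0" oifname "wan0" accept   # LAN may reach the internet
    # dmz0 -> lan0 is dropped by the default policy
  }
}
```

The point is that isolation comes from the drop policy between physical interfaces, not from any tag on the packets.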
It depends on how technology/security savvy you are.
For instance, here is everything I do:
- Use an open source firewall+router (== OPNsense) and not commercial routers (such as Netgear, TP-Link etc.)
- Open up port 80 and 443 on the firewall.
- Both the ports go to a Traefik reverse proxy that is configured to always redirect port 80 to 443.
- Traefik then reverse-proxies requests to relevant Docker containers.
- Auto-update Traefik every day (through Watchtower).
- Use Authelia, with 2FA, where I can for the publicly available services.
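For a sense of scale, the reverse-proxy part of a setup like that can be sketched in a docker-compose file (this is not the poster's actual config; the domain, email, and `whoami` service are placeholders, and Authelia/Watchtower are omitted):

```
# docker-compose.yml sketch -- domain and email are placeholders
services:
  traefik:
    image: traefik:v2.10
    command:
      - --providers.docker=true
      - --entrypoints.web.address=:80
      - --entrypoints.websecure.address=:443
      # redirect everything on 80 to 443
      - --entrypoints.web.http.redirections.entrypoint.to=websecure
      - --entrypoints.web.http.redirections.entrypoint.scheme=https
      - --certificatesresolvers.le.acme.email=you@example.com
      - --certificatesresolvers.le.acme.storage=/letsencrypt/acme.json
      - --certificatesresolvers.le.acme.httpchallenge.entrypoint=web
    ports: ["80:80", "443:443"]
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./letsencrypt:/letsencrypt

  whoami:  # stand-in for a real self-hosted app
    image: traefik/whoami
    labels:
      - traefik.http.routers.whoami.rule=Host(`app.example.com`)
      - traefik.http.routers.whoami.entrypoints=websecure
      - traefik.http.routers.whoami.tls.certresolver=le
```

Each additional app is just another container with three labels, which is what makes the setup maintainable once the initial plumbing is done.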
I assume I am reasonably secure but I've also built this over a few months. You may not get there right away, so start small and slow and don't go crazy early on.
great post! I'd like to mention one more "indie" tip - a physical security key is nice to have (2 even better) in case you lose/break your phone, or travel frequently. Add the most important auth keys (bank, email, etc) directly to the physical key, back them up on the second one, and now you're less dependent on a working smartphone with an active SIM :)
All good until Vultr or any of the services you rely on has to do maintenance (guaranteed to happen at least once in my 10+ years of VPS hosting), or even worse, one of them loses data (never happened to me, but I've seen it).
I just want to throw out buyvm.net as a block storage alternative. Not as big as Vultr, but super reliable and affordable; they have a Discord and the owner is great.
I'm all for tech independence. But if you need to be spoon-fed the instructions like this and you don't get what most of it is doing, YOU DON'T WANT TO DO THIS. Best case scenario, you'll get locked out of your own stuff or lose important information.
Yes, you should strive for that, and you start by learning. Contrary to popular belief, you don't need to be a linux ninja to be able to host your own website and calendar.
The stuff mentioned in this article are the bare minimum, and you should want to do it yourself without being spoon fed the steps.
With that aside, this is exactly the kind of guide I would expect a three-letter agency contractor or worker to spread in order to "help you" stay off the grid, then unceremoniously drop a disaster on your head.
Totally agree. Better to look for local associations that provide hosting services if you don't have any system administration knowledge. They'll help you more, and you'll waste less time and probably money; plus they may help you physically set up your devices correctly, with your services hosted on their servers.
I mean, yeah, it's a minimal step-by-step guide that feels like the poster's own todo list... and there are many like it. As an entry point this is great, but it's far from being useful in practice.
Basically it hides everything useful to know behind a big script that the intended reader is not even supposed to understand.
I didn't see any protection for what comes in from the WAN, nor even basic logging, investigation, or debugging methodology. There's no real backup methodology either, and the guide doesn't seem to take system upgrades very seriously, saying in effect "oh, it could run like this for decades, but you can do system upgrades if you want".
That is obviously false to any expert, and a very risky approach. This is not how we should teach self-hosting of internet-connected services.
Teaching newbies 'independence' by having them download random untrusted files off the internet and run them as a system admin... not a cool guide, I would say.
My previous version of https://sive.rs/ti (until a few hours ago) had no shell script, but just walked people through every step. It took like 50+ hours to write up.
But so many people were getting stuck and frustrated trying to type in all those commands (and mistaking "l" for "1" and such) that I realized I could help more people have their own server if I put most of those steps into a shell script.
Hopefully it'll be enough to give them a taste of the benefits of having their own server, then they can learn more about the steps afterwards.
While I get the pain of users, I've observed that habits stick, and only a weak 5-20% of learners actually drop convenient-but-insecure habits (like downloading untrusted files without inspecting them first), because convenience always wins. I wouldn't change the system, but I'd definitely mention (more than once) the risks involved in what they're doing, and tell them to inspect the code later once they learn more.
C’mon. The scripts are public; you can inspect them before running them. The only alternative is to explain the hundreds of lines in the scripts line by line. Not very practical.
While I agree, the issue is the target audience. If this were directed at more technical, knowledgeable, tech-savvy people one-upping their game, I would be very glad and thankful for a shell script. However, it's not. It's a potential starting point for becoming, in cool nerd terms, a webmaster, and that has its own set of responsibilities and habits: habits like not downloading files and packages without checking them first. While some might change the habit after learning more, I doubt that many will.
I really can't believe there doesn't exist a good "home box."
There should be a product that you can buy (a computer) that you bring home, plug in, set up via your phone or computer that:
- can host websites
- can store your files and sync them to other devices
- control your home automation
- host your email
- anything else you might otherwise put on a server
And does it all EASILY with a simple phone or web UI.
Yes I know you can actually buy a computer or server or raspberry pi and put something like NextCloud or Home Assistant et al. on it, but the real barrier imo is the setup and configuration. Even I don't do all this because it seems daunting to configure all of it, and I consider myself a pretty technical person. I really just want to buy a box, plug it in, and like select which apps I want to use, and then it starts working for me.
Looks nice, but the marketing design ('make it just like Apple') doesn't match the product they're selling. Apple is technology for people afraid of technology, but self hosting is decidedly not for a technologically afraid audience.
How will they pay for maintaining all the apps and making sure that they are properly integrated into the platform as they get updated?
Exactly. That would be great. But I think a large portion of the target audience of the home box would rather set this up themselves.
Or not. I would much rather have something commercial (built on open source) like this so I can be more at ease that my data is safe, compared to doing everything myself.
Indeed, it is a good initiative. And that may be useful.
Keep in mind that many people self-hosting and exposing services to the WAN end up running spam relays, or worse, because of misconfigured bits.
The thing is, non-techy people would set such a thing up and get it running, but would have no technical way to maintain it. It's a plane flying in automatic mode with no competent pilot inside.
There are good reasons to use a third-party mail server, IMHO. (I recently made that decision again.)
But the reader should be aware that these writeups of how to do X often involve the writer/publisher getting referral kickbacks from the commercial service they're describing.
I'm about to be in a position of doing something like those writeups, as a microstartup, and I'm not entirely comfortable with the affiliate programs. But the companies monetizing with privacy-invading ubiquitous profiling trackers (sometimes euphemistically called "showing ads" and "analytics"), and otherwise selling personal data, have spoiled most potential willingness of readers to pay for content. So, affiliate programs with an obvious potential conflict of interest is the only way I've thought of to fund the work.
As in my case, there's a potential conflict of interest with the affiliate programs. In his case, he has an interest in funding the trust for charitable purposes and maybe for his 5% drawdown.
Is this really the issue that it used to be, though? I'm curious if I'm the only person who just doesn't send email much anymore in my personal life.
Yes, I get a lot of email. But it's almost all transactional or subscription. The number of emails I send or receive with other humans is pretty dang low. Most institutions these days require using their platform for communications. Most people I care about who I communicate with electronically I do over SMS or Signal or occasionally a Mastodon message.
I still own the domain, so I could easily pick it up and move to a different mail service in probably just several minutes of setting up an account and changing some DNS values. So while not fully independent, the time spent getting outbound email right is going to have less impact than other changes I could make.
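For context, "changing some DNS values" when switching mail providers mostly means records like these (zone-file style; hostnames and values are placeholders, not real providers):

```
; placeholders -- your provider documents the real values
example.com.   3600  IN  MX   10 mx1.newprovider.example.
example.com.   3600  IN  TXT  "v=spf1 include:_spf.newprovider.example ~all"
; DKIM and DMARC records typically need updating too
```

Owning the domain is what makes this portable: the addresses stay the same while the MX/SPF/DKIM records point at whichever provider you choose.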
It's fine until Yahoo hellbans you with no recourse for 6 months, after sending you a cryptic SMTP response telling you to visit a form and fill it in, which you do to the best of your ability. And inevitably there's always someone you need to email on Yahoo.
I upvoted this before realizing it's just a plug for some stuff: "porkbun", "vultr", "freefilesync", and some app called "davx". Yeah, no. Just by the amount of stuff you need to sign up for outside of the base BSD system, this is not tech independence.
The article starts with independence, and the first thing they instruct people in is becoming dependent on un-required third parties. Sure, becoming a registrar is overkill, but hosting a website on your own machine is a pretty low bar.
[0]: https://youtu.be/0BaDQCjqUHU?si=0wDf-2RH-u9vdm3g&t=1380
[1]: https://github.com/anderspitman/awesome-tunneling