Start Self Hosting (rohanrd.xyz)
1049 points by quaintdev on March 23, 2022 | 608 comments



Self-hosting is something we should constantly be iterating on to make easier; it's really the path forward for privacy-centric folks. The main challenge is managing workload scheduling (SystemD is complicated for a layperson). Networking is another; for instance, if you wanted all or part of these services to remain offline or on a mesh VPN, there's a lot of knowledge required.

There are some projects trying to tackle the workload orchestration piece; CasaOS (https://www.casaos.io/) is one of my favorites, but there's also Portainer (https://portainer.io). Tailscale and ZeroTier are great for mesh VPN networking, where you may need to run some workloads in the cloud but want them networked with your home applications (or just want to keep them offline). They also let you access applications running on a home server that doesn't have a static IP. Cloudflare Access is another option; I haven't tried it because it deviates significantly from the mesh VPN model.


> SystemD is complicated for a layperson

Is it? It has clean and logical abstractions, and consistency. Services depending on each other isn't complex or difficult to understand.

I suspect that a nice GUI would make systemd quite usable for non-expert users.

BTW: It's called "systemd":

> Yes, it is written systemd, not system D or System D, or even SystemD. And it isn't system d either. [0]

[0]: https://www.freedesktop.org/wiki/Software/systemd/#spelling


> Is it? It has clean and logical abstractions, and consistency. Services depending on each other isn't complex or difficult to understand.

For a technologist or engineer, yes. For a layperson, no. The average consumer who desires privacy is probably neither a technologist nor an engineer, so the long-term target is something that just works.

Laypeople also aren't going to entertain the kind of pedantry that is systemd vs systemD vs System D vs SystemD, so making systems that abstract further away from those communities is beneficial.

Edit: Thank you for your correction; as a systems engineer I appreciate it, but I couldn't help but highlight that this is a big hurdle even in the Linux communities that I've been a part of as desktop Linux has gained wider adoption by laypeople.


Laypeople don't know that systemd exists. They will install a webserver or something and the package manager will automatically install and enable its unit file.


What “lay person” is going to install a web server??? That’s insane. Maybe a lay faang-er would.

Lay people work in a factory or shoe store or accounting firm. They have 1 or more kids. They hear what their friends are doing, and as long as it only requires signing up on a web page, they will consider it. They will use the same password as they do for their bank.

And that’s FINE! There is life outside technology, and those lay people are busy living it.


You know there are 12 year olds who run their own minecraft servers, right?

I wouldn't call them sophisticated admins, and I wouldn't trust them with anything mission critical, but the servers often work.


There's plenty of simple webservers around. I used to use one at school that was a single (likely self-extracting zip) exe file that ran Apache on the given port; the document root was configurable, but defaulted to the relative path ./public_html. I would be surprised if there isn't a project somewhere that's effectively a Qt GUI to start a Python webserver like this[1], as a single self-contained exe file.

[1]: http://stackoverflow.com/questions/44586441/ddg#44586701


You are severely underestimating people.


Reading this thread - exactly my thoughts too. Most people (if given the incentive and interest) would understand systemd just fine. Some effort would be needed.


It would take one hell of an incentive to make someone with

* No previous tech knowledge

* A full-time job

* No external help

learn systemd, or how to setup and maintain a web server, or whatever. Individuals may still self-host of course, but I'm skeptical people will do it en masse anytime soon. After all, if the entry barrier weren't so high, we wouldn't have an entire service industry that does this for you (SquareSpace, Wix, Substack...)


I think most of the people on r/selfhosted aren't super technical. Lots of them install a snap or docker container with a web UI for self hosting and they are off to the races.


Looking at the /r/selfhosted crowd comes with some serious survivorship bias though. It's not considering the (potentially large) group of people who would like to self-host, but don't even make it to /r/selfhosted or similar forums.


Sure, but let's be real, if you're going to self-host you'll need to be able to do a bit of that.

Would be nice if there were more little appliances that handle stuff like this, so someone could just buy one at Best Buy or whatever and plug it in next to the router.


> What “lay person” is going to install a web server??? That’s insane.

You ... might be surprised.


I've come to recognize my view that "anyone could install this web server" is a view most commonly shared among fellow techies, and not one shared by the greater population.


Yeah. Of course not 'everyone' will be able to, and maybe that's what we should be considering the 'lay person' now...

But there is a wide range of knowledge in this space. A 'lay person' is kind of hard to get down to a spot without potentially lumping people together improperly.

Take my customer for example. She's not stupid. She knows she could probably fix the computer I am fixing for her, herself. But she also knows there are things she just doesn't know and probably should get someone else who does know those things to do it instead. Is she the lay person? From talking to her, I am pretty sure that with a couple quick hours of reading, she could probably set up a basic Nginx server or something very easily for herself. Etc, etc. Yet, by others' standards, she is a lay person because she can't utilize Apache or AWS.

So, yeah... that's my take on it.


I totally started there, knowing a lot of stuff was in my reach but would take some reading to succeed with. It's still hard for me to anticipate who will persist and who will throw their hands up and declare they aren't technical.


Yeah, you could say I am even still 'there' to some degree. Sure, I fix computers for people, but that's kinda easy. I know enough to know when to not do something, essentially. And that can be hellishly important as a skill.

But the fact remains that I have an A+ book from CompTIA left unread that I found at a 2nd hand store, a book on Bash that I found at a bar, also unread, and a book on Python from O'Reilly via Amazon; yes, unread. All of them unread.

Why? I dunno. Can't be damned to? I taught myself pretty much everything I know through trial and error. If I want something in a program to do something else, I alter the code and reverse engineer it until it does what I want. This is how I make mods for games usually, when I tinker at all. How do I know what to tinker with? I read, ironically. Yeah, I know. Go figure, right?

But still, those books lay unread in a desk next to me as I type this.

So what am I reading? Well, it's quite simple. If I ever get to a point where my own intuition or knowledge isn't good enough, and I don't see the answer staring me in the face via some comments in the code, or some error code thing, etc, etc...

I google it.


You know, nothing is really ever that simple and this comment makes me realize that. You actually hit on a philosophical difference in package managers lol. Ubuntu (not sure about Debian) will install, enable, and start a package, but Red Hat only installs it, because they expect you to configure the service first.


Which 90% of the time makes sense because if you want anything more than the barebones default config (which you usually do), that's best done before everything gets spun up. But I've worked with Ubuntu (and Debian) long enough that I now take for granted that some services are going to have to be downed for reconfig almost immediately after installation. The "auto start after install" practice rarely makes much difference in the final result. After over 25 years as a sysadmin I do wonder how non-experts navigate some of this stuff, what with the often incomplete docs and horribly uninformed (or just plain reckless) forum posts they have to work with. The best place to start is still Æleen Frisch's Essential System Administration (whose 1st ed is where I learned the sysadmin craft), but there really isn't much beyond it (unless you go the BSD route and so have the FreeBSD Handbook to lean on).


That is actually not true for the most part. RHEL-based distros ship a vanilla config file, whereas Ubuntu does make some extra effort to configure packages in a sensible manner.


I thought I was a pretty good "tech person" until I read this thread... and now I'm more layperson than what this thread considers a "layperson" bc none of this makes sense to meh aha


systemd → does a lot of things on modern Linux systems, amongst which is dealing with services that should autostart (think: both low-level stuff like Bluetooth or user-level stuff like a Dropbox client or a VPN or whatever).

Unit files → fairly simple text files that are used to define such stuff for systemd (usually in /etc/systemd/system and /usr/lib/systemd/system).
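To give a sense of scale, a complete unit file for a hypothetical service (every name here is invented for illustration) is only a handful of key=value lines:

    # /etc/systemd/system/myapp.service  (hypothetical example)
    [Unit]
    Description=My self-hosted app
    After=network-online.target

    [Service]
    ExecStart=/usr/local/bin/myapp
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

Roughly: [Unit] says when to start it, [Service] says what to run, and [Install] says which boot target should pull it in.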

Package manager → essentially the same as an app store on a phone. It's how you install and update packages (packages could be a fully fledged graphical app or just a terminal command).

Webserver → what allows you to "run websites" on your own machine. For the simplest example, if you run `python -m http.server` in a terminal on a Unix-like system, congrats, you can now browse your files in a browser (127.0.0.1:8000). If you know your local IP, you can also open it from a phone and download stuff from your desktop, zero apps or cables or FTPs necessary.

So, to decipher that comment above: you usually don't have to worry about how things run automatically because that's usually pre-configured when you install them. In most simple self-hosting scenarios you just "install a website" on your spare laptop or whatever and you're good to go. That website usually serves traffic via a webserver on some port, and you access it via local network IP and a port (example: 192.168.1.100:8000).

Now if you go deeper and want to run multiple things simultaneously, each accessible via a domain instead of a port, accessible from outside of your home, properly backed up and with a valid HTTPS connection, and then you hear about this thing called Docker... well, from my experience, you're gonna wake up on a day like this and go to work as a sysadmin with 3 years of experience, basically writing YAML for a living.

So in conclusion, apart from some outliers like Plex.tv, I wouldn't call the process layperson-friendly, but hey, it might make your tinkering into a career.


It's a bit crazy that it even matters. Or that systemd comes up at all in a selfhosting convo.

I have friends that selfhost stuff on mac, windows, and linux. They are probably above average as far as tech goes but they are all 30 somethings in non-tech jobs, real-estate, finance, advertising, sales.

I field questions from time to time, but it's pretty rare. Never once got a question about an init system. On any os or distro.

These guys aren't writing init scripts; they are going through the app's documentation, typing 'systemctl start appName.service', and then moving on.

or starting it with docker or even one of the clever little web UIs that you can install that helps you install selfhosted apps.

I think there's a part of us that wants to believe this shit matters to normal users, but they just don't care. And if the service doesn't start, they file a bug upstream and the maintainer usually takes care of it.


To be honest, I think the reason systemd comes up in a self-hosting conversation at all is because

- Some people want to talk about how difficult things are, and come up with reasons to make things sound difficult,

- Some people have an axe to grind, regarding systemd.

The original poster brought up "workload scheduling" and while I've done plenty of that at work, I'm at a complete loss trying to guess why you would need workload scheduling for any self-hosting project.


I definitely agree with you. I've never heard Mac people sit around and talk about which init system it's running; it's a complete non-issue. It just doesn't matter to self-hosters and end users in any way, shape or form.


Ah, well, back in 2005 or so, Mac people were talking about launchd. Things have settled down since then.


Back in 2005 I would guess many people using a Mac cared. Now? probably a much smaller percentage of the user base.


I'll drive in the final nail and tell you that the page in the Arch wiki[0] is a wonderful starter on systemd.

Ah.. I was there when it was all SystemV vs Upstart vs Systemd... the golden days...

[0]: https://wiki.archlinux.org/title/systemd


I still have living systems that I put through the manual transition of the init systems and the /usr/ merge.


Ah! the /usr/ merge, I remember it being called the great symlink apocalypse.


Hence I have never cared about the difference. As long as the package installs and the service is enabled.


I think the time has come for society to start advancing without catering to laypeople. If some folks can learn it, and there is documentation, then we can just go on without worrying about who doesn't know how to use it, because that's fixable. And I say this with the German government in mind, which had to pull back from Linux because employees didn't know how to use it.

Let's start treating tech as the world treats everything else: Ignorance is not a justification


Your opinion is on the extreme end, but I overall appreciate the sentiment that society is about as stupid as we allow it to be.

Sure, there's a midpoint where we would ideally want as many people to be able to use a technology as possible, which means making things easier, but we underestimate the capability of most users and dumb things down so far that those users not only believe that X technology is too hard but believe themselves to be too stupid to do anything that requires even the faintest amount of know-how.

I do think we've gone way too far in the direction of acting like everyone, except us archmages on HN, has the intelligence of a toddler and can't figure out anything for themselves. Everything becomes so easy, but at the cost of everyone being dependent on centralization.

The incentive has to be there, though. Convincing people to self-host is like telling them to eat more broccoli. Just because it's good for them doesn't mean they're going to do it, and companies of nearly any size certainly won't choose to make things hard on their users.


I appreciate your perspective and I greatly value my own computer expertise. Having said that, I want to ask: what if doctors, lawyers, and engineers adopted the same mindset? Instead of translating their knowledge and recommendations into language a layperson can understand, what if these experts spoke only in their domain's technical jargon?

We would be much worse off. In order to survive, everyone would have to become a dabbler in everything and many would be unable to keep up at all. We would lose all of the efficiency gains of specialization.

So assuming we do not want everyone to have to learn a bit about medicine and law and engineering etc, why is computer knowledge different? Is a computer not merely a tool to accomplish a task? We don't expect people to learn to become a mechanic, let alone an automotive engineer, to be able to drive a car. We expect cars to be reasonably easy to drive and low maintenance, with occasional help from mechanics. Shouldn't we expect the same from computers?


Yeah, it’s incredibly unrealistic tech-elitism to put the burden on everyone else because you can’t be bothered to simplify your design. Anyone who cares about self-hosting should not be advocating for the UX equivalent of setting VCR clocks.

We can be realistic about making hosting accessible, or you can bury your head in the sand with enthusiasts as AWS becomes the entire internet. It’s already pretty obvious which approach is winning.


It is about expendable time. I mean, it might take you a few hours or so to do it, or even less.

People working in other sectors, maybe with a family when they come home, do not have that skill or luxury.

And speaking from experience, documentation is often greatly lacking. For example, just today I had to thumb down a couple of Google docs because they were riddled with inconsistencies and lacking crucial information. And that's a company with near-infinite money. And it's like that for most software, with great docs the exception rather than the rule.


I don't agree with you, if only because right now, in this age, computers are everywhere and everything is digital. It is not a luxury to learn how things work, it's survival; it's not expendable time, it's professional time.


I have various friends who have made similar statements about food (everyone should cook meals for themselves), cars (everyone should do their own basic repairs and maintenance), homes (everyone should do their own basic home maintenance), keyboards (everyone should do basic soldering and learn a non-qwerty layout), accounting (everyone should have a budget and do their own taxes), gardening, fitness, investing, education, etc.

The number of people who can learn all of the skills that someone in the world considers essential is small. The number of people who can do all those things while having a job, kids, and a working partner is smaller still. The rest of us have to focus on a proper subset.

For me, every minute I waste learning how to configure my server is a minute I could have been playing Legos with my kids or riding my bike or sleeping.


You know, I have kids too, but I still wonder how much more dependent on big tech we might potentially get over the next decades, and how much responsibility I have in protecting and educating them in that regard. I'm postponing self-hosting mail for several years now, because it has A LOT of gotchas and requires tweaking and debugging every couple weeks/months because someone considers you a Spammer again (looking at you, Hotmail (German: Schrottmail)).

But eventually they will be old enough to require an email address, so should I just let them use Gmail and stop worrying, or try to have something up and running until then? At least protect them from the data kraken until they can make their own informed decisions about that. But then I'm not just responsible for my email, but my whole family's email. What about 30 years down the road, when they long moved out? Am I still gonna sysadmin my family's self-hosting galore when they finally have kids on their own? Is it even still possible then? At least for mail it looks like when it's growing over my head I can just sign up with some mail provider (that hopefully values privacy) and let them handle that whole BS.

It might not even be worth it anymore at some point though, if basically everyone else you communicate with via mail is on Gmail. Then Google already has all my email as well.


I think this is where Linux devs and the self-hosting ecosystem should focus: making the software more accessible. It's a bit like Blender before the big UI overhaul. There was actually a sizeable group of people who were very much against it, but now we are further along and it's been a great success for both newbies and experts.

An awareness campaign to go the extra mile in terms of accessibility would be great (and that doesn't mean dumbing down; the two seem to be equated sometimes).


Speaking of accessibility: there was a meeting about using a headless CMS at my company, so the team involved presented the product, something like Strapi, and I asked, "Does it support accessibility features like TTS/ARIA?" And they said, "Oh, but I don't think we need accessibility for this internal tool."

But at past companies, too, there was never the idea that internal tools had to support accessibility features, as we take it for granted that we would never hire people with some sort of disability and that they are just users who stay at home. It's sad and upsetting. As a person who doesn't need any accessibility features to operate right now, I really feel sorry for others who do, and I hope that I will never need them, because being in that situation with employer products like control panels and the like is very sad.


I feel the same pain as you do about being complicit in all of this and I have no ready answers.

The best I managed to do is that my family's email addresses use a custom domain I manage so I am not locked into a single webmail provider. Occasionally I have to deal with some issue or delivery delays (especially for authentication emails) but it's orders of magnitude less effort than when I made the mistake of trying to run my own mailserver.


As teenagers, when they realize that you (or any family member with access to the server) can look at all their emails, they will most certainly never use it for anything personal and/or move away to Gmail, as that would massively improve their privacy situation!


I guess, put like that, I would say that my thoughts about the matter were a bit bullshit and I agree with you.


I wouldn't say bullshit, just a tad idealistic. Then again, I'm old, and I admire your idealism.

Being self-sufficient is a noble goal, but I think it's also helpful to be realistic about the costs in order to make the proper cost/benefit decisions.


You could say the same thing about any number of fundamental disciplines: “chemistry is survival,” “roof making is survival” “SQL is survival.” The truth is, there’s only so much time, and we each must specialize. Software that decreases the need for broad reaching high-specialization is a public good, and software that increases the need for specialized knowledge across all economic activity is hurting productivity, whatever the bean counters may say.


I think this discussion thread is basically people looking at this in different contexts. A lot of the replies seem to interpret your post as saying "systemd is more complicated than sysvinit was", which is most definitely not what you meant, judging by your further replies in this thread. You seem to be saying "even systemd is too complicated for the layperson". I somewhat agree, if we assume someone totally not dealing with tech in any way on a daily basis, except for typical office work. But the "in-betweeners"? They totally could. Like, not even Linux users, but curious tinkerers who fiddle around with .ini files and registry settings to tweak their machines, or do overclocking? They totally have what it takes to chew through a tutorial that tells you how to set up a Linux VPS with a couple of services running.

For the layperson that's a bus driver by day, not so much, most definitely, but I think no matter how simple you make setting this up, just having to maintain this and having to do something when the setup eventually has some unexpected bug a few years down the line is just too much.


To be fair, a bus driver probably isn't writing a self-hosted service anyway. So if there is an issue with the service init 3 years later, they will probably file a bug and let the maintainer handle it. And they wouldn't be parsing a bash script for sysvinit if that broke either. SysV init was too complex for the layman as well, because gracefully starting and stopping system services with their dependency chains is generally too complex for the layman.

There's this strange idea that layman self-hosters are doing all this stuff and the choice of init system matters in the least. They were going to run the command to start the service blindly regardless of which init system it was, and if it breaks, they aren't really going to troubleshoot either one.


Sure, the bus driver was supposed to represent the other extreme. In reality, they probably wouldn't care no matter how simple the setup is. But the at-least-somewhat-curious non-tech person has probably better chances finding a problem with a systemd service than a sysvinit script, even if they just open a ticket in the end. The info they can provide is more likely to be useful with systemd, I'd argue.


For the record, I'm a big systemd fan. But to a user, taking a debug log is more or less the same either way. They don't care.


>For a technologist or engineer, yes. For a layperson, no. The average consumer who desires privacy is probably neither a technologist or engineer, so the longterm target is something that just works.

In comparison to system V initd startup files, systemd unit files are, arguably, less complicated.

I'd say the "complexity" of systemd unit files is mostly irrelevant to end users.

For a relatively non-technical user, implementing whatever application/service one might want to use should be as simple as installing the relevant package(s) and dependency(ies) via existing, well managed package management systems.

That said, too many developers encourage self-hosting, but don't provide appropriate packages and defaults for most popular distributions.

If developers spent just a little more time creating buildable packages (supporting the creation of binary and source .rpm, .deb, etc. packages) with sane defaults/startup files, it could make the inclusion of such apps into the standard/extras repositories of a broad swathe of Linux distributions much simpler and, for the non-technical user, make them easy to install and configure.

Matrix Synapse[0] and Diaspora[1] both come to mind in this respect. Installation and configuration of these platforms requires the installation of several software development frameworks and separate (from the standard system package managers, e.g., DNF, apt, dpkg, etc.) package management tools for the language dependencies.

Requiring installation of software dev environments and building the software/databases/admin tools for such "self-hosted" solutions just confuses non-technical users.

As a professional with decades of Unix/Linux implementation and management experience, I find implementing such platforms simple enough. Just read the docs, install the dependencies and compile/install/configure the software.

For a non-technical person, that's likely a non-starter unless there's a UI that will do so automagically.

Fortunately, there is such a UI for most Linux/Unix distributions -- it's called the system package manager.

Unless and until developers provide distribution developers/maintainers with appropriate packageable sources (or even separate repositories with binaries!) to be added to the default repositories, self hosting many apps will only be the purview of technical users.

This annoys me. A lot. Not because I, personally, mind a complicated set up process for such applications, but because it limits the ability of both Linux/Unix distributions and self-hosted applications/platforms to be used more broadly by non-technical users.

Especially with tools like Diaspora, Matrix/Synapse and others which have the potential to overturn centralized hell holes like Twitter, Facebook, Instagram, WhatsApp, etc.

It's been at least five years since I first installed a Diaspora pod and a year since I installed Synapse and a STUN server. In both cases, had I not been a long-time user/manager/implementor of Unix/Linux and associated sw dev environments, the install would have been nightmarish.

For both platforms, installation pretty much requires knowledge of software development tools and practices, as well as more than a passing familiarity with Unix/Linux shells and environments.

I can't imagine my 64-year-old sister-in-law (a reasonably well-educated and smart cookie with decent problem-solving skills) taking the time to learn how to use git, clang/g++ or even Docker to install this "self hostable" stuff.

That should be the target audience for such self hosted tools, not devs and other technical people.

Taking the time to make one's application/platform easily installable/configurable (and building from git repos and/or Docker-compose aren't "easy" for non-technical folks) by non-technical end users could make a huge difference in this space.

[0] https://matrix.org/docs/projects/server/synapse/

[1] https://en.wikipedia.org/wiki/Diaspora_(social_network)


I'd go a step further: programs aimed to make self-hosting easy should not even need a package manager. They should be self contained (statically linked) binaries. Luckily we have Go which makes this very easy to achieve, and to make things better, Go is designed for web servers. I believe we've had this ability for almost a decade, and still no one is harnessing it to make self-hosting easy and reliable.

Furthermore, they should require absolutely zero configuration to get up and running. We've had SQLite for many years now, so it's easy to make a webserver that does not require a separate SQL server to be installed and configured in order to get going. (There are also non-SQL embedded data storage engines, many of them written in pure Go, eliminating the CGO problem.)

Ideally the user will download a program for their own operating system (say, windows). They will run this program, provide it with the ip and root password of their linux server which they rented from some provider, and this program will then upload the actual server program to the host machine and launch it.
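A minimal sketch of that idea (not any existing project; the driver, schema and port are invented for illustration, using the pure-Go modernc.org/sqlite package so the binary stays CGO-free):

    package main

    import (
        "database/sql"
        "fmt"
        "log"
        "net/http"

        _ "modernc.org/sqlite" // pure-Go SQLite driver, no CGO needed
    )

    func main() {
        // All data lives in a single file next to the binary; no separate DB server.
        db, err := sql.Open("sqlite", "app.db")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS notes (body TEXT)`); err != nil {
            log.Fatal(err)
        }

        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "self-hosted app is up")
        })

        // Built with CGO_ENABLED=0 this is one static binary you can copy anywhere.
        log.Fatal(http.ListenAndServe(":8080", nil))
    }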


Echoing this. All the services on my servers (Web, Gemini, Matrix, Fediverse servers and a CSP violations collector) are statically-linked binaries running in sandboxed chroots, with no access to the outside filesystem except a subdir of my data volume mounted into the chroot. Privs are limited to a defined list of acceptable syscalls and other limitations. I'm currently working on enforcing the use of this setup with SELinux policies and transitioning my OCSP fetching and session ticket key rotating scripts to statically linked binaries so I can just stick to one template shared across all services.

It would have been much harder to pull this off if these services required large interpreters, adjacent daemons running, and complex orchestration. Given the amount of time I have, I might have just skipped the SELinux step.


The reason I want things to be self contained is not for someone else to use it as a base upon which to build additional layers of complexity. That defeats the whole point.


I hope you are not saying this is a layperson doable setup. ;-)


I think a good example of this is what Synology has done with their NAS devices. They're as basic or as complex as you want to get (to a point). Beyond the highest level of Synology complexity, you graduate to a real general-purpose server or homelab infrastructure.


I'm certainly not a layperson, but systemd frequently confuses me.

I want to edit a service to harden it for example. Oh, wait I shouldn't edit it directly with vi? Because it gets overwritten by package updates. Okay, makes sense, I need to use systemctl edit instead. But that opens a file that has everything commented out. Do I uncomment the [Unit] heading? What do I need to keep and where do I add my additions? I recall there being a comment at the start of this file, but unless I'm misremembering it doesn't answer that.

All I ask of it is to do one thing: start something.service after other.service. Yet it just refuses to order them this way. Why? I have no idea. I also have no idea where to start debugging a problem like this. There's a billion ways to try and do this after all: do I add Before=something to other.service? Do I add After=other to something.service? Both? Wants=something?


> I want to edit a service to harden it for example. Oh, wait I shouldn't edit it directly with vi? Because it gets overwritten by package updates.

I don't think you're supposed to edit anything inside /usr, except perhaps /usr/local, but even that has a better alternative in the form of $HOME/.local, which is a well defined standard at this point.

Maybe I'm unaware or mistaken but if you're editing anything inside /usr as a normal user, you're either using Linux wrong or doing something unexpected or unusual. This is why requests to make GUI file explorers have root escalation capabilities sound absurd to me. I can't think of a reason why one would need root access when using a file manager, especially a GUI file manager.


> Maybe I'm unaware or mistaken but if you're editing anything inside /usr as a normal user,

I'm not doing any of this as a normal user. I'm doing all of this as root


Even in that case, if you're manually editing anything inside /usr either by being root or by using sudo, you're doing something wrong or unexpected.

Anything inside /usr should only ever be modified by the package manager, not by the root user, or any other user for that matter.

If you want to make system wide changes, make them in /etc. If you want to make user specific changes, make them in $HOME/.config, $HOME/.local. Your package manager should never overwrite anything in /etc or $HOME. If it does, it's a bug.


So here's the thing, right. You are correct. And I agree.

I'll admit something; I forgot where the service files were (I don't use systemd often). So, in my unix wisdom I used `find` to find the files ending with 'service'. The only ones that came up were in /usr/lib

You're right, I'd typically never edit files in /usr. But I wasn't sure how else to do it at the time.


systemctl edit --full does what you want.

I wish package managers would make patching packages easy, this kind of thing is so much more manageable on Nix.
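And for the ordering/hardening questions upthread, plain `systemctl edit` (without --full) is usually enough: it drops an override next to the vendor unit instead of replacing it. A rough sketch, with unit names invented:

    # systemctl edit something.service
    # writes /etc/systemd/system/something.service.d/override.conf
    [Unit]
    # ordering only; add Wants=other.service if it should also be pulled in
    After=other.service

    [Service]
    # a couple of common hardening directives
    NoNewPrivileges=yes
    ProtectSystem=strict

The directives you set there get merged over the packaged unit, so the file under /usr/lib stays untouched and survives package updates.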


> it gets overwritten by package updates

This doesn't happen. The package manager installs the new configuration under a different name so that you do not lose your changes and can merge them easily.


What they are saying is that they edited the file in /usr/lib, which definitely would get overwritten. You're supposed to copy it into /etc/systemd/ for the appropriate service type.


I think you all proved the point that this system is too complicated for anyone outside of a small group of professional IT people.


It's complicated because it is complicated.

It's like saying a car is too complicated because I can't just swap the engine out without any prior knowledge. It's not designed for the average guy to be able to do that.

systemd is not some product meant for the average Joe, it's an integral part of the system for managing services and other things. To run a web server at least somewhat reliably you don't even need a lot of systemd knowledge but you still need to know some networking, firewalls, DNS, in general how the internet works, how to configure a web server and other services, some basic security. If you don't want to learn these things then there are managed services that you pay to do those things for you or you hire someone to run it for you. Just like you take your car to a mechanic when you're not interested in figuring out how to reassemble the engine.

Yeah, things can always be improved and made simpler but to create something fool-proof for the average person would take a huge amount of work and there would need to be a business opportunity there for someone to invest in that or there would need to be some passionate generous soul that would invest their time in a project like that.

And like I already mentioned, there are already solutions to the "it's too complicated" problem: 1) companies offering managed services, 2) companies/individuals for hire to do it for you.


Are you implying that getting init script customisations overwritten by package managers isn't a problem with non-systemd init managers?

I have lost track of how often that happened with sysvinit, because the "logic" for how to treat customisations was usually handled by the package manager, and they messed it up regularly.

systemd has a standard way to handle customisations. As long as you put everything you do in /etc/systemd/system, everything is fine. It's simple and works across distributions.


> Are you implying that getting init script customisations overwritten by package managers isn't a problem with non-systemd init managers?

Traditionally, init scripts were installed into /etc but package managers (or at least some of them?) took/take care to not overwrite files under /etc but instead let you merge in the new changes.


Comments like yours are a great example of why Linux has a hard time being user friendly. You take something that's deeply technical but easy to understand for yourself, and somehow generalize it to everyone. "If it's easy for me, it is easy for everyone", without noticing that perhaps your expertise plays a big role in making it easy.

The better question is why should a layperson ever need to know about systemd in the first place.

I'm a technical person and I don't know about systemd, nor have I ever needed it.

When designing products for end users who are not technical, the overriding design goal should be easiness and reliability.

Easy: does not require special knowledge or expertise to operate.

For someone who just wants to self host, having to learn about systemd, by definition makes the product/system not easy.

Reliability: does the system implode in unexpected way based on the slight variations in the environment? If it implodes as such then it is fragile. If it does not, then it is robust and reliable.

For the end user, what matters is that the system is easy and reliable.

If the system is easy but not reliable, the user is effectively forced to become an expert in order to fix the system when it breaks. Thus, if a system is not reliable, then it doesn't matter that it's "easy".

Simple/Complicated only concerns the developers. It's important to keep systems as simple as possible under the hood, because a system that is simple is easier to make reliable than a system that is complicated. But for the end user it does not matter directly. Simplicity is about how many parts the system is composed of and how these parts interact. If there are many many parts that all have to interact with each other, then the system is complex.

Maybe once someone learns about systemd they can find it conceptually simple. But that's a moot point. The point is: they should not even have to learn about its existence.

A system where editing text files can make or break the system, is not a reliable system. It's easy to make mistakes in text files. Specially for users without expertise.

Imagine yourself a windows user. You edit a text file. Restart the machine, and now it doesn't boot, or boots in a special text-only mode. (This is not unheard of on linux based systems).


> For someone who just wants to self host, having to learn about systemd, by definition makes the product/system not easy.

Sometimes HN threads seem to self-sustain forever starting from a pointless argument and discussing it like it was the truth.

Learning systemd is not necessary for anything in day-to-day life, not in desktop use, not in self-hosting; the default configuration from package managers works fine out of the box. At most you need to learn how to start, stop, disable and query the status of a service.

I've been using Linux for decades, way before systemd, and never had to learn systemd until I needed to ship my own custom services on my own custom devices. But at that point you're far from being a layperson.
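For reference, that day-to-day surface is roughly this handful of commands (unit name made up):

    systemctl start myapp.service          # start it now
    systemctl stop myapp.service           # stop it
    systemctl enable --now myapp.service   # start on boot, and right away
    systemctl disable myapp.service        # stop starting it on boot
    systemctl status myapp.service         # is it running? last few log lines
    journalctl -u myapp.service            # full logs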


Systemd is easy compared to init scripts. Maybe that’s the comparison point. I know that’s my comparison point, since I wrangled init scripts for a couple decades before systemd, and when systemd came out, suddenly it felt a million times easier to run services.

> A system where editing text files can make or break the system, is not a reliable system. It's easy to make mistakes in text files. Specially for users without expertise.

I don’t think that’s a salvageable notion of “reliability”. I could bork Mac OS or Windows easily enough by editing text files, if I really wanted to.


Here's why this matters: some systems require you to edit text files in order to work at all. But at the same time, they are extremely sensitive to the content of the text file.

Editing text file to make advanced configuration may be acceptable if the casual end user never needs to do it. When I say never, I mean never ever ever. If you need to edit it once every few months, you better have really really good "linting" facilities to tell the user right away whether their edits are going to be accepted or going to cause the system to break.

Now granted, systemd is a system utility, not an end user application.

If the system overall is built such that the end user never ever ever needs to know about systemd, then it's sort of ok.

Now, when it comes to self-hosted, the line between "system utility" vs "end user application" may be blurred.

If you think a web server is a system utility, then by all means, go ahead and make it fragile and complicated. That's how everyone is doing it.

But if you think - and at least I do - that it's important for casual users to be able to self-host their own websites with ease and reliability and without having any system level expertise, then the points I mentioned about the system not imploding due to the content of text files is extremely important.


Ah, I thought you had a complaint about systemd & text files. It sounds like you are okay with it, unless I misunderstand your comment.


I don't know anything about systemd, nor do I want to.

I absolutely have problems with configuration via text files.

A lot of things on linux desktops that a typical user wants to do require editing text files to work. For example, try searching for information about how to enable Japanese input in Gnome or XFCE desktops. Invariably, the steps involve installing and configuring some specific packages, and editing several text files to let the system know about the fact that you want to use an IME for text input.

This is unacceptable.


> I don't know anything about systemd, nor do I want to.

Well, that’s fine. We’re mostly talking about self-hosting web servers, and you don’t need to know anything about systemd in order to run a web server on Linux, any more than you need to know about fuel injection computers because you drive a car.

The reason I’m familiar with systemd is for other reasons besides hosting a web site.

I don’t know how problems with IMEs or desktop Linux got brought up, I’m not sure I’m following the conversation.


The conversation is about what it takes for something to be usable by end users.


I think in order for it to be a "conversation about X", more than one person has to talk about X.


Did you even read my comment? You are attacking a strawman.

> You take something that's […] easy to understand for yourself, and somehow generalize it to everyone. "If it's easy for me, it is easy for everyone"

No. I am taking something that's easy for me, and suggesting making it easy for everybody.

> The better question is why should a layperson ever need to know about systemd in the first place.

They shouldn't, and I did not say so. That's the strawman.

Does an average Windows user know about the NT kernel? No, and he does not need to.

> When designing products for end users who are not technical, the overriding design goal should be easiness and reliability.

I agree.

> having to learn about systemd, by definition makes the product/system not easy.

You would not have to learn it though. You could just flip a switch in the GUI.

> they should not even have to learn about its existence.

And they would not have to.

I don't know how my car works, and I don't need to. I can use the simple interface (steering wheel, pedals, shift) which is exposed to me.

> A system where editing text files can make or break the system

> You edit a text file.

Again, complete strawman. You would not edit text files.


Technologists have a very skewed idea of what's complicated vs easy with computers. Things we think are absolutely trivial are often insurmountable hurdles for laypeople.

(This can, of course, happen if you put a technologist outside their element, too)


> Services depending on each other isn't complex or difficult to understand.

It is for me with Systemd - I had to spend hours (on two different occasions, if I remember correctly on Debian & Linux Mint) trying to understand how to set a dependency against an NFS filesystem mount so that a DB would not be started before that, and to make that work reliably => Systemd's docs & behaviour (& special distro settings related to systemD?) weren't that great for me.


I swear, writing it as SystemD is a shibboleth of systemd haters.


For the record, I actually like and use it. I'm just at work and didn't put much thought on how to spell it. I also didn't really expect someone to care that much in a general, high-level discussion.


they are all in favor of SystemE


More like SystemSh


SystemŞ?


If you know what systemd is, you are, by definition, not a layperson.


End users would not need to know. They could just click the activate button in a GUI.


What lay person does anything with systemd though? I have all my services in a docker-compose.yaml... Sure, I remember the days before systemd; I remember Upstart, Gentoo's rc.conf. I still think it's useful that I can find my way through the internals of a Linux box, but for me all that stuff is far in the past. This is how it goes nowadays: install the system in 20 min, clone the infra as code, put the data back, start the infrastructure... Where does the init system still play a role?
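To illustrate (a made-up snippet for one service, not my actual file), the whole "infra as code" part can be this small:

    # docker-compose.yaml
    services:
      jellyfin:
        image: jellyfin/jellyfin
        ports:
          - "8096:8096"
        volumes:
          - ./config/jellyfin:/config
          - ./media:/media
        restart: unless-stopped

and `docker compose up -d` brings everything back after a reinstall.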


If someone says it’s complicated, then yes it’s complicated. It may not be complicated for you, but by definition someone finding it complicated makes it complicated.


> It has clean and logical abstractions, and consistency.

Sorry, but an actual layperson would already be lost upon reaching the word "abstraction".


I am talking to technical people here. You would not expose that terminology to end users.


I loved using LingonX on Mac, one of the things I'd really love to do is make that but for systemd stuff.

I think the abstractions are good, but there's a lot of terms that are needed just for the basic "Spin up this process on boot" (one might say that explicit is better than implicit, but I dunno).


> Services depending on each other isn't complex or difficult to understand.

I am not an expert here, but to the best of my knowledge systemd has no concept of inter-host dependencies, making it effectively useless for anything distributed. For example classic lb+app+db cannot be directly controlled by systemd.


Totally agreed, for the sake of humanity we should strive to make self-hosting as easy and as seamless as possible.

But why stop at self-hosting? Beyond self-hosting, it could be extended to a local-first paradigm [1], meaning there's a choice to have a scalable, on-demand auxiliary cloud for handling bursty access demands if you need it. In addition, you can have extra backups for peace of mind.

I'm currently working on reliable solutions (physical wireless and application layers) to extend the local-first system to remain automatically secured even when you have an intermittent Internet outage, unlike Tailscale and ZeroTier [2]. This system will be invaluable where the Internet connection is not reliable due to weather, harsh environments, war, unreliable power providers or lousy ISPs [3].

[1] Local-First Software: You Own Your Data, in spite of the Cloud:

https://martin.kleppmann.com/papers/local-first.pdf

[2] Internet outage:

https://en.wikipedia.org/wiki/Internet_outage

[3] NIST Helps Next-Generation Cell Technology See Past the Greenery:

https://www.nist.gov/news-events/news/2022/01/nist-helps-nex...


> I'm currently working on realiable solutions (physical wireless and application layers) to extend the local-first system to be automatically secured even you have intermittent Internet outage unlike TailScale and ZeroTier

Sorry but i don't understand what this means. Internet outage should not affect your LAN services, no matter what selfhosting distro you use.


For siloed self-hosting, an Internet outage should not affect your services, but for most local-first services it's critical to maintain your system's security with the necessary credentials. As an example, consider a rental car that you take for camping in the countryside. An Internet outage that makes your rental car useless is unacceptable.


I think i'm even more confused. Why would my rental car need network access at all? (it's already a shame and a great source of cost/problems that it has any electronics)

Also, what prevents my TLS/SSH/whatever certs from working in an offline setup? i do that all the time without downsides, so i'm probably missing your point. if you have a page explaining your project and usecase feel free to link


Having started my career in hosting, I would suggest that this world is unlikely to come back except for exceptionally small applications with minimal business impact. What does self-hosting provide which end-to-end encryption does not?

Self-hosting means:

- Needing to know how to configure your linux host across firewalls, upgrades, backups.

- Negotiating contracts with network service providers. While verifying that you have the right kind of optic on the network line drop.

- Thinking through the order of operations on every remote hands request, and idiot proofing them so that no one accidentally unplugs your DB.

- Making sure that you have sufficient cold spares that a server loss doesn't nuke your business for 6-12 weeks depending on how the hardware manufacturers view your business.

- Building your own monitoring, notifications, and deployment tools using both open source and in-house tools.

- Building expertise in all of your custom tools.

- A 6-20 week lead time to provision a build server.

- Paying for all of your hardware for 3-5 years, regardless of whether you will actually need it.

- Over-provisioning memory or CPU to make up for the fact that you can't get hardware fast enough.

- Getting paged in the middle of the night because the hardware is over-provisioned and something gets overwhelmed or a physical machine died.

- Dealing with the fact that an overworked systems engineer or developer is never making any component the best. And everything you touch will just passably work.

- Everyone will have their own opinions on how something should be done, and every decision will have long term consequences. Get ready for physical vs virtual debates till the heat death of the universe.


I think there might be a vast gap between what the article is talking about and what you're suggesting. Somebody self-hosting their project management app on a LAMP server at some random web hosting company is one thing. What you're talking about is something else entirely.

And yes, having a web hosting company manage your server isn't "real" self hosting, for all the reasons you described, but it's a far cry from dumping all your data with a big data-guzzling giant. Your data isn't calling home, isn't being used by a company, it's sitting on your server under your control, and only you manage it.

I think that's their main gripe.


There are a few tiers of hosting providers, ranging from "we rent you server space", "we rent you a physical server with internet access", to "we rent you a VM" or "we rent you services you may want to access".

The privacy angle of self-hosting generally ends as soon as you are renting a physical server with internet access. Someone else has access to the hard drives, the network traffic, the hardware, and via hardware remote access the OS/Data. At this point you would need to trust the legal bindings on what the hosting company can and can't do to your physical machine.

One really exciting angle on end2end encryption is that your provider may be technically incapable of any malpractice (for better or worse, I may wish my provider could have a sys eng bail me out every now and then).


I think the point is that most of this, other than buying extra hard drives, is solved by having a decent FOSS project.

For example, an OS that has a nextcloud-like suite of services, and a very easy to use GUI to enable a VPN / mesh network for all of your devices pretty much removes much of the concerns you mentioned regarding networking/firewalls/etc.


... so, basically, just how we ran an IT services department in the 1990s and early 2000s.

Except that:

Build servers took a day or so depending on the approval chain.

Hardware could be leased, or the capital expenditure written off over 3 years, plus it came with a 4-hr onsite service warranty (if you worked with the right partners), and it being a capital, rather than operational, cost had bottom-line benefits.

24/7 service coverage for major incidents was very doable if you planned correctly, plus you owned the kit and so could control how you brought in resources to support incident recovery, rather than waiting for your service provider to put the green dot back on their service status page.

//CSB The max total outage time for one corporate where I ran the IT dept was 12 minutes in 3 years, while we swapped out an HP battery-backed disk cache controller that took itself offline due to a confidence check failure.


aye - and it took time to setup all of those things, time to maintain the gear, delays for business/dev teams while the IT department made sure they knew how to run something stably.

> Build servers took a day or so depending on the approval chain.

This would only be true in a large shop with cold spares or virtualization. Server hardware generally has a 6-12 week lead time. The exception being if you are paying out the nose to a reseller who could do faster delivery.

Just imagine the time it took to setup Nagios or Zabbix for monitoring. In a small shop you are probably talking about at least 1-3 days of work + calendar time for hardware. Add to that some time for dealing with scale of metrics storage etc. depending on the shop.


You’re talking about running a tank battalion in WWII when we were talking about learning how to drive a car with manual transmission.


i started self-hosting a bunch of stuff last month: Pleroma (like Mastodon/Twitter), Matrix (chat), Gitea (like Github) and Jellyfin (like Plex, a media server). AFTER i set up the hardware/OS, these each took about 1-2 hours to setup, and it gets faster each time as i get more accustomed to the common parts (nginx, systemd, Lets Encrypt, and whatever containerization you use).

today i accidentally nuked everything by not flushing the disk before rebooting and then naively letting fsck try to ‘fix’ it (which just makes things worse since it unlinks every inode it thinks is wrong instead of helping you recover data). now i’m manually dumping blocks and re-linking them in, supplementing whatever’s not recoverable with a 3-day old backup. that’s probably gonna take an entire day to fix up.

after this i have to figure out a better backup solution, because it costs me $5 of API requests every time i rclone the system to Backblaze, making frequent backups too expensive.

after that, i have to figure out the email part of things. AFAICT it’s pretty much impossible to 100% self-host email because of blacklisting. you have to at least proxy it through a VPS, or something.

and in between that i may spin up a DNS server to overcome the part where it takes 60min for any new service to be accessible because of the negative caching common in DNS.

no, this stuff is just way too involved for anyone who hasn’t already spent a decade in the CLI. i’m only doing this because i’m a nerd with time on his hands between jobs. self-hosting isn’t gonna catch on this decade. but maybe we can federate, so that you just need to have one friend who cares about this stuff manage the infra and provide it to their peers as a social good.

also, i don’t think privacy is the right angle for promoting self-hosting. a good deal of the things that people self-host have a public-facing component (websites; public chatrooms; etc). if privacy is what you seek, then you should strive to live life offline. the larger differentiator for self-hosting is control.


After I've built a server for a purpose, the one thing I want most is a script that does it again. Spending another identical hour on a similar server just makes me sad.


That's precisely why i started working on https://codeberg.org/southerntofu/ansible-selfhosted

Its abstractions are still a bit shaky and a lot can be improved, so it's far from ready for the general public, but i still consider it a great step forward because for the supported configurations, all i have to do to set up a new server is:

- edit config.yml

- run roles/deploy.sh

- enjoy

I'm happy to answer any questions on why (politics) and how (technics) i'm building this, and i'd be more than thrilled to receive feedback and contributions. In the past week i started working on a test suite so it's easier to contribute.


> today i accidentally nuked everything by not flushing the disk before rebooting

What do you mean? Did you interrupt the reboot process (eg. repetitive ^C)? Otherwise the OS should flush everything properly.

> after this i have to figure out a better backup solution

If you have other friends doing selfhosting, giving them an HDD to put in their server so you can rsync your data to it is a good solution. Keeping another local backup is also good. Doing both is even better.

> AFAICT it’s pretty much impossible to 100% self-host email because of blacklisting

It depends. It's almost impossible with some bad-faith providers (google/microsoft) otherwise everything works well. And even with those bad-faith providers, residential IPs usually have better reputation (if only because your ISP likely blocks outgoing TCP 25 by default and you have to ask them to unfilter it) than VPS IPs which all have a long history of abuse.
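A quick way to check whether your ISP filters outbound 25 is to try a connection to any public MX from the residential line; a sketch, using one of Google's MX hosts as the test target:

  # if this hangs or times out, outbound TCP 25 is probably blocked upstream
  nc -vz -w 5 aspmx.l.google.com 25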

> and in between that i may spin up a DNS server to overcome the part where it takes 60min for any new service to be accessible

If you didn't previously query that domain name on your resolver, it will not be cached and will resolve quasi-instantly. The question is how long does your authoritative name server take to apply your changes to the config: if it takes any longer than 10s you should consider switching providers.
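While testing, you can also sidestep your resolver's cache entirely by querying the authoritative server directly (hostnames below are placeholders):

  # ask the zone's own NS, bypassing any negative caching in your resolver
  dig +short A newservice.example.com @ns1.your-dns-provider.example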

> but maybe we can federate, so that you just need to have one friend who cares about this stuff manage the infra and provide it to their peers as a social good.

Very good point! That's the entire premise behind hosting coops like disroot.org or various members of the libreho.st/chatons.org federations.


> What do you mean? Did you interrupt the reboot process (eg. repetitive ^C)? Otherwise the OS should flush everything properly.

here’s my best guess: in my setup i have a host running a qemu vm and most of the interesting stuff happens inside the vm. originally that vm image was just 8 GB, but then i got a HDD to dedicate to it. with VM powered off, i partitioned the HDD and then dd’d the VM image onto it. then i booted the VM via KVM passthrough of /dev/sdb…

it booted fine; i ran ‘df’ and noticed that i forgot to resize the fs to the HDD, so i ran resize2fs. 3 days later, i `shutdown` the VM and then `reboot`d the host. the host didn’t actually come back: the power light and activity lights were off. after 5 minutes of this i power-cycled at the wall socket.

host came back up. vm wouldn’t boot. ran fsck from the rescue shell. now it booted, but no services were operational. since i couldn’t login to the vm (ssh broken + password logins had long been deactivated), i shutdown the vm and mounted its fs on the host. ‘df’ showed that the host thought the fs was only 8 GB in capacity.

i don’t think it was outright disk corruption, because the poweroff wasn’t that messy (but i come from btrfs, which has handled like 20 power faults on me w/ zero issue: idk how solid EXT4 is against these things). my best guess is that somewhere along the way, the changes from resize2fs didn’t actually make it to disk, or were overridden with stale in-memory values. maybe when i updated the guest’s kernel some post-upgrade script did something to push the old fs size to disk somewhere. or maybe the host had the old 8 GB fs size cached and flushed that during shutdown/start. unfortunately i’m not sure i’ll ever know.
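fwiw, the belt-and-braces move after an online resize is to force a flush and then compare what the on-disk superblock says against the partition size. a rough sketch (device name is a placeholder):

  sync                                                  # force dirty pages out to disk
  dumpe2fs -h /dev/sdb1 | grep -iE 'block (count|size)' # fs size = block count x block size, per the on-disk superblock
  blockdev --getsize64 /dev/sdb1                        # raw partition size in bytes, for comparison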


For laypeople self-hosting is out of the question for now. I'd say the more immediate problem is that even for competent engineers this is a difficult task with all the artificial restrictions put in place in the name of security, anti-fraud, etc.


This. I think -- speaking for myself mostly -- folks move to cloud for the simplicity a web interface provides. If you like, there's usually a CLI that abstracts the compute and other things. Self hosting -- at least whenever I did it -- was always: start with a VM, install Linux, configure ports, configure security, install a webserver, deal with the security of that, manage conf files, deploy website, etc etc

Hosting a static page on GitHub Pages makes all that a ton easier, and it's free.

That's a trite example, sure. But when I was at a previous company that did almost everything on premises, I couldn't help but think: if we had an internal portal/system a la GCP's or Amazon's console, a way for devs to spin up resources and have it all managed and even be a bit programmatic (no, K8s doesn't solve all of this; it's its own bag of crazy), then we'd not need cloud much, since we'd not need the almost infinite scale that cloud offers.


The problem is certificates and WAN access, and the lack of mDNS on Android. There's basically no way to do anything that doesn't involve some manual setup, aside from developing a new purpose-built app and maintaining it in addition to the product, probably on two platforms.

If Mozilla still had FlyWeb things could be plug and play.

I have a set of proposals here to bring some of that back: https://github.com/WICG/proposals/issues/43

And some are considering the Tox protocol, but in general, we have not solved the most basic issue of self hosting. How do I connect to my device in a way that just works, LAN or WAN, without manually setting up the client or registering for a service?


The only model I've ever seen this work is the Mac/Windows model. You provide a standard installer for the server (or even distribute it via the app store). The user launches it through the standard graphical app launch model (Finder or Start Menu), the server displays a suitably user-friendly GUI configuration panel, and then minimises itself to the notifications tray.

The linux model of "first learn how to use a package manager, edit configuration files by hand, and configure init scripts" is never going to be something that I can comfortably explain to computer users like my parents...


Most consumer platforms don't have functional automatic backups, so this is pie in the sky at the moment. Even for a professional, self hosting is kind of time consuming.


You can still self-host and use external resources to manage network and system security. You would keep full control over the machine this way. Having professionals sensibly partitioning different resources in respective subnets is still one of the most valuable defense mechanisms against many threats.


Quite surprised at seeing CasaOS mentioned so often here. It's quite a young project, and as best I can tell it was sort of a side project of the team while they were sitting on their hands trying to ship ZimaBoard kickstarter hardware during the chip shortage.

Good for them that it is seeing traction :)


Huh, ZimaBoard [0] (Hardware SBC project by the CasaOS people) looks super cool. Sadly still on pre-order, but that is almost exactly what I want.

[0]: https://www.zimaboard.com/


> Self-hosting is something that we should be constantly iterating on making easier

I'm pretty sure that's exactly what we did and ended up where we are today. Any sufficiently-advanced self-hosting is indistinguishable from AWS?

I'm not sure how joking I am.


You don't really have control of any of the hardware on AWS, and therefore they can track everything you do. (If they say they don't, you just have to trust them - there's never a real way to verify) If you're fine with that, then OK - but the public has been shown time and again this doesn't always end happily. So if they leak all of your life's passwords in plaintext, you have to be ok with that.

Which is exactly why OP pointed out this is where we should be headed if we care about privacy.


tailscale is strong for network-centric use cases.

openziti is strong for app-centric use cases - put the (programmable, zero trust) network into your self-hosted app (via SDKs for various languages), rather than putting the app on the network.

https://openziti.github.io/ (quick starts) https://github.com/openziti

disclosure: founder of company selling saas on top of openziti


That’s pretty cool. So I could use Ziti to write a client-server app where the server is only accessible/visible to clients running Ziti with appropriate keys?


yep, literally shut down all inbound firewall ports and link listeners. keys are bootstrapped and you can add your own CA if desired (RFC 7030).

https://ziti.dev/blog/bootstrapping-trust-part-5-bootstrappi...


It's pretty easy to write a unit file for a service and install/use it. A layperson could easily follow a guide with just a few of the most common cases.
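For reference, a minimal unit for a long-running service really is only a handful of lines. A generic sketch (the name, binary and paths are made up):

  # /etc/systemd/system/myapp.service
  [Unit]
  Description=My self-hosted app
  After=network-online.target
  Wants=network-online.target
  [Service]
  User=myapp
  ExecStart=/usr/local/bin/myapp --config /etc/myapp/config.toml
  Restart=on-failure
  [Install]
  WantedBy=multi-user.target

Then `systemctl daemon-reload && systemctl enable --now myapp`, and `journalctl -u myapp` for logs.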


As a layperson, no I can't. You would be surprised how many of us can't easily follow and understand guides for self-hosting. It took me a while to set things up since the documentation expects me to know everything. Which is fair, because self-hosting is technical and requires some knowledge to set up properly. The challenge is: where can I learn this knowledge in a condensed manner? There is probably something out there, but it isn't centralized enough to help laypeople run things. Let alone trying to learn how to keep it all secured.


These may seem like very easy tasks for you.

If you truly believe that a layperson can "easily follow" a technical guide or that a guide with the most common cases is sufficient to maintain a webserver... then the only thing I can say is that your experience with both laypeople and webservers is worlds apart from mine.


This thread is sad. You don't need to create a unit file to install Fortnite. Why set your sights so low?


There was a time when everyone and their brother were self hosting: Napster, Kazaa, Hotline, etc. Why has this trend stalled for 20 years?


What about the hardware side of this? All this talk about software...


Can CasaOS run compose stacks or simply containers?


can do both, iirc.


First off, it's systemd, not systemD. And it's not complicated; in fact it's simpler than writing shell scripts. Portainer is a bad docker clone, and more often than not it doesn't work with docker scripts. But containerization is the wrong way anyway. If you can't set up a certain service on your machine you should not expose that host to the internet. That might sound arrogant, but only to the inexperienced mind. Packaging everything in a docker container doesn't remove the burden of responsibility of operating an internet-exposed host. Idk what you're even trying to say. It feels like you're just throwing buzzwords out there without knowing what they mean or what they're for. If you want a VPN, wireguard is your choice. If you need a gatekeeper/firewall, opnsense or openwrt.


This is why I'm building Timelinize [1]. It's a follow-up to my open source Timeliner project [2], which can download all your digital life onto your own computer locally and project it all onto a single timeline, across all data sources (text messages, social media sites, photos, location history, and more).

It's a little different from "self hosting" but it does have a similar effect of bringing all your data home and putting it in your control. We have to start somewhere, might as well start with bringing in all the data we've put out there. (It's not a replacement for self-hosted media servers, for example.)

The backend and underlying processing engine is all functional and working very well; now I'm just getting the UI put together, so I hope to have something to share later this year.

[1]: https://twitter.com/timelinize (website coming eventually)

[2]: https://github.com/mholt/timeliner


Probably out of scope for this project, but if this included browser history, and essentially all the webpages you viewed, it would hold not just data you created (which is currently the case) but data you consumed, all on your computer.

Anyway, I love this idea. Storing your data as a timeline: such a simple thing, yet I never thought of it. Please submit a Show HN when you are ready.

Edit: I was wondering why the username looked so familiar; turns out it's the author of Caddy Server :)


I could definitely add browser history. That should be a pretty easy one.

And thanks! I'll show it off as soon as I can.


Sounds nice.

Do you know of some tool to have all your feeds in one place? I hate having to use Instagram, but a few friends post nice things. Like a timeline, but with your own feed containing only the things I want to see from the sources I want.

Like a daily "You missed these posts, images and ..."


I vividly remember talking to Eric Freeman at a conference (JavaOne?) about his LifeStream notion. I've wanted it ever since. (Freeman coauthored a book about JavaSpaces and IIRC had a booth demonstrating an implementation. https://books.org/books/javaspaces-principles-patterns-and-p... Another terrific idea ahead of its time.)

For instance, I'll remember where I was while listening to a podcast or song. But I won't remember which episode or the artist. I'd like to cross reference and search my listening history with my location tracking. Should be easy, right?

I've dabbled a bit with journaling, habit tracking, etc. I've created Shortcuts on my iPhone for quickly adding entries. When I circle back, I intend to add my location to each entry. Resulting in something like "20220324T120000 LAT/LONG I just saw a pair of bald eagles at the beach".

Another to do item is always tracking my location. One of the apps I've got installed is OwnTracks, but I haven't config'd the server stuff.

Anyway, I'll definitely be trying your timeliner. Thanks for sharing.


Yeah, so that's kinda the idea. It's like an automatic life log from all your digital data. It can import Google Location History, for example, so you can see where you were at what time of day. Location History is kinda creepy with how accurate it is and how much data it contains (including mode of transport and device motion along with confidence levels!). So if we add a way to import your listening history, it will automatically join up with your location history and you'll have what you need.


This is nice. I've always wanted to build something like this, but one that would integrate life choices and show you where one's timeline "dead-ends" because you made the other choice.

The idea is inspired by the movie Bandersnatch. There's something so powerful about reflection with clarity.


That's fascinating.

I do have plans to add context to one's timeline; for example, to optionally overlay the weather on that day, or major world or local events. That might be helpful in understanding your own timeline entries in hindsight.

So the life choices thing is interesting. You're talking about divergence points, or nexus events (there are different terms in different literature/cinema), and charting your world lines, basically. (Some Loki and Steins;Gate would be recommended watching.) I am not sure how to do that, but would like to figure that out...


Not the first project of this kind, I see, but something we need more of. A problem all those projects have is their lack of modularity, and thus their inability to integrate with other projects. For example, why are your sources built in? Can you call external sources so that any user can make use of custom sources?

IMHO there are three parts to those tools: fetching data, storing data, and letting people work with it. But most projects I've seen so far do all of this together, instead of having individual parts, which would allow other people to build optimized parts for themselves.


It's easy to add data sources, but they still have to be programmed: it can get data from APIs (which can be scheduled/automated) or file imports.

I'm not really sure what you mean by "optimized parts" but I'm happy to get more feedback on that!

(As soon as Timelinize has a private beta I'll probably create a forum for more in-depth discussions like this.)


> It's easy to add data sources, but they still have to be programmed:

And do they need to be programmed in go and compiled into the app? Or can one write a shell-script or python-program and collect from their output?

> I'm not really sure what you mean by "optimized parts"

If you modularize a piece of software to the point that the parts work independently of each other, then you are able to replace them on demand. For example, you can use a different UI which is better for your workflow, or use different sources which are only relevant for your personal circumstances, etc.

We have this with email, where mail-servers and mail-clients are independent parts of the ecosystem. Where open protocols like IMAP allow for scripts and external programs to work side by side with your client to attach missing functionality.

This level of modularization is missing in most of those aggregator tools I've seen so far, and I think it's doing more harm than good for the users. For example, what if your app saved its entries via IMAP, and your UI loaded them via IMAP to present them in an optimized interface? It would mean your timeline could also be filled by any other mail-capable source, while the data in your backend could also be customized by any IMAP-capable tool, like filters, etc.


This sounds very cool, please submit a "Show HN" once the basics are working!


Oh I will, for sure! I will need a lot of feedback.


Have you considered using something like hypercore[1] for the timeline sharing? Or maybe you don't plan on making timelines shareable?

[1]: https://twitter.com/HypercoreProto


It's a possibility! Haven't got there yet.


Wow; I love this idea. Thanks for writing it! Also, I love how pluggable it is!


This looks very interesting. I see that Facebook is one of the data sources. Would you know if it’s possible to get posts and comments from Facebook groups (even if it’s just the ones by the user)?


Not sure actually... I'd have to look at what the Facebook API makes possible these days. It's been years since I've looked at the Facebook data source (I think it's pretty basic right now, just gets your own posts on your own wall/profile.)


Love the project. Got an email list? I'm not a frequent Hacker News user.


very curious if you've come across prior state of the art here e.g. singly/the locker project... this stuff is annoyingly fiddly and standardizing it all seemed like tilting at windmills


Self hosting is hard. You need to take care of security, backups, software updates, software installation and so on.

Even on something like a QNAP (which can be compared to managed hosting) this can be hard. Flip the wrong switch and you expose something to the world. Missed a security update: your device is now vulnerable.

While I host a lot of things myself I can understand self hosting is not for everyone.


I used to love running my own servers with all the services etc. I’d manually write beautiful bash scripts to keep it all nice and easy to rebuild on the fly. My first job had 10 Ubuntu servers (on site) and I was the only guy who used Linux at home and had experience with sql.

I have never volunteered to maintain servers since, it was horrible and everything was always my fault (it kinda was, I was a hobbyist at best with no real production Linux experience.)

I do still end up as the dev ops/infra guy at every place I’ve worked but at this point I’m probably one of those stubborn senior guys who wouldn’t like the way the juniors went about it.


Yeah, I tried self hosting everything. Getting it actually running is the easiest part. It's the maintenance, backups, and security that are 90% of the job. You can get it working pretty easily and forget about it, and it will run for a while until something goes wrong or it needs to be upgraded.

Now I'd rather leave hosting to someone dedicated to it, who has internalized the latest state of things for all the relevant bits of software and is constantly keeping this knowledge in their brain. Set-and-forget self hosting can't work in the current environment, where things require constant security updates and complex security hardening.


For home hosting the trick is KISS.

I used to backup to external drives. Now I use bare ones since finding big externals got difficult.

I use (and probably abuse) docker compose. K8s is great but compose is easier.

I use a single makefile. Kinda ugly but it's fine.

Bunch of friends and family use my "services". They usually chip in for hard drives and stuff.

I have a few central points of failure but it keeps things easy. My uptime still beats most big clouds - though I have it easier.

I accidentally took down my server for a few days from a botched hardware install. It's a bit funny, because now we realize how critical the home server has become to us... on the other hand, I've already got the spouse's blessing to build a backup standby server.


I've recently started running unraid at home on an old desktop PC and it's really nice. I've also migrated my unifi controller, plex server and pihole to it and it's very easy. Way nicer than the previous setup where I had random dedicated devices each needing their own type of maintenance (unifi controller on my gaming pc needed me to download/install updates manually, plex server hardly received any updates running on old windows laptop and I was always worried about breaking it, and I almost never looked at the pihole running on a rpi).

Now I have a single dashboard and can upgrade each container with a single click, and everything stays on the happy path.


Sounds like you might've had an unusually bad experience. Might've also been the distro; I don't like Ubuntu much myself. :P

Maintaining inherited environments is also much more painful than ones you get to design from the ground up. I work with varied environments, and one with ~250 RHEL / CentOS machines has approximately the same level of maintenance burden as another with a dozen or so Ubuntus because the first environment has had configuration management from the beginning and the second is a complete mess that I've slowly tried to reverse-engineer and clean up.

When your change management works, maintaining a dozen servers isn't all that different from maintaining a thousand or more; and the need for change management and automation doesn't really go anywhere even when you don't self-host things.


What do you suggest as a maintainable distro?


I like RHEL and derivatives more for servers myself. It's probably just preference, but I find that RHEL-like distros step on my toes less often. In particular, I don't like debconf at all, and Ubuntu pushing snaps everywhere also leaves a bad taste in my mouth.


I'd love to see a blog post that says, this is how to setup X (I dunno.. mediawiki, owncloud, whatever).. and then go fully in-depth into _everything_ surrounding it.. security, backups, logging, alerting, monitoring, backup testing/restoration etc.. a blog post that really covers everything for a well-protected 21st century hosted application that won't leave the owner in tears after a year!

There's honestly so many posts that make it look so easy, but without everything else that would normally make it a job position in a company :)


It should start with how to make your system upgradeable too. I have a server that started on Ubuntu 16 and made a helluva mess upgrading to 18. Due to PHP changes I've had to use Ondrej's packages for later PHP... but that will break on a (very overdue) upgrade to 20...

All these script kiddie tutorials are terrible at showing how to maintain a server for years.


This is where docker really shines. Unless you’re a php developer or have a lot of experience with it, gluing it all together is best left to some clever person maintaining an upstream docker image.


Docker is not a good solution. Many security-focused systems (important for self-hosting) are on FreeBSD, and FreeBSD doesn't allow Docker because of Docker's major security vulnerabilities.

Docker is great for getting toy projects to work somewhere as a last resort, if the dependencies are strange and you need a convoluted (read: badly thrown together) environment to set up the app.

A well made application should not need docker to run.


What do you mean? The world is running on containers.


Docker and I are not friends. The quickest way I found to fill up my limited VPS hard drive was to install Docker; all the workarounds to limit it failed. Then there's the whole lack of concrete control over iptables, where a tiny mistake can open you up to all sorts of horrors. So it's great that Docker works for many, but I get the exact opposite of warm and fuzzy from it.
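(For anyone hitting the same disk-filling issue: the usual mitigations are pruning unused images and capping the json-file log size. Treat this as a rough sketch, not a guarantee it will be enough:)

  # reclaim space from unused images, stopped containers and dangling layers
  docker system prune -a
  # and cap per-container logs in /etc/docker/daemon.json (restart the docker daemon after):
  #   { "log-driver": "json-file", "log-opts": { "max-size": "10m", "max-file": "3" } }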


Perhaps Nix could help? It's great for configuration and reproducible builds without the overhead of containers (which in practice aren't often reproducible).


I think the hard part is that would be largely dependent on specific implementation, which itself is very opinionated. I could write a post on how I run, maintain, and secure Docker Container X on Ubuntu Y using vSphere with Synology and get 100 comments on why CentOS is better and I'm wasting time/money with vSphere over Proxmox, etc. Cloud doesn't have quite this problem. Once you've chosen a cloud provider, you have significantly fewer options in each category, minimizing this option-overload.


Write your howto on your private blog and disable comments. Problem solved. You can thank me later :-)


These are called instruction manuals and no one likes to read them.


I really hate the part where they say "But this is outside of the scope of this manual."


I am certain you have spent the time to ask everyone if they indeed do not like to read these, but I disagree.


> You need to take care of security

Easiest solution is to just host stuff on a local network without access to the wider internet. E.g. running on an old laptop/raspberry pi/server in your basement.

Sure, that means you can no longer access your self-hosted stuff when you're out of the house, but the tradeoff is peace of mind about your data leaking or worse.


> Sure, that means you can no longer access your self-hosted stuff when you're out of the house, but the tradeoff is peace of mind about your data leaking or worse.

Lots of things I'd consider self-hosting are functionally useless if I can't access them from my phone while out and about.

I could put my phone on a VPN, but that's just another layer of complexity to add to the self-hosting process.


I do a split approach -- Most services are available internally only, some are reverse proxied out. It used to be caddy2, but after a recent issue and switching to TrueNAS, I just use Traefik with k8s Ingresses and only set it on the few containers I would like accessible.


Tailscale solves the “a vpn is annoying to setup” problem pretty nicely.


It does, but I found it to drain a lot of battery on mobile (iOS). I'd say that for a simple setup you're likely to be accessing your services via their LAN IPs or their Tailscale IPs, so if you have these bookmarked etc. on your phone then it means you either:

1) Have separate bookmarks for accessing services via the LAN or the VPN, and need to remember to turn the VPN on and off when you need to access things out of the house; or

2) Always access services via the Tailscale IP, which means you need to be connected to the VPN on your phone even to access them from your LAN, which in turn means either toggling the VPN on and off even when at home or leaving it connected all the time and letting it drain 20% of your battery every day.

It's a great service but I didn't find it to make accessing self-hosted services from out of the home on a phone to be as nice as using them at home, or using a managed service.


You could also configure DNS via Tailscale so that the same hostname resolves to different IPs depending on whether you're connected to the VPN.


Tailscale makes accessing a Raspberry Pi in your basement from outside of the house genuinely easy, including from mobile devices.

I think Tailscale opens up all kinds of new opportunities for self-hosting.


You should probably use headscale instead if you care about self hosting.

If you don't trust Google drive with your passwords, why would you trust a company's server that manages access to all of your devices?


Never heard of headscale before. Doesn't it require the control server to be accessible publicly?


Yes, that's the trade-off: let someone else manage the control server, or manage it yourself - i.e., not self-hosted vs. self-hosted.


Setting up a VPN is pretty easy these days. If you don't want to run it on your router, you can look at something like Tailscale for remote access.


*headscale If you care about privacy or self hosting, use headscale instead.

You have no idea what tailscale is really doing.


>You have no idea what tailscale is really doing.

please elaborate...


What I mean is just the general statement that can be applied to any company or server. Even if they release all of their code as free and open source (not entirely true, as I understand it - the public servers that handle connecting your devices are not), there is no guarantee that they are actually using that specific, unmodified code on their servers. They could add or remove whatever they like from the software before deploying it on their servers without you knowing. You have to trust whatever they may write on their blogs. Some people may not care, but the winds these days are shifting away from trusting companies with their private data.


That's not really a solution if you want to self-host mail, or a blog; those services only work if the wider internet can see you.


That helps for external threats breaking into buggy network services, but it doesn't help for compromised apps/images/dependencies exfiltrating your secrets.


A compromised app on a local network has no one to phone home to.


If it's an air-gapped local network, then sure, but how useful is that? Are you disconnecting your phone/laptop from the internet when you access the air-gapped network, or do you use two network interfaces on every device?

I assumed the GP was talking about a typical home "local network", one behind a NAT - so no incoming traffic, but usually, it allows any outgoing traffic.


How about add a remote apple host. Not for the world but just you?


"Flip the wrong switch and you expose something to the world."

One strategy for dealing with accidental misconfigurations is to employ a "network slug"[1]:

"A Network Slug, or "Slug", is a transparent layer 2 firewall running on a device with only two interfaces. ... The purpose of a Slug is to reinforce a security policy or to block uninentional leaks of information."

[1] https://john.kozubik.com/pub/NetworkSlug/tip.html


I have never heard this idea described in text before. However, I have made firewalls this way for decades. They were typically for stuff that ran in a datacenter, so it would be a 1U server with three NICs.

I would really like to make such devices for home or office use. What would be a good device to use for this? Unfortunately, Raspberry Pis do not come with 2 or 3 NICs. Any recommended alternatives?


I would have a look at the openwrt project’s database of supported devices. You can filter for devices with 3 nics (though not sure it supports filtering for “3 or more”).

https://openwrt.org/toh/views/toh_available_16128


use VMs. qemu/kvm. the Tor-based Whonix OS takes the approach of one VM running a Tor proxy and another VM running your application software. the latter VM only has access to that proxy, and no other network interface. it’s effectively the same approach as i understand a slug to be, but with the hardware virtualized instead of physical (of course you don’t have to use Tor; you can define whatever interface you want: a VPN, a firewall, etc).


I am using these:

https://www.seeedstudio.com/Rapberry-Pi-CM4-Dual-GbE-Carrier...

... to make firewalls and bridges with rpi cm4 ...


Got one of those. It is hard. Very hard. Absolutely freakin’ hard to make a bump-in-the-wire dynamic 5-tuple blocking “hub”.

It also does “waterfall” egress packet delaying.


I'm not sure I understand what you're describing ...

A slug should not need to be dynamic nor should it be complicated in any way ... in fact, it is one of the simpler systems I have ever deployed ...


Does it do Suricata, Zeek, Snort, Transparent Squid (with valid signed CA cert), and a furtive SSH port in which to monitor and API to block ports?


I think all those are anti-features on a network slug. As I understand it, the device is intentionally simple because it is there to ensure some misconfiguration cannot expose some port that should not be exposed.

I have implemented firewalls similar to this in the past. They typically had three network interfaces. Two of them were configured as bridges and then I use ebtables/iptables to filter traffic flowing through. These two interfaces would have no IP address and would not be visible on a traceroute, etc.

The third interface would only be connected to a separate admin network. Or it might not even be plugged in. In the latter case, the admin needing to change anything on the device would have to be physically present and bring a "crossover" ethernet cable and plug their laptop directly into the third NIC of the firewall. From there, they would be able to ssh into the firewall and change config.
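For the curious, the bridge-plus-ebtables core of that kind of setup is only a few commands. A rough sketch of a minimal "TCP 22 only" slug (interface names are placeholders, and this is not the exact rule set described above):

  # enslave both NICs to an IP-less bridge
  ip link add br0 type bridge
  ip link set eth0 master br0
  ip link set eth1 master br0
  ip link set br0 up
  # default-deny everything crossing the bridge...
  ebtables -P FORWARD DROP
  # ...except ARP (so hosts can still resolve each other) and SSH in either direction
  ebtables -A FORWARD -p ARP -j ACCEPT
  ebtables -A FORWARD -p IPv4 --ip-protocol tcp --ip-destination-port 22 -j ACCEPT
  ebtables -A FORWARD -p IPv4 --ip-protocol tcp --ip-source-port 22 -j ACCEPT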


A network slug does not have an IP address. You cannot connect to it over the network. I'm not sure you understand what the device is and what it does.

Let me give you an example - I have a "port 22 slug" and what it does is block all traffic of all kinds except for TCP22. That's it. It does nothing else and it does it transparently without having an IP address of its own. If I wanted to reconfigure it, I would connect with a serial console.

Make sense ?


Yep. That’s why a lone but shadow port is taken from the high-end of ports … just for SSH (on the inside). Two interfaces. No bridge. Raw Netdev.

Almost like an overglorified but managed hub.

If you like your MAC, you get to keep your MAC.


> Self hosting is hard. You need to take care of security, backups, software updates, software installation and so on.

I'm pretty sure we all used to do that and it was mostly fine.

I get that the mainstream computer user has been lost to techno-infantilism. But why should we be?


For me the issue is that I now have (let me count) 15 different devices in my household with unique configuration needs that it’s up to me to manage. I could handle it when it was 1, 2, 3. Now it’s just too much.

I recognize that this embarrassment of riches is in part my own fault. But this is my answer to your “why”


As someone who used to have a server in my dorm room but switched to outsourcing it I stopped because the list of technologies I had to keep track of kept monotonically growing and I had no interest in making it my day job.

If it becomes simple again I would gladly self-host.


> Even on something like a QNAP (which can be compared to managed hosting) this can be hard. Flip the wrong switch and you expose something to the world. Missed a security update: your device is now vulnerable.

It doesn't even require actively flipping switches, but can be from not knowing a vulnerable feature was enabled by default. My QNAP got hit with ransomware because of a vulnerability in the cloud access software that I wasn't even using. I've since locked down all non-local traffic.


Wanted to reply saying the same thing. I didn't really muck with the settings on my QNAP NAS and then checked into my files one day and everything was encrypted with some txt files telling me to send BTC to some address. I just formatted the disks, lamented not backing some stuff up, and moved on.

I'd say the point being: I'm a software engineer who knows better about these sorts of things and still got caught with my pants down. You have to be very judicious with respect to security. You can't just plug and play and say "I'm too busy to worry about that."

Another thing I'll add is the amount of software tools they have on these NAS machines strikes me as 1) very impressive for a company their size and 2) a huge surface area rife for being hacked. When it happened I wasn't surprised at all.

I've since stopped using it because at the end of the day I'd rather pay Dropbox to have peace of mind.


I tried it, but there are so many traps you can fall into, like the security settings you mentioned. When I had my server online back then, it was hacked 1 week later :D


I hear a lot of stories like this. I've been self-hosting for a few years out of my home. I have a symmetrical gigabit fiber connection. My IP changes very frequently (DDNS and a low TTL solves that problem for my use cases).

_anyway_

I haven't been hacked.. yet. /me knocks on wood

The precautions I take are basic:

  - Use unique and secure credentials on each service I expose.
  - I only expose ports 80 and 443 to the public. 80 HTTP redirects to HTTPS/443
  - I keep my software updated (docker-compose pull)
  - Nightly backups to cloud storage and local disk
  - I "airgap" my home network from my hosting network. There is no shared hardware between them including firewalss/routers, switches, etc.
I figure cloud services and SaaS get hacked anyway. I can't enumerate the breaches my data has been a part of. If my self-hosted stuff gets hacked at least I can do the forensics and actually see what happened and what was accessed. With a 3rd party all I can hope for is what their PR department lets out.
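(Concretely, the "keep my software updated" and "nightly backups" lines can be as boring as a couple of cron entries; a rough sketch with placeholder paths and remote names:)

  # nightly: pull newer images, restart the stack, drop superseded layers
  0 4 * * * cd /srv/stack && docker-compose pull && docker-compose up -d && docker image prune -f
  # nightly: sync the data directory to cloud storage (rclone remote name is a placeholder)
  30 4 * * * rclone sync /srv/stack/data remote:backups/stack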


The first hack I noticed was that someone had set a password on my redis server, because the default was no password and I had accidentally exposed it to the wider internet. It had been exposed for 6 months before this happened. Who knows what else was accessed without me knowing.


It's pretty silly how many services are public by default when ideally they should only listen on a unix domain socket (or nothing) until you configure something else.
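Redis is a good example of how little it takes; a minimal sketch of the relevant redis.conf lines (socket path is arbitrary):

  # /etc/redis/redis.conf
  bind 127.0.0.1 ::1                  # loopback only, never 0.0.0.0
  protected-mode yes
  unixsocket /run/redis/redis.sock    # let local apps talk over the socket instead
  unixsocketperm 770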


I'm interested in how you set up your home and hosting networks without any shared hardware. I've been running my own websites from home for awhile on their own machines, but never considered they could be on a completely separate network all the way up to the modem.


My ISP provides me with PPPoE into my house. I have that Ethernet going into a small switch which both networks connect to via a firewall. Each network establishes its own PPPoE session and receives its own (dynamic) IP address.


IMO separate hardware for your self-hosted network puts you into a whole new class of hosting at "home."


Not necessarily. For my use case it’s one extra 4 port gigabit switch and a single pc that runs everything containerized including the NAS, firewall, and apps.


It has also gotten much easier. For instance running your own full blown email server with docker-mailcow. There's a great UI tool that helps to setup the required DNS records. I remember doing the lengthy postfix + dovecot + SASL + MySQL + Auth + this + that guides. No need for it anymore.


I agree but I think about it in the reverse way: the hosting is easy, what you get when you use another company's service is the maintenance. Just like every other option where we choose who will maintain something there are trade-offs. You can maintain your own car if you want, but it'll involve things! We all look at our lives and decide which is best for us for each thing.

Personally, I tend to self host the things whose maintenance I at least find satisfying, and hopefully enjoy. Otherwise I pay someone (through ads or my own money) to do it for me.


I'm amused by the implications here that 1) the outsourced alternatives are better than you are at keeping up with the 'hard stuff', and 2) that in an outsourced scenario you can't "flip the wrong switch and you expose something to the world". This thinking is why I can't tell you how many incident post-mortems I've done where I have to once again hear "...but, but, but...we outsourced this to them so this couldn't happen...".


Depends on whether you're referring to a SaaS provider or something more like a MSP.

I'd like to believe the engineers running Google Photos or iCloud are spending a lot more time on keeping my photos secure and available than I would be willing to put into a server running in my basement.

In the case of a business hiring an MSP to manage something complex like firewalls, Active Directory, server patching, then sure it's reasonable to assume that if they made a mistake, the impact would be equivalent to you making the mistake yourself.

It's possible you need to tell whomever you are reporting to for these post-mortems that they should be outsourcing to reputable service providers in order to free up time and man-hours, not necessarily just to save money. I suspect that is the real problem.


Thanks for the condescension about how my clients handle outsourcing. Not knowing anything about how they do it makes that sort of low intellect, zero content second guessing easy.

Big Hint: AWS has had something like 5 times the downtime my biggest clients on-prem datacenters have had this year. So...no, the FAANG engineers aren't doing better than my clients.


You can use a popular Linux dist and turn on automatic updates, and use Snap apps that update by themselves. But you still would not have control - apps could update with breaking changes. The only way to win is by choosing simple tools that are either considered "infrastructure", or simple to build and even patch yourself if needed.


> apps that update by themselves

Maybe I'm too old (experienced) and cynical, but I always read that as "apps that are going to brick themselves". No thanks.


This one has bitten me hard on servers and desktop computers, and lately on mobile too - the last area where I still had automatic updates enabled.

The problem is that one can't reasonably wait a few days on every update and look online for breaking changes, especially with mobile apps, which sometimes have a really unreasonable update frequency.

I still have not found a satisfactory solution for me personally.


Docker has taken much of the pain out of it though. And if things are kept on the local network, safety is largely a non-issue.

Drop-in replacements accessible from outside the LAN are admittedly a little harder and more at risk of mistakes.


To the extent permitted by the hosted service, you should still backup your data. If you manage to accidentally delete all of your hosted photos or if your account is compromised, I wouldn't rely on most services going to their backups to restore your data. Unless it's a site-wide issue, most places will say "that's too bad" and send you directions on how to protect your account.


> Self hosting is hard. You need to take care of security, backups, software updates, software installation and so on.

automation is not a thing? I'm pretty sure all cloud providers do it...


I have a home server. It runs on a 2013/14 HP Pavillion laptop. I am indeed a linux user. I currently have arch linux installed on it. I run a blog at [0] and use a dynamic dns provider.

I must say, it has not been easy at all. I learned a lot of things. I did not attend any school lectures to learn this. I am still in high school, but thanks to the beauty of the internet and a ton of effort I was able to get a static blog working, learn a lot about free software, and see how sometimes just using certain web servers is too complex.

The main web server is at [0]. It runs on my home internet connection. Thankfully they do allow port forwarding. But it's certainly a good exercise to understand how much we take for granted from giant (and sometimes small) corporations.

I've learned to write and express my feelings among lots of other technical knowledge.

references

[0] https://blog.trevcan.duckdns.org/

[1] https://trevcan.duckdns.org/


I love arch for a lot of things, but I've learned to stick to more "traditional" operating systems when it comes to servers. There's simply no auto-update functionality available in Arch, so you need to constantly remind yourself to update. I wish you the best of luck with your arch server endeavours; may they treat you better than they have treated me.

Ubuntu is pushing me away with their idiotic focus on snap every new release, but I'm not hosting anything on a server that I can't just run sudo dpkg-reconfigure unattended-upgrades on to auto-install security patches. I still need to reboot my servers from time to time, but that's rarely ever really necessary for security. It's a shame, because I think the "closer to the metal" approach of Arch works quite well for server setups.
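For reference, what that dpkg-reconfigure step ends up writing is just two lines of APT config (the stock Debian/Ubuntu mechanism, nothing exotic):

  # /etc/apt/apt.conf.d/20auto-upgrades
  APT::Periodic::Update-Package-Lists "1";
  APT::Periodic::Unattended-Upgrade "1";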


My utmost respect. I ran a Linux box at home while I was at university. It acted like a NAT/masquerading server. It was the time when DSL routers weren't a thing. At least not widely available.

But I never had the idea to use it to host internal or external stuff there.

Kudos to you.


My business just started self-hosting, but we severely underestimated the cost of air conditioning (installation), SFP adapters, enterprise Ethernet and electrical infrastructure, and ended up spending as much on peripheral stuff as on servers.

So far the whole operation has cost around $30k (small business). We stand to save $1-1.5k per month at the moment, less the cost of symmetrical fiber, about $500/mo.

Next year it will pay off but this year we definitely won’t break even on the servers.

How has this changed our culture? Well, what I have enjoyed about the whole process is that now we have a bunch of spare capacity, so we can spin up demo servers for SMB clients for even a few months and it “costs us nothing”. Then we can arrange payment later, when they fully appreciate the service. We don’t take any risks anymore with bills that need to be settled with cloud providers. This has changed how “generous” we are at the outset with our clients. It feels really healthy to be able to do this.


I'm glad you did. One of the things that scares me the most about the current state of the Internet is how centralized everything is in the "cloud", with a lot of Internet infrastructure depending on something like 5 providers. Self hosting is dirty, time consuming, expensive, and risky, but it's predictable.

The problem with XaaS is how little control we have. Not only can the pricing rules change any day, but if you're already dependent on the service, they could easily shut it down if it's not profitable enough, and with some luck you will only get a few months to migrate.


In the end I think it will boil down to a question of antifragility or resiliency vs. efficiency.

And there is actually imho no black and white answer. Take a current, very small freelance client of mine. I built them a homepage based on Kirby CMS. Hosted on uberspace.de. On the same shared server there is Matomo being hosted for basic stats.

Another client I was involved in went another way. They had their site built by a team of designers with webflow and are using Google Analytics.

In both cases the decision makes sense given the conditions and constraints of the respective projects.

Personally I would not have recommended using Webflow, but that was not my call to make. For GA I provided a pro/con evaluation and the client decided to go with GA (not my primary recommendation). I still think it was a sensible decision for his use case at the time.


What I don't get is if the problem was the risk of unexpected bills from a cloud provider, then why not just use simple virtual servers that you can rent for a fixed price from almost any hosting provider?

Gives you all that extra benefit you speak of (the extra compute headroom you can use for spinning up any demo server application you want), and saves you all the costs, risks, and headache of managing your own hardware.

Why the debate of cloud vs on prem? The logical sweet spot in the middle seems to be vps to me.


I mean, I see VPS as the worst of both worlds.

You're still managing a server, but now you're paying through the nose to do it.

The costs just simply don't work out nicely for anyone who can afford to buy at even small scales.

A 4 CPU, 8 GB RAM machine will pretty commonly run you $50 a month, with less than 200 GB of storage. You're paying $600/year for a machine that I can build (with better specs) for ~$400.

Not to mention that the machine you buy yourself will have a lifespan of 3 to 8 years.

So sure - once you factor in bandwidth, storage location, power - you're probably not going to save money in the first year. But by year 2 you are, and by year 4 you've saved a considerable sum.


My thoughts are the same as above, and also: if the app that you're hosting only runs on Windows, then Windows VPSes are really ridiculously expensive. Some providers charge around $30/mo for a key. You save a lot of money being able to buy/manage your own Windows keys.

ATM we're running Windows under KVM and the performance is great anyway. But if you have to run a lot of Windows boxes, it seems like the cost effective way is to buy a Windows Server DC license ... then all the guest VM licenses are free, and with the right server hardware you can probably cram like 20 VMs on a single box.


Dumb question. I’m building little social apps, and I’m hosting stuff in GCP or AWS.

I have a few old computers that are still kinda beefy (quad cores and a 6-core, 16+ Gb ram, some old but still working 3TB hdd). Would it be stupid to run some of my code and data on these? Not everything I have needs to be geographically close to users or something like that, such as a mass notifier service that runs every so often. For example I’ve got a few auxiliary services that I’m sure I could run just fine off these machines, but I chose the cloud because it’s what I’m used to from work.. I have a fiber connection at home and I’m anyway powering my laptops and other electronics 24/7..

Obviously not a bigger business like your use case but I’m just curious. Some services cost quite a bit and I feel like I have better hardware for some things (non database) than what I pay for.


You aren't really paying for computing power and storage - you are paying for power/network/facility/server redundancy and availability.

We host our own servers, and the bulk of the time is spent ensuring that the power doesn't go out, that there is a backup internet connection, that the HDs are on RAID, that the server has a backup server available (and the router, and the switch, and the NICs), that the management network is available, that the location has proper security monitoring, that the offsite backup happens nightly, and that we have an employee who knows how to do all of the above... All of that is time consuming and expensive.

...BUT, some people/some companies actually enjoy that sort of thing, in which case, it can work out financially - though often you're sacrificing one of the 9's in your 99.99% uptime/reliability.


In almost all cases - you won't actually get 99.99% uptime from your cloud provider anyways.

That's around an hour of downtime every year.

A much more reasonable estimate is 99.9% - or approx 9 hours, unless you're doing serious planning and spending to account for outages. Hell - the Dec 7th 2021 outage for AWS was nearly 10 hours alone.

On your own, 99% is probably a reasonable prediction (roughly 3.5 days), and I think people vastly over-estimate how much small amounts of downtime actually matter for lots of services.


I would recommend plugging the hardware into power measuring units before the wall sockets to give you an estimated running cost. Depending on where you are in the world electricity can be crazy expensive.

The cost of purchasing a lower-power SBC (x86, ARM) might be offset very quickly.

A RaspberryPi can handle a surprising amount, especially the 4/8GB models. The trick is using low resource software.


If you're already paying for business internet in the location you have, and your power costs are reasonable, and you don't mind possible downtime - I'd vote go for it.

Honestly - Most cloud services are exorbitantly over-priced. There's a reason AWS is ~15% of amazon's total revenue, and Azure now pulls in more revenue than any other MS product.

I throw old machines into a micro-k8s cluster. It costs me literally pennies on the dollar for much better computing hardware compared to a vps.


Well, to host anything serious, you’d have to have the right infra.

Like a good edge router, redundant power supplies, RAID arrays, a backup mechanism, and a very good upload connection (i.e. 500 Mbit). By the time you set up all that, you may as well be using new hardware to make the investment worth it.


I don’t need a reason why to self host, I need nice, clear, up-to-date tutorials on how to self host various services.

Self hosting should be easy enough for everyday people. Perhaps preconfigured servers that treat services just like apps. Once I have a server set up, I should be able to install (and uninstall) services in a single click. The OS can handle permissions and containers.


There are numerous projects which have attempted to create this.

https://sandstorm.io/ was the biggest one, but as far as I can tell it's largely unmaintained and most of the apps are outdated.

https://yunohost.org/ probably has the best "just works" experience, but I didn't like that it doesn't use any kind of containerization, which has caused them issues with shared libraries like PHP being difficult to update, as well as security concerns about one insecure app giving access to the whole server.

Ultimately the problem is just extremely difficult / high maintenance. And no one wants to pay for this work.


Sandstorm is in need of more coders to help maintain and update apps, but it's not abandoned. I use it, both personally and professionally.

It results in a better experience for end-users because applications are actually sandboxed. This (mercifully) means that any security issues in the out-of-date applications do not become a cause for panic. The downside is that packaging those applications is not trivial.


I always check for YunoHost in these self-hosting threads. Standing up YunoHost on my Raspberry Pi has been on my to-do list for a long time.

Unfortunately their default for my Raspberry Pi didn't "just work" on the Saturday evening I tried it. It was my first foray with that Raspberry Pi, so I installed a different server OS and spent the rest of the evening setting up a basic server for an HTML file and learning more about SSH. That was my experience as a non-IT engineer. I'd be interested in other people's experiences using YunoHost (or Sandstorm, for that matter).

Maybe the solution isn't to make an idiot-proof stack of tech. Maybe we need a central repo of tutorials and how-tos so that any idiot could self-host? Something better than the scattered YouTube videos and blogs I remember seeing when I googled after this.


I think we need more than tutorials. The actual software is just harder to host than it needs to be. What we need is some kind of standard where a tool can just automatically plug an app into the stack: start up the docker container, route nginx to it, set up certificates automatically, set up SSO in a standard way, back up the data in a standard way, etc.

These one-click install services like YunoHost achieve it through huge amounts of work per app and patches on top of upstream.

The problem is it's a monster of a job that requires upstream projects to be on board, and ultimately most of these tools are meant to be run by enterprises that have a dedicated ops person, so complexity and maintenance are less of an issue.


It's kinda sad that something like yunohost is still the best "just working"-solution we have at the moment. I tested it some weeks ago for a homelab-server, and holy crap was this a poor experience.

But the general problem with those projects is, they all are packaging their own apps, and most of those have a very low number available. Some of thise apps are outdated, or are not well tested. It's quite strange that we have dozens of linux-distributions, each with thousand of packages, yet we have no good solution that actually works well enough.

You either have solutions that hide everything in a tangle which is hard to understand, or you must do all the work yourself, or you live with the in-betweens which offer only a handful of apps. Maybe in another 20 years we'll have something workable on all levels...


> the best "just works" experience but I didn't like that it wasn't using any kind of containerization which has caused them issues with shared libraries like PHP being difficult to update. As well as security concerns about one insecure app giving access to the whole server.

IMO the reproducibility guarantees of Docker aren't enough and something like NixOS is needed.



I am with you. I think the future is something like Umbrel[1].

Because frankly, I would rather have the server running on a little device in my home than having to mess around with things like SSH and a VPS. An app that is running on a little computer in my house is both more understandable and easier for me to maintain.

[1]: https://getumbrel.com/


Umbrel looks really cool. Is it possible to deploy it without maintaining your own copy of the Bitcoin blockchain yet?


> How can I uninstall the Bitcoin and Lightning node?

> Currently, Umbrel installs a Bitcoin and Lightning node by default and it is not possible to remove it. Over the coming weeks, we’ll migrate the Bitcoin and Lightning node to the Umbrel App Store and your Umbrel would then start from a clean slate.

From the FAQs on their website.


YES!

I think the single most important thing about any software is "how do I install this". That's the first thing I search for on a GitHub repo.

And please, no outdated tutorials; that sucks so bad that I give up and don't use it.


Most things offer a docker image, so maybe learn how to work with those.


Simple things are easy :D But running Docker with multiple images that need to interact with one another and with the public, that's where it gets complex.

"Just docker run" is not always the answer

Look at Radarr:

https://radarr.video/#downloads-v3-docker

It's nice that they give tips about pitfalls, but there are more than these, and a step-by-step tutorial would also be good.

Oftentimes you have to Google and search through 10 Reddit posts. Things like DigitalOcean's tutorials work best.


Usually if it really is more complicated than one container, they'll have a docker-compose file.
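
Typically something along these lines: one command brings up the app and its database together (a generic sketch; the app image and variable names are placeholders):

    version: "3"
    services:
      app:
        image: example/app:latest          # placeholder for whatever you're self-hosting
        environment:
          - DB_HOST=db                     # the app reaches the database by service name
        ports:
          - "8080:8080"
        depends_on:
          - db
      db:
        image: postgres:14
        environment:
          - POSTGRES_PASSWORD=change-me
        volumes:
          - dbdata:/var/lib/postgresql/data
    volumes:
      dbdata:

docker-compose up -d and it runs, though the reverse proxy, TLS and backups discussed elsewhere in this thread are still on you.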


The projects I see on GitHub... most do not. Your results may vary.


Docker containers don't work for most self-hosted solutions, since most self-hosted OSes are security focused and use FreeBSD instead of Linux in order to get away from some security vulnerabilities. Docker is a pretty large security vulnerability. It's better than Windows, sure, but I think everyone would agree that shouldn't be the bar.


It's not as easy as "just run the docker image". Maybe it is if you just want to run a single one, but as soon as you want to run multiple it becomes a very complex job of configuring nginx and Let's Encrypt. It took me several hours to work out how to host Nextcloud and get the nginx config working.


Wow. Thanks for that insight. I went the middle ground and am using a shared hosting provider with great tutorials on how to get things running.

Nextcloud was 5 minutes (or 15 if one includes setting up ssh key in the web frontend for my account). WordPress was 3 minutes, Matomo also 5 including configuration.

I know that I am using a central service and am not self hosting. But for > 13 years this setup "just works".

I had a masquerading server at home once (back in the early 2000s) and updating, securing and just maintaining it was a hassle.

So to me the current setup is stable, mostly secure (and more secure than I could make it) and balances my needs for control and stability and ease of use quite well.


I'm guessing the "why" can eventually trigger experts to craft mechanisms and associated tutorials/docs that show the "how". That is, I think people should understand the compelling reasons why self-hosting could be important, and maybe then there will be much more incentive for experts to create more things, and make them easier, for lay people to adopt. For example, if tons more people start demanding that easier self-hosting options exist (both mechanisms AND how-to docs), then we would have many more entities, both commercial and private, incentivized to generate better/easier on-ramps for self hosting. But of course, you're right that ultimately the "how" to get to such a nirvana is essential too. That is my guess anyway.


Unraid can do something extremely similar to this. There's a plugin that provides a repository of Community Applications, which are essentially Docker configuration templates designed specifically for Unraid. You can search for, say, HomeAssistant and install it with just a few clicks.


Unraid is great, but be warned, it can spiral out of control. I started with unraid and a gazillion containers, now I have that, 2 mini PCs, and some networking equipment that I never thought I'd want or need. It's a lot of fun.


Something like Seedboxes. Piracy usually shows the way.


Lots of good points about the challenges of self-hosting throughout this thread, especially maintenance, security, and time-investment.

Here's my solution to all of them:

Invest in your common infra. Docker provides stable images configured primarily with env vars. I have a docker-compose host with logging/monitoring/alerting. All service-specific files are mounted from a NAS that has backups. All network access is closed by default, but exposed via a central login proxy (tailscale would be an easier alternative, but my Beyondcorp-esque system lets non-technical family members use my services easily from anywhere by tapping a yubikey).

That's 3 pieces of infra to maintain (docker host, NAS, login proxy) but I can check all the boxes for self-hosting 15+ services. O(n) services with O(1) infra.

I regularly spin up new services in under 10 minutes, while only having to touch 3 files that I am already familiar with (docker-compose.yml, dnsconfig.js, nginx.conf). I've run stable services for years on this stack. The only painful issues have been upgrades to the docker host, docker ipv6, and hardware issues.
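
For illustration, the per-service pattern described above boils down to appending something like this to docker-compose.yml (a sketch; the image name, NAS path and network are placeholders), plus one hostname in dnsconfig.js and one proxy block in nginx.conf:

    services:
      someservice:
        image: example/someservice:latest   # made-up image name
        environment:
          - TZ=Etc/UTC                      # configure via env vars where possible
        volumes:
          - /mnt/nas/someservice:/config    # service-specific files live on the NAS, which has backups
        networks:
          - internal                        # closed by default; reached only through the login proxy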

This is all on a recycled computer in the basement, with a cheap VPS as a stable public entrypoint.


But then you're adding even more parties to trust as it's often the case that Docker images are not provided by the same people that are maintaining the project.


Fair point, but I haven't hit it in practice. Tons of services are embracing docker as a first-class output. I just checked and I run exactly 2 images that are from a third party.


As far as I understand, 'below' the application layer there is usually a base image (like Alpine) in Docker? Do these first parties maintain those as well? If not, the trust chain just got longer.

I would call myself at least somewhat technically capable. But I actually never grasped docker beyond the 'I can pack an image and deploy it to AWS' stage so that I can access an internal tool I built at work over the internet.

I was not really understanding what I was doing and was more or less blindly following some tutorials on the net.

When building and deploying things to the shared hosting environment I use privately I have a better (albeit far from perfect) understanding of what I am doing while I know that I am trusting the underlying infrastructure and the people behind that.


How'd you go getting Docker + IPv6 working? I spent hours trying to get Docker containers to get native IPv6 addresses and eventually gave up because it was too painful.


I feel for you. I also wasted a lot of very painful hours trying to get it to work. I even had it working for a while before a docker update broke it -- turns out docker-compose's ipv6 support that many people relied on for years was a "bug" that they "fixed".

Ultimately I also gave up and now have a combination of port forwarding, nat64, and 10+ socat proxies in my docker-compose file. (Specifically, intranet->container and container->intranet are ipv6; but container->container is still ipv4)

More generally, I now try to keep my docker host as stock as possible. Whenever I'm reaching for daemon.json I just catch myself, take a step back, and say "what's the stupid but easy way to get this working".


Damn, that sucks. How painful.

Honestly tempted to ditch docker-compose in favour of just a bunch of LXC/LXD containers. Sure, I mightn't have all the nice networking, but each container getting an internet-routable IPv6 address is just damn nice.


Why? All you have to do is add a prefix to both the docker daemon and individual networks configured by compose. I do it and it's painless, never hit a bug. Just ensure the prefixes are at least /112.
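
For anyone retrying this, the compose side of that looks roughly like the following (a sketch: the ULA prefix is only an example, the Docker daemon itself also needs IPv6 enabled, and enable_ipv6 support has varied between compose file versions):

    networks:
      default:
        enable_ipv6: true
        ipam:
          config:
            - subnet: fd00:0:0:1::/112   # any prefix of at least /112, per the advice above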


Docker will not solve the problem. It's just another layer in the pile of garbage. It's just another confusing system that the user must learn and become an expert in to even get started. Totally the wrong solution.

You want something that just works without the user having to know anything.


If you really want mainstream adoption of self hosting then you need to stop calling it self hosting and rebrand to "personal cloud". The ease of use of cloud software includes zero install, zero management and consumption based pricing. Desktop and mobile had hardware packaged with software and a simple install mechanism with ease of use as a staple for mainstream users.

Self hosting has zero standardisation around hardware, software, install mechanisms. It's a Dev led movement that has everything to do with control and ownership over ease of use. You want mainstream adoption of self hosting. Rebrand it, standardise it, make it easy for non devs.


That's what Western Digital does with their "My Cloud" product line and honestly it makes me cringe.


That's because it's a product by Western Digital. No one wants that. Let's put it like this: Cloud 1.0 was infrastructure, Cloud 2.0 was services, Cloud 3.0 is personal/private.


I respect Western Digital and think they're trying their best to do a good thing. It's that word in general, though. Buzzword paradigms always make me feel unwell. As someone who's usually ahead of the herd in terms of adopting tech, once the broader public catches on and starts making up jargon, I always get a sense that it twists the meaning I personally associated with these concepts, and it causes me to feel negative emotion about parts of my work life that were once tacitly normal.


Everything starts out as a buzzword but it's only because it's trying to distil down an entire category into a word. As much as you may dislike it, every industry is quite literally built and defined that way. Something has to be a hook, even if you can explain in detail what it is. Cloud is just this idea that everything goes to a remote place that appears as one thing, which you don't control or manage. In all these trend setting new categories you either play the game or lose out and get left behind as a relic aka like WD, IBM, Seagate and everyone else.


What you'd call a cloud I'd call a datacenter and a datacenter is something we use when a problem is too big to fit on a computer.


I come from an era of datacenters, colos and whatever else but I learned to adopt new terminology. Cloud isn't just a datacenter, it's all the services on top of it that exist remotely. All the things you don't manage. All the services you make use of. Anything you are not personally installing is in the cloud. That's how we've come to know it and that is the language the mainstream user knows. Just as we'll have difficulty accepting the rebranding of the internet to the Metaverse, it will be a thing that spans far beyond network connectivity.


Who's we? In many countries Metaverse is the Internet by your definition. Lots of PR money has been spent making that the social truth for probably billions of humans. I'm sure everyone over in those continents who likes to use the actual technology that underpins the buzzwords is being force teamed too into thinking they're a dinosaur for not accepting Facebook's dominion.


The world of Synology products is fascinating in this regard.

Take photos - They’ve got iOS and Android apps that replace your photo app, and a truly self-hosted server you run in your home with pretty easy to use DNS support tools. Even shared albums work without much fuss. I think they’ve invested in the UX in recent versions, and it shows.

https://www.synology.com/en-global/DSM70/SynologyPhotos


I've had a Synology RAID for a few years but I'm completely baffled by it. There seem to be three options for everything (Photo Station, Moments, Photos; similar situation for video). Nothing ever seems to work and it's very slow. It's never clear exactly where you're supposed to put your files either. Constantly doing security updates isn't very reassuring either. I feel like I'm going to get hit with ransomware all the time.


Synology Photos is what Moments has been rebranded to. It's fantastic and our whole household uses it.

Apart from that admitted indecision of product naming, I love my Synology. Synology MailPlus handles all my selfhosted mail without any fuss. Synology Drive handles file sync to the NAS but you can use whatever protocol you want.

The Docker support is really handy. I run more than 35 services on mine without it breaking a sweat. This is a DS-218+.

> Constantly doing security updates isn't very reassuring either.

That's why you do them. To keep your system secure. I'm not really sure what would be more reassuring to you since security is an always-evolving landscape that requires mitigations quickly. It's not something you sit on.


It's something Apple should have done with the Time Capsule. Instead they want to grow their Services revenue.


Photos is great but lacking. It seems like all of Synology's other attempts at making a photos app.

It starts off great and then never receives any attention. I bet they're working on Galleries next.

Photos can be great, but the facial recognition is extremely poor and not there yet.


Definitely pro-sumer, I think professionals are the primary audience, though as an individual this cuts most of the effort out of the process for me.


Exactly! I guess above by “UX” I meant far more than the screens you interact with - running the app, storage, integrating with mobile and home ecosystems, etc. Sure it’s fun to learn how all of that works, but for a few hundred dollars you can really move a family to fully self-hosted (content) in a day.


I personally don’t trust the Synology media apps to be around for a long time. I don’t remember the exact names, but IIRC, the photo management/sharing app did change from DSM 6 to DSM 7, and there seem to be multiple apps from Synology just for photos. Hyperbackup seems to have issues that haven’t been fixed for years. I’m not that confident on using Synology’s apps for anything long term.


The hardest part is drafting a series of questions for the end-user to understand and answer before we get to that “MAGIC-PRESTO-CHANGO” step that produces configuration files that just work.

I blame the program providers.

Some Debian maintainers are trying to do this simple querying of complex configurations (`dpkg-reconfigure <package-name>`). And I applaud their limited inroads there, because no one else seems to have bothered.

I have made bash scripts to configure each of chronyd, named, sshd, dhcpd, dhclient, NetworkManager, systemd-networkd, /etc/resolv.conf, amongst many others. They try to ask simple questions, glue together the appropriate settings, then run their own syntax checkers (most are provided by the upstream project).

Postfix, Shorewall, and Exim4 remain a nightmare for my evolving design. CISecurity and other government hardening docs were applied as well, and some things I took even further; Chrony, for example, got stricter file permissions/ownership and a MitM-blocking feature as well.

These are dangerous scripts in that they can write files as root, but run as a regular user you will instead get configuration files written out to the appropriate directories under a `build` subdirectory.

If these designs work across Redhat/Fedora/CentOS, Debian/Devuan, and ArchLinux well, I may forge even further.

https://github.com/egberts/easy-admin


The problem is configuration itself. Things should not need configuration.

Configuration is for builders, not users.

When you buy a car, do you "configure" the gearbox? The engine?

Imagine yourself buying a car, and the car dealer starts to ask you "where would you like to place the gas tank?" or "How many pipes do you want going from the engine to the gas tank?". Oh and by the way, if you place the gas tank in the wrong place or choose the wrong number or placement for pipes, the car won't start at all; it might even blow up!

This is basically what debian is asking the user to do.


If you're comparing configuration to a car, the builders would be the people hosting the software and the users would be the people logging in from the web interface.

Software drivers are the people that use Gmail, iCloud, Fastmail, you name it. Self hosting means building your own alternative, for better or for worse.

For comparison, I can order a gearbox but I've never been under a car, let alone worked on one. If I want a specific gearbox because I like the feel of it, I'll need to learn how to install it or pay someone to do that for me. If I'll ever need a car, I'll probably buy one with all the components I need, and with no real preference for the technology underneath.

Debian is a tool to build your own car. If configuration is too difficult, use a tool designed for your use case instead. There are plenty of big-name brands like Amazon or Google, or smaller brands like shared hosts that will do all the difficult parts for you. Stuff some PHP files on your favourite shared host and follow the five step install guide and you're up and running.

Of course defaults still matter, but people just sticking to the defaults often find that their niche use cases don't align with them. Defaults merely provide most of the right workflows to most users.


That!

There is still no default setting for a properly run email server.

Gotta configure some 5 different packages worth of 600 settings.


I'm a big fan of Mailcow (https://mailcow.email/). You can get it running with just docker-compose up, updating is a breeze, and it brings you a fully fledged mail server with tons of good defaults (and tons of other settings you can manage via the web interface). The web mail UI (SoGo) is clearly directed at organisations but it's very pleasant to work with. Their admin interface makes it possible for mere mortals to have a mail server with spam filtering, antivirus scanning, multiple domains, (temporary) mail aliases, catch-alls, batched IMAP-sync from external mail servers, mandatory TLS for incoming and outgoing emails per mailbox, you name it. They even give you all that control if you buy a managed service from them, which is frankly stunning for a small, independent mail service.

You still need to copy the necessary DNS records like MX/SPF/DKIM/DMARC records, but the web UI generates them for you and also checks if they're set correctly. It also does some autoconfig for clients like Thunderbird and Outlook and supports ActiveSync. Contacts and calendar are synced through a cut-down version of Nextcloud (not very usable for much else, though) so you can have the necessary *dav sync for IMAP/POP3 clients, but you'd need to set that up manually if ActiveSync isn't available on your client.
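
For reference, the records in question look roughly like this (example.com is a placeholder; the DKIM selector and key are whatever the admin UI generates for you):

    example.com.                  MX   10 mail.example.com.
    example.com.                  TXT  "v=spf1 mx -all"
    dkim._domainkey.example.com.  TXT  "v=DKIM1; k=rsa; p=<key from the admin UI>"
    _dmarc.example.com.           TXT  "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"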

If you also host other stuff on your server, you may need to add the necessary reverse proxy config and disable the built-in Let's Encrypt support, but if you only use that server for mail then you're fine without.

Honestly, Mailcow on a cheap server is good enough that I think most people with some Linux skills can run their own mail server now.

Hell, the server even comes with stuff like basic office resource management (like reserving a conference room for a meeting). If you're willing to take the risk of being responsible for backups and such, you could quite comfortably get your family or a small business on Mailcow.

Alternatively, mail-in-a-box provides a full setup, including the necessary OS stuff, for a dedicated mail server. I've found that to be using a lot of deprecated technology though, and there's no way to use it if you also host other stuff on your server. They solve the DNS issue by making your servers an authoritative DNS server, which I'm also not too big of a fan of. Nothing too bad, just a different take on the same concept that didn't work well for me.


Using a cloud service is more like riding a train.


That’s why pressing enter several times for a manufacturer-recommended default setting works for most car drivers, oh wait, I meant end-users.

Mean Time to Consult the Manufacturing Specs (MTTCTM, or HOWTOs) should be cut down … drastically.

Oh yeah, say “YES” to that fancy option of the Positron rear axle (cue in increase to manufacturing delivery time). This is the Information Age! You’ve only lost the “trial period” to get it right.


Bringing self hosting to lambda users is _REALLY_ hard and Big Tech won't let you do it too easily.

Many corporate email SMTP servers will IP-block your email server (big thanks to Spamhaus), or won't support DNS-less email addresses and servers (which has been in the RFCs from the start), or won't have the decency to handle greylisting, or will send all your emails to their spam boxes (gogol) even though people did remove your emails from their spam box.

ISPs won't provide a stable public IPv4 address or IPv6 prefix. UPnP NAT port redirection (IPv4) will have bugs on the ISP router/modem.

Buying a domain and configuring DNS is a pain. Few DNS registrars support automatic domain configuration via the standard DynDNS protocol (is this even a thing?).

The self-hosting devices on users' domestic LANs will be pwned by very "smart" hackers, pushing those very users towards Big Tech (I wonder who is pay... pushing such hackers to do that...).

The path of least resistance will win, always, even if it means giving way too much power to some corps:

Lambda users _will use_ comfy centralized services mostly, and those centralized services, once big, will try to zap away any alternatives or interop (which most used in the first place to get there).

Like lambda users _will use only_ the pre-installed OS on the computer (or mobile phone) they bought, same idea.

I am talking about nearly everybody else who is not "us", the 0.1% (ironically).


"lambda users"? I've not heard that term before.


It is a French idiom, meaning a random or ordinary user.


AWS Lambda, "serverless compute". No servers to manage. No runtimes to deal with. You load functions into a cloud provider (AWS for example) and feed the function inputs from other AWS services like a queue or an API endpoint. Pay only for the execution of the function based on input.

It's really amazing conceptually, but like the parent said, there aren't many self-hosting options out there. IIRC there is some Apache project, but is there anything in the world for which there isn't?

edit - I just realized that isn't the context of the OP's statement. Heh.


> I just realized that isn't the context of the OP's statement. Heh.

I just wanted to reassure you that you weren’t alone in the confusion. :-)


Same here. Is that a synonym for a lay person?


French here. Lambda user is definitely the term we use for the Average Joe. Comes from Greek. Never heard of Lay Person or read about it before, happy to discover the term !


Thank you; today you taught me something! :-)


I suspect the grandparent is francophone. It means “run of the mill”.


> Many corp email smtp servers will IP block your email server (big thanks to spamhaus)

Nope.

Spamhaus doesn't block self-hosted email servers. Spamhaus just publishes a number of lists, which postmasters can use or not, whether for filtering or just for scoring. The PBL in particular is likely to catch people self-hosting from a retail connection, because it lists most residential IP address-space.

But it's the receiving mailserver that does the blocking, not Spamhaus.

And it's down to the policies of the receiver's postmaster what lists are used and how they are used. That requires judgement and research, and some postmasters lack the former or don't have time for the latter.


Also I believe you could use a paid public relay service (like mailgun) to get around those blocks.


GP never said Spamhaus blocks anything.


True. but he did say that the blocking is "thanks to spamhaus". That is not true.


You can and should self-host about everything, apart from email.


You know what would be kind of neat? Like, a web site you'd go to called makemeoneofthose.com, and you'd click some buttons, and then sometime later you'd have a hosting setup that you own with some software, web server(s) and database(s) on it, and then you can go hack on it yourself, add some features, whatever. Like they send you some AWS keys and say "It's all yours. Good luck and don't forget to pay your hosting bill."

And now you have a blog, a picture-sharing thingie, a bulletin board, a whatever.

Maybe there could even be a version where you pick a datacenter and somebody racks up a PC for you with the software on it.


We used to host our own software. It was called an application and it ran on your personal computer. We just need that, but running on some appliance instead, like a NAS. Package the service up in something like docker-compose, have a way to sell it, install it, update it and support it. Synology is pretty close with their Docker support, but still pretty far.


The problem is you’re fighting a battle against global economies of scale for what is essentially a hobby or personal project. This is not a winning battle, and most companies prefer to outsource the risk to someone else they can point shareholders to and blame.

People get caught up in the technical aspects of developing for cloud but I’d bet those weren’t anywhere near as important as risk outsourcing for the executive. At that point cloud was still new and the thought was we can run our infra if we need to.


Not to advertise, but I'm building exactly that at https://pibox.io - also solving other problems people have identified in this thread like automatic valid certificates, DNS, remote access, etc :)


Wow, love it! I host a matrix server on my current NAS, but I can’t put the database there cus spinning drives are just so slow. I’ve got the DB on a random Mac right now, but this is my new upgrade path.


You also need stuff like networking, TLS/certs, and DNS which aren't easily packaged, at least not in a way that doesn't require you to make sketchy changes on every client device.


> You also need stuff like networking, TLS/certs, and DNS which aren't easily packaged

The only thing that cannot be packaged is changing your home network settings. For this you need to click buttons on your modem/router. Fortunately, many selfhosted programs (eg. prosody for XMPP chat) and distros (eg. yunohost) have check commands or panels to figure out what's not configured well on your network and guide you through the process.

Also worth pointing out, Yunohost distro is also intended to be used over a VPN precisely so you don't have to deal with networking setup. Yunohost was bred in the non-profit ISP scene here in France and so your local ISP will provide you with an "internet cube" (SBC) and a VPN access giving you real public IPv4/IPv6 so that:

- you don't have to configure the network

- you don't have to change DNS settings when you change connection (your server works if you take it with you over 3G/4G/whateverG)

- your ISP doesn't get to filter the network (unless it filters VPN access but that's rather uncommon)

See also https://internetcu.be/


Something like Cloudflare Argo tunneling would work great for this. No certs at all for the user to mess around with; it's terminated on the public internet, not in your house.


I guess I’m assuming at least some things are on a private network, in which case things are much more complicated.


No, not at all. You can tunnel traffic from any machine, anywhere to be terminated at a public IP.


I think you're misunderstanding the objective. I don't want most of my services (e.g., personal finance, photos, Plex, etc) to terminate at a public IP, that's the whole point of the private network in the first place. So for those explicitly private services, we now need DNS and TLS and in the latter case ideally something like LetsEncrypt so you don't have to manually rotate your certs (but the normal verification methods don't work because your service isn't accessible to LE in the first place--maybe you can run some bastion/proxy?).


But the hardest part of hosting anything is the maintenance over time.


Yes! This is what experience has taught me too.

We tend to underappreciate the importance of time in everything. A button click can instantiate something powerful (and useful (and easy-to-use...)), but it will degrade over time, and eventually flat-out stop working.

I had a stack that worked just fine for my own needs, but it ran on (shudder) Python 2.7 -- everyone knows how that worked out (I chose to rebuild my stack on a different platform).


> A button click can instantiate something powerful (and useful (and easy-to-use...)), but it will degrade over time, and eventually flat-out stop working

Software doesn't degrade over time (other than, you know, things like cosmic ray bit flips, but in most realistic situations that should be fully mitigable.)

The needs of the software user (including hardware and software they want the piece of software to interact with) may evolve, but that's different than software degrading over time.

> I had a stack that worked just fine for my own needs, but it ran on shudder Python 2.7 -- everyone knows how that worked out

While there's no further first party support for that version of Python, if it worked properly before, Python 2.7 and the software running on it probably still works properly now.


I would absolutely use "degrade" to describe what happens to public-facing or Internet-connected software over time—eventually you'll have to upgrade it for security reasons, and you'll often find that this is way more involved than just upgrading the server-side package itself, or even its immediate dependencies. The alternative is even more work back-porting security patches. All this is assuming someone's actively working on the software you're self-hosting, at least enough to spot, advertise, and fix vulnerabilities.

Ditto the average Rails/Python/JavaScript project, as anyone who's tried to resurrect one that's gone so much as six months without being touched can attest. Which might not matter, except that a ton of the software people might actually want to self-host is in one or more of those high-entropy ecosystems. Extraordinary levels of care and organization on the part of the creators and maintainers can mitigate this, but that amount of taste and effort is vanishingly rare.

These are degradation due to a changing environment, sure, but I wouldn't describe it as due to evolution in the needs of the user (presumably "must not have any well-publicized remote vulnerabilities" was a need from the beginning).


This comment was brought to you by someone who never produced/maintained software that had to withstand a 24/7 onslaught of automated exploit kits and port scanners over an extended period of time.


Or written any software other than a one-off script, if I had to guess.


If your software is not publicly accessible, it may be possible for you to continue running on 10+ year old dependencies indefinitely. For anyone else, other than a hobbyist, it is just not practical.

Otherwise, you are going to be influenced by external factors (security vulnerabilities, wanting to use a feature only available on a newer language version or OS, etc.) If you are a business, you'll also run into more practical concerns, like engineers not wanting to work on a mountain of technical debt.


Sure, but my old Google Cloud apps on Python 2.7 will one day get rug-pulled and I'll be forced to upgrade. They can only keep working if the platform doesn't change underneath them.


> Sure, but my old Google cloud apps on python 2.7 will one day get rug-pulled and forced to upgrade

“Degradation over time” was being cited as a reason not to self-host. Pointing out that not self-hosting exposes you to risk of others changing the environment so it no longer supports your software is a diametrically-opposed argument.


Oops! I missed that point entirely.


And we can call it cPanel ;)


cPanel isn't "cool" so it doesn't get a lot of credit here, but it is actually an amazing product that solves real problems. It makes running a server -- even hosting email -- almost effortless. Combined with a decent host, you don't need to have much technical knowledge at all. It really does make running your own server accessible to many, many people who would otherwise be unable to do it.


Additionally: Setting up PHP/MySQL applications on these servers tends to be "upload files, load page" level simple, and cPanel hosting is still generally a fraction of the cost of modern "cool" cloud products.

Sure, I have some neat modern things I'd like to do, but I also have a shared host that's been doing its job for pennies since 2011.


The digitalocean marketplace is kind of like this. Also sandstorm.io.


I was so sad when sandstorm kind of fizzled out. I'm still hoping Kenton is on a secret mission to somehow bring it to life within Cloudflare. How cool would that be? One-click installs of docs, email hosting, photo sharing, etc apps from a server app marketplace, onto a cloud server you control. (Insofar as you "control" anything on a cloud host, but I feel like that's pretty far, still.)


It's still slowly but surely chugging along. A small number of people (myself included to a small extent) are working on it. There's even a budget:

https://opencollective.com/sandstormcommunity

We've discussed the one-click install thing at some point (not necessarily with Cloudflare), I imagine that's still of interest. There were some issues with the setup process that would need to be addressed first.

Kenton is in the loop and he still has the keys. But, he's busy with other things so he only does a few occasional but vital things.


> onto a cloud server you control

Or a box in your house, which is where my Sandstorm server lives. :) I think there's a lot of potential for actual self-hosting, though servers like Sandstorm need to have reasonable defaults and make it easy to manage domain setup and backups and security updates, such that one can get a box, plug it in, and reasonably quickly get to "don't need to touch this ever" territory.


I have thoughts but not a lot of time - so forgive the terseness. I love the idea of this, but I'd take it further and even have a category in upwork for getting services spun up and maintained.

But that's really the problem - maintenance. Right? Once something goes wrong _for whatever reason_ the user is then (for the immediate needs) just as stuck as with a cloud provider who disabled their access.

Thankfully there is a better course of action - e.g. find someone to fix it for you. Maybe on upwork as well?

But where are you hosting this? Is it AWS? Did _they_ suspend your account? I guess my point is that unless you host on hardware in your house (or another accessible place) you're at the risk of losing access to your data for any myriad of reasons. And even then, there have been warrants where devices were collected and went into a years-long battle as evidence.


I can’t trust Upwork workers to properly fill out a spreadsheet. I really don’t think I’ll be getting the cream of the crop for sysadmin work.


This, but they also manage all the updates for me too.

Ideally the only difference between self-hosting and relying on a cloud service would be, I own the servers and therefore the maintainer has no legal right to bar my access.


A lot of cloud providers offer this. DigitalOcean, for example: you search for the application you're interested in, click launch, and you've got it deployed in a Docker container on a remote machine.


I am not related at all, but seems like a good dude:

https://www.molecule.dev/


Interesting landing/marketing page. But once I clicked to test it I ran into these notifications for nearly everything I would have preferred to use:

> We have not yet started working on our XXX implementation, but you can select it and submit to let us know you're interested. Development is prioritized by demand.

Or the "We are currently working on our YYY implementation..."

So nearly nothing I would have wanted to use was available currently.

Looked interesting from the outside. It looked to me as if more time was spent polishing the marketing than the actual offering.


Yeah, it seemed grandiose. He posted before and got a similar response: too many combinations to manually glue together.


>you'd click some buttons, and then sometime later you'd have a hosting setup

Docker-compose comes pretty close to this. I had no idea wtf I was doing when I got started and it resulted in a functional thing surprisingly often

Not quite the SaaS vision you describe, but point is you can stumble into something functional pretty easily these days


Seems like you could do this pretty easily with a Docker image and a config file. Actually, I've done this with AWS (I used a pre-existing image to get some open source wiki software up and running, which I then customized).


But then you have to know how to maintain it all yourself. This is hard. If you already have the knowledge to maintain such a tech stack, that allegedly neat tool would only be marginally useful.


A lot of hosting providers do offer OSS applications which can be installed with one click, like WordPress or Coppermine. The latter is, I quote:

> a multi-purpose fully-featured and integrated web picture gallery script written in PHP using GD or ImageMagick as image library with a MySQL backend.

And SSL certificates are for free and automatically generated.

An example: https://www.netcup.eu/hosting/#webhosting-details

https://www.netcup.eu/hosting/webhosting-application-hosting...


I love selfhosting. Right now I have this in my personal docker-compose.yaml: NextCloud (3 installs, each with their own MariaDB instance), HomeAssistant, Mosquitto, Vaultwarden, an Nginx-served static website, Unifi controller, nzbget, Samba, librespeed, Wireguard, 4 Minecraft servers, AdGuard Home, FoundryVTT and Traefik as reverse proxy for HTTPS (it's all 1 yaml file, everything! At least, excluding the HA config etc). All on a 16 GB RAM, Core i3 based server. Home Assistant tells me it is consuming about 30 W right now (and generally stays between 30-35 W). That's about 70 EUR a year for a multi-terabyte personal cloud, and docker-compose makes managing it very easy (docker-compose pull, docker-compose up -d). Over the past 2 years I had only one issue (I had to pin MariaDB to 10.5 or NextCloud complains).

Oh, the initial costs are of course quite high; including all disks I'd say about 1000 EUR, so it's quite the hobby. (I have a nice Fujitsu motherboard (3 y/o) and a Fractal Design case (12 y/o); it has seen 3 builds now. I started with a super cheap Atom-based board, then a Pentium dual core, and now the Core i3 system that can handle a lot more disks; the NVMe root drive makes it so fast.) I wonder about my next system. I also have a Core i3 based NUC (as an HTPC) and that thing is also very fast, silent and energy efficient. And it has nice and fast external I/O. Not sure yet, but my current system will last at least another 5 years.

My father has a Synology NAS and for some time I thought that would be my next system because I'd get tired of the associated sys-admin tasks at some point (I started with a Gentoo system and there were no containers, meaning you have to set up php-fpm, then mariadb, then download Next(Own)Cloud, then update it regularly, pff and the migrations to other systems...). But docker-compose really changed that for me, I think the Synology would be more work.

Btw, a nice podcast on Selfhosting where I got a lot of inspiration from: [0]

[0]: https://selfhosted.show


My hosting stack seems to be similar to yours. In addition to the services themselves, I run a watchtower container to check for new images for me, which then notifies me through yet another selfhosted solution: gotify. I have watchtower setup not to automatically recreate the containers (I've been bitten by postgres updates a few times too many).

Speaking of Wireguard: I've been looking for a web-based management interface to define Wireguard networks with (using the server it runs on as a sort of central "hub"), but haven't yet found anything I really like and/or found simple enough to use. What does your Wireguard setup look like?

Watchtower: https://github.com/containrrr/watchtower

Gotify: https://github.com/gotify/server


I use this image: ghcr.io/linuxserver/wireguard [0]. Under environment I can set the number of peers and it simply spits out that number of peerX.conf files and QR-codes (as PNG), which I then manually set up on the different devices. Not really simple but also not complicated. I hear a lot of good things about tailscale and I feel like I have to start playing with that...
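
Roughly, the compose block for that looks like the following (a sketch based on the linked docs; the hostname and paths are placeholders, and the options may have changed, so check the current README):

    services:
      wireguard:
        image: ghcr.io/linuxserver/wireguard
        cap_add:
          - NET_ADMIN
          - SYS_MODULE
        environment:
          - PUID=1000
          - PGID=1000
          - TZ=Etc/UTC
          - SERVERURL=vpn.example.com   # placeholder: your public hostname or IP
          - PEERS=3                     # how many peerX.conf files / QR codes to generate
        volumes:
          - ./wireguard:/config         # peer configs and QR codes land here
        ports:
          - "51820:51820/udp"
        sysctls:
          - net.ipv4.conf.all.src_valid_mark=1
        restart: unless-stopped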

Oh, gotify looks really nice, I'm still looking for something like that. I'd love to be able to receive notification for events in my house (as detected by Home Assistant for example).

[0]: https://docs.linuxserver.io/images/docker-wireguard


> Btw, a nice podcast on Selfhosting

Ironically, not self-hosted (served from fireside.fm).


In the podcast they talk a lot about when to self host. Sometimes it makes sense, sometimes it doesn't. For example this podcast's community is on Discord, but for their other podcasts they maintain a Matrix server. It's interesting to hear them talk about the joys and pains that both solutions bring.

I used to run an email server from my basement, now I also know that that is not something I want to self host anymore :)


Ah yeah, I figured it was for a good reason. I just thought it was funny.


How do you do your offsite backups?


A Raspberry Pi 4 with a 2.5” 5 TB drive at my parents’ house (the Pi also runs their Home Assistant instance). I just manually rsync over ssh whenever I back up the pictures from our phones to the server every now and then. I’m lucky that most people here have 100/100 fiber I guess, but I could do with less.


No thank you.

I'll have to take care of backups, security, availability, updates, etc. I prefer to use a managed solution.

If you don't want to lose data on being banned, just do your own backups, which are by themselves much less time consuming to handle than full-blown self-hosting.

I'm fine with the occasional service being axed, I'll just migrate to another one. Often, somebody writes a migration script and open sources it, making that even easier.

It is good though to promote and vote with your wallet for services that give you good and dependable support.


> I'll have to take care of backups, security, availability, updates, etc. I prefer to use a managed solution.

A hosting cooperative is often a good compromise. You get to mutualize services and maintenance with other people who have the same needs.

On a spectrum from selfhosting to cloud computing, hosting cooperatives lie in the middle when users have agency but don't have to take care of everything by themselves.


We're not quite publicly launched yet, but I've been working on making self-hosting easier for several years now. People often ask "why would I self-host?" and it's hard to pin down one answer - instead the answer depends on your values - but there is an answer. This post is excellent because it's not "do it for security" or "do it to see fewer ads" or "do it to fight big tech" or "don't give photos of your infant to Facebook". It's all of those reasons, but it's also more broadly (and deeper in the kool-aid), because it helps fix the internet itself.

> This engineering talent is supposed to be solving world’s problems but instead they are ensuring how everyone wastes their time

Agreed! If software was sold for its utility instead of its addictive properties - this might start to change. Self-hosted / open-source software does need plenty of "hosted" accoutrements though: backups, remote access, etc. Shameless self-promo: we're trying to solve this over at https://kubesail.com


One problem with "self-hosting" is you're usually still hosting in the cloud – say, on a DigitalOcean droplet or something. That means you're spending $5-10 every month or whatever, plus you're still vulnerable to some amount of cloud-provider fickleness (though much less than a SaaS provider).

Much better would be a physical device in your own home.

Lately I've been dreaming about a home wifi router that doubles as a personal server, with a few TB of storage, with tailscale-type networking for remote administration and a nice App store for self-hosted apps (FOSS or otherwise).

Not a company I am going to start, just one that I hope someone does.


The internet is a network. You are on the edge of the network. There's a gateway between you and the network. This is impossible to eliminate. It's a feature of the network.

The only mitigation you can do is own your data and programs, so that you can change your gateway if they don't serve you faithfully.

From this point of view, hosting on a provider is not any worse than on your home network, as long as you regularly backup your data to your local machine.

Switching from one VPS provider to another is a lot easier than switching from an ISP to another.

Switching VPS can be done in minutes. Just login to another VPS website and sign up for an account to start renting a server from them.

Switching ISP takes on the order of days instead of minutes.


> The internet is a network. You are on the edge of the network. There's a gateway between you and the network.

Everything you have said above is wrong, and sounds like propaganda from Silicon Valley cloud computing cults:

- the Internet is a network of networks, not a single unified network

- you are not "on the edge" (this is only valid from a cloud vendor perspective): if the target audience for your services is in your neighborhood and your ISP is user-friendly, you are in fact "in the center" right where it matters and you can provide much better services to end-users than from a remote continent

- there's routers on your path on the network, but there's no gateway between you and the network: you may have a LAN with local addresses but that doesn't even have to be the case (you can have public IPv6 for everyone on your LAN), and if Google just happens to be next door to your house you can take your own cable to them (assuming they would let you) and do BGP with them directly from your home router (assuming you can configure it, and you get an AS number)

So i agree with you what you describe is typical in the centralized setups that Silicon Valley has pushed to (successfully) render our lives miserable, but there's no reason this has to be. People from non-profit ISPs such as NYCMesh or the FFDN federation certainly make sure we don't have to live in this perpetual nightmare.

> Switching ISP takes on the order of days instead of minutes.

Good point, but this is only because public-serving infrastructure have been cannibalized by corporations. It doesn't have to be this way, in the great scheme of things. Moreover, once you have a decent neutral ISP you usually never need to switch.


Given the choice between $5/month and, say, $300 upfront most people will choose monthly. I think both options should be available but realistically not many people are going to buy the box. There have been a bunch like FreedomBox, IndieBox, Helm, etc. but they don't seem to take off.


Interesting – Helm definitely looks cool, close to what I had in mind. Pricing of $400 plus $99/yr subscription sounds tough – and it doesn't include a wifi router, which I think is an element that would make more sense to the average home user (and ensure it's not connected to the internet over wifi).


I prefer the router+server approach as well but Helm takes a different approach of VPNing all traffic through AWS so it can have a public static IP; unfortunately this introduces a bunch of cost and overhead.

For hardware there's some interesting stuff like https://www.servethehome.com/inexpensive-4x-2-5gbe-fanless-r...


You should check out internetcu.be or freedombox.org, both very successful at bundling a selfhosting distro with some consumer-oriented hardware!

PS: No, I'm not mentioning the many hardware-first solutions, which are usually just a scam.


Thanks, freedombox.org does indeed look cool!


Check out Synology NASes (and now they make a router too). Basically exactly what you describe.


Very interesting – thanks!

For reference, https://www.synology.com/en-us/products/DS720+ looks like a representative model. It seems to offer competitors to Google Photos, Drive, Gmail, Docs, Sheets, and more…


Those cheap $5 servers are weak; it's probably best to compare them to a Raspberry Pi, which is cheap.


Powerful enough to serve a lot of basic websites (blogs, forums, chat, etc.) without breaking a sweat.


Yes, and so is an RPi.


I run a few services from my home but still have to rely on aws/fly.io for some portions of my infrastructure.

What I really want is to learn how to rent rack space at a colocation facility. The documentation available does not make it easy to learn. Can I just buy an old 1U blade, throw Xen on it and show up at my nearest colo? What do I need to preconfigure to ensure I have remote access without giving remote access to the colo as well? Do I get physical access to the data center?

Wish I could find some guides on this topic. 95% of blog post tutorials are just ads for the latest trendy cloud startup/language framework.


I did this once. Don't overthink it too much - yes, it is as simple as finding a rack with sufficient space, power and network, plugging it in and going. You'll most likely get a public IP and have no access to your neighbors, so they won't really care what you do with it as long as it's not illegal or against the Terms of Service for your host. So yeah, if you want to do it, just do it. Get an OS you know, install an SSH server or Remote Desktop, and rack it up. If you can get to it on your LAN, you'll be able to get to it on the public Internet. Also, quickly learn about good auth and firewalls and fail2ban.

That all said (and said with the clarity of age and knowing I was a stubborn kid who did things "because I could"), the experience of spinning up a VPS today on Linode or DigitalOcean is effectively the same, infinitely cheaper, and a lot more fun than racking a server somewhere. I can script up a fleet of servers from my bed at 1am just because, and can't tell the difference between SSH'ing to them versus that one box I did 15 years ago. If you want to do it, go nuts and have fun, but you aren't really missing much over conventional VPSes these days.


Thanks for the response!

I gotta disagree with you though on cost. You can get a beefy refurbished dual Xeon blade for a couple hundred bucks. Rack space where I live is like $50/month for 1U and gets much cheaper/machine as you scale up. $50 on aws will get me maybe 1 medium ec2 instance and an s3 bucket. With a used blade I get 20x the compute for the same price.


You're overestimating AWS/cloud costs by a decent amount.

t3a.medium is under $30/month, and that price only goes down when you reserve for the year. Save even more if you can run your service on ARM/Graviton.

A VPS service like Linode will have even better pricing than AWS.

Driving over to a data center isn't free (time and cost), either. Those used Xeon blades are cheap for good reason – the companies that originally owned them consider them EOL. There's "no such thing" as dealing with hardware failure (except for occasional stop/starts) in the cloud.


You’re right about the time not being free and certainly hardware failure. Although HW is way more reliable than people let on.

But AWS is extremely expensive per unit of raw compute performance. Obviously I concede that the stability and availability are unmatched compared to DIY solutions.

For my specific workload I need lots of memory and CPU cores. (Lots meaning my single 32c/64t, 32 GB Xeon tower at home.) I almost fully utilize my resources and STILL need to pay AWS for storage/tunneling/DNS/etc.

Consuming equivalent compute in AWS would be hundreds of dollars per month.

Hybrid is just my preference!


You're quite welcome. I'm not trying to dissuade you, just provide a point of view I've got from having felt the same way.

I'm definitely not comparing to AWS, because yeah those can get super expensive, super fast. What you're paying for with AWS is Amazon-tier stability (whatever that's worth these days), but the difference in uptime between them and a Linode is more than fine for my needs.

With your $50/mo, make sure that includes power - Xeons eat watts. Also, be sure to compare apples-to-apples on bandwidth. By comparison, Linode's dedicated CPU plan (not shared, closer to bare metal, but still not) starts at $30 for 2 CPUs and 4GB of RAM and 4TB of transfer, and they'll take care of keeping you on the latest hardware. Again, I don't want to dissuade you, because colo is fun and it's cool to think about your own box out in the world. If anything, I'm envious of how easy it is nowadays compared to when I drove 3 hours to South Bend, Indiana to colo a box of my own, or the first time I needed to engage remote hands because the box got in an irreparable state.


Sadly the answer is, as often, it depends!

Many rack space rentals will not permit you to just install whatever PC you fancy, because it is potentially a risk to the neighbours in terms of fire or bad hardware; most will happily quote you to buy one of their approved ones!

It is pretty easy to find a rack space provider where the provider cannot access the machine, but this can be good or bad. In some cases, I would rather they could shut down the host if, say, the RAM is broken, and replace it; but if you would prefer to do this yourself, that is fine.

In most cases, you will be given a public IP address directly mapping to your machine via a router/NAT lookup, so whatever services you open on your machine are open on that public IP address, making it pretty easy to set up RDP/SSH/whatever.

Probably the biggest issue though is the extra work or hassle if something goes wrong. I remember at a previous company where some guy would frequently have to drive for 30 minutes each way to go to a data centre to perform certain updates that couldn't be done remotely.

YMMV


> Many rack space rentals will not permit you to just install whatever PC you fancy because it is potentially a risk to the neighbours in terms of fire or bad hardware, most will happily quote you to buy one their approved ones!

I have _never_ experienced this. The only restrictions I've seen on colo contracts I've gone after were related to UPSes and things with large batteries in them. So a big stack of laptops would be a no, but if I wanted to put in Atari STs or Dell PowerEdges or white box builds or bitcoin miners, it doesn't matter. I guess I've always done things at at least a half or full cab, never single Us at a time.


With the ones I have used you just click around on the homepage selecting what you want on the server and then pay. Some sell second-hand repurposed servers at auction that they will set up for you. A while later you get an SSH login on the server and that's it; your server is running somewhere in a basement/bunker/old mine and you can go visit it if you want, but in general you can do everything remotely. There is even stuff that can let you watch the boot-up into the BIOS remotely (called a KVM, I believe). Some help you set up backups on the server and help you with setting up programs on the server, but then it starts to get expensive.

You can also just rent a space to place your own server but I haven't tried that.


In your experience did you have to sign up with a partner ISP at the colo? Or is that done for me and just part of my colo bill?

Is power use included as well?


The colocation provider will bring the circuits to provide best-path connectivity based on packet destination. There shouldn't be an additional charge for this. They are incentivized to manage their bandwidth so data transfers fast, as they are likely charged wholesale for fiber availability.

You will likely be charged on 95th percentile Mbps based on your usage. (Again, match the "pipe space required" to your needs.) Basically, you pay for whenever you're busiest -- 4pm-9pm are popular times for us in the USA.
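
If the billing model is unfamiliar, here is roughly how the math works -- a sketch only, assuming the provider samples your port every 5 minutes, which is the common setup:

    # Sketch of 95th percentile ("burstable") billing, assuming the provider
    # samples your port's throughput every 5 minutes for a 30-day month.
    def billable_mbps(samples_mbps):
        """Discard the top 5% of samples and bill for the highest one that remains."""
        ordered = sorted(samples_mbps)
        kept = ordered[: int(len(ordered) * 0.95)]   # drops the busiest ~36 hours/month
        return kept[-1] if kept else 0.0

    # Example month: idle at 5 Mbps, plus ~25 hours of 400 Mbps backup bursts.
    samples = [5.0] * 8340 + [400.0] * 300           # 8640 five-minute samples
    print(billable_mbps(samples))                    # 5.0 -- short bursts land in the free 5%

In other words, short bursts are effectively free as long as they total less than about 36 hours in the month; sustained usage is what sets your bill.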

Some customers limit their bandwidth themselves (like only allowing max 12 Mbps file downloads, etc.), especially when they have the hardware to support huge bandwidth. Or your colocation provider can perhaps cap the connection at 100 Mbps or 1 Gbps if you want.

Power is usually leased in amps. If you go over your amps, the circuit will break -- in the worst-case scenario. But typically they get in touch with you and tell you to upgrade.

Also, they do want to know vaguely what your service is. Because you'll likely lease their IPs, they will question you if you do a lot of email (caution for spam), or run a Tor exit node (legal hassles for them in many cases).


you’re not all that far off

* you’d have to sign up with a colo provider first. since data centers are physical buildings, this just depends on where you live

* when you sign up with them they provide you with info like ip addresses or how to connect to their network (they might have dhcp, or you might have to configure static ips). usually there is an initial setup fee, around 1 month of rent.

* if you just rent a 1U space you usually can get physical access to it while accompanied by someone working for the data center. usually this is during business hours, but each data center will have its own rules. if you rent larger units, such as a full rack (42U) or half a rack, you usually get a key card and can access it 24/7 (this usually involves a phone call for them to remotely open a lock)


I've never worked with a colo vendor that, once you contacted them, didn't have exhaustive support for "how do we get to the point where we can start billing you", usually including an actual human that you can ask questions.


If you have your own cabinet, and the neighbours' gear is caged off to prevent your access to it, then you may get physical access. Call a small provider near you and ask.


I love self hosting. I made my own cloud platform [1] with app launcher [2] and add-on games [3], file conversion server application [4], and anti-virus server application [5].

I'm currently working on the third iteration of the Cloud and app platform [6], which features completely NoSQL and cookieless user and session management. They are my passion projects.

[1] https://github.com/zelon88/HRCloud2

[2] https://github.com/zelon88/HRCloud2-App-Pack

[3] https://github.com/zelon88/HRCloud2-Game-Pack

[4] https://github.com/zelon88/HRConvert2

[5] https://github.com/zelon88/HRScan2

[6] https://github.com/zelon88/HRCloud3


Interesting, we have done very similar things in completely different ways: http://github.com/tinspin (I made my own HTTP server and JSON database and on top of that I made a cloud platform with multiplayer games).


Self-hosting is not always the answer for a lot of people.

Self-hosting is not easy for laypeople (anyone not already familiar with it) to get their feet wet with. For myself, I am at the beginner level and I do struggle to stay on the self-hosting path. When I set things up, I learn there are more steps I have to do, because the documentation and guides did not bother to explain those steps and expect me to do more research to find the information.

My biggest beef with self-hosting is that guides expect us to set up the SSL/TLS certificate without explaining the steps to do it. Some guides do have a section about it, but they never provide the details about creating a CA for my self-hosting needs. I turn to Google/DDG to find information about it, and the results are all over the place or lead to dead ends.

There are a few other things about self-hosting I have gripes with. I like self-hosting, and it is pleasing for me because I don't need to rely on third-party solutions. The gripes I have are with the documentation, which is all over the place or sparse.


I think the whole "self hosting isn't easy" meme gets repeated so much that people just take it as given now and default to managed software. Or, someone might argue "Well, my grandmother who knows nothing about tech cannot self-host, so it's not viable!" ignoring there is a huge spectrum of competence between grandma and a seasoned Linux sysadmin. People aren't morons, and there's enough info out there on how to do it. I agree it's not organized very well, but it's not like setting up a web server is dark wizardry.

With all the tools out there and easy access to VPS services and even bare metal for your basement, there's never been a better time to self host. And not just web servers, but E-mail, git, photos and media, and so on, it's very accessible.


I agree it's overblown. It's amazing how robust of a setup (more than sufficient for residential use!) you can get with little effort given how easy things are nowadays.

I've been self-hosting a lot of load-bearing household stuff (I have stuff on the "wife-critical" path: if it goes down, "the internet goes down" and I get a text from her) for almost 10 years and I've only had 2 incidents of particular reputational-risk note:

1) a routine reboot of the main server triggered a BTRFS bug that blocked mounting it again. This took an evening and a reboot into an arch linux ISO to fix (arch had a new-enough version of the btrfs tools that had the ability to fsck/repair the fs).

2) my proxmox setup was initially installed with zfs and zfs-on-root. This exploded and the "on root" part stopped working one day. This was the most annoying thing to fix so far because I ended up dumping any interesting data to an external HDD and just re-paving the server, this time reinstalling with just ext4 and lvm (which is admittedly a setup I'm much more comfortable debugging). No issues since then.

Both these events are from over 3 years ago, so it's been smooth sailing in recent times.


The complaint is fair though. Trying to find a complete or the "correct" guide to something is very difficult even when you already know roughly what you are doing.

It took me ages to work out how to set up Postfix properly from about 10 slightly different "guides". The Postfix book wasn't even that helpful. There are also lots of very out-of-date guides that might have been OK for 2015 but not anymore. They don't get deleted because of "link juice".

It is sad but true: get one little bit wrong and you potentially leave a door wide open.


Postfix is a special kind of hell though, in that getting a good setup requires wading through decades of legacy stuff and patching together a bunch of non-default pieces to get, for instance, DKIM signing working right. I've done this before myself, and agree it was super annoying and not fun, but I also think it is potentially the biggest outlier in self-hosting difficulty I've encountered.

Lots of services are barely more than - apt install, systemctl enable --now, ufw allow 8080 (if you even firewall within your network).
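
To make that concrete, the whole dance fits in a trivial provisioning script -- a sketch wrapping those same commands, where the package name and port are placeholders for whatever you're hosting:

    # Sketch of the typical "install, enable, open a port" dance for a packaged service.
    # Run as root; "someapp" and port 8080 are placeholders, not a real package.
    import subprocess

    def sh(*cmd):
        """Run a command and fail loudly if it exits non-zero."""
        subprocess.run(cmd, check=True)

    sh("apt", "install", "-y", "someapp")          # install from the distro repos
    sh("systemctl", "enable", "--now", "someapp")  # start it now and at every boot
    sh("ufw", "allow", "8080/tcp")                 # open its port, if you firewall at all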


I actually found Postfix fairly easy to configure once you have a solid understanding of Email (which took me a good while at first). Dovecot on the other hand...


The majority of the documentation I came across has the mantra of "do this and you are golden". I know it is not dark wizardry; it's just that the documentation is aimed at someone who already has the experience and technical knowledge. Meanwhile there are people pushing "self-hosting is the answer! Even your tech-inept grandma can do it!" without providing documentation for inexperienced people like me. Annoyingly, some guides have parts that link to other guides that barely provide information about the step in question. It is like "I know how to set it up but I am not going to tell you how to do it, so here is a link that might help", and it didn't help at all.


When I began installing and managing servers, more than 20 years ago, I did not have any kind of prior experience and I did not have anyone whom I could ask.

So I just read the handbook, but I read it completely, which takes more than a day.

It is likely that there are also other operating systems and Linux distributions that have good documentation, but I can testify only about those that I have used in the beginning, the FreeBSD handbook and then the Gentoo Linux handbook.

Both handbooks were good enough to convert anyone into a system administrator.

Unfortunately, both handbooks are not as good in 2022 as they were e.g. in 2002, because they have not always been updated after every change, or the updates have not been as detailed as the original parts of the handbooks.

Even so, both handbooks remain reasonably good today.

Especially the FreeBSD handbook is good for someone who lacks experience, because FreeBSD is much more self-contained, i.e. there are a lot of choices that have already been made for you and you do not have to worry about them.

So for someone who is inexperienced, I believe that the fastest way to managing a server remains to read the complete FreeBSD handbook and install and configure a server based on that.

There are programs which are available only on Linux, but the administration of a Linux server requires much more work than for a FreeBSD server (even if much less than for a Windows server), so for a beginner I think that FreeBSD, with its more complete documentation and fewer possible choices, is easier to try.


I'm skeptical that your layperson would be able to keep self-hosted applications secure constantly. Hell, huge corporations have a difficult time with it.


I have this issue too. When I tried to set up self-hosting, I assumed there were steps that required me to expose it to the internet. It turned out it was already exposed, and the docs didn't (or barely) provide information on how to close it off securely and keep it private-network only. When I tried to find information about it, the guides were never consistent. Some say I have to go into php.ini to do this, then into SQLite to do that, then into other files over there, then add 20 more steps to keep it secured. I'm just wondering why there isn't any centralized option for this. I just want an option that I can tick in the software and leave it at that.

I understand that documentation is not written for laypeople like me. However, it is annoying when people keep pushing the self-hosting-for-beginners narrative without providing the necessary tools for laypeople to keep themselves secure and reliable.


>I understand that documentation is not written for laypeople like me. However, it is annoying when people keep pushing the self-hosting-for-beginners narrative without providing the necessary tools for laypeople to keep themselves secure and reliable.

And that, in a nutshell, is the problem.

A few clicks, a configuration form and integrated tools to set up external dependencies (e.g., Let's Encrypt certs), et voilà! You're running a self-hosted application.

AFAICT, this is more about developers not creating the packaging/configuration/management tools necessary for effective use by non-technical users.

Sure, I can write a SQL query to modify the schema of an application's database, but my highly educated and intelligent physician brother would just throw up his hands in disgust.

Make self hosting easy and people will use it. And Docker-compose isn't "easy" for a lay person.


It sounds like part of the difficulty has to do with the general poor quality of online tutorials. There is a need for properly written guide books and magazines, but unfortunately, it seems like there is no way to pay for people to write them.


> My biggest beef with self-hosting is that guides expect us to set up the SSL/TLS certificate without explaining the steps to do it. Some guides do have a section about it, but they never provide the details about creating a CA for my self-hosting needs. I turn to Google/DDG to find information about it, and the results are all over the place or lead to dead ends.

If you have your own domain pointed at your server, the Let's Encrypt certbot can automatically pull in a certificate and configure your Apache/nginx webserver (the alternative webserver Caddy has this feature built in, as far as I know).

If you don't have your own domain, don't go with self-signed certificates. Get a free https://desec.io/ subdomain, and they have their own certbot plugin to generate automatic certificates.


> If you have your own domain pointed at your server, the Let's Encrypt certbot can automatically pull in a certificate

Yeah, but don't make a mistake too many times, or Let's Encrypt will block you for a week until your rate limit times out.

I hit this. I understand why Let's Encrypt has to do this, but it's very annoying and you have no choice but to do nothing for a week.

There needs to be something in between Let's Encrypt (free) and a couple thousand a year (other CAs).


If you use Caddy, you'll almost never run into rate limits from Let's Encrypt, because Caddy rate limits itself, will fall back to ZeroSSL instead of Let's Encrypt, and will even fall back to LE's staging environment for additional retries, only trying the live endpoint again once staging succeeds. See https://caddyserver.com/docs/automatic-https#errors


Use the LetsEncrypt staging server for testing. When you have a process that works, switch to prod.


That's a tautology saying "Don't make mistakes."

A DNS misconfiguration can cause your Let's Encrypt to do weird things on a configuration that was (and still is) perfectly correct.

That was how I hit it. I eventually figured out what people screwed up in DNS. But certificates still didn't clear. So I spent an extra couple hours staring at DNS trying to figure out what I missed when the issue was that we bumped into the rate limit at Let's Encrypt (which is REALLY low--I think 5 failures is enough to trip it) while the DNS was bad and the only thing we could do was sit around for a week with dead certificates.

Not fun.


Sorry, quick comment, didn't mean to be glib.

I've hit the problem you describe, and I feel your pain. I also respect LetsEncrypt's choice to rate limit failures. I renew a couple dozen domains at a time, so one error can quickly cascade into being blocked. IIRC the block timeout starts at 24 hrs and goes up from there if you keep trying -- this is easy to do if you don't see the raw response error message!

After being bitten by this a couple times, I added a dry-run step to my autorenewal script. If the dry-run exits with success and generates a good new cert for the domain, I repeat by pointing to the LE prod server. This works every time (so far, but for years now).
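
Roughly, the dry-run-then-prod step looks like this -- a sketch of the idea rather than my exact script, shelling out to certbot's `renew --dry-run` flag, which only talks to the staging environment:

    # Sketch: only attempt a real renewal if a dry run against the staging
    # environment succeeds, so a misconfiguration can't burn the prod rate limit.
    import subprocess
    import sys

    def certbot_renew(*extra):
        return subprocess.run(["certbot", "renew", *extra]).returncode == 0

    if not certbot_renew("--dry-run"):       # staging only, no rate-limit risk
        sys.exit("dry run failed; leaving the production endpoint alone")

    if not certbot_renew():                  # the real renewal
        sys.exit("production renewal failed; check the certbot logs")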

I'm suggesting that any LetsEncrypt certificate automation system (or docs) targeted at relatively low-sophistication users (i.e. not you or me) should include this sort of dry-run check so that the user doesn't paint themselves into a corner with a somewhat persnickety, but essential, service.

Also of course, it should attempt to renew after 60 days, so that if things go badly wrong, there are a few block-timeout retries available before the 90 day expiration.


To do this right you should also think of backups, updates, and monitoring. Self-hosting is true freedom but doing it right for things like email is akin to running a small business. On the positive side docker makes many things a breeze.


I tried with Docker before and it is not the breeze you think it is. I tried to use Docker for Calibre-Web and it was a pain to make it work, because Calibre-Web requires access to its database on the filesystem outside of Docker. The Docker documentation provided minimal (really, lacking) information on how to expose the filesystem for Calibre-Web to use its database. Calibre-Web cannot create its own database; it relies on Calibre, the standalone app, to generate the library it needs access to. It took me ages to finally find a way to expose the filesystem while only granting permission to access that particular library.
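
For anyone hitting the same wall, what finally worked for me boils down to bind-mounting only the library folder into the container. A rough sketch from memory -- the image name, container path and port are placeholders, not official docs:

    # Sketch: run a Calibre-Web container with only the Calibre library
    # bind-mounted from the host; everything else stays inaccessible.
    import subprocess

    host_library = "/srv/calibre-library"    # folder created by the desktop Calibre app
    container_library = "/books"             # wherever your chosen image expects the library

    subprocess.run([
        "docker", "run", "-d",
        "--name", "calibre-web",
        "-p", "8083:8083",                   # the web port your image exposes (placeholder)
        "-v", f"{host_library}:{container_library}",  # expose the library and nothing else
        "your-calibre-web-image",            # placeholder image name
    ], check=True)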


I am surprised by this shortcoming of the Calibre image. I guess the trade-off is learning how to install Calibre vs. learning how to deal with Docker. I'd also agree that even if you use Docker and installation is easy, for any self-hosted app you use for long enough you end up learning enough about it to be able to install it without Docker (and avoid managing Docker in addition to everything else).


Calibre does have a "book sharing" solution built into the software, but it is more of a content server. Calibre-Web is a third-party solution that is not affiliated with Kovid Goyal (the creator/main developer of Calibre). Calibre-Web is basically a browser version of Calibre that doesn't require other people to use Calibre to access it. So Kovid did not create the calibre-web image, from what I understand.


This was my single biggest hurdle when I was trying to set up a personal VPN to remotely manage things - I didn't really understand what I was supposed to do, or why things just never quite worked.


If you find self hosting too annoying you could always try Yunohost to have one click deploys for the most common services.

https://yunohost.org


Self hosting also implies building (or using) your own self hosted product. That's a significant requirement, particularly if you want social features.

I'm going through this dilemma with books. Goodreads lost my account of nine years. I've managed to recover most of the data from a backup and set up my own blog. I'm self hosting! But my blog is very spare and is not backed by a database of books, book covers, etc. Also it has no social features, no easy way to see other people's reviews or find related books or... I could imagine building all those things but that's like building a whole product! I could also imagine some self hosted book product I could just use (analogous to Picasa in the story) but it doesn't happen to exist.

Meanwhile there's a pretty great product for books in Goodreads, other than the crippling disaster of losing a user's account. Also some good cloud competitors like The StoryGraph. So maybe I should just use their product and hope my data is safe.

PS: I was at Google when Picasa was acquired. My memory is that the plan was always to focus on the hosted version. Maintaining a desktop standalone product was very much not in the Google business model.


Try this; I think they have some covers as well as other metadata. It has been years since I used it.

https://openlibrary.org/developers/dumps


Maybe I didn't explain myself well. Yes, I could get a data dump from many sources. It is a lot of work to turn that dump into a product that I self host.


You don't have to write that stuff. There is a fairly well-known project licensed under AGPL3, that's fine for self-hosting if perhaps not commercial use. Just search around.


I enjoy 'playing' with self-hosting things, as I learn that way.

However, I would never host anything important. Why ?

If I'm in an accident and hospitalised, or something similar, there is no way my family will be able to manage/maintain/troubleshoot the systems.

There's a reason they all use gmail/gdocs, and it's not because they love Google.

I am a lifelong fan and user of technology, but the 'lay person' typically isn't and really doesn't want to be. Managed services may remove any semblance of privacy, but they offer the one thing that most everyone wants - convenience.


I haven't tried it, but Piwigo[0] looks promising for photo albums & management. That or Ente[1] although Ente doesn't have a self-hosting option like Piwigo.

If you really want true self hosting you would run it off your own on-prem machine and use your ISP to push & pull content. Putting things on a VPS is not really 'self' hosting as you're entrusting a third party to not get their datacenter burned down, or the hard-drives corrupted, etc

That said, the only caveat to hosting in your own house is it could suffer a fire, and your data is wiped, so having /BOTH/ a VPS and an in-house on-prem solution means you're not putting all your eggs in one basket and you have a contingency plan in place, which one day may be worth it. It buys you peace of mind because of the redundancy.

[0] https://piwigo.org/get-piwigo

[1] https://ente.io/


> That said, the only caveat to hosting in your own house is it could suffer a fire, and your data is wiped

Well, there are other reasons to prefer using external hosting. Home connections are typically port‐filtered, have dynamic IP addresses, and have a low IP reputation, and your ISP selection is very limited. Whereas if using a VPS there are so many options that it’s easy to shop around.

But you can still self‐host while getting the benefits of a VPS. Just forward ports from the VPS over a WireGuard tunnel to your real machine. Then all the actual infrastructure is on hardware you control, and the cloud provider has no access to your TLS private keys.


Yes, and you can even do this quite cheaply. Oracle cloud free tier has a nice traffic allowance: https://paul.totterman.name/posts/free-clouds/ . Add tailscale/cloudflare tunnel/plain wireguard for connecting your home server to the cloud instance.


I am comfortable rebuilding my self-hosting setup from scratch/backup. I enjoy the sense of agency of being able to fix something myself vs. waiting for a cloud service to come back. As I rely on my self-hosted setups more, I also build in the appropriate amount of high-availability features. You will learn a TON of skills that are sideways-related to software engineering. It's very empowering to be nearly entirely self-sufficient in your profession. I can write/test/deploy software (i.e. pay the bills) and never have some critical service or infrastructure (e.g. Docker Hub, GitHub) pulled out from underneath me, preventing me from doing my work.

This is such a niche attitude/market but it has been incredible to see the surge of self-hosted applications/services over the last 5 years.

It is also relatively easy these days with modern ci/cd tools to have a "portable" enough stack that in the event of an emergency you could purchase a few linode instances and be migrated to a vps environment in an afternoon.


IANAL but I believe another reason to truly self-host, at least in the US, is that things inside your house have extra legal protection. Sure, they can still get a warrant, but this is a totally different level than what they need to get the same data off of a VPS.

Do you really have any search and seizure protections on a VPS?


> Do you really have any search and seizure protections on a VPS?

I'm aware of this, which is why I do full disk encryption of any VPS instance I operate. See the Third Party Doctrine[0] which applies to the US only AFAIK.

[0] https://en.wikipedia.org/wiki/Third-party_doctrine


The comments here illustrate the main problem with selfhosting today, which is that it's too damn hard. Until it's as simple and secure as downloading an app on your phone, we're not there yet.

You should be able to take an old Android phone, install a Nextcloud app, go through a quick OAuth flow to tunnel traffic through a VPN provider, and be done with it.


Self hosting is great, in theory, but terrible in practice.

I was thinking about self hosting my podcast [1], but it's an insane amount of effort and will cost money. Compared to using anchor for free, that's owned by Spotify.

For starters, I'd need to figure out RSS, figure out how to distribute the podcast and how to store it (+ costs). Not impossible sure, but not ideal either.

[1] The Language of My Soul Podcast https://anchor.fm/lang-of-my-soul


Clicked the link, only to find that the site is down (presumably from too much traffic due to HN).

The irony is... pretty heavy.


What are you all self-hosting? For me -

- Gitea (git forge)

- Maddy (email)

- Calendso (scheduling)

- Vaultwarden (password manager)

- linx (filesharing)

- Syncthing (file syncing)

- Wireguard (VPN)

- a couple of metasearch engines

I am not mentioning all the tools and services for monitoring and management.

Self hosting is easy for me cause I am managing all of this with NixOS.


- Vaultwarden (passwords)

- FreshRSS (RSS reader)

- Homebridge (gets some non homekit devices into Homekit)

- Minecraft Server (kids)

- Valheim Server (me and my buds)

- Syncthing Discovery and Relay servers (I am paranoid, for file sync)

- PiHole (network adblock)

- Wireguard (all our devices have it installed, combined with PiHole = adblock on the go)

- Grafana + InfluxDB (to monitor system health)

All this is running in a 16 GB space eating VM that's backed up offsite. Maintenance is not too bad, if something goes wrong I'll roll back in a flash and investigate later.


- Wireguard (VPN)

- Pi-hole (Adblocking and works with VPN)

- Plex (Media collection)

- Plausible (Web analytics)

- Home assistant (Smart home)

- Uptime Kuma (Monitoring)

- Traccar (GPS tracking)

- 5 nodejs web apps

Only the Wireguard and nginx ports are opened to the internet.


- Nextcloud (personal data)

- Mailu (email)

- Harbor (docker registry)

- gitlab (git+ci)

- portainer (deployment)

- matrix+bridges (chat)

- openVPN

- grafana, prometheus, ... (monitoring)


Self hosting can also be a great option to protect against authoritarian regimes. After my family's VPN was banned in Russia a few weeks ago, it took me an hour to set up a Wireguard server with Algo VPN on DigitalOcean. Now I'm supporting uncensored internet access for 3 families back home, while the Russian authorities play cat and mouse with popular VPN providers.


Dear Gods of OPSEC, I hope your username isn’t your real name.


Good luck on that side. Russians are great people and not everyone supports Putin.


The post is conflating two separate things as if they are the same.

1) Personal stuff that you created and own. For example photos on Google Photos. If Google decides to remove a random photo from my collection, that would be a big problem for me. But they don't. On the upside, the probability of Google losing my photos is an order of magnitude lower than my personal hard disk failing and me having forgotten to back it up.

2) Stuff that others created like movies and songs. I really don't care if a show that I was watching drops off of Netflix. I don't have the same emotional investment to it as the stuff in #1. I'll just find something else to watch.


Yes, completely valid to treat it as the same when it's something you want to have access to without any third party denying/removing that access.

That you have no attachments to movies, music or tv shows is just you. Others may want to continue enjoying the media long after it has been removed from online services.


The issue with Google is whether they will pull the plug on the whole service, change its name, or whatever. Then you are left asking what happened. And if you haven't looked at it in several months … you really will be asking what happened.


Anyone know of a good YouTube channel that reviews self-hosted programs? I don't mind self-hosting but I don't have the time to install, configure and deploy 50 different video library products and then decide which one works for me. I'd rather watch a video and listen to someone who has done that exercise, because it saves me a lot of time.


Please bring back desktop apps!

There are so many apps that just do one thing really well, and don't really need updates unless they're to fix compatibility issues after OS updates.

"Web app" is now synonymous with the SaaS model, which means over time, the product becomes bloated with features designed to appeal to the next biggest segment the company currently doesn't have. And there's no way to opt out. Dropbox and Evernote come to mind, but everything falls victim to this eventually.

I like apps like Sketch. 1-time fee with 1 year of updates, which is fair. If I'm happy with my Year 1 features, I don't have to update.


This article is a bit delusional and oblivious to market dynamics.

1. Privacy: Self hosting is not necessarily more private than cloud services. The security of self hosted services is only as good as the effort put into maintaining it. Who do you think invests more in security: the giant corporation or a free open source project? Even if the project is well maintained, there are many ways your server can be compromised. It’s only as safe as you’re willing to make it. The best way to be safe for me is not self hosting, but cloud hosting with E2E encryption.

2. Longevity: even though self hosting technically means nobody can discontinue your service, everything eventually gets discontinued. Your server will be out of date at some point. You will need to update it. You might be too busy to do it and your server will become a security risk. Again, middle path and ideal way for me here is: use cloud services, encrypted, AND save the data locally as well.

3. Usability & market dynamics: John Doe doesn’t have the time or knowledge to self host, which makes self hosting dangerous for him for the reasons mentioned above. If you’re going to self host, you need to know what you’re doing. If you do it half way, you’re better off staying with a cloud service. The cloud will always win because it’s easier for everyday people. And because it wins, there will always be more money and development happening in it. We need more cloud services that use encryption by default, and provide data migration tools. The more this becomes a standard, the more the “big cloud giants” will have to step up and match this new standard. For me, THIS is the way not just nerds but everybody benefits from a safer, more reliable Internet.


Your response to this post is a bit oblivious to motivations other than profit and metaphors other than markets.

Additionally, re: (1), static sites are more secure with no maintenance than using a browser with Javascript enabled. (2) HTML and files last forever; there is nothing to update. (3) You keep assuming the needs and complexity of a for-profit business and the risks associated with that. But human persons don't have those complex needs, or the associated risk of the complex, dynamic setups that enable entire teams of people to work on something and constantly move it around.


1. I don't understand why you conflate security with privacy. Or, to be more precise, it depends on your threat model. A badly secured self-hosted setup will make you vulnerable to targeted attacks on your privacy.

While that is an issue you should consider, those attacks are pretty unlikely. Traditional cloud services, however, will harvest every bit of data they get about you with frightening efficiency, but they'll never automatically scan your server for vulnerabilities to read your mail.


I think there needs to be clarity about what is harvested and how. Most centralized services actually respect people's privacy to the extent that they're not asked to infringe it by legal order.

Most major tech cos have encryption at rest and highly regulated access checks. It’s also not clear that they actually do harvest every bit of data they can. They might for the purpose of better UX within the service, but Google ads doesn’t collaborate with gmail or Google photos for example. There are, however, botnets all around the world scanning the web for security flaws.

This is why, in this sense, I argue that most people are actually better off using a safe, centralized service with encryption than try to reinvent the wheel at home and be more exposed.


> Most centralized services actually respect people's privacy to the extent that they're not asked to infringe it by legal order.

cough cough


Security is necessary to maintain privacy. If someone gains access to your systems, nothing you had on there is private anymore.


I am not aware of any OS which will stand up to the internet without management for months at a time. Until that problem is solved, self hosting remains a dream.

Ideally you should be able to set up a machine, and just have it work for at least a year, on average, with no need to intervene, do updates, patches, etc.

Like the old Novell systems we keep hearing about, serving files from rooms since closed off to humans for years.


Any selfhosting distro on which you enable unattended-upgrades will do the trick: libreserver.org, yunohost.org, freedombox.org are good examples.

Of course, it's even better if you set up the backups before you forget :)


How about a commercial NAS? I have a Synology, so it gets updates, and it keeps working without internet for months.

I bring it with me to the vacation house and it's like I am moving my personal cloud.


Not quite self-hosting, but in the same spirit I've slowly been working on a simple local archival system for anything I don't want to lose. It's changed my life.

Even across years of content, it's required less storage space than I expected. The more I archive, the less I need to rely on online search engines or worry about linkrot. It's also helped me cut down on how many tabs I keep open in fear of losing information.

If I can't recall some piece of information, I can do a fuzzy global search through the text of all articles I've saved in a specific category, for example. If I find some obscure fix for something deep in an old reddit or HN thread, you bet I'm archiving that so if I run into the same issue a year later I can easily fix it again without trawling through 50 Google results.


What do you use to organize all of this unstructured data in a way that is searchable and retrievable?


It's somewhat structured; I use both broad categories and a tag system. I can also add additional comment text to archived pages. It's all patched together with shell scripts and some Lua (since that's what I'm familiar with). `ripgrep` is the utility used for searching. It's fast enough for me even when I don't use any kind of category filtering, but I have a beefy computer and use NVMe drives, so YMMV.
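
The search side really is just a thin wrapper around ripgrep. A rough sketch of the idea in Python (mine is actually shell + Lua, and the directory layout here is made up for the example):

    # Illustrative sketch of category-scoped full-text search over an archive
    # of saved pages. The layout (~/archive/<category>/...) is an assumption.
    import os
    import subprocess
    import sys

    ARCHIVE_ROOT = os.path.expanduser("~/archive")

    def search(query, category=None):
        """Grep the extracted text of archived pages, optionally within one category."""
        root = os.path.join(ARCHIVE_ROOT, category) if category else ARCHIVE_ROOT
        subprocess.run(["rg", "--smart-case", "--glob", "*.txt", query, root])

    if __name__ == "__main__":
        # e.g. python search.py "btrfs repair" linux
        search(sys.argv[1], sys.argv[2] if len(sys.argv) > 2 else None)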


I used to produce and record music and used a website called imeem to host my works. At some point it was bought out by MySpace and all non-licensed music was removed (granted there was a ton of stuff uploaded by individuals who did not own the rights to the work they uploaded) including stuff uploaded by the creators.

My work was pretty sub-par at the time, but I felt the burn pretty badly. Since then I’ve had very little faith in any site that allows creators to upload their content.

I still have work uploaded to SoundCloud, but also have backups stored locally and on my self hosted nextcloud instance for this reason.

This is probably more along the lines of the current situation with Vimeo than it is with Picasa, but I can still feel the burn from time to time.


The author mentions but doesn't address the Picasa problem, which incidentally is the one I care most about.

What do I do when all the useful software is cloud based and requires me to store my data with the service provider in order to use it? Self hosting is not a solution.


Good point. I use Photoprism to manage my pictures.

https://photoprism.app/


When I hear "I have nothing to hide" my response of "OK, just send me your browser history" is usually met with silence.


I've gone down this path a while back and self-host Gitea and other things: https://taoofmac.com/space/blog/2022/02/12/1930

I will be moving my KVM/LXD setup to Proxmox eventually (probably when I get new hardware) and am looking into low-wattage servers (ARM would be nice, to continue the grand tradition of running services on an NSLU2 a few years back, but there just aren't any good ARM server boards with lots of RAM and NVME storage).


> I will be moving my KVM/LXD setup to Proxmox eventually

How come? I'm running proxmox currently but I'm considering just using a regular distro with lxd because I'm almost only using lxc containers...


Old kernel, and I would like a UI other people (my kids) can use.


I've been self hosting since the early 00s. I used to host:

* Email (for years rolled my own using Procmail, Postfix, Squirrelmail before finally using Zimbra Email Server - an all in one software package)

* FreeNAS/TrueNAS for storage (3x SC836 16 bay Supermicro servers)

* VMware vSphere for virtualization (clustered setup with vmotion running on 2x dual Xeon servers)

* Elastix IP Phone Server (Asterisk PBX hooked up to Aastra IP phones in every room in my house)

* Cisco Call Manager - proprietary IP phone server (helped with my job...)

* Plex + the usual accompanying VMs

* HomeBridge + HomeAssistant

* Openfire (because everyone needs their own chat server)

* ZoneMinder (CCTV server hooked up to cameras around the property)

* Zabbix (to monitor it all)

The whole family (about 20 folks) would use the above for their day to day personal and small businesses.

It got to the point where I was spending several hundred in licenses and close to £1k in electricity a year.

I'd get paid by some folks to maintain the above, but after a while the fun of building gets replaced by the chore of maintaining - and the few outages I've had (can count on one hand - proudly) were a royal PITA since I'd have to drop everything and try and get it back online again.

In the 2000s and 2010s the skills my homelab taught me were still relevant; Routing, VLANs, Firewalls, NAT, subnets. But having just studied for an AWS certification I was struck by how much less application those basic skills have in today's marketplace - the cloud has representations of those sure, but maintaining a homelab doesn't give you exposure to what a VPC or a Security Group is.

Also back in the day it was far cheaper to self host vs rent servers in the cloud - today (especially after energy price increases in the UK) self hosting my now much smaller homelab of 1x Intel NUC and two Synology boxes (250W total) is gonna cost about £640 a year in electricity alone. That'll buy me a lot of stuff in AWS/GCP, and frankly I'd rather have the practice and experience with a more relevant tech stack.


> * Elastix IP Phone Server (Asterisk PBX hooked up to Aastra IP phones in every room in my house)

How was your experience doing this? Also, how do you get upstream to "the phone system", for lack of a better phrase? I really like the idea of this but I know nothing about how the peering works.


Self hosting seemed so very daunting up until a year or so ago. I decided to give it a shot while struggling to find a way to keep my notes. OneNote isn't good (no Linux support), wasn't a fan of Evernote, Nuclino was crawling on my old laptop and I ended up finding BookStackApp.

This led me to find a cheap VPS, install it using the install script and then figure stuff out from there. It led me to setting up a home server and working my way through the entire setup - format and mount drives, automate backups, automate hdd health checks, setup smb, docker, traefik, emby and so on.

At this point I'm looking at experimenting with Proxmox as my server is overkill (it also made me realize how few resources are used in these setups... we end up needing 2-3000$ systems to just run an OS... which is absolutely ridiculous). Linux showed me that in order to do any meaningful work you don't need a 3k machine. In any case, I'm in the process of arranging ALL my notes in order and I plan on publishing a guide that walks a user through the setup step by step.

I know people are talking about a lot of the complexities, but you can always share your knowledge. Help someone set up an old Linux box to use as an SMB NAS; get them to install Jellyfin or Emby or Plex on it, and even with just that you have already massively helped them in the right direction. I think it's our responsibility to share our knowledge and empower people to migrate, or at least understand what's involved.


I truly do miss Picasa so much, and I'm still mad at google about its loss. It was used extensively in family history research centers, and did a great job of automatically picking out pictures of your ancestors in old photos. I wish google had open-sourced it. Losing Picasa seriously made me distrustful of putting my personal data (in this case all of the annotations) into a proprietary app. I prefer open source, but if I can't get that then the real line for me is open data format.


Fellow Picasa user in mourning here.

I have settled on XnView MP [0] as my Picasa replacement in terms of managing my photos locally.

It took a bit of UI + options tweaking to get it close, but it is pretty fast for my purposes (quickly browsing folders of photos). My main image library folder is synced on Dropbox so I can have a nice local version of things on each computer.

It doesn’t do AI stuff, but for photo management it gets the job done.

[0] https://www.xnview.com/en/xnviewmp/


My personal site is a static page on github pages, behind cloudflare. It almost never goes down. This page appears to be down, probably because of the HN traffic.

I don't buy counter arguments to “I don’t care, I have nothing to hide”. This is completely reasonable if you mean it as "It's extremely unlikely that github or cloudflare will ever choose to censor any content I care to share. Therefore I'm okay with taking that risk in exchange for the free, easy, scalable web hosting"


>It's extremely unlikely that github or cloudflare will ever choose to censor any content I care to share

are you absolutely certain that in 5, 10, 15 years, _all_ things you do and views you express that are widely acceptable today will continue to be so?


In 15 years I may be dead, and yes I'm pretty much certain.

Life is too short to worry about these things. I'll say controversial things in private or anonymously.


Cloudflare blocks readers running Tor or VPNs and GitHub has been down every day this week. I don't think your plan is bullet proof.


I'm not going to run Tor or a VPN from my static web page.


But readers might


I'm okay with forcing people to choose between anonymity and reading my content. I'm not going to publish anything that they have any reason to hide, with very high probability.


Maybe they don't want you to know who they are; you shouldn't get to dictate your readers' philosophy. ...Though in a sense, by choosing Cloudflare you are deciding for them, and they just won't get to see what you have to say (or won't bother training another machine learning system for free by solving hCAPTCHAs just to read your post).


That's a good post on the topic, thanks. Like a lot of others I'm a hybrid-self-hoster. I do rely on some third-party, third-party-hosted or other cloud services, but I also spend a lot of time bringing things back home when I can.

It's tricky to be in that hybrid-box since the conversation in this area is very dichotomous--cloud things OR my own thing--but overall I like keeping my options open and swimming with the herd ;-) in making sensible use of cloud services when it seems appropriate.


I think the granularity of control is just as important as where the app is hosted. It's perfectly valid to make a fair compromise on ease of management vs. being able to vendor your own versions. And especially with how great Tailscale/Wireguard networking is nowadays, you really can blur the line between your own network and a cloud provider.


The awesome-selfhosted list is ..well.. awesome, however it lacks a hardware category to collect links and information on cheap low power small devices to be used to host our personal data/services, both to avoid keeping beefier PCs up 24/7 and to better isolate their functions.

It could for example include from tiny boards with the bare minimum necessary to host very light services (Single NIC Raspberry/Orange/Nano/Banana PI, etc.) to small sized boards either with storage capabilities or a couple NICs to be used as moderate traffic firewalls (PCEngines.ch APU, etc) and bigger and more powerful systems with multiple NICs for SOHO+ sized servers and firewalls (www.ipu-system.de etc).

Also, this m.2 to SATA port replicator reportedly works perfectly with Linux (Possibly FreeBSD/XigmaNAS too) as do many other JMB575 based cards. Could turn any cheap board into a low cost NAS. https://www.ebay.com/itm/203735847811

Too long for Mini PCs accepting only 2242 modules? Here's the extender. https://www.ebay.com/itm/263570657382

..etc.


> this m.2 to SATA port replicator reportedly works perfectly with Linux

That's a port multiplier, and whether it works or not depends more on the SATA controller and less on the operating system. Far from all SATA controllers support port multipliers. And even fewer support fast port multipliers (FIS-based switching), which is not much of an issue if your main target is size.


I used to host company stuff on a single physical server I built and put in the datacenter. After some time (and traffic overload) I redid and migrated the company stuff into Kubernetes and the cloud, but I kept the server for personal services and it is still running as we speak. I only had to swap one RAID drive online during all those years.

It is a bit costly, but it hosts NextCloud photos, files, contacts, calendars and tasks; Dovecot and Exim for native email; Roundcube webmail; LDAP; and even authoritative BIND for a couple of domains, with a secondary replica on a VPS. Also gitolite git repo hosting and WireGuard. I use those self-hosted services daily from Android, my laptop and my desktop. It is the real bare metal thing. It has some AppArmor policies, fail2ban, and some Docker containers too.

Yes, it took a lot of time, configuration and constant small improvements, adding stuff one by one: DKIM, DMARC, DNSSEC, etc. It needs upgrades now and then, but the hard part was mainly done at the beginning! Huge upfront cost, and probably not for an ordinary hobbyist, true. But now the maintenance is quite OK. I can even SSH into it any time from the Termux Android terminal if I need to do something quick on the go, or download something fast for backup via the server's optical datalink.

Most people were lazy: they went for clouds and cloud-hosted services fast, they took shortcuts, and now we have what we see today. There used to be a funny Microsoft ad [1] where a family was proud to have a server in the house. :xD While I am not a fan of Microsoft, this also inspired me to go this self-hosting (pain) way and learn a lot along the way. :)

[1] http://www.jimhaven.com/microsoft-stay-at-home-server


> It gives you the peace of mind by keeping you in control of your data.

I like the sentiment and the points made, but the author uses this amorphous concept of "your data" throughout and I feel like it simplifies things a lot and conflates many different issues.

Most people shouldn't focus on self-hosting literally all the data related to them. This is a sort of perfectionist mental compulsion many of us on HN are familiar with. You have to decide what data you actually really don't want to live without in the rare event you lose access to it, and prioritize that. For most people, this data is not very complex: family photos and videos, an album by an obscure artist, a game you like to play every few years or hope to show your children.

If you are an activist, or someone creating dissident media, or something like that, you should already be wary of the cloud -- the incentives already drive you to use tools that are secure and self-host when needed.

If you truly don't like the ways the big tech companies are doing things, you should find ways to organize with others and demand change; otherwise you are just modifying your personal habits and thinking you are sticking it to the Man with a one-person boycott.


I agree with the issues raised, but I'd say there are costs and risks associated with self-hosting, and those aren't factored into the post.

Self-hosting will have the same appeal as off-the-grid power: It's expensive and technically complex to implement, comes with its own unique risks, and is way less convenient than sucking it down through the same pipe everyone else is. But it does provide a sense of empowerment.


I love promoting self-hosting.. self-host, self-host, self-host!

Having said that, I'd say: Choose your battles wisely...

You can run your hardware in X number of physical locations that you have access to (personal house, family etc.). But that doesn't always suffice for backups, so go with an additional cloud provider for additional backups.

Emails: Do you want to be hit with tonnes of spam traps because you're sending from an unknown IP (an individual doesn't send enough email to 'warm up' their IP)? Do you want to lose emails because your personal server had a power cut or its internet connection dropped?

Monitoring: I'd say for small-to-medium personal setups, a fair chunk of computing power goes to getting the level of monitoring, central logging and intrusion detection that someone (at least me) would be comfortable with in the current age. Maybe you'd use an external vendor for monitoring, since your home server monitoring itself won't detect when the server goes down.

Instant messaging: For iOS, at least, you need to jump through a bunch of hoops to send notifications to devices - should you use an external service for this?

Honestly, I'm rambling, but.. I absolutely recommend self-hosting everything.. with the caveat that the amount of effort that needs to go into setting up services you rely on on a daily basis is (or should be) pretty high.

I.e. if I wanted to set up a single service for myself that I _heavily_ relied on.. I probably wouldn't do it. If I wanted a bunch of applications.. then serving 5 applications from a k8s cluster, plus some additional work for monitoring, log management, backups and other bits and pieces, probably starts making sense.

On another note, for me, hosting things on your own, especially data/services that you truly care about, can sometimes have a keep-you-up-at-night feeling of "you don't know what you don't know".. what if someone is in my network.. what if there's a vulnerability in the VPN, firewall and X, Y, Z that hasn't been patched and someone is on my machine deleting/stealing my data? There are also people a lot more clever than you in the world, and plenty of people writing scripts to automatically break into services that require a little more knowledge than you have on the subject (whatever the attack vector may be).


PhotoPrism[1]+NextCloud is a potential solution to the Picasa problem. I run them on my personal NAS.

The devops experience is fine -- I can wrap up PWAs for all the devices (PCs and phones) in the family. Need to set up a few systemd timers to synchronize data, build indices and check for PhotoPrism app updates but that's not too bad. Docker makes deployment super easy.

The user experience, hmm, modern, minimalism, tolerable.

Modern = it knows about iPhone live photos and all sorts of photo metadata; has machine learning for classification. Recognizes faces. etc.

Minimalism = just a viewer, no photo editing (Picasa photo editing and the ability to put an album together into one picture totally rocks)

Tolerable = meh classification precision, slow geotagged map (dreaming of Picasa + Google Earth), NextCloud iOS autoupload constantly breaks (you want non-iCloud cloud on iOS and you're not a megacorp huh? good luck) etc.

Conclusion? It has been a decade since Picasa went away. I'd expect a lot more improvements to have happened, but in reality, the best thing we have now is just that. Some good, some bad, some ugly.

[1]: https://photoprism.app/


I'm writing PhotoStructure, which you might be interested in. It's self-hosted, but also runs on Windows and macOS without docker, libraries are portable, and photo and video deduplication is robust. Photoprism had a couple features I haven't built out yet, but I'm getting there. More details are here: https://photostructure.com/faq/why-photostructure/

Also, if nextcloud gives you attitude (I had scaling issues with it), know that there are several other alternatives to background phone syncing with your server: https://photostructure.com/faq/how-do-i-safely-store-files/#...


Very interesting project, and nice landing page! Will definitely check it out.

I'm a long time ownCloud/NextCloud user and I'm aware of the alternatives. With multiple Android phones coming and going over the past 8 years or so, the background upload has stood its ground.

The real problem here is iOS and its lack of proper background tasks. See: https://github.com/nextcloud/ios/issues/215 -- they tried every possible way to persuade iOS into running background sync, but still hit and miss.

I have to request access to my wife's iPhone and manually trigger some :)

One small suggestion here -- PhotoPrism went with `tensorflow.js` to load up classification models, and I recommend a "real" TF or PyTorch installation to properly leverage the computation resources. The difference is huge even running cpu-only because it's wasm vs. proper BLAS library.

I worked on a nodejs binding for native ONNX runtime (not publicly) so that's also a possible way out.


> I recommend a "real" TF or PyTorch installation

Yeah, PhotoStructure's feature of "runs everywhere" turns out to be a huge albatross around your neck (for me) when it comes to ML.

Currently, all features are available on all platforms--but having classification plugins that are only supported for specific hardware/OS combinations might be a reasonable solution.


Even better -- things like ONNX Runtime are intended to run everywhere, and take advantage of the cutting edge tensor processors (like the INT8 processors in latest ARM chips)


Like so many things, this is just all about trade-offs. Self-host is not a silver bullet, it just swaps in a different set of problems.

Risk is part of it. Cloud service disappearing, discontinuing, failing, changing pricing, or modifying product, vs fire/flood, theft, hardware failure or software update breaking things.

Responsibility for maintenance is a whole thing, too. Maybe you like that sort of thing, but it is still a time suck, and for most people it eventually gets boring (especially if it's similar to your day job). Do it less often and eventually you will find yourself upgrading something through major versions with all kinds of breaking changes.

Security is a constant concern, and it's unfortunately not as simple as "it's firewalled on my LAN with no inbound access"

Media disappearing from a cloud service is incredibly irritating, but you know what else is bad? Trying to watch a movie with your spouse but instead spending your evening diagnosing why your NAS refuses to boot.


Dismayed with the brittleness of Pinboard and the bloat of most alternatives I turned to self-hosting an excellent bookmark server called linkding[0] on a Raspberry Pi. Very happy with the result.

[0] https://github.com/sissbruecker/linkding


The main barrier is the difficulty of doing it, and there is currently an economic disincentive to fix this.

For software companies the cloud is DRM, and the only kind that works. Rent access to software and you can easily charge a recurring fee for it. This is incredible on the business side, especially because recurring revenue is valued higher by finance types than non-recurring revenue (due to perceived lower risk).

For makers of software you can self-host, money is often made through support. This creates a disincentive to make things too easy or you cut into support profits.

If you try to make a living making endpoint applications, life is hard. The FOSS movement has educated the market that software should always be free (as in beer, not freedom). People will pay $10 for a Starbucks drink but not $5 for an app they use every day.


I feel that a lot of what the OP mentions is not really solved by self-hosting. How does self-hosting solve Netflix problems? How does it stop Spotify changing your playlists? Sure, you can create your own jukebox of music files, but the reason you pay for Spotify is unlimited access to a lot more music than you would ever buy, plus easy use between devices.

There might be a few use-cases where self-hosting is a bit less risky than losing everything but I suspect for most people, the online services are just easier. That said, if you pay for stuff, you are more likely to get some proper support. I pay fastmail for my email because they provide me email and support in return for money. You can't use free GMail and then complain that they have broken something or locked you out.


I wish self hosting was a bit easier. Right now it seems you need to know so much. I've always wondered if there was a way of making self hosting products that were easy to set up and secure by default.

I'd love to spend $100 for a mail server that I just plug into my router, as an example.


I've been using an app called Mylio as a replacement to Picasa. Everything is locally hosted, the apps are very fast with large libraries and you can have peer to peer syncing between multiple devices for the same library, including your phone. I like it a lot!


Can recommend https://cloudron.io for those looking to get started with self-hosting who don't have a whole lot of time to figure out how to install/update a variety of apps.


I'm biased because I now work on it, but I think Urbit is the only way something like this will work for most people and at scale. "Only" is probably too strongly worded, but it's the one attempt I've seen where I think real success is one of the possible outcomes (other attempts I've seen don't fix deeper issues and are DOA).

The issues that caused the decentralized web to fail (and incentivize centralization) are deeper and to get self-hosting to work beyond the tiniest of niches requires rethinking some of the computing constraints we find ourselves operating under from first principles.

People will never run their own servers if that means administering linux. Identity will never be solved by PGP key signing parties and spam will always be a problem on the current web. Federated systems in their current state that require everyone to run linux servers and keep them in sync/up to date will not work.

https://moronlab.blogspot.com/2010/01/urbit-functional-progr...

https://urbit.org/understanding-urbit

On the current web we're just serfs allowed account access on company servers. I think it's admirable to make it easier to run your own server, but I think decades have shown that it won't work (beyond a narrow hyper-technical niche) without fixing some of the larger issues: https://zalberico.com/essay/2020/07/14/the-serfs-of-facebook... - the most exciting part of the web was what people thought it would bring in the 90s. I think that isn't impossible, but we're currently trapped in a local max. We can't get out of that local max without acknowledging why we're in it - why the centralized services are currently so much better and why the dream of everyone self-hosting (even with decades of effort) has been a failure.


A few years ago I moved from a nailed down Apache "Webspace" to a self-hosted nginx server on a virtual debian instance for 3.50 Euros a month.

I had to learn a bit about webserver configuration and best practises, and I have to adjust things every now and then, but in hindsight I should have done it earlier. Before, I constantly had to renew certificates manually (and the old hoster wanted not exactly little money for it) — now I can just use Let's Encrypt with automatic renewal.
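
For anyone on the fence about that switch, the renewal side really is just a couple of commands these days (the domain below is a placeholder; the distro's packaged systemd timer or cron job takes care of renewals afterwards):

    # issue a cert and let certbot wire it into nginx (example.com is a placeholder)
    sudo certbot --nginx -d example.com
    # confirm the automatic renewal path actually works
    sudo certbot renew --dry-run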

Before, I was limited to whatever my webhoster would allow me to run (which was basically only PHP); now I can run whatever I want behind an nginx reverse proxy.


Seems to me that there's a middle way. Self hosting is too hard, but making sure you've got local duplicates of all your stuff is less so.

As a simple example: I use Dropbox and Google Drive extensively. I'd like not to, but the utility and ease are hard to beat. But I have made an effort to only use Word and Excel (rather than gdoc/gsheet) and have hooked up my Synology so it backs up all my cloud services whenever there's a file change.

So - I'm not strictly self hosting, because it's too hard, but if Dropbox doubled price or Google stopped doing GDrive, I'm safe. Same with photos and other critical assets.


I think it's great self-hosting has caught on again. I've been doing it since the 1990s. I host multiple low-bandwidth domains on my old laptop over a Wireguard tunnel to my cheap AWS Lightsail instance for $3.50 with 1TB transfer built in. I've also hosted my own email all that time, so I always have an alternative to Gmail or others (Gmail is good, but I like options that I control). I host a custom service I wrote and run in Docker as well. I can and do use AWS services like CodeCommit and am looking into others, but for control, self-hosting is where it's at.


I would love to self host, but the time and effort I would have to put into setting it up, maintaining it, and convincing my spouse (which is a whole effort by itself) is so significant it would take away from my other goals in life.


Follow-up question:

Should someone interested in self-hosting do it from a literal PC in your basement, configured as a server?

Or is self-hosting on AWS / DreamHost / whatever good enough?

I ask because I like self-hosting a lot, especially when market solutions don’t really do what I need them to.

But security, man, that worries me. I can’t tell you what a three-way handshake truly is, or what a signed certificate really means: so self-hosting my own email / web server / etc. from my basement gives me a fear that someone, somewhere will take advantage of a vulnerability in some system component that I’ve never even heard of.


> Should someone interested in self-hosting do it from a literal PC in your basement, configured as a server?

It's a good place to start/test. But don't open your firewall: do all of your testing on your internal network. You really don't want to open your network to the kind of problems that can occur while you're learning.

When you're ready to really host things then you should rent a cheap shared instance, or maybe a low-priced dedicated server. You can pick up something decent for $10/mo. That's not much if you're skilled enough (eg, employable enough) to learn how to self-host.

For your internal network you can use a pi-hole to set up all of your DNS entries so you can even visit "http://example.com" and have it point to an IP on your LAN.
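
If I remember right, those local records are just lines in a flat file (the path below is from Pi-hole v5, so check your version; IP and name are just examples):

    # answer example.com with a LAN address
    echo "192.168.1.50 example.com" | sudo tee -a /etc/pihole/custom.list
    pihole restartdns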


I would not encourage someone who completely lacks experience in server/network management to do self-hosting, as it is easy to make mistakes.

Nevertheless, if someone is willing to dedicate some time for study and experimentation in the beginning, this is not an insurmountable problem.

I have been using self-hosting on "a literal PC in my basement" for about 20 years, without any problems whatsoever, and with negligible costs (the main cost being that I have a set of public IPv4 addresses and a fixed IPv4 address on my router connected to the ISP, which implied a more expensive monthly fee for the ISP).

After the first few months, during which I made frequent changes to the configuration as I understood better and better how it should work, the time spent on server management in the following years has been negligible, i.e. just a few hours per year, used mainly for software or hardware upgrades.

Configuring and managing services just for personal needs or for the needs of a small number of users, e.g. a family, is much simpler than in an enterprise setting.

For reliability, it is good to have a second spare computer and a second image of the root SSD/HDD used on your server, to be able to replace the active server in case of failure. As others have already mentioned, periodic backups should be done and they should preferably be stored in a different location.

While I believe that self-hosting is not difficult, unless someone has already done such management work as a professional, it is necessary to learn many things.

For security, the first thing needed is to understand well what a firewall does, which are the firewall rules needed by whatever services you want to host and how to configure and monitor whatever firewall program you choose.

For this, some knowledge about how the main IP protocols for networking work is necessary.
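
To make that concrete, a minimal default-deny setup with ufw might look roughly like this (the ports are just the usual SSH + web example; open only what you actually host):

    # default-deny inbound, allow outbound, then open only the services you serve
    sudo ufw default deny incoming
    sudo ufw default allow outgoing
    sudo ufw allow 22/tcp        # SSH
    sudo ufw allow 80,443/tcp    # web
    sudo ufw enable
    sudo ufw status verbose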

The management of keys and certificates is also important, as you have mentioned, but what you need to learn for this is much less than what you need to learn about networking protocols. You need the networking knowledge both to make a correct server configuration in the beginning and to diagnose any problems that appear later (usually because someone at your ISP changes something in their configuration, which breaks yours, and nobody who answers the support call has any idea that anything changed, so you had better be able to identify what they might have done yourself, if you want a quick solution).


Even better: do you really need "self hosting"? Many people will be fine with an external drive.

You can also set up something like a Synology, which is good enough for a layman, and if you keep it on your local network it is basically easier than configuring some old PC.


I self-host entirely on a Dreamhost VPS, precisely because of the issues you mention. I'm fairly experienced with many of the more technical aspects, but Dreamhost is more diligent than I am, and they stay abreast of issues I'm unaware of. So I handle the app layer (Nextcloud, FreshRSS, Fossil, etc.) and they handle the OS, web server (Apache, PHP, etc.), and certs (through Lets Encrypt). This balance has worked really well for me. No affiliation, just a customer since 2004.


For some things your local network is enough, like personal pictures and other private files. Email I would suggest hosting in a datacenter; not necessarily in AWS, but with a local company offering hosting.

For those who feel unable to securely self host, I'd suggest looking into smaller providers of hosted email solutions. A large number of federated services is better than everyone being on Google Workspace or Microsoft 365.


Self-host in your basement, use nginx as your reverse proxy, and add TLS with Let's Encrypt. I'd argue this is more secure than most modern applications.
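
The core of that setup is a short server block, something like this sketch (domain, cert paths, and upstream port are placeholders; certbot will usually fill in the ssl_* lines for you):

    # minimal reverse-proxy sketch, placeholders throughout
    server {
        listen 443 ssl;
        server_name app.example.com;
        ssl_certificate     /etc/letsencrypt/live/app.example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;
        location / {
            proxy_pass http://127.0.0.1:8080;   # the self-hosted app on localhost
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }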


If you need mail, you need a VPS with a good reputation. Otherwise, hosting from your basement is an option if you've got an accessible IP address.


I've been working on https://markdownsite.com/ - the "Git Repo -> Website" type of hosting platform, and have completely open sourced it so others can run it themselves.

The installation and on-going configuration management are first class things, with documentation and graphs: https://github.com/symkat/MarkdownSite/tree/master/devops


I do believe self-hosting is the future, this is why I changed the business model for my web analytics platform[0] from self-hosted + cloud to self-hosted only. By focusing purely on self-hosting, I can touch on many aspects that companies that promote their cloud offerings don't (server maintenance, monitoring, backups, alerting, etc.). This also forces the clients to give self-hosting a try if they want to use the app.

[0]: https://www.uxwizz.com/


Genuine question: does it make sense to go even more paranoid with self hosting?

1. buy a box at home

2. run it as an onion service (torrc sketch below): https://medium.com/axon-technologies/hosting-anonymous-websi...

3. access media using onion browser
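
For step 2, the server side of an onion service is only a couple of lines of torrc; a minimal sketch (paths and ports are the standard example values, not anything specific to this setup):

    # publish a local web server on 127.0.0.1:8080 as a hidden service
    HiddenServiceDir /var/lib/tor/mysite/
    HiddenServicePort 80 127.0.0.1:8080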

I believe the electricity cost of hosting at home would be significant, and accessibility will be a problem from 2000 miles away without a CDN. One might have to consider having this box on a separate network.

So anonymity here might not be worth the price?


https://sandstorm.io/ was meant to address this but seems moribund, sadly. Urbit comes to mind as well.


If anything, I'm hoping that the increase in self-hosting (if even by only just reasonably tech-savvy users at first) will start putting more and more pressure on the ISPs and associated infrastructure (from municipal to federal or however it is structured in your country).

Because at the end of the day, in most places connectivity is very heavily asymmetric, and as soon as you start self-hosting anything addressable from outside your LAN, your connection will go bust for all practical purposes.


I've been self hosting for nearly 10 years now, and in all this time the biggest pain point I remember is setting up OpenLDAP. That's why I created LLDAP (https://github.com/nitnelave/lldap), a minimalistic LDAP server with a nice web interface that is very easy to set up and configure. It needs a little bit more love before the 1.0 release, but it's already very usable.


That looks really nice, thanks for sharing!

Just curious about "<100 MB RAM including the DB"... that sounds like a lot for a small LDAP database, doesn't it? I don't remember slapd using that much. Is 100MB a realistic RAM consumption for lldap or is it more like "you can be sure it will never under any circumstance go beyond that"?


It's more that I never properly measured it, but I know that it doesn't use more than that :) I should probably get a more accurate figure in there.

EDIT: From docker stats, I can see the container using about 2.5MB RAM, which is more in line with what I had in mind. It might scale up a bit if you start having 100K users, though :D (but it's really not meant for this and the frontend will crash your browser way before because there's no pagination)


A pity that I haven't found this earlier. OpenLDAP is definitely a pain, especially if you want Docker+Alpine Container+Group Management. I created a Docker container for myself but I wasn't able to get the group feature to work. Maybe I'll check this out. Thank you for mentioning!


A good alternative would be decentralized peer-to-peer apps. I can see a YouTube-like app using a technology like torrents, where users also host the videos they watch. If it were easy to use, convenient and fast, I can't see any reason why such an app couldn't become successful. In a similar manner, I can't see why blogging platforms or p2p-based social media apps couldn't be successful, provided, of course, they are easy to use, convenient and performant.


I want to run my servers from both AWS and my laptop. At the moment the configuration and deployment of each is unique which, apart from being a bit of a hassle, also means there might be issues on one I cannot reproduce on the other. It would be really cool if there was a way I could deploy to my machine with awscli and self-host my own Beanstalk setup, so I can test and debug even offline, safe in the knowledge it will work exactly the same.

Are there any projects that offer something like this?


One thing I think would help the self-hosting community is a standardized method for tapping into repositories of scripts and functions. The next step is to build a UI on that platform, and then I can do admin things from a self-hosted UI that just runs several scripts for me behind the scenes. E.g. a button to check for upgrades to my email server, a button for upgrading my email server, etc.

If administrative configuration became standardized, then it will become commoditized by hosting platforms.


For those that avoid it on the grounds of "it is too hard to self-host", may I suggest a much simpler alternative? It takes two simple steps:

1) buy a domain name

2) Foment/patronize SMBs that can provide hosting for open source software alternatives.

That's it. By demanding open source alternatives, you are ensuring that the service vendor can not lock you in. By using your own domain, you get the freedom to port your services to anyone that offers better price/better support/better performance.


I've been thinking about buying rack space from a colo in my metro area.

Hosting at home is something I used to do religiously for over a decade, but I really don't like all the hackarounds and shitty ISP/DNS/port problems anymore.

It's definitely not cheap to do this, but there are a lot of fun upsides. Just having an excuse to get out of the house to badge in at a DC is a nice mix-up for me. Everything I do at work is cloud hosted, so I rarely get the visceral experience anymore.


>I really don't like all the hackarounds and shitty ISP/DNS/port problems anymore.

This is a not insignificant part of the reason why I'm in no hurry to move from my flawed apartment. Symmetric gigabit fibre with static ipv4 is a luxury not everyone appreciates but I sure do

Moving would be such a pain since rental agents don't get this at all. "Yes it has fast broadband"...what they mean is it has 4G reception if you lean out the right window.


Checkout https://coopcloud.tech "Co-op Cloud is a software stack that aims to make hosting libre software applications simple for small service providers such as tech co-operatives who are looking to standardise around an open, transparent and scalable infrastructure. It uses the latest container technologies and configurations are shared into the commons for the benefit of all."


Thanks for sharing! There are *a lot* of such solutions in the docker/k8s space, but I feel like mentioning some lower-tech solutions: alternC, ISPconfig, yunohost, libreserver, freedombox...


https://freedombox.org/ can make this easier. It is based on Debian and has a nice Web GUI. One can also order an appliance: https://www.olimex.com/Products/OLinuXino/Home-Server/Pionee...


I haven't seen it mentioned but there is Start9 [0] which is a privacy focused plug and play server. I own one but haven't had the time to set it and and use it properly.

It's a Raspberry Pi running their own OS and can host a variety of apps from their marketplace. But they seem to make it easy enough to add services that users develop.

[0] https://start9.com/latest/


>Whenever I bring this up people are like “I don’t care, I have nothing to hide”.

My feelings on this are similar but different, I do have things to hide, but I just don't care.


Been on this route for a while. Currently, I have:

- My blog (Jekyll + Apache 2 + nginx)

- An Invidious instance

- My VPN (WireGuard)

- A DNS server (Pi-hole + nginx for DNS-over-TLS)

- My password manager, up to a point (KeePass + OneDrive for backups and sync, but I'm thinking of ways to self-host that)

The big ones left are making my password manager self-hosted, email (not sure if I want to go beyond having my own domain yet) and code repo. I feel these need more reliable hardware and internet connections to be fully viable as self-hosted.


Author: unrelated to the topic but related to your blog; the theme link in the footer has a missing colon in the address. It is

https://https//github.com/nodejh/hugo-theme-mini

It should be

https://github.com/nodejh/hugo-theme-mini


I've been pretty happy with my local Unraid server. I have a few things running on it, including Plex for my music library and Nextcloud for notes, file storage, and automatic photo uploads from my phone.

The software and Nextcloud data are all on an SSD, but the Nextcloud data gets a nightly backup to a mechanical hard drive. The music doesn't have any backup, but I could always re-rip the CDs if I had to.


I self-host a ton of things! :) It's really much less hassle than people think. I started with Docker Compose and eventually started using my side project https://synpse.net/ for it, as it just helps to move things around and update things remotely. I just wish more tools embraced 12-factor app style deployment :)
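
Even the compose files tend to stay tiny; a typical single-service sketch looks roughly like this (nginx serving a static folder is just a stand-in for whatever app you actually run):

    # docker-compose.yml -- minimal single-service sketch
    version: "3"
    services:
      web:
        image: nginx:alpine
        ports:
          - "8080:80"
        volumes:
          - ./site:/usr/share/nginx/html:ro
        restart: unless-stopped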


I understand this, but I also... really like the cloud.

I can share, be social, get recommendations, not worry about backups or a lost computer, not maintain anything, access from my iPhone, etc.

I have thousands of photos and music collections lost on old laptops and hard drives that I'll never see again.

I know there's huge tradeoffs (as articulated here), but there's some really amazing things about the direction the web is going.


Couple weeks ago I made this post about self-hosting https://news.ycombinator.com/item?id=30618577

My conclusion coming out of that thread was self-hosting is not a thing I'm going to do. I don't have the time or energy to essentially take up the part-time job of managing my own self-host.


I like the Picasa example. I am stuck looking for a self-hosted Picasa alternative (of course with sharing and cloud stuff) :-(


Photoprism?


Flip-side:

I self-hosted my blog and email for over 10 years, everything automated - first with Perl and Bash scripts, then much later with Ansible. It was beautiful. But last year I moved to S3/CloudFront via CloudFormation for my blog and Migadu for email. It's even more beautiful because it's now somebody else's problem and also a hell of a lot cheaper.


What self hosting stories don't seem to focus enough on is backup and encryption, as these are the main issues with server-in-your-house hosting. Even disregarding fire/water damage it's not uncommon to have hard drives die outright, which is a problem if you didn't think to (or had the money to) set up zfs for data redundancy purposes.


I agree coming up with a good backup strategy is an essential ingredient to long-term-sustainable self-hosting.

Speaking for myself, I don't have the goal of 100% detaching myself from "the grid", so to speak. I still want to pay an ISP to act as a gateway to the internet, and want to pay the local electric company to power my house.

To me, "backups" are a commodity service, like internet service and electricity.

Dumb file servers are offered by any number of places for a price lower than the cost of in-housing that service, and with a negligible switching cost for my workload.

I'm personally OK with having one relatively shitty local mirror, and a background task that rsyncs to Backblaze. If BB makes noises about going under, I can migrate to AWS S3, rsync.net, DigitalOcean, whatever entity wants to charge me the least for my workload.
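
In practice that boils down to a nightly cron job along these lines (B2 is object storage, so I'm assuming rclone for the off-site leg; paths, remote and bucket names are placeholders):

    # local mirror first, then off-site copy to a B2 bucket
    rsync -a --delete /data/ /mnt/mirror/
    rclone sync /data b2remote:my-backup-bucket --transfers 8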

I don't think NAS's or ZFS are strict requirements, although playing with them can be fun.


> Whenever I bring this up people are like "I don't care, I have nothing to hide". But this is exactly similar to saying "I don't care about free speech because I have nothing to say"

Brilliant. I fully support this spirit. We need to self host more.

Here is another important one that self hosting would protect you from: censorship and ideological regulations imposed by a platform.


Can self hosting include hosting on ec2? To me it's a bit of a jump to assume the hardware is in our basement or something.


The Raspberry Pi is solving self-hosting issues for most people (size, power usage, simplicity). It's also bringing the price down, because for the price of two years of a paid Dropbox plan, you can set up your own Nextcloud instance + another backup drive if needed... plus all the bonus features (privacy, fast access at home, no ToSs to break, etc.).


Speaking of which.

Is there a good self-hosted version of Google Photos, by which I mean the critical features that make Google Photos so attractive:

a) very good mobile sync

b) creepily good contextual search (search by people, places etc.)

c) family sync and family albums based on b)

I would gladly pay more than what I pay for Google photos for a local version AND a setup fee to help me transfer.


Seemingly there isn't, which is a shame because a lot of people would benefit from it. Practically everyone has a smartphone and people's photo libraries keep growing and growing. When I was young and still naïve about the big G, Google photos was awesome but now it doesn't make sense to hand over all of my photos for them to mine and have to pay for storage too.

Right now my setup involves using Syncthing to get photos from my phone to my RPi-based NAS, where I'm running a Photoprism instance. On paper it looked great but Photoprism lacks polish and some important features. On the app side I planned to use PhotoSync to sync with Photoprism but didn't bother downloading it when I found out it wasn't open source and the Android version was ad-supported. A solid Android app that uses Photoprism as a backend and is as smooth and fast as Google Photos would be great to have.


I'm not affiliated but I came across some software called Yunohost (https://yunohost.org/) recently, a Debian-based OS that tries to be user-friendly for self-hosting applications. Not sure how much it's being maintained.


It's very active. I have been using it for years and they update often enough.


What exactly is self-hosting? Are you just running services in isolation?

Updates come from a central place, I guess. With some appliances, there is integrated federation, "cloud" access? Those can still compromise you.

Do you share hosting with your family and friends? Are they still "self-hosted", or are you their provider?


The FreedomBox project is a good example of making self-hosting easier:

https://www.freedombox.org/ https://wiki.debian.org/FreedomBox


Interesting contrast to this article[1] which was posted to HN several days ago

[1] https://greenash.net.au/thoughts/2022/03/i-dont-need-a-vps-a...


For those suggesting e2e encryption of data in Cloud services, how is that possible? How could you, for example, run Salesforce and have Salesforce only see encrypted data? Seems extremely complicated or impossible -- isn't the point of encryption that nothing can be done with it?


Site is down. I guess we've learned the limit of this self-hosting advocate's self-hosted setup.


The examples he gives are all the small downsides of cloud hosting, but the huge upsides are clear to consumers and are the reason we all use them. Don't tell me that you really want to self-host your YouTube playlists; the market of people who want that is incredibly small.


Although I once loved the idea of self-hosting, my opinion nowadays is that life is too short to self-host. Yea platforms will come and go and sometimes it sucks, but what we really need is easy ways to move data from one place to another, more than we need self-hosting.


How would that work, self hosting Spotify and YouTube?

In theory you could probably find ways to rip and download everything you want to save, but it would require a massive amount of storage space just to be sure you never lose things that have a tiny chance of being missed.


For those advocating e2e encryption instead, is that even possible with most cloud services? How can you encrypt Salesforce data, for example, and still have Salesforce perform all of the necessary operations on that data, if they can't even see it?


The irony of an article titled "Start Self Hosting" having its site go down


You're missing the point if you think uptime is the number 1 priority.


So much underestimation here regarding what it takes to have reliable, secure and resilient self hosted services. I've seen far too many disasters because somebody thought it was just easier/cheaper to self host.


Self hosting has a lot of issues, but unless we discuss them we will never move in that direction.


As a user of a service, unless it is for learning or paranoia reasons, it makes no sense to try to self host, the same way it doesn't make sense to have your own mechanic workshop in case your car breaks or your own hospital in case you need medical attention.


The mechanic only offers the service. He works on our equipment and then he is done with it. At no point does the mechanic keep a part of the car for himself and use it to his own advantage. Nor can he suspend your access to your car. In fact, if he does pull something like that, the system will run a full enquiry into it and punish him.

As a user of service I want guarantee that my data will be processed as per my requirement and then it will be left alone.


>As a user of service I want guarantee that my data will be processed as per my requirement and then it will be left alone.

This is totally orthogonal to self hosting. You can ask for a service to be operated for you and also ask for your data to not be processed in any way and be accessible only to you. Of course this is not free, which is what most people want and what you're mixing up here.


Remember that self hosting doesn't necessarily mean having to manage a server - not in the traditional sense at least. Many of the things mentioned in TFA are fairly trivial to set up with a consumer-grade NAS.


There are projects out there trying to make it easier for beginners, like Yunohost, FreedomBox, and FreedomBone.

YunoHost in particular has a nice community, they are worth supporting if you're into democratizing self hosting.


The main problems with self hosting are securing the server for remote access, and maintenance.

If you can keep it local, Synology has good boxes that are reliable and largely plug and play. They require little to no maintenance.


I run Caprover on a $5 Linode VPS, and it makes it easy to spin up new apps from a curated selection or from a Docker Compose file. I checked out Dokku, but the learning curve out of the box was harder.


Hosting a server from my house is a violation of the TOS from my ISP. My choices are them or DSL. I don't live in a rural area. I'm well inside a metro area of 5 million people in the USA.


You need to change your laws to make that TOS illegal.

I have mailed the Swedish govt. agency that controls telecom (PTS, "Post och Tele Styrelsen" or "Post Traumatic Stress" both work in this case).

ISPs should have to provide static IPs and open all ports, pressure your local authorities and spread the word.


Break your TOS. An ISP that doesn't allow you to use the internet is a web service provider.


So get a VPS.


Which makes the whole point of the post I just read moot. I just switch one company that controls my stuff to another company that controls my stuff.


I love self hosting but am very concerned about the reliability, security and stability of these systems, e.g. outages, disk crashes, out of memory… Looking forward to seeing more articles about these problems.


I explicitly do not want to be in control of my own data. I don't trust myself with it. A third-party is better equipped to manage it over time. This is both a common and rational position.


Can you trust yourself with passwords for true e2e encrypted traffic? That could work too...


With regards to self-hosting, https://selfhosted.libhunt.com/ could be helpful.

Disclosure: I'm the founder of LibHunt.


p.s. all open source projects mentioned on this thread could be found here: https://www.libhunt.com/posts/658517-start-self-hosting


> "But if you cannot wait, head over to r/selfhosted"

The irony of this blog post is telling me to visit a non-self hosted cloud service to get started self-hosting.


I self-host everything but my email.

Hosting email is just too much. Big providers just treat you as guilty of spam, unless proven otherwise. Just too many hoops to jump through.


It's not self hosting if it's not on your hardware, on your property, using your internet connection.

It's still just having some other random person or company hosting it FOR you.

The single biggest problem with real self-hosting is not a software one. Even with all the software and hardware set up and working, the real problem is the limits imposed on normal internet subscriptions by modern ISPs: carrier-grade NAT, lack of access to publicly routable IPs, and lack of smarthosting combined with the ISP blocking outgoing ports (which is somehow not illegal).


You're also not hosting any parties if you rent your home. But mainly that's because your pedantry will not be entertaining enough to keep any guests. Others however will realize that hosting your own services on a generic VPS or dedicated server gives you much more freedom than a you-are-the-product service that does anything legally possible and then some to lock you in. Sure, hosting on your own hardware in a place you control has advantages, but as you said it is also significantly more difficult so that is no excuse to dismiss other options.


I actually think it's the best excuse to dismiss other options, as those options make self-hosting even harder: they give the ISPs ample reason to keep restrictions in place ("Oh, you don't NEED to host it yourself, you can just trust your data to live on some random machine somewhere else").

What I dislike about this use of the term "selfhosting" is that it's practically an advertisement for hosting providers vaguely disguised as a way to privacy and freedom, which it most certainly is not.


I'd love to self-host something like Picasa or Google Photos. Alas, there are not too many choices that can replicate the experience.


I vastly prefer local applications to self-hosting. The Internet really needs to return to its original roots: a network of equal peers.


Great relevant podcast: https://selfhosted.show/


If someone is interested in why Picasa was mentioned, it's because of the face recognition. Still the best offline version there is.


What we really need is a hosting provider that cares about privacy, i.e. isn't Amazon, Google, or Microsoft.


Great fun to make, a lifetime to maintain.


Wanted to mention FreedomBox, LibreServer, Epicyon, and Retroshare. Any others worth mentioning?


I’m all for self hosting, and I run multiple services for myself and family. But I really don’t like this post. I think the arguments don’t make much sense.

Do I remember Picasa? Sure. Do I remember it being the ‘end all be all’ of photo management software, that was egregiously replaced with the inferior Google photos? No. I’ve never heard anyone argue that Picasa was The Goat, and we’ve never had anything better since.

I get the point they are trying to make, that’s just a really bad example. Same for the other examples. Like a song disappearing from your Spotify playlist? Well that’s not the reason we use Spotify. We use it because purchasing and maintaining a static collection of music in 2022 is expensive. In time & money.

But the greater problem, I think most of us here are aware of, is self hosting is simply not possible for what, at least 95% of the population (and that’s probably generous). Yeah, for software developers sure. We have experience & knowledge in regards to installing software & maintaining a Linux server. Nobody else does.

I know there are some services and companies trying to solve this problem, and I’m encouraged by that. I hope they come up with good solutions and people take notice, because I’m totally on board with the self hosting ethos.

We can easily build a package of self-hosted software that fills all the needs currently met by Big Tech. The way I see it, if you have a Protonmail or Fastmail account and combine that with Mastodon, Matrix and Nextcloud servers, you have everything you need and it's a huge win for your privacy. I just can't figure out how to convince my friends and family to jump on board :)


Consider self hosting instead of falling for the ponzi of the decentralization of web3


Yes. SaaS has betrayed us all. We’ve learned a lesson. Our data, our computer.


Agreed, we've given up too much control, privacy and sense of ownership.


Irony.

Hosting a list of applications for self-hosting on a SaaS platform.


Nobody is talking about the FreedomBox? https://freedombox.org/


It only solves those problems if you can maintain the same level of availability and integrity. What happens when you have an outage, when your hard drive dies, or when your house burns down? What if you forgot to encrypt your data and your drive gets stolen? Can you patch your server's vulnerabilities faster than company X? What about simple maintenance like certificate renewal? It's not that easy to be better than a tech company.

Regarding pure privacy, it's getting better with new regulations like GDPR, but yes, the best is to not share data with third parties.


Could IPFS Filecoin solve this problem?


I HATE the Spotify podcast player.

It is the worst UI for pretty much anything: music, video, podcast, lyrics ...

I self-host ... I download the Spotify-exclusive podcasts and host them myself to use them with Overcast. They come as OPUS files, but ffmpeg to the rescue.
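
The conversion is a one-liner (filenames are placeholders):

    # transcode the downloaded OPUS file to MP3 for the podcast app
    ffmpeg -i episode.opus -c:a libmp3lame -q:a 2 episode.mp3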


Syncthing, baby.


Philosophizing on your blog seems to be the new way to tilt at windmills. If you're actually interested in self-hosting, https://github.com/awesome-selfhosted/awesome-selfhosted is a great resource for self-hosted apps. Roll up your sleeves, get prepared to get lost in documentation, and have some fun! You'll realize the tradeoffs of what to self-host and what not-to quickly as you start playing around with actual technologies. Just remember that your life is production and if you're self-hosting XMPP for your family, you may want to be confident you know how to run XMPP before pushing everyone onto it, so maybe setup a lab or staging environment. But that's fine, it's part of the process! Stop writing screeds and start actually self-hosting.

EDIT: Since I'm mostly just reposting the link that OP links in their post, I'll add a couple fun things that I use a lot with self-hosting.

https://hoppy.network/ lets you set up a Wireguard tunnel to have your own static IPv4 /32 and IPv6 /128.

https://freerangecloud.com/ gives you similar products but also lets you do things like colocating a Raspberry Pi or getting a VPS at an IX

https://www.zerotier.com/ can effortlessly setup a private network between hosts

There's more I'm sure, but I like these.


> Philosophizing on your blog seems to be the new way to tilt at windmills.

It's not the first time I've seen such comments, or sentiments close to them, regarding the content of developer blogs when one gets shared here.

I ask most sincerely: isn’t that just one of the many reasons people chose to launch a personal blog in the first place?


It surely is. I prefer less of it which is why I made my comment.


That blog post literally mentions that link.


I know. Now I made a comment that helps self-hosters just as much as the OP with much less text and much less moralizing.


One of the most important aspects of choosing a solution is understanding the problem first.

There's a place for both:

1. Blogs that moralize and talk about a much larger philosophical underlying problem. These help the reader understand a problem that they may not have fully understood. Before, the problem was: "I need a place to host my photos". If that's your only problem, there's no reason not to choose something easy like Google Photos.

Only by digging deeper does one start to understand that there's more to it than this, and choosing certain solutions bring with those solutions a whole set of new problems. Now, you realize "I need a place to host my photos and I need it to provide a certain level of privacy, and a certain degree of predictability..." etc. A set of problems that can be solved by self hosting.

2. Blogs that are solution oriented. You already know what you want, now go do it.

If all you ever present are solutions, the reader is left to wonder why they'd ever invest the time and effort in doing something that is much easier elsewhere. An investment that does start to make sense if you have problems with the implications of hosting elsewhere.