Fun todo: install this somewhere, nmap it for open ports, then ask: "How many of these services had a remotely exploitable CVE in the last year?" and "If one of these services had one tomorrow, would I know to patch it and take action faster than someone could take over my box?" I don't see any containment mechanisms on these services beyond what's included by default, so a compromise of one service likely leads to total compromise of the entire box.
I had to think about this a lot with AlgoVPN (https://github.com/trailofbits/algo), and we built a system with no out-of-the-box remote administration, strong isolation between services with AppArmor, CPU accounting, and privilege reductions, and limited third party dependencies and software. You can't count on a full-time, expert system administrator.
For some things, I'm happy to use SaaS providers, where they are responsible for the whole stack. For others, I'm happy to use apps, where they just provide the code, and I provide the platform. But for a number of things, I want something in between: I provide storage and compute, they provide code and operations.
Bitwarden for me is a good example. They're a password manager who provides their backend as a docker container anybody can run. I like that, as I don't really want them to have my data, and if they go out of business, I don't want to be cut off from my passwords. But I won't run the backend myself, because I don't have the time and expertise necessary to make sure it stays secure.
Another good example is photo hosting. I would rather keep all my photos on space I control. But I also need modern, maintained software for syncing, serving, and controlling access to photos and related data. I'm happy to pay somebody to make and maintain that software, but not nearly as happy if that means that at any point they might shut down and take my data with them.
I suspect we're headed toward a future where people like Synology and Digital Ocean sell storage+compute, and then other companies sell and maintain user-selected software that runs on those environments. Basically, some sort of app store for servers. But I'd love to see this happen in an open, nonproprietary way, as the drawbacks of Apple's and Google's app stores have become pretty clear.
The notion that every document is its own independent unit sounds pretty menacing to me. Could be fine for some things, but getting things running there is sounding like a fair bit of work to me, and very limited.
And then this part is especially bad: "[maybe someday] You won't have to deal with payments | Eventually, we hope to make Sandstorm implement in-app purchases and deposit the proceeds directly to your bank account"
Right there, a lot of the incentive to integrate has leaked out. "Please build for our platform in exchange for no money" is not quite the worst offer I've had, but it's definitely not appealing. Looking through the Wayback Machine, I see
But the part that really concerns me is that they seem to think that server apps can run like mobile apps. To me one of the most powerful things about SaaS products is that the aggregated use information drives both product and operational improvements and allows rapid response to bugs and issues. So as a developer choosing between this and a SaaS approach, this feels like having one foot in a bucket to me.
For example, I recently outsourced my mail hosting to Fastmail precisely because I want experts to run things. Would I be happier if the data were stored somewhere I control? Definitely. But not if that means the experts aren't paying attention anymore.
But the end goal is pretty well worth it: any grain is incredibly secure by default, and for the most part, app vulnerabilities are irrelevant. A grain where only you have access doesn't need any sort of authentication or security in the app at all. And since each document is its own sandbox, sharing a document with someone doesn't give them a way in to exploit access to your other documents, as might happen with a vulnerability in a more traditional design.
The business model story for selling Sandstorm apps isn't super great right now. You could probably have a licensing model that requests network access through the Powerbox to check a license or something, but in many cases there's already a wide variety of great open source apps that are free and just frustrating to host and manage without a platform like Sandstorm (or Cloudron). (EDIT: Now that I think about it, Sandstorm used to have a paid license key/feature key system that made no callbacks; I think the licensing info was encrypted asymmetrically.)
As for your support of SaaS data collection, I just can't really agree with you: People who want to give data to a developer can choose to do so, but I think it's ethically wrong to collect data without permission. (Sandstorm servers do have the ability to opt in to provide basic app usage data back to Sandstorm's development team.)
I love FastMail, and have been an enthusiastic customer since 2016. :)
> But the part that really concerns me is that they seem to think that server apps can run like mobile apps. To me one of the most powerful things about SaaS products is that the aggregated use...
That definitely seems like the current state of affairs, but I don't think there's any inherent reason why writing server apps like mobile apps has to throw away aggregated data. Especially with Google's research in federated learning, apps may soon be able to get insights across their users while preserving their privacy.
This is the same concern I have with self-hosting anything with sensitive personal information on it. Without continuous monitoring, alerts and periodic review of audit trails, it’s anybody’s guess what’s going on with all the self-hosters’ data. With larger companies that provide a SaaS solution, there’s a little more hope that someone is looking at this seriously all the time.
You can also comment out the bits you don't want from https://github.com/sovereign/sovereign/blob/master/site.yml before you run the top level playbook.
For extra security, bind ssh to localhost only and run a tor hidden service on the machine for accessing it.
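A sketch of that setup on a Debian-style system (file paths are the common defaults; the client-side proxy command assumes OpenBSD netcat and Tor's default SOCKS port):

```text
# /etc/ssh/sshd_config -- listen on loopback only
ListenAddress 127.0.0.1

# /etc/tor/torrc -- publish sshd as a hidden service
HiddenServiceDir /var/lib/tor/ssh/
HiddenServicePort 22 127.0.0.1:22
```

After restarting tor, the onion hostname appears in /var/lib/tor/ssh/hostname, and clients connect with something like `ssh -o ProxyCommand='nc -X 5 -x 127.0.0.1:9050 %h %p' user@<onion>.onion`.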
If I were to provide services for me and my family (or for a small company), I wouldn't make them publicly available at all.
I would have every device connected to them over VPN (OpenVPN, WireGuard, ZeroTier). Of course it would prevent self-registration, and would take some work to distribute keys — but by definition we are a small operation, so this is manageable.
No service would ever listen on a publicly accessible IP. The machine(s) hosting that would firewall off all other incoming connections, except the VPN and SSH for admin purposes. I hope I would be able to quickly address CVEs in these two services, plus the kernel.
A setup like this is already pretty standard with AWS, but you can reproduce it almost anywhere, including your own physical box(es).
The weakest link with this setup is the client computers. So inside the VPN you still need good security practices — but the attack surface becomes much smaller, and a DDoS becomes harder to pull off.
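A minimal sketch of that layout with WireGuard and nftables (addresses, keys, and ports are placeholders):

```text
# /etc/wireguard/wg0.conf (server side); one [Peer] block per device
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
PublicKey = <laptop-public-key>
AllowedIPs = 10.8.0.2/32
```

```text
# /etc/nftables.conf fragment: default drop, allow only WireGuard + SSH
table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;
    ct state established,related accept
    iif "lo" accept
    udp dport 51820 accept      # WireGuard handshake/transport
    tcp dport 22 accept         # SSH for admin
    iifname "wg0" accept        # all other services, reachable only via the VPN
  }
}
```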
> Don’t want one or more of the above services? Comment out the relevant role in site.yml.
Also, there are so many great FLOSS alternatives to Google Apps. This repo contains some, but here are some of my favorites:
- https://nextcloud.com/ (I prefer this over OwnCloud)
- https://mailu.io/ (basically a Docker-based deployment of Postfix/Dovecot/etc)
- Server - https://matrix.org/
- Client - https://about.riot.im/
- (I prefer Matrix.org over Jabber/XMPP)
Don't worry, they serve very different purposes. You probably already know, but Docker is for running applications in isolation, while Ansible is for provisioning and configuring hosts. For instance, you'd use Ansible, not Docker, to harden sshd on your hosts.
You may want to put some effort (not that much) into managing your credentials with Ansible Vault and you can try your playbooks e.g. on a Vagrant Machine before applying them to a real host.
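The workflow is roughly (file and inventory names are illustrative):

```shell
# Keep secrets encrypted in version control
ansible-vault create group_vars/all/vault.yml
ansible-vault edit group_vars/all/vault.yml

# Rehearse against a disposable Vagrant box, in dry-run mode
vagrant up
ansible-playbook -i vagrant-inventory site.yml --check --diff --ask-vault-pass

# Then apply for real
ansible-playbook -i production site.yml --ask-vault-pass
```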
Any tutorial will do for the beginning, but you should always note which version of Ansible you are using (vs. the one used in the tutorial), as features change and there have also been some changes to the syntax to improve the readability of playbooks.
Uses ansible + docker to setup and run a home server. Its feature-set is close to that of a typical FreeNAS setup in a home environment.
If you are concerned about Ansible SSH’ing in (which means you are concerned about any person SSH’ing in), you just do the standard SSH hardening.
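For reference, a typical hardening baseline in /etc/ssh/sshd_config (the AllowUsers name is a placeholder):

```text
PermitRootLogin no
PasswordAuthentication no
KbdInteractiveAuthentication no
PubkeyAuthentication yes
AllowUsers admin
```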
Because you (or some intern hired a week ago) are 1 typo away from making a disaster.
For the record, security includes availability and this is an availability risk.
Don't waste time on Docker; for this purpose use LXD containers. LXD containers are generally more secure than Docker and have supported unprivileged containers for over five years. You can use your Ansible scripts to create and manage your container images the same way you manage bare metal or a VM. Indeed, modern LXD can seamlessly manage both VMs and Linux containers, because when you need direct access to the underlying host hardware, a VM is still more secure than a container. Also, with mount syscall interception in LXD 3.19, it is possible to mount NFS inside an unprivileged container running in a user namespace in a more secure way.
Initially Docker itself was built using LXC, but then it decided to reinvent the wheel and build its own libcontainer, without any significant advantage over LXC; just NIH. Obviously, given all the money that went into Docker and then Kubernetes, Docker is more famous despite being inferior, because of the marketing money spent on it, much like Java, a language that became famous because Sun spent over 500 million dollars on marketing in its early years. Kubernetes is famous and a valid tool for Google-scale problems, but for 90% of startups LXD is a better fit. It's the same fight as in the old days, when inferior technology won through sheer marketing: Blu-ray won but turned out not as useful, and so it is with Docker.
Don’t get me wrong, Docker has made some stupid technology decisions, but the network effect of so many adopters means that it should be the default choice for any container situation.
That's the kicker that got them over the line. Look at usage graphs and docker is running circles around Linux containers. I use docker professionally but privately will use lxd barring some complicated setups that are a docker pull away.
Truly can't stand some of their design choices, e.g. the utter disdain for iptables, and even non-technical ones like requiring signup to get the daemon on Windows/Mac. It's frustrating software overall whose only saving grace is the ecosystem around it.
People assume that just because Docker and lxd do "containers", they are somehow equivalent and it's simply a matter of choosing between the two. But they are far from equivalent. It's not a choice between two competing technologies that achieve the same goal at all.
Personally, I'd prefer a model like Guix System over containers for running systems and services. I find that solution much more elegant than Linux containers or distributions. Linux containers initially came as chroot, jails, and zones; then Google's addition of cgroups and namespaces made them popular as LXC, which was later adopted and forked by Docker to make something complicated. These are bolt-on solutions for immutable infrastructure. Guix is designed from the ground up to be a new OS for 21st-century server and application infrastructure. Maybe it, or one of its derivatives, will become mainstream in time.
Please explain. This statement doesn't look correct to me. Both use the same technology - namespaces and cgroups. LXC is just meant to host the full OS installs, so you have to manually do things like "apt upgrade", resolve all breaking changes manually etc. So you end up with bunch of VM-like full OS installs, taking lots of time to manage.
Docker is basically the same, except that there are layers of filesystem data and the base OSes are minimal. Minimal also means fewer attack vectors, btw. Now all the required dependencies are in the image, and you can prepare a new version on your laptop, resolve breaking changes, test it properly, and then easily deploy.
So why are LXC containers more secure?
Every image in LXD can be hosted locally, including the base one, privately, so you do not need to rely on inspecting a hotch-potch of Dockerfiles, scripts, and pulls from other Docker images to know what's inside.
Now with the release of LXD 3.19 they introduced syscall interception, so even unprivileged containers running in a user namespace can securely access hardware. For example, NFS can be mounted inside an unprivileged container. I haven't tried the latest Docker, but in older versions you could not mount NFS from inside a container without running it in privileged mode with kernel access.
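If I recall the LXD options correctly, enabling it looks something like this (the container name is illustrative):

```shell
# Host: allow the unprivileged container to perform (filtered) mounts
lxc config set mycontainer security.syscalls.intercept.mount true
lxc config set mycontainer security.syscalls.intercept.mount.allowed nfs
lxc restart mycontainer

# Inside the container:
mount -t nfs fileserver:/export /mnt
```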
You wouldn't use docker to manage the host networking (which is where ansible comes in), but packaging whatever is listening on ports as a container works really well for me
I run a similar project called Ansible-NAS - https://github.com/davestephens/ansible-nas - which originally came about because I fell out of love with FreeNAS, and felt I could do a better job with Ubuntu, Ansible, and a bunch of Docker images.
Sovereign is awesome, I've been watching it for a while, but I'm not keen on everything being installed directly onto the system which is what I tried to solve with Ansible-NAS.
Using docker for such things is like putting a square peg in a small round hole.
It also doesn't usually make sense to put SSH in a docker image or container, as you can enter the container using docker exec
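For example:

```shell
# Interactive shell in a running container, no sshd required
docker exec -it mycontainer /bin/sh

# Or a one-off command
docker exec mycontainer ps aux
```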
The first step to learning Docker, downloading it, is irritating: getting Docker Desktop on Mac or Windows requires creating a Docker Hub account and signing in! There's even a long issue thread about this on GitHub, and the response was totally irrelevant. Luckily, for those who don't want to jump through these hoops (via disposable addresses or reusable shared logins) or provide an email address, many people have posted direct links to the different downloads available.
"Why are you doing this?"
"To improve user experience"
"But now I have to sign up for an account whereas before I could just download"
"Yes this will improve the experience for everyone over time"
Are you building every container yourself? I hope so: https://blog.banyansecurity.io/blog/over-30-of-official-imag...
I got started by just grabbing a $5 DigitalOcean droplet (can get them with Docker pre-installed) and then played around trying to setup a simple app. (I think it was RocketChat.)
Not a resource, but Docker containers you might want to run in your homelab: check out awesome-docker on GitHub, or Katacoda.
Both products are backed by companies and both are doing quite well. I would say Nextcloud keeps expanding its use cases and thus makes its product more extensible via plugins. This can be good or bad depending on how you look at it: plugins go unmaintained or incompatible over time and are a constant source of pain when upgrading. WordPress gets away with this because it has a massive community.
ownCloud, on the other hand, has decided to double down on its roots of file sharing/syncing. I heard they rewrote their stack from PHP to Go, and the frontend is now React.
You can think of it in much the same way as OpenOffice vs LibreOffice: devs fork to make a new product, the "original" product stagnates and is mostly used for rent-seeking.
The downside of both is that, to my ears, both "OpenOffice" and "OwnCloud" better signify to outsiders what the product accomplishes, while "LibreOffice" and "NextCloud" really don't, unless you're already familiar with the product or product history.
If this project piques your interest, please consider contributing! We could really use more helping hands.
Ansible is easy to learn and most (not all!) problems due to new versions are easy to fix.
Also, if you only want to use a fraction of what Sovereign has to offer to reduce your server's attack surface, that's easy! Just follow the instructions.
Every time I attempt to use Ansible (or its kin) to manage my own network, it feels overly obtuse and ultimately unhelpful. Its gains seem to be rooted in configuring a large number of identical servers, and isn't geared for a handful of hosts with some commonalities and some differences. Writing playbooks feels like a still-imperative wrapper around shell commands, just in a bespoke and verbose YAML syntax.
Instead I am using my own script that runs a tree of files through a template engine, drops them on each host being configured, and then runs triggers based on what has changed. This seems utterly simplistic, lacks polish, eschews common practices, etc. But the overall configuration seems straightforwardly grokkable compared to the heavy tools.
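A toy sketch of that scheme (stdlib-only here for brevity; the commenter's real setup uses jinja2, and the trigger table is hypothetical):

```python
from pathlib import Path
from string import Template

# Map rendered files to the command that should run on the host when they change.
TRIGGERS = {"etc/network/interfaces": "systemctl restart networking"}

def render_tree(src: Path, dest: Path, variables: dict) -> list[str]:
    """Render every file under src into dest; return relative paths that changed."""
    changed = []
    for tmpl in src.rglob("*"):
        if not tmpl.is_file():
            continue
        rel = tmpl.relative_to(src)
        rendered = Template(tmpl.read_text()).safe_substitute(variables)
        out = dest / rel
        old = out.read_text() if out.exists() else None
        if old != rendered:
            out.parent.mkdir(parents=True, exist_ok=True)
            out.write_text(rendered)
            changed.append(str(rel))
    return changed

def triggers_for(changed: list[str]) -> list[str]:
    """Commands to run (e.g. service restarts) for the files that changed."""
    return [TRIGGERS[path] for path in changed if path in TRIGGERS]
```

In the real version, `dest` would be a staging copy that gets pushed to each host over ssh, and the trigger commands would run remotely after the push.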
Not saying you couldn't do that with custom scripts, but I found that when I tried writing my own admin scripts I was solving problems that the ansible team has already solved. It's a matter of what you want to spend time on IMO.
Having said that, I am to the point where it would be really nice if my ssh pushes ran in parallel, which is one of those robust niceties you give up by going your own way. So I'll have to revisit Fabric because it would be complementary - thanks for the reminder!
Putting everything in YAML is helpful because no matter what service you are setting up, the format is the same to read and understand. And I hope you are using modules and not shell command directly (except for the small cases it is necessary)
Ansible can be as simple or as complex as you made it. I bet your script isn’t as nice as an Ansible setup, nor as maintainable. Ansible’s templating engine is super flexible.
Ansible uses jinja2, and so do I. I obviously can't form an objective opinion on the "niceness", having developed it. But I'll say that my setup puts which files are on which hosts front and center, whereas it seems like Ansible would want that splayed out into a directory tree of "roles" (of which I'd have about 25). With my setup, I've got that in 200 lines of python including host/group definitions (excluding comments), with the actual config files living in one analogous tree (eg conf/etc/network/interfaces).
Half the reason I threw my comment out there was to see what bounced back, as I am currently working on this system. One of the responses was about error checking, which resonated. So I've since given Ansible another shot, for managing the top-level (calling my templater for each host, apt upgrade, etc). I think I do like it for this, now that it's much closer to its sweet spot of doing similar things to every host.
Overall my goal is to write very few on-host configuration scripts, and prefer overwriting files. For example, most triggers are simply service restarts, which can also be performed by a full reboot.
Disclaimer: I am the co-founder
It has many of the same apps as Cloudron and is completely free. And you can of course host your own Dockerfiles on top of it.
I think the pricing for Cloudron is way off. I'm not going to spend 5 dollars per month for a DigitalOcean droplet and then 30 dollars per month to host a few open-source apps on that Droplet. Especially since CapRover does 99% of this for free.
As for the pricing, agreed that it might be out of reach for personal use. I think it really depends on what you get out of the product. Our target at the moment is primarily family/small businesses/IT teams. For example, a business runs website/file sharing/mail/chat/analytics/crm/forum on a droplet. The server would be around 20/month (4GB server) and then 30 on top of it. For personal use, all my stuff is on Cloudron (calendar/contacts/email/blogs/website/rss/media/docs/notes). I cannot imagine relying on a product which is 5/month for all my stuff.
Besides, having complete control and owning your data is priceless. If you don't value that, I am not sure why one would self-host at all.
I mean, you could play an infinite regress game. Do you own the hardware? Do you own the cage the hardware is in? Do you own the building that the cage is in, and the land that the building is on? And then we can go toward owning the power company and the connections to anybody your servers talk to.
But in practice, self-hosting is about control. If what you're running it on is a commodity cloud instance that you could get from a half-dozen providers, then any one cloud provider has very little leverage over you.
Also, I think there are other similar popular terms. For those who run in their own premises, the term is on-premise. For those running it home, usually they call it home lab/NAS/home server. Self-hosting to me encompasses all this.
Also, self-hosting doesn't necessarily mean just open source. There are some amazing closed apps out there that you can self-host - emby, confluence, teamspeak to name a few.
Two of my favorite spots - https://github.com/awesome-selfhosted/awesome-selfhosted and https://www.reddit.com/r/selfhosted/
It's likely if my choices were A or C, I'd have never left A. But that B option eased the transition for me, and made it possible for me to get to the point I felt like the investment was worth it to create a fully on-premise solution.
There's a part D to this too, actually: I'm still using a service to manage the DNS and TLS for it. Eventually I should be able to move away from that too. But without the intermediate step, it'd be too prohibitive and frustrating to have moved to step C.
I suppose this product isn't for me anyway, since eventually (not that long ago) I just bit the bullet and learned the basics of Docker and docker-compose. It's not that hard, costs nothing and is pretty rewarding, imo.
There are some blog posts in the README that go into how I built a lot of it. A lot of it is specialized for me though. I have a ton of rspec/tests but I don't have a real config schema or entirely useful error messages. I might add some in the future.
Looking at the list in this, I'd advise against Nextcloud (ownCloud). I recently set up their official Docker containers: the web piece works alright, but their F-Droid app continually crashed until I had to uninstall it, and the nextcloud-client in Gentoo's package manager segfaults at home and refused to build at work.
I've read other stories of data loss with nextcloud. It might be better now but my initial experiences made me use syncthing. Syncthing does use relays if you're behind a NAT, but if you have openvpn setup, you can also force it to use a direct IP address as well.
If you're thinking of self-hosting and have the time, I'd suggest building it yourself, borrowing (and properly crediting/licensing) other open source projects, their Ansible scripts and containers and such. You learn a whole lot about why this tooling is so complex.
Though somewhat offtopic, this line absolutely cracked me up.
>A VPS (or bare-metal server if you wanna ball hard).
I can appreciate a sense of humor.
* Why pick ownCloud over NextCloud? The former's forum had 139 posts in the last 7 days and the latter's forum had about 1700. Also some of the features in the former product are locked for enterprise only.
* Tarsnap is a paid online service. You could try restic command to have encrypted backup to remote storages.
* cgit is an old project, released more than 10 years ago, and despite being written by the author of WireGuard, there is far better stuff now, like Gitea (or Gogs, which it was forked from), offering user access control and a nice web interface for git project management.
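For example, with restic (the repository location and paths are illustrative):

```shell
# One-time: create an encrypted repository on any SFTP-reachable host
restic -r sftp:user@backuphost:/srv/restic-repo init

# Back up, inspect, and prune old snapshots
restic -r sftp:user@backuphost:/srv/restic-repo backup ~/documents
restic -r sftp:user@backuphost:/srv/restic-repo snapshots
restic -r sftp:user@backuphost:/srv/restic-repo forget --keep-daily 7 --prune
```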
DDNS seems a little too good to be true for solving the dynamic IP problem. I'd prefer to have a static IP for my gigabit Internet, but sadly Webpass doesn't allow it. Does anyone have experience doing something like this?
I went with YunoHost.
I initially tried Sovereign, but once I figured out that I had to pay for the Tarsnap backup service, and that it didn't have Ansible for nginx setup (I needed that experience for work stuff), I went with YunoHost.
So far I am happy with YunoHost and have subscribed to send a periodic donation to the project.
Overall, though, if you are working with ansible at work, or want to advance in devops field, learning ansible and contributing to Sovereign project would be a good path to take.
Like, there are countless ways to configure your MTA and spam filtering; if you're going to have to dig through this config anyway, why not just roll your own?
Can someone explain to me why you need Ansible for this? Or am I just being stupid and this is like an exercise to show what the toolchain can do?
With Ansible, you just run it again and in a fraction of the time it's back up.
You do not need ansible for config of a personal server at all.
If I get locked in on a specific product, it's way cheaper to redesign that around an alternative vendor than it is to maintain a private cloud (Ansible, Kubernetes and friends included).
As a nerd, I'd prefer to do things myself, but I have business needs to attend to.
I see this being for people that just want things to work without much of the effort to make it so. If that’s the case, a simple web UI that treats all the little solutions as “apps” in a way makes sense. Not plugging here, just curious to the practical everyday differences.
I've always been impressed by Cloudron's well-maintained app library and constant march of major feature improvements to the platform.
* Mobile contact and calendar syncing: How well and reliably does it work?
* Calendar group features: how well do they work?
* Setup and maintenance: how much hassle is involved?
Sovereign doesn't solve all your operational problems.
I think it's suitable for personal use. I wouldn't run it in a production setting without thoroughly understanding all parts of the stack.
I would say it's good for personal use or to demonstrate what ansible is capable of.
I don't mean to start 'Docker vs. Ansible', I just wonder why if you wanted a quick way to setup a single-server 'own private cloud' you wouldn't just go with what already exists, and list the images you want in a docker-compose.yaml file?
(Which would additionally set you up for 'scaling' if you had any concern that you might be able to save some cash with a two or three smaller servers than one big one by the time you'd installed everything you want.)
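For example, a minimal docker-compose.yml along those lines (these are the upstream-published images; ports and volumes are just one reasonable choice):

```yaml
version: "3"
services:
  nextcloud:
    image: nextcloud
    ports: ["8080:80"]
    volumes: ["./nextcloud:/var/www/html"]
    restart: unless-stopped
  gitea:
    image: gitea/gitea
    ports: ["3000:3000", "2222:22"]
    volumes: ["./gitea:/data"]
    restart: unless-stopped
```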
I explicitly said that I wasn't making it about Docker vs. Ansible; I don't care. I just mean that Docker is very often used by first-party maintainers (and if not, by someone else) to package these services, so use what's there. If something else had taken off in that way, say an Ansible playbook, then use that; but it's Dockerfiles that are in that position.
Furthermore, if you do want to use containers, there are tools like ansible-bender that use Ansible to build container images.
(edited the link to point to the ansible-community repo)
It seems to me that the target demographic is people that just want the least effort minimal faff way of getting some services up and running for personal non-production use. And for that it was my suggestion that many of the services probably already provide a Dockerfile upstream, so the easiest thing to do would be to install docker-compose, list the images, and `up`.
The only requirement for a remote host to be managed by Ansible is python, and even that can be installed by Ansible itself using the `raw` module on an initial run with nothing but ssh access.
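A typical bootstrap play looks like this (the package manager assumes a Debian/Ubuntu target):

```yaml
- hosts: all
  gather_facts: no        # fact gathering itself requires python on the target
  become: yes
  tasks:
    - name: Bootstrap python over plain ssh
      raw: test -e /usr/bin/python3 || (apt-get update && apt-get install -y python3)
```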
No need to gather a bunch of random Dockerfiles from various places, tweak them to be compatible, and create a docker-compose file from scratch... how is that "the easiest thing" when this is a complete set of Ansible playbooks where the work is already done?
The first line of the readme says it's for a 'personal cloud'.
I just copied the first link that showed up in a search, which looks like it's the personal repo of the project maintainer but is pretty far behind the upstream repo now.
Version for BSD.
Ansible is awesome for enabling people to do reasonably complicated things in a consistent manner, at scale, without having to write all of the boilerplate code to be able to do so.
This is forgetting the fact that Ansible is reasonably opinionated, which is great for lowering the barrier to entry and helping devs/admins to be productive quickly.
When I just need to Get Shit Done, Ansible is awesome.
That's the opposite of a good thing.
> When I just need to Get Shit Done, Ansible is awesome.
"just" is the keyword. "just" instead of caring about long term maintainability and security
For your second, Ansible is specifically designed for long term maintainability and security.
And that doing packaging, staging with CI/CD and immutable infrastructure is unnecessary.
FAANG companies clearly disagree.