Hacker News
Ask HN: Self-hosting in 2023: Nextcloud on Linode, or...?
218 points by jtode 11 days ago | 237 comments
I have always chafed at handing any aspect of my life over to a corporation, but the ease and convenience of Google's ecosystem has herded me in over the years, and I never want to be without that type of "cloud" style document and data storage again. I'm also ready to do what is necessary to stop relying on some faceless entity to manage it, I'm just not sure what to use, or where.

Plan A: Nextcloud on Linode. Probably not the dirt-cheapest choice, but affordable as a steady expense right now, and it seems to provide a lot of resource headroom while I get on top of how it works and what my actual needs are going to be re: compute, bandwidth, and so forth, as well as allowing me to stand up any extra services I may want on this web presence. Nextcloud's open nature is good for that as well, but I want access to the system itself.

What I need:

I will be using one of the Nextcloud office suites for the same stuff I currently do on Google - text documents (chord charts mostly), spreadsheets, etc.

Likewise I will be figuring out how to hoover every photo and video taken by our phones and computers up into a backup collection, and we can then treat our phones like "thin clients" which are only representing our data, not storing it. I have not successfully used any organizational aids for pictures before so for now I'll be happy just to have a collection of dated folders for each phone, and we'll improve from there. It will be stored in some sort of cheap bucket or block storage as well as on my local ZFS server (seems like block storage might be the better choice for that reason).
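For the first pass, something as dumb as a cron job would probably do; a rough sketch (directory names are placeholders for wherever the phones actually sync to):

```shell
#!/bin/sh
# Rough sketch: file everything from an incoming sync directory into
# dated folders (photos/YYYY/MM/) by file modification date.
sort_photos() {
  src=$1; dst=$2
  for f in "$src"/*; do
    [ -f "$f" ] || continue
    d=$(date -r "$f" +%Y/%m)   # mtime of the file; GNU and BSD date both take -r
    mkdir -p "$dst/$d"
    mv -n "$f" "$dst/$d/"      # -n: never clobber an existing file
  done
}

# e.g. from cron, after each phone sync:
sort_photos "$HOME/incoming" "$HOME/photos"
```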

Likewise I want to get all my email history backed up somewhere other than gmail's servers on an ongoing basis. I don't think I'll stop using that email address and I don't expect to actually control my email (nor would I want to), but I don't want to be in a position anymore where Google could just up and decide to lock me out of my own communications history based on some algorithm. That said, I will probably also setup some sort of alternate email that is not on any .com platform and possibly transition to it over time, and all email will end up here.

Re: platform, I think I could probably do it a lot cheaper on AWS, and I think I know how to get that done without getting snagged by one of their runaway expense traps, but I'm not completely sure. I do not trust them not to find some way to slip a thousand-dollar bill past me before I realize what their automated system is doing.

Linode, on the other hand, has a good reputation in terms of competence and reliability, and from what I can tell the price they are offering is not completely out of whack. They even offer the quick-deploy version, but I do believe I would just take a raw server and stand it all up myself; I have security people in my family who can make sure I'm not hanging my junk out the front door before I go live.

I am also considering Digital Ocean, who I've dealt with a little bit in the past and found them great.

Future plans for this server include some kind of federated publishing - Nextcloud might even have some sort of blogging extension that could be further extended, or maybe it's already implemented; I'm not that up on it yet. It's just a high-profile self-hosting system that I noticed.

Or I might add a small Mastodon instance to the server for the same people who use the Nextcloud, but I'm hearing a lot about runaway transfer fees, so I'm gonna wait and see before I stand one up myself. But that's why the raw server instead of the one-click solution; one way or another I'm gonna get on ActivityPub.

Anyways, thoughts anyone? Like I said, current plan is Nextcloud on Linode for a while and see how it goes, but if there's something leaner or more extensible or that handles ActivityPub better or whatever I'd love to know.


So I would probably avoid Amazon just because many of their services charge for data out. It isn't a lot, but it's a variable for you, and you probably want something that's flat per month. The cheapest you are going to get with somewhat reliable service is either going to be Hetzner or BuyVM. Hetzner is better for someone who doesn't want to tinker, BuyVM for those who do (BuyVM is a little less reliable, but you can set it up cheaper if you are willing to do a little bit of manual work with shell commands).

Secondly, I'd suggest you host this through Cloudron. It helps you handle automatic security updates and backups. It's very nice, and worth paying for, although it's a little pricey for individuals.

Third, with email: you can host it yourself (in fact Cloudron has this built in), but I'm going to recommend against it, or at least recommend that you pipe important emails through another service like Fastmail. Let me explain why. There's going to be some point, after hosting for 5 years, where your server goes down. Now, email will be fine; it's built to deal with cases where servers go down. But we rely so much on email right now that it's going to really suck to have it down. So by all means, have your personal email come to the server, but keep anything you can't do without on a managed service. You can pipe it through your own domain and set up automatic forwarding, but it's going to be a little better to run important stuff through someone else's server, imho.

Just my two (or three, I guess) cents.

I've been doing email related things for 15 years. Do not host your own email.

To clarify: it's not hard to set it up. It'll just be useless because all your emails to people will be marked as spam. And it takes years to have a chance at getting a good enough "algorithmic reputation" to be pulled out of that bin.

The state of email is disappointing and sad. It is possibly one of the most centralized decentralized protocols/networks on the planet. We need a good replacement for it. It's so legacy.

My email isn't landing in spam for people I do business with; others need to add me to their whitelist. Do you think this is a problem? I don't. I believe the more of us self-host, the less of a problem this will be.

You said that you've been hosting since 2011, which makes sense, since you're essentially grandfathered in.

If you set up a new system without any reputation now, even with DKIM and SPF configured, it's a lot worse. Major providers like Google and Microsoft won't really tell you, but if you're new and don't have a dedicated AS, and you're instead using (for example) Linode, you'll be scored lower for being on a low-cost provider that just so happens to be abused by spammers.
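For reference, "DKIM and SPF set up" means at minimum records like these (example.com, the selector and the key are placeholders; the DKIM public key comes from whatever signs your outgoing mail). The point stands that they're necessary but not sufficient:

```
example.com.                  TXT  "v=spf1 mx -all"
mail._domainkey.example.com.  TXT  "v=DKIM1; k=rsa; p=MIGfMA0G..."
_dmarc.example.com.           TXT  "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"
```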

Hetzner has been fine for me, though only since 2018.

As ever with these discussions, I host my own email on a Hetzner VM with https://mailu.io/ and there was no special setup or precaution required to ensure delivery both ways.

Still - if you self-host just assume at some point it will go down and you may have to deal with a backup restore before you can receive any more email. If that gets in the way of your life, you may want to reconsider :)

Recently, a shared hosting company where I kept my email went off the grid in a puff. I know I should have taken backups, but who expects a company to just shut off its servers and stop responding after five years of impeccable service? That's another story.

So, what can I do to just keep a backup of all my emails (one @gmail.com and other personal website emails)?

Maybe for me, just having those emails accessible somewhere other than the primary provider is what matters most.
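One approach I've seen suggested is mirroring everything into a local Maildir with isync/mbsync on a cron. A minimal `~/.mbsyncrc` sketch for the Gmail side might look like this (assuming isync 1.4+, a Gmail app password, and placeholder names throughout):

```
IMAPAccount gmail
Host imap.gmail.com
User you@gmail.com
PassCmd "cat ~/.gmail-app-password"
SSLType IMAPS

IMAPStore gmail-remote
Account gmail

MaildirStore gmail-local
Path ~/mail/gmail/
Inbox ~/mail/gmail/INBOX
SubFolders Verbatim

Channel gmail
Far :gmail-remote:
Near :gmail-local:
Patterns *
Create Near
SyncState *
```

Then `mbsync gmail` from cron keeps the mirror current, and the Maildir gets backed up like any other directory.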


Worst advice. If you are incapable (no offence) of setting up your own email server, that doesn't mean anyone else should avoid doing it.

Source: I've had my own email server for 17 years. Absolutely happy with it. Again, that doesn't mean everyone should do it, but I'd abstain from advice like "you should not!" or "you should do the same."

I'm perfectly capable, as is the vast majority of even passing readers of HN. If you think that the hard part is setting up the mail server, you're exactly the type of person that should _not_ be hosting it yourself. The problem is, and always has been, deliverability.

Deliverability _is_ part of setting everything up properly. Setting up mail is not just installing exim and expecting everything to work by some magic and the manual online.

I do host mail for tens of domains and never had issues with deliverability.

I've been living off of fixing misconfigured mailservers for 15 years. It's really not /that/ hard.

No, the configuration is not difficult at all. It's the interactions with all the other mail servers on the internet.

Interesting. I've been hosting my email since 2011 and I can't understand why you would advise against it.

I've got a Cloudron instance on a 1G BuyVM ($3.50). The Cloudron free tier is kinda perfect for it because the 2-app free-tier maximum pretty much exhausts the 1G of memory (running Rocket.Chat and The Lounge).

I have it set up as my.srv1.domain.com (with apps at appX.srv1.domain.com). This way, if I need more apps, I just spin up another 1G Ubuntu instance somewhere and install Cloudron as srv2, etc., for another 2 apps that fit squarely into the 1G and the Cloudron free tier. Cloudron has also stated in their forum that this does not violate their terms (they said it's completely within terms).

Glad to see BuyVM mentioned here, because I was just about to suggest it. I've been using them for close to a year now, and nothing I've found comes close to the performance per dollar you get with them. Their service is great. +1

I'd prefer to pay someone else to handle email, in the final analysis, I just don't need that headache in my life right now. It's why I'm not making any plans for a big dramatic change on that front yet, but I might dip my toe into piping it through someone else's servers, if I see something that makes sense from all angles.

Ask why you want to host email before doing so.

Since almost all of the people you correspond with are sending from Gmail or Office365 or <insert other oligopoly provider>, there is no email security anyway. Sadly.

If you're worried about costs, I run Nextcloud locally in my house, and just deal with the fact it's not externally available. Everything syncs when it is in the house, which is just about always for the laptops and pretty often for the cell phones, and when it is out and about it just doesn't. It all works out.

I have a backup process running on it, but backup disk space is a lot cheaper than live disk space attached to a VM, so it's a lot cheaper than the requisite VM disk space would be.

That said:

"and I think I know how to get that done without getting snagged by one of their runaway expense traps, but I'm not completely sure. I do not trust them not to find some way to slip a thousand dollar bill past me before I realize what their automated system is doing."

This is a per-service concern. EC2 may be old & busted & "just VMs, dude, get cloud native you early 2000s buffoon" & totally uncool... but also precisely because it is just a VM, it is also bounded. It won't blow up on you, because you can't just use 100 times the service you expected. Worst you can do is use the network like crazy, and for as expensive as bandwidth is at large scales, at this scale it's not going to break your bank unless you really screw up. I'm bounded by the fact my home network connection won't let me go too crazy anyhow. (Or on a small T3 instance you can turn on unlimited credits and then run those up, but there's a bound on how large that can be even if you're running 100% full time and it's not huge.)

Just some options. Mastodon is presumably more complicated to run on local resources; you'd still need something with a public IP that can be reached for it to work correctly.

I run Nextcloud "locally" too. It's "local" in the sense that it sits on a laptop-turned-server by my desk [0]. Add a domain name, a simple dynamic DNS [1] and a forwarding rule on your router; your local machine is now reachable from everywhere.

No (useless for that usecase) additional intermediary like Tailscale in the middle. It has the added benefit of allowing you to share everything that is on Nextcloud with people without requiring them to use any VPN/etc.

[0] the fact that it runs on a laptop (with its battery) rather than on a workstation provides a UPS on the cheap

[1] dynamic DNS can be achieved even using cheap providers such as OVH as long as you get your domain name there https://docs.ovh.com/ie/en/domains/hosting_dynhost/

I wouldn't call Tailscale "useless" in that case. If you use Tailscale there, you don't have to port forward, so you have no exposure to the general internet. No one bashing on your port, looking for vulnerabilities. You don't need a DDNS, since Tailscale gives you a fixed address for your machine that persists. So you can set a single CNAME record with your domain hosting service and you're done. And Tailscale has clients for all platforms, including mobile, so it "just works" with all your devices. It's free for up to 20 devices.
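That single CNAME is one line in your zone file (hypothetical names; myserver.tailnet-name.ts.net is whatever MagicDNS assigns, and the 100.x address it resolves to is only reachable from devices already on your tailnet):

```
nextcloud.example.com.  300  IN  CNAME  myserver.tailnet-name.ts.net.
```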

Re: dynamic dns, I've just started self-hosting some services and piping them through cloudflare, and I use the docker container oznu/cloudflare-ddns to handle IP changes.

Bonus is that I can restrict incoming traffic on the router to cloudflare ip ranges, and use cloudflare tools to restrict traffic.

I suppose you could accomplish the same with a VPS but this is all free.

This would definitely be the ideal solution, and it is certainly how the Internet was intended to be used, but a lot of residential ISPs either frown on hosting services on a residential link, or outright forbid it. Plus, CGNAT is more or less inevitable at this point, might as well embrace it.

Like you mention, services like tailscale and cloudflare tunnels are a way around it, but that introduces complexity and additional trust in another company.

The main reason I host my stuff on a VPS is that if an attacker finds their way in, I don't want them to have unrestricted access to my home network as well. (And I'm too lazy to set up a DMZ...)

Lack of hairpin NAT makes that very challenging on my network, I mean I can access things from inside and outside the network but I have to use different domain names.

I just use Cloudflare Tunnels instead of dyndns.

>and just deal with the fact it's not externally available

Perhaps wireguard(or tailscale just for this bit). Not saying you have to, having local only as default is perfectly good and safer maybe!

yeah i have a vpn to my home. i don't have a static ip to my home but i run a script on my mikrotik router to update the A record for the subdomain at cloudns.net that i use for vpn.

Yeah, Tailscale is probably a good bet. If you wanted to get a bit more involved, I have a blog post about my setup with OpenVPN (though it does require a hosted box, which may kinda defeat the purpose of having it locally for OP depending on their needs)


If you're worried about costs, I run Nextcloud locally in my house, and just deal with the fact it's not externally available.

I do the same but then have a VPN so I can reach it externally. One more thing to maintain but worth it.

Tailscale works great for this application.

I second this. I was using a Cloudflare Tunnel, which worked but meant I had to jump through some hoops setting up an email auth flow that then sent me a verification code

Now I use Tailscale (which is free for a single-user account) and it works excellently, without the Cloudflare dependency. No login necessary through the browser like with Cloudflare Tunnel, because you're already authed with Tailscale on the device.

I use this with Resilio Sync and custom dns names from Tailscale Magic DNS - life saver!

In my case, I want it everywhere. I operate from a permanent deficit of attention, and I store all of my musical reference material in Google at the moment, because one thing I do always manage to have with me (so far) is my phone, so if someone calls for a song that I don't have memorized I can very quickly pull up the chart no matter what.

I've tried making binders, I've tried every suggestion anyone could possibly make, and the ubiquitous availability of Google Docs while I have a phone with internet is the only thing that has verifiably made me more productive. It's only gonna be a few tenners per month to have it fully under my control.

So what you're saying about AWS seems to be that if I just use EC2 and their cheap storage and stick to things I understand (I understand a rented VM, for instance, and I understand the idea that whatever that VM sends out counts against a quota), they would have a hard time catching me out, and that also seems true to me. Behind a firewall I'm extremely brave; the fact this is internet-public has me a bit more trepidatious, I suppose.

I also considered using a local instance to start, but the documentation seems to be very against that and I don't like starting my journey with a new system in defiance of their stated best practices, that seems like the road to a broken heart to me.

edit: lots of suggestions to use dyndns type services, problem there is I'm on Starlink and have no public IP available. There might be a cheap bouncing service out there but that's more googling and in the final analysis I can afford to do it the straightforward way.

You can run dyndns yourself if your dns provider has an API (e.g. cloudflare). Mine is a ~10 line bash script.

Basic idea is `curr_ip=$(curl ifconfig.me | grep smth)`, then `curl -X POST https://your_dns_provider?dn=vpn.jtode.com&new_ip=$curr_ip`.

Run it as a cron, on a schedule that's kind to your IP identification service and kind to your personal SLAs around uptime (mine's set to every 5 minutes).
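Fleshed out a bit, with Cloudflare as the example provider (ZONE_ID, RECORD_ID, CF_API_TOKEN and vpn.example.com are all placeholders you'd fill in from the dashboard; a sketch, not battle-tested):

```shell
#!/bin/sh
# Sketch of that ~10-line dyndns script against Cloudflare's DNS API.

valid_ip() {
  # crude dotted-quad check so a flaky IP lookup never lands in DNS
  echo "$1" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}$'
}

update_record() {
  ip=$(curl -fsS https://ifconfig.me) || return 1
  valid_ip "$ip" || return 1
  # skip the API call when nothing changed
  [ "$ip" = "$(cat /tmp/last_ip 2>/dev/null)" ] && return 0
  curl -fsS -X PUT \
    "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records/$RECORD_ID" \
    -H "Authorization: Bearer $CF_API_TOKEN" \
    -H "Content-Type: application/json" \
    --data "{\"type\":\"A\",\"name\":\"vpn.example.com\",\"content\":\"$ip\",\"ttl\":300}" \
    && echo "$ip" > /tmp/last_ip
}

# cron calls this every 5 minutes; failures just get retried on the next run
update_record || true
```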

Verizon changes my home IP every couple weeks, by my latest log check. No need to pay for dynamic dns.

Self-hosting is a long journey of solving such trivial problems. But it's pretty rewarding when it all fits together. :) Good luck!

I have a setup like the one you desire. Some of my services are port-forwarded to the public internet (i.e. the ones with a login screen) behind a local nginx instance. Others are only available on a "local" network, i.e. at home or via a VPN tunnel advertised at secret-vpn.{}.tld.

Use WireGuard or Cloudflare Tunnels - personally, I have connected all the places I regularly spend time with persistent site-to-site IPsec. Having your own infrastructure is a real blessing and relief from FAANG.

this is the (modern) way.

I've used openvpn to set up a tunnel into my network before, which worked great. I'd check out wireguard if I were to do that again from scratch, I like netmaker personally

> and just deal with the fact it's not externally available

I'm confused about this part. The default Docker implementation and Docker AIO implementation expect you to have a website that you point to in order to make it work. They auto-get SSL certs against that website.

Has this not been everyone else's experience?

The docker image I'm using runs over HTTP just fine. The client can be prompted to use it manually. Since it's only internal, I'm dealing with it (and it doesn't share any passwords).

Is it a bad idea? Yeah, sure, but given that it's not publicly routable if you're attacking my HTTP port as far as I'm concerned I've already lost. So there's no security situation that internal HTTPS will change, as far as I'm concerned. If HTTPS was the thing that stopped something, I've still got a problem.

My greater surprise / confusion is that the way the comment was written, it sounded like that was the default implementation and that you were unable to change it (despite wanting to). This was confusing and didn't correspond with my experience.

Proxmox with ZFS and zfs-auto-snapshot - rock solid, fully encrypted [3] (German) and not too power-hungry. On my home server

  Fujitsu D3417-B2
  Xeon 1225v5
  Pico PSU 150W
  Samsung 980 Pro NVMe
it takes only 9.3 watts idle after some optimizations, and you can choose ready-made LXC containers like https://tteck.github.io/Proxmox/ or https://github.com/extremeshok/xshok-proxmox to partly automate some of the installations.

It can even run macOS VMs[1] and raspberry pi virtualized[2]

Works great so far and way better than TrueNAS Scale (at least atm) or bare metal linux systems. Needs some research and learning though[4].

[1]: https://www.nicksherlock.com/2022/10/installing-macos-13-ven...

[2]: https://azeria-labs.com/emulate-raspberry-pi-with-qemu/

[3]: https://www.hardwareluxx.de/community/threads/proxmox-stammt...

[4]: https://www.youtube.com/watch?v=LCjuiIswXGs&list=PLT98CRl2Kx...

I'm avoiding virtualization other than docker containers until I can afford a system with ECC ram. I'm not even sure if containers are that safe but I think they are? I am a weird pastiche of extreme competence and shocking ignorance, but I know you don't want to run VMs on non-ECC.

Buy a Minisforum UM350. Install Linux on it. Set up Tailscale on it and all devices you want to have access to its internal services. Set up Nextcloud on it.

There you go, $300 one-time cost, and you have a very powerful private server that can run all your self-hosted stuff. Via Tailscale, it can even expose some services to the public internet, if you feel the desire to do so.

I love minisforum and all things mini pcs.

To add to the parent's great idea: this one is 70 bucks more (currently $320 versus $390 on Amazon US), and it's an 8-core Ryzen with 32 GB of RAM, so it will handle a lot more:


Not an affiliate link, just anecdotal: I purchased this for myself last fall and it's fast and cheap, and I feel it's the best current value among mini PCs under $400. The ones at $600-1000 are just incrementally better than this one's specs, so I feel $390 is the sweet spot. So your server can be much, much beefier for not much more.

I may even be running a GPU externally on this :)

I was not aware of tailscale (I dunno why I miss things, I just do), but it seems like it's what I've been wanting. Cheers.

Thank you for this, this is exactly what I was looking for! It looks pretty effin' awesome :)

This is what I did. It works great.

Interesting, I might try this

Don't expect too much from Nextcloud – it will not be a 1:1 replacement for commercial services from Google, Microsoft, etc.

Prepare to spend much time debugging, configuring, reading tickets, etc. If you only want files and Cloud office, consider using alternatives like Seafile[1] or the new OwnCloud rewrite in Go called OCIS[2], which are MUCH more stable than Nextcloud.

[1]: https://www.seafile.com/en/home/ [2]: https://github.com/owncloud/ocis

This also doesn't reflect my experience. I have very minimal issues with the Nextcloud instance that I have been running on a small fanless computer at home for several years now.

It has even passed the non-technical spouse test, which is important!

It is used for files storage/sharing, backups, photo backups, recipes, calendar (caldav), contact (carddav), todo lists, bookmarks, webmail (rainloop to an externally hosted imap provider), and other stuff, all from Linux, Mac OSX (spouse) and iphones.

My biggest complaint is that there isn't an LTS version, and since you can't skip versions when upgrading, I feel like I need to make sure I update every 3-4 months, even though it isn't publicly available on the Internet (it is on my local network which is always available on all devices thanks to wireguard)

I hope it will stay this way for you. Still, the project has a ton of open issues and bugs, which haven't been addressed in months or years. You will encounter them sooner or later, if you use it for longer periods of time.

This does not reflect my experience at all. If you're using Unraid, there are common packages for it; and, if you're like me and don't trust those, you can run the NextCloud All In One Docker container to get it set up.

You can also run the NextCloud All-In-One Docker container just on a regular linux box and it'll work, as well. It is a central manager for a collection of docker containers that it starts up. Works great. I definitely encourage NextCloud.

Nextcloud is really just awful to manage. It is kind of insane that you have to do all major version migrations consecutively _manually_: if you are on 10 and didn't upgrade for a while and current is 13, you have to do 10->11, 11->12 and finally 12->13.

That's the one thing that makes sense.

How so?

How is your experience with the OwnCloud rewrite? Have you had any hiccups? Is it generally faster too?

It only does a fraction of what Nextcloud does: files and (optionally) cloud office (via Collabora). These two functions work very well: reliable, fast and at a fraction of the resource usage compared to Nextcloud. I hope OCIS will open up to plugins/apps like Nextcloud, so we can get groupware and other apps on it as well.

There's a public demo deployment available here: https://ocis.owncloud.com/

How is SeaFile these days? I really want something close to Dropbox, but my own drives.

Syncthing is a flawless Dropbox replacement in my experience. There's a big caveat that you need something like a NAS or your home server always online with it though as it's peer to peer sync only.

Syncthing doesn't support my older Mac devices anymore. I can't get Syncthing to work on Mac OSX Yosemite (10.10.5). And no, the device is not upgradable for a variety of reasons not worth going into.

Any solutions there?

Depends on your use case. Maybe simple rsync and cron would be viable.

Nobody talks about the conflict resolution that Dropbox does. Syncthing has an issue with conflicts because of its distributed topology. Editing the same text file on multiple machines inevitably resulted in conflicts that I had to resolve manually. I'm back with Dropbox for now.

If you edit a file on two devices at the same time, doesn’t Dropbox just create a copy of the file? Is that process what you’re referring to?

Sometimes Dropbox does, but mostly it seems to figure it out.

Syncthing, on the other hand, creates a conflict almost always.

I’ve never used syncthing, when you say it “creates a conflict”, does that mean the same file copy mechanism as Dropbox?

There are various strategies you can choose. If you mean creating a copy of the file with the conflict, then yup (Syncthing renames the losing copy to something like name.sync-conflict-20230101-120000-ABCDEFG.txt).

This is something that Nextcloud does really well. There are lots of sync tools on the level of Syncthing, but my experience with Nextcloud is a polished experience truly competitive with Dropbox on the complicated stuff like conflict handling, private/restricted shares, etc.

That's a reasonable plan. And I 100% agree that controlling your data is preferable to having it mined.

What frustrates me, and why I don't do it any more, is we don't have a manageable story for most things. In many cases there's a good recipe you can follow to stand up the basic service. But hardening, security patching, etc. aren't covered. You have to come up with your own solution to make sure it's up, etc. On top of that, projects come and go, and someone may unfortunately choose a project that's a dead end and won't be patched. (And big-budget cloud doesn't solve many of these issues, either).

My personal fear is that a lot of self-hosted stuff becomes like all the unpatched Wordpress sites, years ago, that were just vectors for hacks. It wasn't that the data was stolen, they were pretty much pwned to launch other attacks. There are just too many solutions out there for all the bits and pieces needed to keep stuff up and secure. And all those fiddly bits are hand-integrated (for the most part). I'd like to find something that provided me a full stack, with all the boxes checked. I would get monitoring and security patching around all the bits.

In the interim, I try to use products from companies that don't primarily make money by advertising based on my data (even if the products are more expensive). (Note that advertising is what you do when you're out of real ideas - so it's inevitable that all companies head that way when MBAs with no imagination want a safe return). Or, I use products that are (as much as possible) open source. (There are still disturbing amounts of proprietary blobs in my Raspberry Pi home servers, for example).

With all that said, I wish you luck! I've run my own infra in the past and it's fun.

The experience needs to be more like installing an app on a phone or a game on a console and with automated backups and easy restore. Self hosting will never come back if we are still expecting people to manage servers like it’s 1985.

I think part of the problem is that this is exactly the kind of stuff that is not fun. It’s that boring middle layer between the OS and apps. Compounding the problem is the fact that the people who know that layer well feel no personal need to fix its usability issues.

Programmers want to work on AI and distributed systems and games and other sexy things. Even people who work in the middle layer would rather work on sexy problems there like hyperscaling with Kubernetes and architecture as code.

Making regular old systems easy to use for boring not-hyperscale uses just isn’t sexy so open source devs don’t do it. The economic model for commercial stuff only incentivizes the leveraging of this problem for vendor lock in or to move people into SaaS where data can be mined and rent can be charged forever.

At this point the whole industry is herding everyone into SaaS walled gardens because that’s the only working economic model in software. I don’t see this changing without a movement similar to open source in its heyday in the 90s, but to steal fire from the nerds and bring it to the masses.

I’m not optimistic because nobody seems to care. It might take a whole cycle in which all freedom and privacy is completely lost. Experiences like Twitter just aren’t cutting it. People are stuck on either “woke Twitter was bad” or “pilled Elon bad” instead of realizing that the problem is intrinsic to walled gardens. All cloud spies on you and all social media is manipulating discourse for someone. No exceptions.

> The experience needs to be more like installing an app on a phone or a game on a console and with automated backups and easy restore. Self hosting will never come back if we are still expecting people to manage servers like it’s 1985.

It exists: https://yunohost.org/#/. Install the distro, and then it's all clickety-clicks for all your apps (which are software like nextcloud, cryptpad, bitwarden, bitter, etc..)

That is definitely the idea, but in practice all the app stores suffer from the same problems - mainly that keeping the apps up to date is a lot of thankless work.

Yunohost is mainly full of community contributions, quite a few of which have been abandoned. Some are stuck on old versions, some use migration scripts which may or may not do things the correct Yunohost way, some use migration scripts with bugs which can lead to data loss. The front end is slick, but it's the wild west behind the scenes. There's not even a mechanism for regularly reviewing if apps have been abandoned - I've manually reported a couple.

Cloudron is probably better than most as they have a financial incentive, but then that targets their apps towards "professional" users.

FreeNAS kind of sits in that area, but it suffers from some of the same community-contribution problems. It's definitely hit or miss. I'm thinking the solution has to be like 10 core things that are rock solid rather than dozens of things that range from excellent to hasn't-worked-in-2-years.

“Install the distro” is full stop for most people.

The backup and restore also has to be continuous and seamless.

Cannot upvote this enough

> The experience needs to be more like installing an app on a phone or a game on a console and with automated backups and easy restore.

I would say NASs from Synology or QNAP provide that. Small but good managed Appstore with autoupdate and file/config backup.

Just get the oracle free tier. I've read my share of FUD about how bad oracle cloud supposedly is, but went ahead anyway. It's been something like 3 years, no complaints (including ~1.5 years of running their 4-core ARM/24 GiB RAM/200 GB HDD machine).


I've used many other free tiers over the years (living in a low-income region you pretty much have to), and they make it difficult to fuck up your trial and go over the free limit. With GCP or AWS (especially AWS) it's trivial to start running paid resources and be surprised with a large bill at the end. Here you have to explicitly opt into it by clicking through multiple dialogs and confirming via an email link.

I'm not going to repeat my experiences here because it's frustrating to write it out and probably annoying to hear.

I can't in good conscience, however, ever allow someone to recommend Oracle Cloud uncontested.

If it's working for you, I'm not going to fight, but please don't recommend it. The axe of Oracle swings heavy - even if it hasn't caught your neck yet, it definitely strikes.

EDIT: for those curious: https://news.ycombinator.com/item?id=29514359

These days, OCI locks your account to "always free resources" unless you specifically undo it and you cannot provision resources that'd be outside of those bounds.

You can during the 30-day trial, and if you get caught with overage after the trial is over they are supposed to terminate your existing VMs. I don't know the particulars of it though.

Don't use any 30-day free trial and expect it to still be available on day 31.

While you're in the trial it is not super obvious what's always free and what's not. (I have a summary of it somewhere if anybody is interested.) There's also the problem of what to kill, from Oracle's point of view. You have VMs and you're over the limit on day 31. Which VM are they supposed to kill? I think they might kill everything.

I'll second this. I've gone through many iterations of self-hosting and I think I finally found the sweet spot.

The 10 TB of free Oracle bandwidth means you can run media-intensive applications on the instance. For data at rest, I simply NFS-mount a 14 TB disk I have at home hooked up to a Raspberry Pi running Tailscale. Tailscale is the bottleneck here because it pegs the CPUs of the Pi and the Oracle instance, but I still get 250 Mbps (something about the ARM crypto implementation being slow). There's been some rumbling that Go will improve the crypto performance, so fingers crossed. I think I'm using less than 10 watts with this setup.
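A setup like this can be sketched as an NFS export restricted to the tailnet, with the VPS mounting via the Pi's Tailscale address. (All addresses and paths below are illustrative placeholders, not the poster's actual config.)

```
# /etc/exports on the Raspberry Pi -- export the disk only to the
# Tailscale CGNAT range so nothing on the public internet can mount it
/mnt/storage 100.64.0.0/10(rw,sync,no_subtree_check)

# /etc/fstab on the Oracle instance -- mount via the Pi's Tailscale IP;
# _netdev delays the mount until the network (and tailscaled) is up
100.101.102.103:/mnt/storage  /mnt/storage  nfs  rw,noatime,_netdev  0  0
```

Because the export is scoped to 100.64.0.0/10, the NFS traffic only ever travels inside the encrypted tunnel.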

For backups I keep it simple: I plug in an external drive and run borg every once in a while. It's manual, since the backup disk sits offline.

I created this account specifically to reply to this comment.

Please do not use Oracle's free tier.

I too thought the free ampere machine would have been great. Like most things, it was too good to be true. The provisioned machine sat idle and unused for a few days and was then suspended for "Abusive Activity". I must be clear that there was nothing running on it. Even if there was, I cannot find out because Oracle does not provide any network metrics and refuses to tell me specifically what their problem is. The only detail of abusive activity is:

> Traffic Details: Outbound Port Scanning, Brute-forcing, Web Exploitation, and/or DDoS. (Port 22)

Those traffic "details" could possibly be the most generic and vague explanation for suspension ever.

The only form of remediation is through the support, which is not available for always free accounts. There is no one to email, no one to call, nobody to help. I did try calling _a_ support number, but was told they could not help and I could try contacting sales(!?) who did not even get back to me. The official suggestion is to make a post on their community forum, but unsurprisingly this also went unacknowledged.

This happened well over a month ago and I have little hope this situation will change.

I had also read others' negative anecdotes on Oracle and was reluctant to believe them, now I know better.

Do not use Oracle's free tier.

Not specifically related to Oracle Cloud itself, but I am curious how folks remain on the Free Tiers of these providers, such as Oracle Cloud, GCP, AWS, etc.

It seems, from a glancing review, all of these services structure the Free Tier to force an "on-demand" or "serverless" architecture, since the CPU-seconds and GB-seconds are always undersized for an always-on system (such as a traditional server or OCI container).

For hobby projects or book exercises, the Free Tiers can be enticing, but seem like a gateway into surprise billings. What do you do if you require a few OCI containers at the same time?

How do you folks do it? Is everyone just doing "serverless" these days and I'm old fashioned?

The ARM VM provided by Oracle is actually quite powerful (definitely much more so than anything free from GCP or AWS). Since I have no point of reference for that processor, I have no idea if you're getting four real cores, but it at least feels like it. Compiling large C projects is faster than on my own machine (although the target architecture is different, so the comparison is a bit pointless).

The two x86 VMs are puny and can only be used as VPN gateways, or for static site hosting, or something like that.

Not using any of that newfangled "serverless" nonsense, and do not plan to. For work projects we rely on colocation with properly self-provisioned and fully controlled servers. It would be silly to use free tiers since you get absolutely zero uptime guarantees.

I created 4 boxes, attached 50 GB of block storage, and run a LAMP stack. Serverless is something you do when interest rates are lower.

I had a free tier machine. It got terminated for no reason (only was running postgres). The last time I tried, was not able to provision a new one due to capacity.

You will probably have to recreate the VMs after the initial 30 days (the disk remains, you just have to attach it). Also it has a restrictive firewall and you have to manually allow connections.

They require you to submit your credit card information (even for the free tier). No thanks.

Whenever I tried using it, no instances were available, so I just gave up.

It's confusing. Especially the ARM instances aren't available in some availability zones, and there is really nothing in the error to tell you the AZ is the issue.

Hetzner has a hosted 1 TB Nextcloud for ~$5 a month. IMHO at that price, it makes no sense to self-host it.


That's definitely a nice looking deal. Hetzner's come up a lot here, I'll be giving them a good look for sure.

There is really no benefit to self-hosting for an individual's basic productivity tools. Just back up your files. When your managed hoster becomes insufferable, upload your backed-up files to a new managed hoster. You will spend 100x more time (and money) curating your self-hosted thing than just backing up and uploading once or twice in your lifetime.

I have switched email providers 3 times in 25 years. With the last one, I bought a domain and pay for custom domain hosting. I don't have to think about my e-mail again until the next time I switch in 10 years. When I switch, I'll just sign up for the new provider and sync my IMAP folder over, change MX records, and boom: done. No paying for a VPS, no maintaining software patches or server or network issues, no e-mail administration or web interface administration or dealing with spam or IP reputation or anything else. It's all taken care of.
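The "change MX records" step of a migration like that boils down to a small zone-file change at your DNS host. (Hostnames below are placeholders for whatever your new provider documents.)

```
; DNS zone fragment for a custom-domain mailbox -- hosts are placeholders
example.com.   3600  IN  MX   10 mx1.newprovider.example.
example.com.   3600  IN  MX   20 mx2.newprovider.example.
; SPF record so the new provider is authorized to send for the domain
example.com.   3600  IN  TXT  "v=spf1 include:_spf.newprovider.example -all"
```

Because the domain is yours, the address on your business cards never changes; only the records pointing at the current provider do.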

I call this "leveraging the open protocols". Very smart people wrote very detailed specs that other very smart people implemented. Leverage that for robustness, instead of fighting the Sisyphean server-management issues.

As far as hobbies go, self-hosting is very rewarding. But if a hobby is not the end goal, then don't do it yourself.


At Swiftpoll -- https://swiftpoll.net (The best polling app on the web! We promise!), we use a combination of Racknerd (https://www.racknerd.com/NewYear/) and Contabo (https://contabo.com/en).

For our Gitlab runners, the metrics server, and the test server, we use Racknerd. For the production server, we use Contabo.

Racknerd costs about $10-30 per year per virtual machine. Racknerd runs crazy sales, such as Black Friday double-bandwidth deals and free giveaways (https://lowendtalk.com/discussion/182479). Over 1000+ pages of craziness on lowendtalk!

The only caveat we have with Racknerd is the lack of support as a terraform provider. Racknerd uses Solus, and unfortunately, we did not have access to the admin APIs, such as reinstalling instances. To solve this, we made a playwright library that automated installation and maintenance tasks on the Racknerd Solus control panel.

For Swiftpoll production, we went with a Contabo instance in their St. Louis data center. Contabo has support as a terraform provider with APIs to do just about anything you'd like. It works out of the box, and we didn't need to create anything hacky in Javascript, which we loved. Our only caveat is Contabo may charge "setup fees" when making changes to instances. Contabo costs about $9 per month and up.

We have been using Racknerd and Contabo for a year. So far, we did not encounter any problems. We keep our hosting costs under $50 a month. Downtime with either provider is super rare, and I don't remember when the last time it happened was. Both providers offer generous bandwidth measured in terabytes.

I really like Scaleway: https://www.scaleway.com/en/.

Similar offer and simplicity as DigitalOcean, but way lower prices. I had to reach support a few months ago, and they were pretty responsive.

They also show the prices per month, not per hour/millisecond/byte/whatever. As I'm also the one paying the invoices, I really like that.

I can't figure out the prices per month thing, if I select GP Compute and 'Monthly' it just says "Sorry, no results…"

Their pricing page is really confusing to navigate and figure out what's what.

I agree with it being very confusing. If you select Monthly it seems to just remove the products with pricing per ms or per hour.

Also, Scaleway is European, with datacenters in France, Amsterdam and Warsaw.

FYI for anyone in the US, this means they're going to ask you for some sensitive information like a copy of your passport. Even worse, they're probably going to wait until after you've given them all your registration details, including your credit card information, to ask for it.

I’ve been a Scaleway user for years and they don’t have anything except for my CC info, where did you get the idea of sending a copy of your passport from? FWIW I’ve never had to send my ID or passport to any European service provider.

Scaleway asked me for a copy of my Dutch passport too which I refused, and then they limited the services available to the long-time active business account this was for.

Neither their docs nor two separate support agents I spoke with had any information on why it was necessary, where the copy would end up, how long it would be kept, etc., which seemed very fishy and is totally illegal.

This was about a year ago.

You sure this wasn't a spear-phishing attack instead of a legit request?

Yeah, the verification process, including the document upload, all went through the account control panel. The account audit log also contains records related to the incomplete verification process, and there's a permanent banner telling me to complete the thing too.

Appreciate you asking though

Maybe this is needed to figure out how much VAT to charge you?

Since both are EU businesses, these were "Intra-Community Acquisitions"[1] so the VAT was reverse-charged.

Either way, they knew the business' VAT ID, Chamber of Commerce ID, etc. and had been billing us successfully using that info for a good while; there were no outstanding bills, and there was nothing nefarious or questionable going on with our servers or the associated traffic (just running legacy applications and services).

All in all a super weird experience, which was a shame since we were otherwise and up until then quite happy with them.

[1]: https://en.wikipedia.org/wiki/European_Union_value_added_tax...

I’ve never had a European hosting provider ask me for my passport. Not sure where you’re getting that information from

> Not sure where you’re getting that information from

From personal experience. There have been other conversations on HN in the past about this. Mine was not an isolated incident.

FWIW I'd throw my hat in the ring for Vultr as an alternative to Linode or DO. (Disclaimer: I used to work there.) Prices right on par with the competition, one-click Nextcloud, good support. And they are independent - not a subsidiary (like Linode now is of Akamai) and not VC-funded (like DO).

I would second Vultr. They have some features, like BGP peering, which you won't find with Linode or DO. Admittedly it is a niche feature, but useful if you want to announce your own IP blocks.

Trying out self-hosting is almost as addictive as trying out note managers :-)

Anyway, here's my current setup:

- Hetzner Cloud VPS -- https://www.hetzner.com/cloud -- I use it for a public website (https://vlad.studio/), otherwise I'd get the cheapest option; €4/mo

- 1 TB Hetzner Storage Box -- https://www.hetzner.com/storage/storage-box -- mounted as an external drive on the server above; €3.81/mo

- Filerun -- https://www.filerun.com/ -- used as web interface and file manager, instead of Nextcloud. I wanted to like Nextcloud, but couldn't. Filerun looks nice. It is not as complex.

- Various mobile Nextcloud-compatible apps connect to my Filerun instance. I'm not happy with this part of setup yet.

So this filerun, I looked at the demo page and it seems nice for the file storage aspects, and you say it can run any Nextcloud plugins? So it would be able to do all the little services like to do and grocery lists, calendar, some sort of office stuff, that kinda thing?

Cannot say, unfortunately - I do not use any of these plugins.

I also found NextCloud UI for filemanagement to be rough. I have opted to use Mountain Duck and am quite happy with how it works. It's definitely much faster and more reliable for uploading.

What’s the point of replacing one corporation with another (especially if you have to maintain the server yourself)?

Host it on cheap hardware at home, with plenty of RAM and CPU. If the power is off for 15min, that doesn’t matter for personal use.

As a plus, you can host it in plaintext (or at least, load the LUKS key), so that applications can index your data, allowing a lot of cool stuff. You can lock it in down more strictly, tailoring it to your specific environment.

> What’s the point of replacing one corporation with another (especially if you have to maintain the server yourself)?

I don't think most people have issues with corporations in principle. It's a few particular companies people want to avoid.

Hosting is commoditized; if your host becomes a problem you can migrate. This isn't as easy with Google.

> Hosting is commoditized; if your host becomes a problem you can migrate.

Not so when you are using custom offerings from a hosting company. OP specifically mentions "Nextcloud office suites". How will he migrate that?

Yah it's just a plugin, an advertised feature if you will. Whatever rented presence I have for a public IP will be treated the same way one treats a container (might end up being a container, who knows...) - persistent config and storage data is stored outside the container and invoked on launch, and similarly, I will have all config and storage backed up on my local server as well. So as they say, if I decide I don't like my hosting company, I will be ready to simply shut them down and open a new account elsewhere, apply config, "restore" backups to new bucket/blockstore if need be, done.
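The "treat the presence like a container" idea above can be sketched with a compose file where all state lives on host paths. (Paths and the image tag are illustrative; the point is that the two mounted directories plus the compose file are the whole migration payload.)

```yaml
# Sketch of the "container + external state" approach. Everything worth
# keeping lives in the two host directories, so switching providers is:
# copy the directories (or restore them from backup), run the same
# compose file on the new box.
services:
  nextcloud:
    image: nextcloud:stable
    ports:
      - "8080:80"            # put a TLS reverse proxy in front in practice
    volumes:
      - /srv/nextcloud/config:/var/www/html/config
      - /srv/nextcloud/data:/var/www/html/data
```

The same two directories are what you'd rsync back to the local ZFS server for the on-prem copy.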

Install it somewhere else and move the data directory across?

Nextcloud is foss, right?

love this - my ISP does not love this

Get a Raspberry Pi 4 and install https://umbrel.com on it. Easy to set up using Tailscale, with optional Tor support if that's something you want. And it has all the "important" stuff like Nextcloud, Vaultwarden, Pi-hole, a Matrix server, etc.

There's a bunch of Bitcoin related Apps as well but it's easy to just ignore those.

Full list of "apps" here: https://github.com/getumbrel/umbrel-apps

Your approach is nice and easy and sensible. I personally preferred hetzner to linode (which has a history of bad security) but I imagine the minimum price is higher.

When you’re ready to reduce prices and increase privacy, take a look at a tiny (like t3a.micro or .nano) ec2 instance that forwards to/from your “real” server at home which can be beefier. The home server maintains a vpn connection to the ec2 instance. you also need to configure nat and port forwarding on both sides so you’ll need to get your hands dirty with nftables/netfilter (probably a night or two of pain realistically).

Once it’s set up and working, you can get your monthly spend down to $3-$4 (I pay up front for three years of EC2 credits). You need your own hardware, but a used NUC, for example, is pretty cheap, and adding storage is a matter of buying an external USB hard drive.

And if you stick to https/tls (via letsencrypt) the Amazon forwarding instance can’t see what’s in your traffic (just which ips are visiting you and when).

(You could obviously cut Amazon out entirely if you’re comfortable hosting directly from your home ip but I never wanted to deal with the potential isp headaches.)
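The "night or two of pain" with nftables is mostly a DNAT/masquerade pair on the relay instance. A minimal sketch, assuming the home server is reachable as 10.0.0.2 over the VPN tunnel (addresses and ports are placeholders; you'd also need net.ipv4.ip_forward=1):

```
# /etc/nftables.conf sketch for the small public "bouncer" VM.
# TLS terminates at home, so this box only ever shuttles ciphertext.
table ip nat {
  chain prerouting {
    type nat hook prerouting priority dstnat;
    # rewrite inbound web traffic to the home server's tunnel address
    tcp dport { 80, 443 } dnat to 10.0.0.2
  }
  chain postrouting {
    type nat hook postrouting priority srcnat;
    # source-NAT so replies from the home server route back via the VM
    ip daddr 10.0.0.2 masquerade
  }
}
```

The masquerade rule means the home server sees connections as coming from the relay; if you need real client IPs you'd have to get fancier (e.g. the PROXY protocol), which is part of where the pain budget goes.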

Is there some reading about setting this sort of thing up that you could direct me towards? I am ignorant but competent (I even had a CCNA at one point) and not afraid of manuals, and this setup actually sounds ideal for me - I have a ton of disk space and compute power at home. If not for Starlink I might even just try running it from home anyways.

The only thing I don't have from your scenario is a public IP due to being on Starlink as I said, so there's no way for me to let anyone in my front door here even if I was willing to endure the pain, which I don't think I am. Is there a way to have my server always be the initiator of the VPN connection, and the bouncing server just say "sorry" if the home server isn't responding?

I am tempted to bring up the question of having a CDN cache (or whatever the terminology is) for those occasions, but that sounds like money, in the final analysis the straightforward setup might still be my best bet for now.

But yah, the idea of just having a small public bouncer that forwards stuff and doing everything from my home servers has occurred to me before, I just don't know yet how to set that up and I get nervous with anything public, it's a scary world out there.

Doing it the "dumb" way just gets me onboard and I can start looking at all the little parts, it's how I learn stuff really... like I said, very interested in doing some more reading on exactly how one would implement that, as detailed as possible because I am very stupid when I'm first setting out on something, I need the text equivalent of someone talking real slow like I'm five. :>

I think it'll be hard to find a specific article about this, but I'll give you some pointers.

You don't need a consistent public IP from Starlink with things like Wireguard, Tailscale, or Cloudflare Tunnels. You just need some server somewhere with a static IP which is pretty easy to get for cheap (see suggestions for where to get the server elsewhere in this thread). Once you have a public IP you can:

A) Setup wireguard on that server and configure your local machine to connect to it

B) Setup tailscale on both computers

C) Setup a cloudflare tunnel and point to it via a domain you manage in cloudflare (with added bonus of cloudflare protection out of the box)

From there you just need documentation on properly doing reverse proxying with Nginx or similar and setting up Lets Encrypt (certbot or acme.sh).

The "hard part" is getting past Starlink, but if you think about it you must be able to do that somehow otherwise how do you get internet? That is what the 3 options provide: a way to open the connection somewhere you trust to allow others to make requests. (The "others" may be you just outside the local network)
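For option A, the home server can always be the initiator: WireGuard has no real client/server roles, and a keepalive from the home side keeps the tunnel open through CGNAT. A minimal sketch (keys, addresses, and the hostname are placeholders):

```
# --- VPS: /etc/wireguard/wg0.conf ---
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <vps-private-key>

[Peer]                          # the home server
PublicKey = <home-public-key>
AllowedIPs = 10.0.0.2/32

# --- Home server: /etc/wireguard/wg0.conf ---
# Only the home side has an Endpoint, so it always dials out; no inbound
# port is needed behind Starlink's CGNAT.
[Interface]
Address = 10.0.0.2/24
PrivateKey = <home-private-key>

[Peer]                          # the VPS
PublicKey = <vps-public-key>
Endpoint = vps.example.com:51820
AllowedIPs = 10.0.0.1/32
PersistentKeepalive = 25        # keep the NAT mapping alive from inside
```

If the home server is down, the VPS simply has no live peer to forward to, which gives you the "bouncer just says sorry" behavior for free.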

or you could use https://www.tarsnap.com/spiped.html instead of a VPN

> The home server maintains a vpn connection to the ec2 instance. you also need to configure nat and port forwarding on both sides so you’ll need to get your hands dirty with nftables/netfilter (probably a night or two of pain realistically).

You could simplify substantially by simply using the ec2 instance as a reverse proxy (well, that’s simpler to me, at least, as someone much more familiar with reverse proxies).

Yup but if you terminate tls at the ec2 side (as in a reverse proxy) the tradeoff is less privacy since the upload/download data is fully observable from the ec2 instance. (Correct me if I’m wrong here, this is my memory from when I decided on my own setup years ago.)

Some people might not care about this which is fine. Personally I’d rather take on slightly more overheard in setup (nftables vs nginx or whatever) and get the privacy in the bargain.

Ahh, I understand now. So you’re basically just using EC2 for the IP address then, right?

Honestly, I've tried all of the VPS services and they're problematic. They are generally barrel-scraping companies, and some services block their entire networks outright. Occasionally their data centres catch fire. And of course the moment you get a minor billing issue, you find all your VMs catch fire too. Also, with any VPS, don't expect to be able to send email from it. That's not going to happen; even if you manage to beg enough for them to let you send, no one will listen to SMTP from a VPS.

If I was going to go with anyone I would use AWS. Their stuff works better than any other company I've found and also support is actually existent. But expect to pay for it.

I don't want to pay for it so I keep my DNS and static web content on S3/CloudFront/R53 and let Apple handle the cloudy shit like email, calendars etc. Both AWS and iCloud+ cost me about £4 a month in total.

Someone will own you in one way or another. Better to leverage that and have an exit plan than not leverage it and cost yourself time, money and sanity because that's the real outcome of what you want to do.

You can use services like mailgun to send an email, so it's not a strict requirement

You can but that multiplies complexity somewhat. I have spent 25 years dealing with outbound email from various things from country level ISPs down to small web apps and the only way to not die inside a little is to send via one of the larger providers (O365/Google/iCloud/Yahoo/AWS SES etc).

I had to even dump Fastmail because they had delivery issues with Yahoo.

I don't agree, mailgun provides simple API and there are many similar services. Even if the desire is to use Google or O365, it's possible to do so, including AWS SES.

Overall, it seems to be a minimal cost.

On top of that, this is talking about self-hosting, for which the delivery is not a concern at all, the target of the emails is "yourself" (can add filters to avoid things reported to spam)

SMTP is the only API that should be considered.

I didn't imply HTTP API, I said "API" on purpose: they provide an SMTP API for bulk sending (an smtp relay), simple smtp sending and a nice curl command that can be used in simple scripts.

If that is not enough, Nylas is an option.

I just find extreme limiting your hosting options for email sending, especially when you can still host wherever you want and use email services from other providers.

In case of self hosting, the clients are also limited (usually yourself, family and friends), so you can even ask to "allow list" the address you plan to use for those communications.

The flip side of 'stop relying on some faceless entity to manage it' is that it is a LOT of work to keep something like this up and running. Google automate it and have a ton of engineers to tend to things. But everyone else has to take care of software upgrades, security patching, backups and restore, cyber threats etc.

In the long run, independence is worth something. It is just costly.

That's why, in spite of being on the Internet since 1994, I haven't ever really self-hosted anything. At every point, it looked like too much hassle and too much expense.

In this case, I've identified a couple of hard target outcomes which pertain to accessing documents, backing up photos in a way that I understand and trust, and a couple of other minor things, and it appears to me that those specific targets are in reach at a non-crazy price.

But I agree, it's very intimidating to even contemplate, hence my coming to get the thoughts of this community on what's the best way.

There's pro and cons.

There's a lot that can be automated now. I self-host everything, and have been doing so for more than a decade. Partly on a home server and partly on servers I rent. Everything is kept up to date and the pain points have been fairly few.

It's definitely not a set-up and forget thing, but it is definitely not a lot of work.

It's also a good way to learn things, the process itself is not a waste of time.

I dropped all GAFAM services. Nextcloud, Gitea, Matrix, Sonarr, Lidarr, Radarr etc can cover it all. You will have to put in a little work to self host these, or a little money to have someone else host them, but now you are paying with time and money instead of your privacy and identity.

If you are into self hosting swing by #!:matrix.org

I've been on TornadoVPS (formerly prgmr.com) for ~a decade and it's been fine. They lack the ability to do lots of storage like you may need for this project. If you can configure Nextcloud to use s3 or a competitor's clone for storage I think that would be a reasonable, measured exposure to Amazon/Google/Microsoft and you'll get to take advantage of their high reliability where you really need it.
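Pointing Nextcloud at S3 (or an S3-compatible clone) for primary storage is a documented setting in its config/config.php; a hedged sketch, with bucket, region, and credentials as placeholders:

```php
<?php
// Sketch of Nextcloud S3 primary object storage in config/config.php.
// Values are illustrative -- see the Nextcloud admin manual for your
// provider's endpoint details; set this before first use, since it is
// not a migration tool for existing data.
$CONFIG['objectstore'] = [
    'class' => '\\OC\\Files\\ObjectStore\\S3',
    'arguments' => [
        'bucket'   => 'my-nextcloud-bucket',
        'region'   => 'us-east-1',
        'hostname' => 's3.amazonaws.com',   // or a compatible clone's endpoint
        'key'      => getenv('S3_KEY'),
        'secret'   => getenv('S3_SECRET'),
        'use_ssl'  => true,
    ],
];
```

With this split, the cheap VPS only needs enough disk for the OS and database, while the bulk data sits on the high-durability object store.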


Why not run it from your home network? You're likely already paying for home Internet, and since most of the access will be local, it'll be neither slow nor will it cost anything.

Don't forget that running your stuff and putting your data onto servers belonging to big corporations means that even if they're not scanning everything the way Google scans your email, they can still access it, so it's better, but not worry free.

Over the long term, I don’t trust a NAS in my house to hold my photo collection (online.) Not because of the NAS, but because of my house.

Living spaces have a lot of factors that people optimize for when picking them. Data centres only have a few, and one of the most important is “is not prone to natural disasters.” Another is “has clean power with well-implemented surge and lightning-strike protection, and likely a UPS for graceful shutdown on power-cut.” I did not weigh any of those factors heavily when selecting where to live. A 1000-year flood could put my house underwater.

And while you could certainly stick a drive or tape with your photo collection in a fire safe in your home — that’s not really the point, here. Online sync and thin-client access is.

> They can still access it

Data stored in these services can be stored E2E-encrypted with only your clients, not the backends, holding the keys. IIRC there are many apps designed exactly for this “store data across many low-trust remote clouds, never giving the cloud providers the keys” use-cases.

On the other hand, IIRC there are no apps shipped by the NAS hardware vendors that do this, because they assume that your own physical ownership of the NAS is enough for you. And while you can set your NAS up to act similarly to a PC, with an encrypted disk that’s mounted on boot — that’s pretty useless, since a NAS is only serving a useful purpose when it’s already up when you need it, so its uptime will inevitably be measured in months.

Which is problematic, when your modelled attacker isn’t “the cloud vendor snooping on you”, but “the government raiding your house.” Data E2E-encrypted at rest in the cloud, with the only keys on a device in your pocket that you can quickly wipe locally, is much more secure against that threat. (Not that this is the most likely threat for most people, but I like considering it, because if you can solve for this, you end up solving for basically all other threat models “for free.”)

Yes - you're absolutely right to have off-site backups. In the past I've had reciprocal backups with others for years, where I host a small box of theirs and they host a small box of mine for exactly this.

Also, having had a colocated box be literally under water during Hurricane Sandy while at a real, proper datacenter in Manhattan, I can say that a thousand year flood can be just as much a concern for datacenters as for homes ;)

There are plenty of cheap colocation providers, too, you know, but that might be more for people like me who don't trust corporations at all.

However, I disagree with the idea that a NAS can't do encryption, but then again I would never consider running an environment that is based solely on what is "shipped by the NAS hardware vendors".

That you're more worried about the government raiding your house than you are worried about them slurping all your data from the hosting provider without you even knowing could be an entirely different discussion thread. Me, I want to know, and I want there to be a proper subpoena, whereas the NSA employees who work for Google or Amazon aren't refusing to work without a subpoena. I don't assume that data encrypted in shared hosting is safe, because hypervisors can be used to pull keys from memory. But that would definitely lead to a different discussion :)

This thread is still interesting to me because I'm often asked to help small businesses figure out online backup with reasonable security, not necessarily complete security. Good luck!

> I don't assume that data encrypted in shared hosting is safe, because hypervisors can be used to pull keys from memory.

Well, surely not, but that’s not what I was talking about; I said E2E-encrypted. The remote (the cloud, or your NAS) shouldn’t be doing any encryption or decryption. It should be a dumb store for your client-side-encrypted data.

How would “the government raiding your house” access the data in the NAS without first shutting it down (assuming network access is authenticated)? And then they would need the encryption password when booting it up again.

(Ignoring the fact that the government might have other means to make you cooperate.)

The attack used on regular AT PCs with encrypted-at-rest boot disks but no TPM, is that DIMMs of DRAM can be popped out and immediately put into a specialized board that will keep them refreshed while scanning them (but cannot write to them.) As long as the encryption key is in plaintext in RAM, this allows you to recover it.

(Also, before you remove the DIMMs, you can get the RAM + its board very cold, to decrease the decay rate during the swap.)

Popular hardware NAS appliances might have all their RAM soldered on-board or part of an SoC die, but they still don't tend to have TPMs, nor an IOMMU (critical for limiting DMA rights by peripheral.) So accessing the keys on these is only a little bit more fiddly — either involving hijacking the address + data bus between the CPU and RAM; or, easier and more universal to modern devices, by putting a specialized PCI peripheral device onto the NAS's peripheral bus, that then dumps RAM by requesting DMA transfers to itself.

This is against the ToS for most residential internet providers and can subject you to disconnection/ban from one of the few/only carriers available in your city.

The last time I perused my ISP's ToS, it only disallowed this sort of thing if it's commercial in nature, so this specific use may not be against the ToS from my perspective. For the record, I self-host a containerized instance of Nextcloud inside a virtual machine on a hand-me-down 2015 iMac. It works just fine, and no issues with my ISP so far.

I imagine that funneling it through an encrypted VPN would make it impossible to detect.

My ISP accused me of running servers (I don't) simply because I used too much traffic.

I send 100% of my traffic VPN encrypted and I pay an extra $50 or $70 monthly for "unlimited" data transfer.

But then, it's not open to public access, just to you or whoever you trust with your VPN credentials.

How tyrannical. In Romania, my ISP Digi gives me a free domain name.

Due to old infrastructure (an early-adopter problem) and generally having more people with internet-connected devices, ISPs differentiate between residential and business internet plans.

Business plans typically provide an SLA, or are at least less likely to employ traffic shaping. Residential plans thus tend to prohibit running a public-facing server for commercial purposes, since they're over-provisioned on the assumption that residences won't fully utilize their capacity. This is becoming less of an issue with fiber lines, but on coaxial cable networks a few heavy users can easily degrade service for everyone else.

All of that said, I've never lived anywhere with an ISP that prohibited public-facing home servers; they really only had the stipulation about commercial purposes. I imagine if your server were receiving a ton of traffic (say, a popular blog) you'd get a nasty letter about it, but I don't know of anyone who has run into that.

"Most"? Nope. Perhaps that was true twenty years ago, but it's definitely not true now.

I'm running Nextcloud on a shared webhosting and have mounted a Hetzner storage box as external storage into it. That comes down to around 7 €/month for hosting costs, which I find fair.

I'm using the Nextcloud Android client, which could be better but does its job, to sync photos from my phone to the cloud. It can also be configured quite nicely with specific upload folders per device, which should help you keep stuff separate across several devices.

I've no experience with the office suite to be honest, never really looked into that.

Using shared hosting comes with the benefit that you will, most likely, have a custom domain to run your mail from/to. Shared hosting also does regular backups without you having to take care of that, etc.

I'd say you should get yourself a Hetzner "Level 4" shared webhosting for domain, mail and "root" of the Nextcloud and then mount a storage box into that. Should be around 9 €/month then.

I let Hetzner run NextCloud for me https://www.hetzner.com/storage/storage-share

It's an amazing value.

Do they allow installing any Nextcloud app of your choice? I could not find that information.

Yep it works! Obviously if the app requires specific software installed on the server it may or may not have what you need, but I've installed several apps including the Music app, and use mine as a Subsonic server.

May I ask for three specific apps? Contacts, DAVx5, and Memories.

That sounds like an awesome deal, but the latency for me in the US would be painful. Looks like they don't offer US hosting for this.

They don't. FWIW it's never been an issue for me in the Midwest, but YMMV. In general I'm syncing the files I need to my computer or backing up photos from my phone, so latency doesn't really matter for my use case.

That's even easier, indeed.

I'd encourage hosting out of your home if possible: For an upfront cost of maybe an Intel NUC or equivalent, you get capacity that's dirt cheap. I run Sandstorm and let Cloudflare's cache handle the brunt of any of the public websites I have so that doesn't hit my house.

I don't have a ton of concerns about publicly-accessible Sandstorm, but if you're using something else, either Cloudflare Tunnel or Tailscale are relatively good options to hide your home connection and secure your access to your server from the public.

I'm not super hot on Nextcloud as a product, I tested it and found it slow and janky. It's exceedingly popular, but there's a lot of better apps out there if you run something that can spin up some containers.

I'm using an old Dell OptiPlex FX160 (Intel Atom 230 / 3GB RAM / 500GB HDD) as my personal server. The Nextcloud web interface is almost unusable, but since I only use it to synchronize my files across my phone and my computer, it doesn't really matter.

Whenever you consider going self-host, consider what REDUNDANCY and DOWNTIME are worth to you.

Sure, you can use an old computer and host at home much more cheaply than AWS / Azure / etc. But on AWS / Azure / etc., if the physical machine your VM is running on dies, your VM moves to another physical box within seconds, without you having to do anything and without data loss. What is that WORTH to you?

That said, you can get a Synology NAS (easy) or build your own NAS (harder) and run Nextcloud in Docker. In the event of hardware failure (it will happen, it's just a matter of when), you'd just have to bring that Docker container + backing storage back up on the replacement gear.
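As a rough idea of how small that Docker piece is, a minimal compose file might look something like this (the image tag, port, and NAS path are assumptions on my part, and a real setup would add a database container rather than the default SQLite):

```yaml
# docker-compose.yml — minimal sketch; fine for trying Nextcloud out on a NAS
services:
  nextcloud:
    image: nextcloud:stable
    ports:
      - "8080:80"
    volumes:
      - /volume1/docker/nextcloud:/var/www/html   # bind mount on the NAS so data survives the container
    restart: unless-stopped
```

Recovery after a hardware failure is then just restoring that bind-mounted directory onto the replacement box and running the same compose file.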

While I agree with the first sentence, the number of f*-ups VPS providers have is substantially higher than the number power providers have, in my experience.

Furthermore, while power at your home can be managed with a second power supply and the like, mismanagement on the IT side of your VPS is out of your control.

Linode. I've been a customer for almost 15 years. They've never done me wrong.

Another vote for Linode. Everything I need, Multiple domains, E-mail for all family members and a buddy's business E-mail, and light web hosting run on a $5 Linode running basic vanilla Debian, exim4, dovecot, lighttpd.

Word of caution: If you're doing this to achieve resilience from the risk of big tech canceling your accounts and making your life difficult, moving to a VPS doesn't fully solve it. You're now at risk of VPS provider canceling your account and making your life difficult. Less likely than RandomCloudProvider, but still a risk. I'm currently experimenting with self-hosting on my own iron at home, to mitigate this. This is a bit more challenging, mostly due to your residential IP address having low-reputation among the E-mail system. It might be a good temporary fail-over solution for when you have to find a different VPS provider.

Of course home-hosting mitigates the VPS provider risk, but then you're at risk of your ISP canceling you. Unfortunately, it's "dependence on corporations" all the way down...

Bottom line, I plan to treat the rented presence the same way one treats a container - there will be persistent storage of all config and storage data elsewhere, such that the server can be abandoned and stood up somewhere else with no more data lost than whatever doesn't come in during standing up the new server. If I can manage it, I will encrypt all stored backup data before writing it to the bucket/block store, that seems like a reasonable target. Even rot-13, or whatever is the binary equivalent, will discourage lazy noses.
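On the "binary rot-13" idea: a toy sketch of what that kind of lazy-nose deterrence could look like (to be clear, this is obfuscation, not encryption; the function names are made up, and anything you actually care about should go through an audited tool like gpg, age, or restic's built-in encryption):

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # Stretch the key into a pseudo-random byte stream using SHA-256 in counter mode.
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def obfuscate(data: bytes, key: bytes) -> bytes:
    # XOR is its own inverse: running this twice with the same key round-trips the data.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))
```

Enough to make a bored employee move on to easier pickings, and nothing more.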

As far as backups, I have many terabytes of ZFS blocks here at home, and all cloud backups will be replicated back here. I wasn't actually aware of tailscale so I think that's going to be integrated into things pretty deeply lol.

I'm not planning to run my own email at all, just make sure that all my old emails and all my future emails end up stored on a server I control (ie. my zfs server at home, plus whatever redundant ones I might also have in the world).

> Word of caution

It's a good point; however, this can be easily rectified by keeping the same node in sleeping mode at another provider. That's what I do for critical services. If provider A bans you for whatever reason, you can (reasonably) easily start the same service.

And this just can't work if you are using conventional services like apple/google.

They have NextCloud in their "marketplace" (at no cost) for easy install.

Depends on your budget I guess.

A dedicated server on something like Hetzner is ~£50 a month; put Portainer or Proxmox (or Portainer inside Proxmox) on it and run whatever you want.

Use Traefik to route traffic.

Use Backblaze, Hetzner storage boxes, etc. for backups.

Rather than hosting in the cloud, you might consider using Docker Desktop (or Podman, or Rancher) on a used Windows laptop along with a tunneling solution (I maintain a list here[0]) which gets around NAT/CGNAT and keeps your IP address private. Something like Cloudflare Tunnel is the easiest, but you can also selfhost your own server in which case your VPS is simply a relay.

[0]: https://github.com/anderspitman/awesome-tunneling

I'm in the process of standing up my own hardware (1U server/some simple compute box + FreeNAS) for doing this.

There's a variable I don't understand with self-hosted cloud/storage: what guarantees do you have that they're not peeking at what you store? Why do you trust them over $CORP? Do you encrypt your data at-rest (dm-crypt, fscrypt, etc.), and do block storage providers support this?

edit: fix typo

It's an issue, for sure. My thinking is I would include some sort of in-house encryption for stored objects; encryption is never perfect and rolling your own is usually a recipe for failure, but it would get in the way of intrusions of the "bored employee poking around the servers" variety. Such a lazy nose would move on to easier pickings, and nothing I'm doing matters enough to steal - even my creative work is CC so if you want my 3D asset files, they're all yours buddy.

99% of people are never going to have anyone interested in anything they do, but if you're really paranoid you'd want to host everything at home and use a VPN or cloud server to redirect to it.

However, if you go with a really small cloud/server provider, you may run the risk of a bored employee poking around. The larger ones will have auditing in place to catch stuff like that.

Of course you take care to set up encryption at rest and in transit; it's not that difficult. In the case of virtual machines they can still peek at the memory of your system, just like AWS and others can, but the question is why they would take the trouble.

1) what is your budget?

2) how much stuff are you planning on hosting?

3) how much bandwidth do you need?

nextcloud was (I've not checked recently) very hard to secure properly, so you'll probably want to hide it behind a VPN or some such.

The other thing to think about is the amount of time you have budgeted for initial setup and ongoing maintenance. You will need to have backups, and those backups need to be tested.

For hosting, you need disk space, so if you have enough bandwidth at home, it's far cheaper to have a low-power server with a couple of big disks in it than it would be to host that data on S3 or other block storage.

AWS will be much more expensive than Linode or DO, assuming you are not using the managed services.

Finally, I would advise getting your physical instances controlled via Ansible or Terraform, and if you are using Docker, get that config in some sort of repo so you can tear down and bring up your infra on demand. This makes disaster recovery (or porting) much easier.

> nextcloud was (I've not checked recently) very hard to secure properly

I've not heard of nextcloud being particularly onerous to lock down, compared to other systems of its ilk (and it should be easier than securing something strung together from smaller parts). The main trick with “standard” packages like these, once you've done the initial hardening, is making sure you keep bang up-to-date with security patches from upstream.

> it's far cheaper to have a low power server with a couple of big disks in it, than it would be to host that data on S3/other block storage

Though a good block storage provider should give your data more redundancy and a lower time-to-recovery than a pair of old drives in RAID1. Of course, you still want good backups on another provider, just in case.

Check performance too: some block storage providers might be notably faster (SSD/RAM cached storage etc) or much slower (mostly traditional drives, high levels of contention on the storage arrays, latency between your app and the storage array) than those inexpensive local drives.

1) Enough for what I'm proposing to do

2) My personal data. Who knows. Might end up needing a TB or more by the time I'm done snapshotting and whatnot, this is gonna be a process.

3) Enough lol

Sorry to be flip, but at this point I have more abstract targets:

-hoover up all photos taken by all my devices and place them in a central backup place with both cloud and local storage. Ideally some nice UI for browsing and organizing and such.

-replace Google Docs/Sheets functionality, including public internet accessibility (though I may do the VPN, or an SSH tunnel maybe, these are not bad ideas at all) with a server I control.

And yes, handling it with my own scripts and such has occurred to me as well, but you know how that ends up going.

Nextcloud seems to be a popular tool that does these things. That's what I know. There's been a number of interesting suggestions in the comments that I'm now gonna look into before I act, though.

Can you clarify how nextcloud is very difficult to secure properly? I haven’t heard of such thing.

> I want to get all my email history backed up somewhere other than gmail's servers on an ongoing basis.

I thought I wanted this, then realized I really can live without all the old emails. Once you decide that you're left with photos and that's about it, which makes archiving easier.

Not only that, really old emails can be a liability. When searching through my Gmail, I come across some truly stupid emails that I sent or received.

Lately, I have been purging all emails older than 10 years unless there is a reason to keep them. True, it doesn't delete those emails on the other side, but at least it reduces the chances of any accidental exposure on my side.

I found that Namecheap's VPS offering was the best price when I shopped around. I needed 2+ cores and 5+ GiB. They sell 4 cores / 6 GiB for $15.88 monthly (or save money with quarterly/annual commitments).

It beat AWS's m5a.large 3-year RI ($27/mo) and 3-year compute savings plan ($31/mo), no upfront.

It beat Azure's D2a v4's 3-year RI ($30.75/mo).

It beat GCP's e2-standard-2's 3-year committed-use pricing ($22/mo).

It beat Digital Ocean's 2 CPU 4Gi Shared ($24/mo)

It beat Vultr's 2CPU 4Gi Shared ($20/mo).

It beat Linode's 1CPU 4Gi Shared ($20/mo).

I just heard about hetzner.com from this post, and it seems like they can beat Namecheap's pricing. Hetzner's CPX31 offers 4 CPU / 8 GiB for ~$14.77/mo. I might have to check them out.

You can also consider hosting it at home. More than likely you've got fiber at home with decent capacity, and you need (1) a closet, (2) a UPS, and (3) a static IP from your ISP, plus a decent firewall/router. It's not ideal, but it's a lot cheaper and could be perfect for what you need.

I've installed Syncthing (https://syncthing.net/) on my phone(s) and it syncs selected folders directly to my Synology NAS at home. As Syncthing uses its own discovery system, there is no need to open anything up to the internet.

Starlink, no public IP. A bouncer has been suggested and I'm gonna look into that, cause I do have tons of compute capacity and sufficient bandwidth to handle my cloud document needs lol

Where are you located? Just for interest's sake as I don't often chat with users of Starlink

Just off the shore of Lake Winnipeg. :>

Ahh, enjoy the view!

You could get a somewhat managed NextCloud instance on Hetzner: https://www.hetzner.com/storage/storage-share (but it appears that it's not yet available in their US datacenters) Of course you could also self-host on Hetzner (they do have US datacenters in general, just not every service they offer), which is pretty cheap.

There's also https://www.pikapods.com/ that offers to host it for you in a simplified manner if you don't want all the server hassle.

Rhea [BOT]: The user is looking for a solution to manage their data storage without relying on a faceless entity. The user wants to use a Nextcloud office suite for text documents and spreadsheets, store photos and videos in a backup collection, and back up their email history. They are considering AWS, Linode, or Digital Ocean, and possibly adding a Mastodon server for ActivityPub. The current plan is to use Nextcloud on Linode for a while and see how it goes, but the user is open to other suggestions.

It seems like the user is looking for a secure and reliable data storage solution that is affordable and allows them to stand up any additional services they might need in the future. My recommendation would be for them to research each of the services they are considering, such as AWS, Linode, Digital Ocean, and Mastodon, to determine which is the best option for their needs. Additionally, they may want to reach out to experts in their network who are knowledgeable about these services for further advice.

If I was doing Nextcloud alone, then I would be using Hetzner storage share. I run other things alongside Nextcloud so I'm using a single Hetzner VPS.

There's a lot of Hetzner in this thread, and in my comment. There's a reason for that.

For backing up photos I highly recommend Immich. It is a self-hosted Google Photos replacement under active development.


One of my fears is that Google will shut down my account because some algorithm of theirs says so, and since there is no customer support to appeal to, I'd lose my primary inbox. That would be catastrophic enough, but I'd also potentially lose access to all other accounts that use that email address for recovery or to send login links/passwords. So one of my next to-dos is to set up an email on a domain I own for these critical recovery scenarios.

I would suggest evaluating this for yourself.

Good luck!

Host Nextcloud at home on your own little server. Also, set up a WireGuard server at home so that you can still access everything locally by VPNing back to your home network. That is how I do it.
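For the curious, the WireGuard side of that is tiny; a sketch of the home server's /etc/wireguard/wg0.conf (the keys, subnet, and port below are placeholders, not real values):

```ini
[Interface]
# Home server end of the tunnel
PrivateKey = <server-private-key>
Address = 10.0.0.1/24
ListenPort = 51820

[Peer]
# Your phone/laptop; add one [Peer] block per device
PublicKey = <client-public-key>
AllowedIPs = 10.0.0.2/32
```

Each client then points its Endpoint at your home IP (or a dynamic-DNS name) and routes the 10.0.0.0/24 subnet through the tunnel.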

I am happy with the Synology server I have at home. I don't have to set up the server on my own; SFTP, Docker, and nginx are included already. Mostly I just use it as remote storage for files.

I not only run one of these in my office, but am also converting smaller business clients to this platform. It is able to do quite a few services on its own (in addition to the Microsoft AD structure I have been nursing along for over 30 years to make sure I grok most of my MS-based clients' needs).

The only caveat is it is not a good development platform for me with regards to Python - Their file structure, update process, etc. tends to break any development environments I set up. I still roll out a vm or dedicated box for dev work.

Overall, rather pleased with their products, simple, easy NAS with many possible services to hang on it. OpenVPN and either static IP or DDNS means I can access it from any Internet connection over IPSEC.

Worth looking at, in my opinion.

p.s. Also have one running Postfix/Dovecot for a small business client, works fairly well, although I do have to keep an eye on those services after power outages and updates, they tend to require "repairs" in Synology's vernacular.

I'm using a Qnap NAS. It has its own operating system but you can really install anything on it if you are dedicated enough.

One thing: get the 3- or 4-drive models. I thought two drives would be enough, but I actually had an instance where one drive failed and the second drive quickly failed after that (thankfully I had already swapped the first defective drive with a newer one). Two drives could well go bad at the same time.

Just go to https://www.kimsufi.com/en-ie/dedicated-servers/ and grab a cheap dedicated server with 1TB+ HDD.

How do you protect your data from accidental loss then? Simply backup using Restic to a Backblaze B2 bucket.

And that's how you get dirt cheap self hosted services.

Also take a look at Owncloud OCIS. It looks truly compelling, they invested something like 5 years into a proper file syncing solution that can be federated, etc.

I use https://racknerd.com/ and was lucky to get a good VPS for $19 a year (A YEAR).

What storage and ram?

40GB SSD, 2GB RAM, 2 cores

I recently setup PhotoPrism[0] on my NAS and am happy with it.

0: https://photoprism.app/

If you take a sibling commentator's suggestion to stand up something at home, there's always older, beyond-upstream-support rackmount servers to be had on Craigslist for a song. You'll end up upping your electrical bill in the process, but it's a fun option if you want to homelab it as a hobby.

Nextcloud AIO/hetzner's template on a Hetzner VPS.

Or you can just rent a storage share, a managed nextcloud instance, hosted by hetzner. https://www.hetzner.com/storage/storage-share

Gonna give another rec to DigitalOcean, we use it for work as well and haven't had any major issues.

They took away FreeBSD VPSs so I left. I'm happy at Vultr now.

I'm curious what keeps you invested in BSD. I'm not a hater or anything, and in fact I have some experience, my last Sysadmin job involved a network with almost exclusively FreeBSD servers and I had to do some fast learning on their spin.

It was objectively great, but eventually I migrated everything to Linux in order to make things more accessible to the people coming over from Windows servers, and I didn't find that things got any less stable or reliable. Of course, it might be that I'm the common factor there, for good or ill :>

For me, the reason all my servers in colocation run FreeBSD is that it's not Linux.

Even though I am a Linux engineer by job title, I've lost the stick from the adhesive that Linux used to give me. It now feels more like a pressured big-corp grab than an OS making game-changing moves. It's settled into its comfort zone.

I am cynical, and when one thing gets popular I tend to shift. So when FreeBSD becomes the next glory, I'll probably jump to Solaris or something; that, and bhyve.

That hypervisor has never let me down. With ZFS snapshots and bhyve writing directly to zvols, it's just too tasty to turn down.

Hipster engineer :)

I still have my FreeBSD droplet running there, maybe I should look at moving to Vultr or just migrating the services on that droplet to Linux.

I just switched away from Digital Ocean after years of using them. Simply because they stopped providing a FreeBSD image. I now use Vultr and they're fine, except that I have to solve two captchas many times a day (maybe because I'm not in the US).

Contabo user here, no complaints in a couple of years using between one and two VPSes. Zero downtime.

+1. No fancy pages, no constant advertising emails, just a cheap, working VPS with good performance.

I will echo what others have said: avoid AWS for the higher charges; a cheap VPS should be plenty.

I run Nextcloud on a VPS from a gaming provider and got it hauling. Here's a few things I learned:

- Avoid the snap of Nextcloud; I could not get it working 100% (many apps would mysteriously not work, etc.). I am using a manual install from zip with zero issues.

- If you are planning on using S3 as the backing store, set it up BEFORE any users, otherwise you may lose data on disk. That said, I use local storage and back up to S3; using S3 as the backing store in Nextcloud made it noticeably slow.

Once you're set up, I recommend tuning PHP-FPM following their guides.
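For anyone weighing the S3-as-primary-storage option mentioned above: it's switched on with an objectstore block in config.php, roughly like this (the bucket name and endpoint are placeholders; check the Nextcloud admin docs for the current keys before relying on this):

```php
// config/config.php (fragment) — S3 as primary storage; configure before creating any users
'objectstore' => [
    'class' => '\\OC\\Files\\ObjectStore\\S3',
    'arguments' => [
        'bucket'         => 'nextcloud-data',   // placeholder bucket name
        'autocreate'     => true,
        'key'            => getenv('S3_KEY'),
        'secret'         => getenv('S3_SECRET'),
        'hostname'       => 's3.example.com',   // placeholder endpoint
        'use_ssl'        => true,
        'use_path_style' => true,
    ],
],
```

Note that with this setup the bucket holds opaque object blobs, not a browsable file tree, which is one more reason the local-storage-plus-S3-backup approach above is attractive.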

Learn how to use the cost control tools on aws. I don't know what you're doing, but I don't see how a single user storing documents and media can end up with a massive bill.

I don't know about the application layer, but for decent-cost bare metal hosting I've recently discovered, and have had good experiences with, Hivelocity.

+1 for the Cloudron recommendations. You can use it to selfhost nextcloud and many other apps on linode/digitalocean/hetzner/vultr/what have you.

If feasible I'd suggest hosting at home. Much more versatile, and getting a similar amount of horsepower in the cloud is going to be pricey.

OVH for me. I rent a bare metal server ($80/mo) for development and use an OVH VM for my personal site. Their DDoS mitigation seems legit.

(Anecdotal evidence) While I've been happy with OVH over the last year, literally last night they suspended my servers because they weren't able to charge my card.

No warning that they would charge it, no pre-authorization, no grace period to rectify the problem; they just shut down the servers at 2 AM.

Not happy to say the least.

Curious, how long have you been running your service with them? A few months? Years?

This was the first year I've been with them, and pre-paid for a year of dedicated servers.

Doesn't really help OP, but if there's anyone out there just wanting to host static content, I recommend GitHub Pages.

All this discussion about Nextcloud in a thread about Linode.

With hosting, you largely pay for reputation. Reliable/trustworthy providers are gonna cost more.

On one end of the spectrum you have good-enough-for-the-CIA AWS. $$$

On the other end of the spectrum you have communists running Data Centers out of their basement. $

I suggest Linode. Super reliable provider, much cheaper than the big names. I would expect the same service out of DigitalOcean, Vultr... just pick the company where you like the owners, honestly. Linode's the one me teacher used tho.

Assuming you've got the income, $40/mo for 8GB should be good enough to run a ton of hobby projects. Or maybe $20 for 4GB to start; I think Linode lets you upgrade instances.

Avoid AWS. Using AWS for a hobby project is overkill; it's meant for enterprise.

Option B would be to use a Raspberry Pi. It's extra work for less performance, but Pis are a lot of fun. It's like owning a toy that can run NextCloud.

If you're getting into self-hosting btw check out https://yunohost.org/ and https://landchad.net/

Is there an easy self-hosted alternative to Vercel for Next.js apps?


But is searching for maintainers

Vercel is 99% infrastructure, unless you don't care about latency.

Digital Ocean + Cloud66

Check out lowendbox.com; there are much, much cheaper options.

And its sibling, lowendtalk, and search sites like https://www.serverhunter.com/

But definitely be extra careful with your backup and DR plans if using dirt cheap hosts…

That's a great plan! I'll say that self-hosting may be _the_ number one thing I'm most passionate about due to concerns similar to yours (privacy, ownership, and so on). I've self-hosted many of my own services for a very long time and so I have my own experiences to share as well.

I'll say right off the bat that I don't see any red flags with your proposed plan. The following bullet points are primarily meant to offer some additional options or mental nudges to help you brainstorm - like I said, there's nothing abjectly wrong with your architecture, so this list may just offer more ideas:

- I've self-hosted a few email servers (and still do) and I think punting on that (or just doing the backup plan) is probably the right approach - you can DIY it today, but it's a part-time job. If you ever do decide to take ownership of your email, bringing your own domain to Fastmail or Proton Mail has also worked well for me. Today I host one domain on Linode and one on Ramnode. As with most things email, there are tons of nuances with doing it yourself - I had to get both my email servers' public addresses placed on an allowlist with their respective providers.

- I self-host most of my services on my own hardware in my homelab. I eschew the big, expensive, loud, power-hungry hardware in favor of smaller, cheaper, and swappable hardware, and the strategy has worked out really well. I primarily use ODroid hardware (they offer both ARM and x86-64 hardware). You mentioned a floating/non-public address as a constraint, so you could still do this with tailscale/headscale/something similar and gain the benefit of cloaking your services inside a private network (and using some public/free cloud instance as a low-power VPN endpoint). I don't think DigitalOcean/Linode are bad choices, but I very much like owning the hardware layer as well.

- I've been self-hosting since before Nextcloud existed and used its progenitor (ownCloud), and I developed a harsh distaste for the huge, sprawling complexity of the system (it was hungry for resources, broke on upgrades constantly, etc.). That story may be better now, but I've since moved on to hosting very targeted, smaller services. For example, instead of Nextcloud's file syncing, I run syncthing everywhere, and instead of Nextcloud's calendaring, I run radicale. Nextcloud will probably be fine, but I've been happier running a smaller collection of services that do one thing well (syncthing in particular is an exceptional piece of software)

I could really ramble on but I'll just include a list of the stuff I host if you have any questions about it. I blog[1] about some of these, too: Transmission, Radarr, Sonarr, Jackett, Vaultwarden, espial, glusterfs, kodi, photoprism, atuin, Jellyfin, Vault, tiny tiny rss, calibre, homeassistant, mpd, apache zeppelin, and minio. Outside my lab hardware I run a few instances of nixos-simple-mailserver, mastodon, and goatcounter (used to run plausible). I also run a remote ZFS server that I mirror snapshots to as my remote backup solution.

[1]: https://blog.tjll.net/posts/
