Ask HN: Do you self-host your database?
183 points by vira28 on June 29, 2021 | 231 comments
Now that there are so many options like AWS, GCP, Azure, DigitalOcean, Heroku, Render, Firebase, Supabase, etc...

I was wondering: does anyone these days self-host databases on their own infra?




For personal use and side projects, I always self-host. It's so much cheaper considering the tiny size. I usually start with SQLite instead of a "real" database, and in 99% of cases it stays that way.

At work, I never self-host. Life is so much easier if blame for whatever unrelated reason can be outsourced to some cloud provider or internal datacenter team.

Because if we self-host the database, we will be held responsible when it's not reachable, even if it's because someone in the datacenter changed some seemingly unrelated firewall rule. I want to avoid having to explain anything to our customers. Just "have no idea, datacenter team is working on it. You can ask them?" has done wonders for my mental health and job satisfaction.


> Just "have no idea, datacenter team is working on it. You can ask them?" has done wonders for my mental health and job satisfaction

Never seems to have much benefit for me, because I'm typically the one who answers the phone/email/etc, and "the datacenter team" never does. No one is happy with that, and I still get the calls/email/messages, demanding the 'fix' or ETA. I would rather things be more under my control, because I inevitably am still 'blamed' when things go wrong.


But the thing is, in most companies you don't have full control over the whole stack. Even if you have "full control" over the database, you don't have control over networking, firewalls, the OS, "security" patching, VMs, Docker, Kubernetes, load balancers, vendors managing parts of the infra, the internet provider, the hosting provider... Even the datacenter team may not have control over all of it, but at least that's their job and their area of expertise.

And receiving a call and saying "here's the datacenter emergency phone, please ping them", or forwarding mail to the datacenter team with "can you give an ETA to the business", is an order of magnitude easier than trying to concentrate and resolve the actual problem while being interrupted every 5 minutes and not having the required level of access to actually do it, even if the knowledge exists.

Though in most companies, the business will not call you, but some poor business analyst or project/product manager, who will quickly learn to keep you out of the loop and not waste anyone's time if you are consistent in answering "have no idea, ask the datacenter guys".


> And receiving a call and saying "here's the datacenter emergency phone, please ping them", or forwarding mail to the datacenter team with "can you give an ETA to the business", is an order of magnitude easier than trying to concentrate and resolve the actual problem while being interrupted every 5 minutes and not having the required level of access to actually do it, even if the knowledge exists.

You seem to have exceptional customers. Like the parent comment, my customers wouldn't have that. I'm their point of contact, and they want me to do the legwork and provide them with updates. They want to complain to me directly, often citing that they have enough problems with their business to care about problems my business is facing. We'd be out of business if we insisted they talk to our vendors directly. They would straight up laugh in our faces as they fired us, and frankly, I'm with them, since I wouldn't put up with that from our vendors either.


That's a valid point; it looks like we are not in the same role. I work mostly on internal software in a big "enterprise". My customers are mostly internal (marketing, sales, ...), but I have the luxury of a business analyst, project manager or product owner who receives calls from them, so I don't need to explain anything directly to customers.

In the rare cases where some C-level breaks through and complains directly to me, it's usually easy to explain that if they want a solution or an estimate fast, they should be dealing with the datacenter team, since I don't have any leverage over them, and they can complain directly to the IT Infrastructure Director, to whom I don't even have direct contact.


That's the reason why I transferred from BA to data :P


Set and maintain strong, healthy adult boundaries. If somebody blames you for something that you don't control, then call them out on it. You don't even have to be mean, just point it out. There's a lot of legal precedent for "But (s)he allowed me to abuse him(her)!" as an excuse for abuse.

Honestly that's the best thing you can do for on-the-job sanity. I understand that people feel they have a right to abuse others, but at some point you really don't have to take it.


Same for me. If I don't have full control over it, then it is better to be managed and outsource the risk. Junior programmers can't mess with the settings and it tends to just work out of the box for most things.

For my own stuff where I have full control over it, I prefer my own setup and actually setting it up the way I like it. If something isn't working then I can change it and fix it since there isn't any red tape. So the full control goes in my favor.


If you're self-hosting at work, then this is probably your low-hanging fruit for freeing up time by outsourcing that headache. And as you mention, there's risk involved. Why put yourself in a position where people will point fingers at you when things go wrong? Why stake the reputation of your product on something which isn't core to the business you're working for?

My answer to the original question: one big difference between work and home is that I have more choice in what I can do with my time at home. I can't waste time at work, but I can waste time at home. So I set up my own personal stuff.


I definitely prefer to have my own database for development purposes, so I can change the structure, data and such as I need. It's kind of a necessity for developing a good suite of functional tests. All too often I have seen companies share a development database, and that is a really bad sign for testability and generally leads to stepping on each other.

So if that is managed elsewhere but I can do all I need, that is fine; if not, then I tend to host a database locally for this purpose.


That's definitely true. For development purposes I also prefer a local database that is only mine, but for production, staging, ... I don't even want to have access. Because in many companies, a "you touch it, you own it" rule exists.


Not to mention backups that just happen, and point-in-time restores (awesome for testing as well as recovery).


It doesn't work this way. You're also responsible for your poor hiring decisions, including third party providers.

There is no absolution in technical decisions, sorry.

> "have no idea, datacenter team is working on it. You can ask them?"

You can ask them? This response would get you fired from a Level 1 help desk in any competent organization, let alone a position of authority or execution.


We're currently transitioning from a multi-tenant 2TB postgres DB hosted on AWS RDS to using sqlite instead, a separate database for each client.

We're doing this for multiple reasons: a) As our DB grew the service became very expensive, one of the biggest items in our AWS invoice; b) Keeping the PG servers up to date is a pain, we simply don't have time for this; c) We wanted to be able to migrate to other clouds and even be able to offer a self-hosted version of our platform.


I'd be fascinated to hear more about this.

Where do the SQLite files live? Are you using some kind of NFS or EFS or similar for them? Sharding them across many machines?

How are you handling backups and high availability? Are you using Litestream?

How many SQLite files do your application servers have open at any one time?


SQLite's documentation mentions NFS as problematic because SQLite makes a lot of use of advisory locking, which is supposed to work on NFS but is apparently often buggy.


Writing to a SQLite file on a remote filesystem only works with a heap of caveats: don't use WAL, and disable SHM.

SQLite is crazy fast in WAL mode, but it's because they use a memory-mapped file to coordinate consistency across multiple processes/threads. You can't mmap a remote file.

(Source: hacking in a DB replica workaround to PhotoStructure so people can store their library on arbitrary filesystems... BTW, I'd love to be wrong here: if someone's figured out how to do this, please share!)
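
For reference, a minimal Python sketch of those caveats (path and pragma values are illustrative, not PhotoStructure's actual code):

    import sqlite3

    # A database that may live on a network mount (NFS/SMB).
    conn = sqlite3.connect("/mnt/nas/library.db", timeout=30)

    # Avoid WAL on remote filesystems: WAL coordinates readers and writers
    # through a memory-mapped -shm file, and you can't mmap a remote file.
    conn.execute("PRAGMA journal_mode=DELETE")

    # Disable memory-mapped I/O entirely, for the same reason.
    conn.execute("PRAGMA mmap_size=0")

    # Advisory locking over NFS/SMB is often buggy, so keep transactions
    # short and be prepared to retry on "database is locked" errors.
    with conn:
        conn.execute("CREATE TABLE IF NOT EXISTS photos (id INTEGER PRIMARY KEY, path TEXT)")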


I'm idly curious why you need to use replicas in a workaround context, and why arbitrary filesystems are a problem.


The local-replica approach is only applied when your library database is on a remote filesystem, which SQLite can't read/write to (due to the absence of shm and flock).


I think I get it now: you want people to be able to point PhotoStructure at a NAS and have it Just Work. That was the context I was missing.

I was wondering if maybe the cute hack that was posted recentlyish (https://news.ycombinator.com/item?id=27016630) to enable SQLite3 to access large remote databases over HTTP might be relevant, but that approach requires a small server to be sitting at the remote, and only implements read-only access in any case.

Now I'm wondering what an NFS/SMB translation of the above idea (which uses SQLite3's VFS hooks) might look like: ramp internal buffering to the max, use separate transient files (with eg date-based unique names) for each in-flight request, etc. Append-only journal files also come to mind...

It's crazy how flaky network filesystems are at the lower levels. Like, I use NFS at home every day, without Kerberos (which lets me enable encryption, which IIUC provides AEAD), all while knowing my data integrity entirely depends on the IP checksum computed by my 15 year old ISP router. :D

(I already know I need to upgrade. Shhh.)


Also super interested in hearing more.


Also super interested, especially in how you handle migrations and structure/schema syncs.


Interested in more info as well.

Also, how do you handle migrations, both schema as well as data?


Slack used to partition their databases per workspace (customer) and then moved away from it. The "Disadvantages" section lists reasons not to do that. https://slack.engineering/scaling-datastores-at-slack-with-v...

Two disadvantages that stand out for me are:

1. You must be able to scale the database up to your biggest customer and down to your smallest; this can be increasingly difficult to provision.

2. You are stuck with customers being isolated, which makes it much harder to develop features that cut across a group of customers.


2. can be a huge security feature. If your web application only has access to the database of this one customer, you practically can't leak data from other customers by accident.


Slack has cross-workspace connectivity and communication as a product feature. It’s a really cool feature.


My experience mirrors that.

If you have a dynamic set of databases or even tables, be ready to invest into tooling to recreate features you take for granted in a normal database.


> 1. You must be able to scale the database up to your biggest customer and down to your smallest; this can be increasingly difficult to provision.

IIRC, Slack had a lot of scaling issues when they won a deal with IBM. Jumping from medium-size customers to such a behemoth will obviously be a challenge, but a lot of the difficulties they faced were non-technical and more business-related ("What do you mean, you want our product but not our monthly plan charged per seat?"). Anyway, that's a nice problem to have for a start-up ;).


IIRC this is the kind of problem that CockroachDB seems to solve well, given its implementation of range partitioning with automatic rebalancing. This ensures the partitions are evenly distributed even with an uneven customer distribution.

I have no affiliation with the product, but I have been comparing alternatives and it seems like that's where it shines.


2 can be a feature too!


This is a good post. Thanks for sharing!


3. By splitting, you can give your customers control over data residency, as mandated by GDPR.

4. By splitting, you limit the exposure in case of SQL injection.

5. By splitting, you can have good performance for the customer who has 1M records, and good performance for the 100K customers with 10 records.


> Keeping the PG servers up to date is a pain, we simply don't have time for this

I've never used AWS RDS (or similar), but I had been thinking that part of the advantage of RDS was you didn't need to "keep the PG server up to date".

But apparently I was wrong! What does this look like on RDS?


(note: we use MySQL on RDS, but the docs for PG seem to be similar enough)

For minor versions, you can tell it a day and a rough timeslot (a few hours) in which to apply them. Major version upgrades always require you to start them yourself, and the docs lay out prep work to do beforehand (check for incompatibilities, set up any custom config, take a backup/snapshot, do a dry run).

One of the things I love with RDS is that last bit: doing a dry run with a clone of the database is fairly easy to test (can't say what it's like at the TB level of data, probably not as easy/quick).
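
A rough sketch of that dry-run flow with boto3 (identifiers and target version are placeholders):

    import boto3

    rds = boto3.client("rds")

    # Snapshot prod, then restore the snapshot as a throwaway clone.
    rds.create_db_snapshot(
        DBInstanceIdentifier="prod-db",
        DBSnapshotIdentifier="prod-db-pre-upgrade",
    )
    rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier="prod-db-pre-upgrade")

    rds.restore_db_instance_from_db_snapshot(
        DBInstanceIdentifier="prod-db-upgrade-test",
        DBSnapshotIdentifier="prod-db-pre-upgrade",
    )
    rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier="prod-db-upgrade-test")

    # Attempt the major version upgrade on the clone, not on prod.
    rds.modify_db_instance(
        DBInstanceIdentifier="prod-db-upgrade-test",
        EngineVersion="8.0.23",
        AllowMajorVersionUpgrade=True,
        ApplyImmediately=True,
    )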


You mostly need to do that if you have a security division that's looking for reasons to justify their existence. One way to do that which involves little actual work on their side is to constantly nag other teams to install updates "for security reasons", regardless of how much this actually contributes to security. Which usually isn't much in the case of an RDS instance not exposed to the public network at all.


This sounds super interesting, are you able to share any further detail?

I'm currently wrestling with a 35TB Aurora DB and considering options for future growth. The system is multi-tenant and one of the options on the table is breaking out the tenants and hosting them on either their own system or alongside smaller subset of tenants.


Done that before, where my largest clients were exponentially bigger than my smallest. The key is to realize one-size-fits-all doesn't work, not in the long term. I'm talking about feature development, storage, etc. In the end, we broke out large clients onto their own RDS instances, while smaller clients stayed multi-tenant. They share the same schemas, etc. We wrote a lot of utility scripts to migrate clients from small to big, and vice versa. The smallest clients get automatic minor and major upgrades to the latest features almost immediately. Larger clients generally only get major upgrades.


The issue with this approach (other than isolation) is that the largest tenant may be orders of magnitude bigger than the others and still need more sharding to perform well, while the smallest still needs to share a DB. Have you looked at TiDB (MySQL) or CRDB (PG)?


Absolutely.

Thanks for the suggestion of TiDB, haven't previously considered this but it looks interesting, will take a closer look.


I must admit I'm very curious as to what real-world thing(s) translate to 35TB of stored data (that presumably isn't large blobs, like video or ML models).

(And oh wow I just realized the billing must be amazing...)


Yep, the AWS cost footprint of this system is eye-watering. The compute spend is 2x the RDS spend!

Can't say too much, but it's Financial Services related.


Ah, a gigantic time series event firehose. (That was what I was curious about.)

I guess this cues the stereotypical "have you looked at kdb+" question, then, which you've probably already fielded :)

(https://kx.com)

IIUC, kdb+ does in fact have a real DB hiding in it, and it's not all just "weird programming language" - but it does seem to require/prefer a reasonable amount of buy-in to the way it does things to be valuable. I guess the lateral question is whether its performance obsession extends to a smaller on-disk footprint, and whether the savings would be worth throwing out all the existing Aurora integration work. (Answer: very probably not.)


How do you deal with SQLite's common lock-file contention when using it for multiple users?


I solved that once by having a service layer on top of the DB. Just a simple REST-like API used by the rest of the app (in Spring).


Not a problem up to 100,000 customers, IMO.


I used it 10 years ago and initially locks were a slight problem. Not sure if this has been solved somehow in sqlite itself.

I solved the problem by handling all the DB operations in a separate thread that processed write and read queues. It certainly was not as easy as using MySQL/PostgreSQL, where the server does everything for you. Nowadays there are probably plenty of libraries that handle that for you.


WAL mode has been added in the last 10 years which handles read concurrency really well IMHO. You’re still constrained to a single writer but that’s not usually a problem as long as you keep your write transactions short.


This pattern is what I did when I used SQLite in production. I used a lightweight actor-model microframework to make it easier to route write requests from multiple areas of the code to the single writer thread, and to offer a continuation (or async/await) to perform any follow-up actions. Performance easily went beyond the requirements for the app.
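
A rough sketch of the pattern (a minimal illustration, not the actual microframework; names are invented):

    import queue
    import sqlite3
    import threading

    write_q: queue.Queue = queue.Queue()

    def writer() -> None:
        # The connection lives entirely in this thread; all writes funnel here.
        conn = sqlite3.connect("app.db")
        conn.execute("PRAGMA journal_mode=WAL")  # readers no longer block the writer
        conn.execute("CREATE TABLE IF NOT EXISTS events (id INTEGER PRIMARY KEY, payload TEXT)")
        while True:
            sql, params = write_q.get()
            with conn:  # one short transaction per write
                conn.execute(sql, params)

    threading.Thread(target=writer, daemon=True).start()

    # Any part of the app can enqueue a write without worrying about locks:
    write_q.put(("INSERT INTO events (payload) VALUES (?)", ("hello",)))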


I'm not sure how to express my reaction to this kind of design/architecture without being rude. Especially with performant open source databases being available. Was this some sort of embedded app or toy project?


You'll probably come to regret that. A DB per customer is a maintenance nightmare.

I remember a discussion here a few years ago about it, and there were so many voices chiming up to say 'we did this, terrible idea'.


Why?



I'm very curious, how do you handle concurrency? Or is this why you have one db per client?


Sounds like a nightmare if you ever have to undergo SOC 2.


Very interested to hear your story and details!


“Self-host” is such a weird word. Having your own stuff yourself should be the default, should it not? I mean, you don’t “self-drive” your car, nor “self-work” your job. The corresponding words instead exists for the opposites: You can have a chauffeur and you can outsource your job.

I think the problem is entirely caused by the US having absolutely abysmal private internet speeds and capacity. Since you can’t then have your own server at home, you are forced to have it elsewhere with sensible internet connections.

It’s as if, in an alternate reality, no private residences had parking space for cars; no garages, no street parking. Everyone would be forced to either use public transport, taxis and chauffeur services to get anywhere. Having a private vehicle would be an expensive hobby for the rich and/or enthusiasts, just like having a personal server is in our world.

— Me, 2019-10-13: https://news.ycombinator.com/item?id=21235957#21240357


> I mean, you don’t “self-drive” your car, nor “self-work” your job.

You could just take these metaphors up a step and you get the opposite effect.

For example, building/fixing your own car or running your own business. Those are considered unusual compared to using a third party.


And you don't usually grow your own food, yet it's not seen as strange.


It would be seen as strange if you said you were a farmer but didn't have a farm to grow food on, and instead just temporarily rented farms from random places for short amounts of time.

Last year I was renting my farm from AppleFarms but they started charging high prices when I wanted to expand so I switched to AmazonForestServices. For migration I used the DockerShippingService from GiantGarageInc to transplant all the seeds and plants from my previous farm to the new farm. They have amazing support and their staff was very diligent in plucking all the plants carefully.


It’s non-obvious, but that’s actually very close to how farming works today, at least in the US. Farm-owners will lease their land for a season to grow a specific crop. The land needs to meet requirements for acreage, irrigation, etc., but someone else will “use” the farm.

I’m sure this happens on a continuum from “that’s a big field, can I use it?” to “here are seeds, I expect XYZ yield in N months”.


> "Having your own stuff yourself should be the default, should it not?"

I don't think it has anything to do with internet speeds. Self-hosting is simply too complicated, not just for non-technical users, but for many developers too.

Compare the simplicity of installing a desktop app. Click to start the installer. Click next, next...finish. Done.

Self-hosting in the cloud is ludicrously complicated in comparison. Docker, Sandstorm, Cloudron, your command line script - these are neither simple nor easy. (A developer's definition of 'simple' and 'easy' bears no relation to the real definitions of these words.)

It's also one of the reasons why SaaS, for better or worse, is eating the software B2B (Business-to-Business) and B2C (Business-to-Consumer) markets. In comparison, the self-hosting market for web apps is negligible.


> Self-hosting in the cloud

That’s a contradiction in terms. It’s like saying that you own your home when what you have is a timeshare. The cloud is a timeshare.


I think the main point is self-managed vs. buying the service? A more apt metaphor might be renting a car from a company that takes care of the car vs. buying a car and managing all the maintenance myself.


Internet speeds may suck in some parts of the US, but don't get it twisted: companies host their DBs on the cloud to mitigate the risk of their own incompetence/for lean gains from a managed service (RDS has done wonders in this realm).


Chauffeur is the right model for most on-prem business IT, which is typically owned and physically held by the business, but driven by a contractor who comes to visit as needed.


Can you provide some more information about the claim that the US has abysmal internet speeds and capacity?

Edit: Grammar.


I said “abysmal private internet speeds and capacity”. Note the word “private” (which you omitted). By all accounts, the speed and capacity available to US residents in their homes is far inferior (both in price and speed/capacity) to what is available in many nations in the world. Sorry, I don’t have a reference, it’s just what I hear people say every time the topic comes up.


Not only is it on average slower in the US, it's also usually more expensive: https://fairinternetreport.com/research/internet-speed-by-co... (first link I found doing a web search)


IMHO self hosting your database (even in the cloud) is the best way to do it.

You have control over the version.

You have control over features.

You have control over performance.

It’s tons cheaper for greater performance - especially when you go over a few hundred gigs.

Yes, the hosted ones have built in replication - but my data is far too valuable to put in the hands of a third party. If they lost it - they could only shrug and say sorry and that’s it. The TOS indemnifies them.

I honestly think it's kind of lazy not to take the management of your data into your own hands if you run a database of any significant size.

That being said, we do replicate and backup 9 ways to Sunday.


As with most things in infrastructure, it is a tradeoff to be considered properly.

A startup, where you'd rather have your engineers working on adding features to your product instead of "managing and operating" a database? Have a lot of money to throw at this problem, and aware that you are paying for convenience? Then using RDS and such would probably be a better choice.

An established company/product with a team of people working at a non-crazy pace, catering to steady enterprise customers? Would like to build sustainable long-term infrastructure that is optimised for efficiency and cost? Then running your own database, chosen well for your needs, backed up and tested rigorously, and managed by people who care for the company's infrastructure, would be a better idea.

It is weird when large companies with a dedicated platform team choose to run so many databases for their teams/services on a PaaS. That clearly is an optimisation opportunity.


> your engineers working on adding features to your product instead of "managing and operating" a database?

That's the typical sales pitch for cloud offerings, but I really fail to see how it applies to DBs. Which part of “managing and operating” your database do you save when using such a service? And how big of a load did it put on your engineers in the first place?


As a sysadmin who has spent a lot of time engineering systems to do specific tasks, I will tell you that there is a lot of effort that goes into your "production" database. Some of the tasks are not going to magically go away just because you use a managed cloud database service: securing access to your database, monitoring basic metrics such as CPU and disk usage, making sure your data is backed up and tested for recovery. However, if you decide to self-host your production database, you've got more work, such as:

1. Keeping your database software up-to-date. Applying security patches without a huge downtime.

2. Managing the installation so that it is operating correctly - its systemd unit files are okay and it will come back to life should the server restart, logs are being stored on an appropriate disk partition and don't fill up your data directory.

3. Making sure the OS configuration aligns with your needs and your database software's. Which sysenvs did you set? Were they changed as part of your OS updates?

All of the above are worth it in many cases. Spending a few days optimising the installation for your workload can make a huge difference, and also empowers you to use custom extensions like Timescale, which is not so easy to do in some managed database services. Also, cost savings, because you take that effort on yourself.


Backup/restore of your crown jewels will always work. You can easily/always spin up a new DB copy.


What do you mean by "managing and operating" a database?

Applying migrations? initial setup and configuration?


I think he means the usual stuff: monitoring (disk space, memory, disk IO, slow queries), patching (following mailing lists, emergency patches on a Sunday afternoon), failover, scaling (read replicas, multi-master), backups, backup restore tests, tuning, auth, security, etc.


How is a cloud offering going to help you with slow queries and tuning? Same for the security part (the biggest threats come through your app anyway).


Honestly, I don't know and would like to know as well. Can you share some insights into how much a cloud database allows you to tune?

I haven't used a cloud offering before, but OP was asking about managing and operating a database. And that list is what we do at work or at home.


> initial setup and configuration?

If you think there is nothing more to maintaining a database than initial setup and configuration, you are in for a rude awakening. Shit happens.


I mean, how much time would it take daily, on average?

I feel like generally you have nothing to do, except sometimes you've got to do something like respond to a disk quota alert and such.


You don't need to do much. I would say my time budget on self-hosted, replicated and backed-up MySQL database administration is under 20 hours per year. Yes, the first couple of years took more time (learning, building out a backup strategy), but I'd have the same amount of learning with a hosted solution, so I don't consider that part of the time spent.


And every thing you have control over you are also responsible for maintaining. If that's your thing, ok, but not everyone wants to be a DBA all day.


I'm self-hosting a now-300GB database using MariaDB; it's been in production for 14 years with almost no downtime and very good performance. I do not DBA all day; I do it maybe once a year for a version upgrade.

This perception of having to be special, or to focus only on sysadmin stuff, to self-host is incredible. Cloud providers have managed diamond-cartel levels of consumer propaganda in the tech sphere; it's pretty amazing once you notice it.


It's not perception, it's empirical evidence. Every thing I've set up myself is another thing I'm responsible for maintaining. Each and every one of them generates some baseline of stress, because there's only so many things I can juggle in my head at once. I'm relieved when a concentrated economic effort allows me to throw money at something I'd rather replace with doing something else, something I like doing.


At the early startup stages, you barely have to touch it. It'll just merrily run without intervention for months.

At the stage where you need to spend a few hours a month dealing with it, you can afford to hire a DBA.


In my country, after a certain size, Postgres on AWS EC2 plus the salaries of a few capable DBA/devops people is still cheaper than RDS or Aurora.


Actually, in most countries. Even the USA. Depending on the size, of course.


And most importantly, you have control over your DATA.


This is the MOST important reason, which no one talks about.


> self hosting your database (even in the cloud)

> my data is far too valuable to put in the hands of a third party

Confused, you self-host "in the cloud" without a third party involved? What does that mean? Are you not counting the party hosting your machine?


They're talking about running the DB on a VM, setting it up and tuning it themselves, as opposed to a managed solution which is set up, tuned and hosted on a VM controlled by another company.


I'm still unclear on if the VM here is on a machine they own/on their own premises? Or is it under someone else's control? "In the cloud" sounded like the latter but it sounds like the intention was the former?


It means when you use the database service from AWS, they provision the machines and the replication and the database themselves.

Whereas you can get a machine on AWS and setup the database yourself.


They are talking about renting a VM from a cloud provider and setting up the DB themselves (example: EC2) vs renting a fully managed DB from a cloud service provider (example: RDS).


But if you're running on a VM from a cloud provider, isn't your data still in the hands of a third party?


Yes. A Postgres which is currently only 1TB in size. It's read-heavy: the write workload is very low, in the tens to spikes of low hundreds per second, whilst reads are in the thousands per second. The workload is stable during traffic spikes due to app caching and CDN caching (for guests).

I do this for cost reasons: self-hosting the database is so much cheaper than the managed options that for the same cost I can run the entire app stack, load-balanced web servers, etc.

No containers anywhere but maybe in the future I'll add some.


Cheaper in terms of $$$. If you like managing a database, I'm sure it's a good deal!


Postgresql is really easy to manage. I manage a few instances and I rarely have to do anything. They might consume just a few hours per year.

Oracle, on the other hand, requires a lot of babysitting and you'll probably be better off using their cloud (and still it will be more of a burden than Postgresql).


It was very little effort to setup. Used puppet to define everything I wanted the infra to be, and have pretty much left it be.

It's only reachable from the app servers, backed up to tarsnap.

It's really not a lot to learn and hasn't been a chore to operate.


I'm mostly thinking about all the stuff that goes wrong. Corrupted data files, server(s) die, monitoring, getting paged, backups, testing restores, yada yada. It works until it doesn't, and then the value you derive from self-hosting is questioned when there's a failure. If you can self-insure against shit happening & like doing so, then it's probably worth it. Mostly just point this out for the potential self-hosters who haven't thought everything through, and more likely (like myself) who wouldn't know what to think of in the first place.


There are definitely some things to be considered here, however I find that most people drastically overestimate the amount of work associated with hosting things.

Also they tend to underestimate the amount of work required when using managed solutions. For example, you'll certainly want to do secondary backups and test restores even for managed options.


100% this. The demo application I built for my book[0], which sets up an entire standalone PostgreSQL server, can be understood in a few hours.

If your database fits one server, you should consider it.

[0] https://gist.github.com/strzibny/4f38345317a4d0866a35ede5aba...


I don't see any difference in maintenance effort. No matter if the database is self-hosted or in the cloud, I need external backups (built-in backups on that cloud service are convenient but not sufficient) and a tested way to restore the whole infrastructure from scratch.

Someone else managing that DB doesn't prevent me (or someone malicious) from destroying the data, and various infra screwups are still a thing (though perhaps less common). The only difference is whether doing the same things through the managed DB admin panel takes meaningfully less effort than ssh on a self-hosted box; and it seems a wash - in a managed DB it's easier to use complicated features than when self-hosted, but simple things tend to be more complex.


If uptime is critical and you don't want this to be a full time job then don't do it... Opt for PaaS.

But for me uptime isn't critical, I can happily be down for a few days if needed. It hasn't happened once yet but I can tolerate it were it to happen.

Plus this is how I learn. By doing it I learn how to do it and what should be considered. When it goes wrong I learn how to avoid that in future and what I should do to minimise risk of it recurring.

I find the database to be the easiest part to manage; it's file storage, and the availability of that, which I outsource to AWS S3. The volume of that is harder for me to trivially solve. Self-hosting Ceph seems more intimidating. Postgres is easy though.


I had a similar setup, only with MySQL as the database, since 2003. Started on a server in a closet of my own apartment, then moved to Leaseweb, then Hetzner. The project had its highest point some 14 years ago, with well over 1M page views per month, but I could still fit it on a single bare-metal server.

As the project started to fade out, I moved to Hetzner Cloud to reduce the cost of standby infrastructure.

It was down for 4 hours in a row several times over these 18 years, but otherwise the uptime was close to six sigma (under 3 hours per year).

It's surprising how rock solid hosting providers are. Can't imagine hosting anything substantial on AWS after all these years.


In practice, both hardware and software used in production are 10 times as reliable as they were 10 years ago. I've seen less downtime on the average Dell rack server with RHEL than the average yearly AWS downtime.

Additionally, running a managed DB involves a lot of upkeep as well, it’s just a different set of tasks.


I keep hearing about tarsnap - why? It doesn’t seem to have anything new and costs more than the alternatives.


For self-hosting, MariaDB (or Percona or MySQL) with InnoDB is really unbeatable.

- Version upgrades don't require migrations.

- Replication is pretty easy, well understood and allows version differences between the two ends. Nowadays they even got crash resilience right.

- It doesn't require VACUUM or any other regular maintenance.

- XtraBackup is awesome.

I've been running a pretty large and rather critical MariaDB database for about 15 years (of course it has migrated to a different machine a couple of times), without anything 'interesting' happening. (Except for power failures messing up replication consistency - but that seems to be a thing of the past.)

My experiences managing PostgreSQL were... not as great. (Though with regard to most other aspects, PostgreSQL is a lot nicer than MariaDB.)


Are you using GTIDs for replication? Most of my replication issues stem from not being able to use them.


Yes; as I understand it, GTIDs are part of what makes replication safe across crashes.
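
A rough sketch of how a MariaDB replica gets pointed at a primary with GTIDs (placeholder hosts and credentials; shown via pymysql here, but it's just SQL):

    import pymysql

    replica = pymysql.connect(host="replica.internal", user="admin", password="secret")
    with replica.cursor() as cur:
        cur.execute("STOP SLAVE")
        # MariaDB syntax: slave_pos means "resume from the replica's own GTID position".
        cur.execute(
            "CHANGE MASTER TO MASTER_HOST='primary.internal', "
            "MASTER_USER='repl', MASTER_PASSWORD='secret', "
            "MASTER_USE_GTID=slave_pos"
        )
        cur.execute("START SLAVE")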


Yes. I even go so far as to embed the database in my applications.

In general, this is implemented as two main flows:

- Data collected from "the edge": Web servers that serve ads, or receive form fill-outs for lead generation, that do nothing but record this information in a logfile

- Configuration and Reporting pushed from "the hub": A central processing node (usually in a hot/warm configuration, but I've been known to use things like raft here) that receives information from flow 1, makes a decision and writes the result to flow 2.

Because the "data" I want to materialise to clients always either fits into ram, or is in a coherent stream ready to deliver to a client exactly, my applications are very fast: I can always just add another node (in the right geographic place), however I also noticed another really interesting benefit: It is much easier to develop for than using an external database.

I have a general purpose tool I wrote decades ago that moves logfiles around reliably (and at low latency), merges them together, implements failover, etc, so I really don't have very much to write to make a new application. Sometimes I experiment with the encoding of the payload, but sometimes not.


So you're using web apps that run sqlite? With some logic on a central server, which then updates all the web apps?


No. I mean, I have used sqlite with dbmail (not a web app) in this way sometime late 2000s, but if you're just going to load the table into memory anyway, you might as well skip the SQL and just record the data.

For example, I did a "sandwich store" app that would list orders in JSON. These were written to a log file via a PHP script. The replication tool would copy them to the "home base" (the store itself). When the kitchen would print an order, it would record a log line saying the order was started, and this would be replicated via a different logfile to the two web servers. The user might submit an order and not see it right away, but I hid this using a cookie so you might only notice if you were using two web browsers logged in with the same user. The receiver on the edge would write out files for each order containing the most recently loaded status, so collecting the status just involved asking the servers for the contents of this file. When the order was scheduled to be delivered, another log line would update those files. And so on. Nothing really resembling a "database" here at all -- just files and memory.

Most recently, my ad server (Erlang) has an ets table (cached in dets to speed recovery) that contains all of the publisher+targeting details as a key, and the list of matching advertisers. This table gets updated when home base records a configuration change (say, because an operator adds a new site or advertiser). Configuration changes look something like this:

     {{market,[{id,<<"marketid">>}]},
      {{2021,6,28},{10,40,9}},
      <<"operatoruser1">>,'nodename',
      {patch,market,
          [{patch,demands,
               [{patch,demand,
                    [{id,<<"advertiserid">>}],
                    [{patch,customers,
                         [{patch,market_customer,
                              [{customer,[...]}],
                              [{patch,...},{...}]}]}]}]}]}},
     {{usr,[{id,<<"username">>}]},
      {{2021,6,28},{9,52,21}},
      <<"operatoruser2">>,'nodename',
      {patch,usr,[{patch,accts,[{insert,<<"siteid">>}]}]}},
     {{market,[{id,<<"marketid">>}]},
      {{2021,6,28},{9,50,42}},
      <<"operatoruser2">>,'nodename',
      {patch,market,[]}},
     {{market,[{id,<<"marketid">>}]},
      {{2021,6,28},{9,49,37}},
      <<"operatoruser2">>,'nodename',
      {patch,market,
          [{patch,demands,
               [{patch,demand,
                    [{id,<<"advertiserid"...>>}],
                    [{patch,customers,
                         [{delete,market_customer,[...]},
                          {insert,market_customer,...}]}]}]}]}},
They're actually stored as binary terms in a disk_log. A process reads the disk_log and materialises the in-memory configuration for the web services and rebuilds the ets table. Erlang has a lot of tools that (perhaps unobviously) make this very easy.

And so on.


Yes, I selfhost Postgres/Redis/Mongo for small projects (db + app server on the same machine or a small number of VMs). Usually in docker, sometimes systemd units. It's amazing what you can do on a single $5-50/month machine if HA isn't super important.

On larger projects (typically once k8s gets involved) I'm running on a cloud provider anyway and I might as well use a hosted version like RDS for the main database.

It comes down to the importance/budget of the project. I'm not a Postgres expert by any means but I'm confident enough that for simple use cases I can manage selfhosted. And if I need more, hosted versions are available at a cost.

However, any hosted DB product I use has to be open source and in theory easily replaceable with a selfhosted version. After the Parse.com fiasco I'm averse to closed/proprietary components in my infrastructure.


Do you prefer docker over systemd units?


I always wonder what the point is of running a Docker version of a daemon you can easily run via the OS package manager, especially for well-tested packages like databases.

And security patches get rolled out automatically through OS updates. I'm not sure how good Docker containers are on this front.


The great benefit of Docker is that you can replicate the exact same environment on a number of different machines. So the developer on Windows has exactly the same database version and configuration as the production server. Especially useful if you use some extensions or external modules for your database.

If you host something serious and don't need special builds of databases, it is probably better not to use Docker. But it's a bit more effort.


I always thought using the local machine directly is an anti-pattern. There is plenty of virtualization software; one should just run the same OS as production via VMware locally, or give each person their own dev environment in the cloud with the same OS, and be done with it, rather than trying to align the environment only partially.


Doesn't docker do that for developers? You run a container and then applications run in a defined environment.

I don't know how you wan to enforce that all developers and all production machines use the exact same environment. Maybe your production still runs RHEL 7, so all the developers need to run their desktop on CentOS 7?

Or take open source projects. Isn't it easier to put in the readme "To start developing, install docker and run 'docker-compose up'" vs. "Please install Ubuntu 18.04 into a VM, plus these 100 packages, but make sure these 50 packages are not installed"?

In my experience virtualized desktop environments are very painful to work on, even with fast connections.

Docker (or containerization in general) is very useful for development. Much more useful than for deployment/production.


Though I don't use Docker extensively anymore, I think people who are generally new to tech use it to prevent things going haywire during configuration.


I am not OP, but I prefer systemd unit myself. I can use stock CentOS/Rocky/RHEL/Fedora, install the database package and be running in no time.


It depends. Many small projects I'm involved in already have a docker-compose.yml with database, backend and other services. Combined with a private docker registry somewhere (Gitlab, ECR). If that is available I will use docker to deploy.

If that is not available, then it depends on how complex the dependency graph is. If it is simple and I can just `apt install` all dependencies then I'll wrap the install in an Ansible Playbook and use systemd. If not I create my own docker-compose.yml for deployment.

edit: when multiplexing multiple projects on the same host (happens with very small projects) I always use docker.


When I last tried it, AWS RDS still required EBS-backed disks. That may be acceptable for 99% of use cases, but it means there's a ceiling around 60k IOPS, even with PIOPS disks.

Outside of RDS, you're afforded a bit more creativity in your performance profile: multiple disks in RAID 0, tiered storage (burstable AND, rather than OR, provisioned IOPS storage), and instance store, to name a few. I actually once witnessed one of our databases riding out a 400K IOPS storm for a few minutes.

Before I get downvoted: yes, these introduce risk into your data architecture. It's crucial to understand what these choices will do to your failure profile, and plan accordingly. In our case, fast-failover to a hot spare database, combined with total recovery within two hours, was enough of a compensating factor to manage that risk.


> When I last tried it, AWS RDS still required EBS-backed disks.

Beyond having an IOPS ceiling, the other problem is latency spikes caused by EBS failures. At my last job where we had a write heavy RDS instance, an EBS failure would occur almost monthly, with no warning, and cause write latency to spike 3-5x for a couple minutes. This was in 2017-2018.


FYI, RDS Aurora engines do not use EBS (instead a custom replicated log-based storage layer) and thus do not have this IOPS limitation. They are extremely fast.


I've self-hosted everything from Postgres to Elasticsearch.

IMHO, a managed solution isn't that much more reliable than running your own with the right planning. The failover strategy of AWS RDS is just absurd: it's literally just "promote the other instance, switch DNS over". And I have seen issues where the new master was a few transactions behind the replica...

A managed database just moves the responsibility to somebody else when the database is down.

Self-hosting gives you a lot of flexibility to mix and experiment with new technologies and tooling.


Amazing. What advice would you give to someone who is looking to self-host?


Never run postgres with the default configuration. It comes with some extremely conservative settings so you are unlikely to get decent performance out of the box.


Could you elaborate on this or point us to documentation (URL, book, ...) about it?


The postgres docs are excellent, https://www.postgresql.org/docs/13/runtime-config.html

I have seen tutorials where folks go over the most essential settings, but I can't seem to find the ones I have used now, so me just doing a search for 'postgres tuning' is no better than you doing the search yourself.

Maybe someone else can chime in with a good guide?
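
In the meantime, a sketch of the kind of settings those guides usually start with. The values are made up for a hypothetical dedicated 16GB server, not recommendations:

    import psycopg2

    conn = psycopg2.connect("dbname=postgres user=postgres")
    conn.autocommit = True  # ALTER SYSTEM can't run inside a transaction block
    cur = conn.cursor()

    for setting, value in {
        "shared_buffers": "4GB",         # default is a mere 128MB (needs a restart)
        "effective_cache_size": "12GB",  # planner hint, not an allocation
        "work_mem": "64MB",              # per sort/hash, so mind your connection count
        "maintenance_work_mem": "1GB",   # speeds up VACUUM and index builds
    }.items():
        cur.execute(f"ALTER SYSTEM SET {setting} = %s", (value,))

    cur.execute("SELECT pg_reload_conf()")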


We created https://postgresqlco.nf which not only offers detailed help about all configuration parameters, but also provides a "repository" for you to upload/download your Postgres configurations, and a Tuning Guide. Hope it helps.


http://pgconfigurator.cybertec.at/ offers a quick way to obtain an often-acceptable tuning.


+1 for spending a little time to tune Postgres for your needs. Also, backup and test restores regularly.


In the 6 years of running all the infrastructure for many thousands of daily clicks myself, I only once had a corrupt database, and I fixed it about 15 minutes after I got the bug report via email. And that was 5 years ago.

So yeah, running my own Postgres; one on each of my web servers, actually. Mostly because it's simply sooooooo much cheaper, and I hate monetary growth limitations for my projects.


This article convinced me to go managed for Postgres: https://rbranson.medium.com/10-things-i-hate-about-postgresq... (2020)

> The good news is that the pain caused by many of the issues brought up in this post can be reduced or eliminated by using a managed database service like Heroku PostgreSQL, Compose PostgreSQL, Amazon RDS for PostgreSQL, or Google Cloud SQL for PostgreSQL. If you can use one of these services, for the love of all that is holy, please do!


Last place I worked self-hosted. Price performance on old hardware was (and seemingly is) drastically better than cloud DB for smaller usage. Current work is using AWS RDS and it seems to perform pretty poorly unless we're willing to pay a lot more than our scale justifies. (Frankly, I suspect running our prod DB on a developer laptop would outperform the low-cost RDS setup we're using...)


I do host a medium-sized PostgreSQL database, but I would love not to. I need the control you won't have with a cloud offering (you can't install extensions), but I don't want to do all the other steps, and you always have the fear that backups are not working correctly.

And I am not the only one hosting it on my own; many people are using self-hosting PaaS like Dokku, Flynn or CapRover. And all these solutions have a common problem: they all need to reinvent the database-as-a-service layer. What is currently really missing is a good open-source PostgreSQL-as-a-service appliance, something "simple" like Heroku. There have been attempts like Elephant Shed, but they all try to do too much and therefore fail in adoption, as they never reach stability. Or people are forced to use complex solutions like Patroni, which does many things, but if something fails you have no clue what to do.

So what is really missing?

1. A simple old-school PostgreSQL VM image.

2. Built on a copy-on-write file system, so you can clone production environments for development (like https://postgres.ai/ and Heroku do) without really needing any extra storage space.

3. A built-in backup solution like pgBackRest, storing the files encrypted in some cloud storage (with restore options!).

4. It does not need a complex UI or inspection/monitoring software; there are many solutions you can run on a different machine.

5. Replication, auto-failover etc. are hard; just make it a single server which you scale vertically. If you need horizontal scaling you have very specific needs, and there can't be a one-size-fits-all solution, so don't even try. Also, a real bare-metal server with NVMe disks has a lot of power; it's insane how fast it is compared to cloud-hosted databases.


If you genuinely have a fear about backups, why don't you just automate a weekly/daily restore of your backup and have it check that the data makes sense (e.g. there's an order in the orders table within the last 24 hours, that the orders have increased, etc.)?
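
For example, something along these lines (a sketch; DB names, paths and the sanity query are placeholders for your own schema):

    import subprocess
    import psycopg2

    # Restore the latest dump into a scratch database.
    subprocess.run(["dropdb", "--if-exists", "restore_test"], check=True)
    subprocess.run(["createdb", "restore_test"], check=True)
    subprocess.run(["pg_restore", "--dbname=restore_test", "/backups/latest.dump"], check=True)

    # Sanity check: the restored data should contain recent orders.
    conn = psycopg2.connect("dbname=restore_test")
    cur = conn.cursor()
    cur.execute("SELECT count(*) FROM orders WHERE created_at > now() - interval '24 hours'")
    (recent,) = cur.fetchone()
    assert recent > 0, "restored backup has no orders from the last 24h - investigate"

Wire that into cron and alert on failure, and the fear mostly goes away.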


> I need the control you won't have with a cloud offering (you can't install extensions)

Depends on who you go with. Google's managed postgres thing lets you install supported extensions.

https://cloud.google.com/sql/docs/postgres/extensions


They don't let you "install" an extension; the `CREATE EXTENSION` command just enables an already-installed extension for a specific database. There are a ton more extensions than Google Cloud SQL or AWS RDS support. Neither of them, for example, has the often-used hypopg extension, which allows you to create a hypothetical index to check whether it would be used for your queries and see how your execution plan changes. Building indexes for large databases can take a lot of time.
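
Usage looks roughly like this on a box where you can install it (table and column names are hypothetical); the index is only simulated, so it's instant even on a huge table:

    import psycopg2

    conn = psycopg2.connect("dbname=app")
    cur = conn.cursor()
    cur.execute("CREATE EXTENSION IF NOT EXISTS hypopg")

    # Register a hypothetical index; nothing is actually built on disk.
    cur.execute("SELECT * FROM hypopg_create_index('CREATE INDEX ON orders (customer_id)')")

    # Plain EXPLAIN (not ANALYZE) will consider the hypothetical index.
    cur.execute("EXPLAIN SELECT * FROM orders WHERE customer_id = 42")
    for (line,) in cur.fetchall():
        print(line)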


> supported extensions

In my experience, vendors support few extensions (for good reason), but a lot of the time your specific use-case needs something in that long tail of unsupported extensions.

For example, TimescaleDB isn't supported anywhere, so the options are managing it yourself or using Timescale's own hosted Postgres offering.


DigitalOcean does offer TimescaleDB in their managed Postgres offering AFAIK, along with quite a few other extensions (PostGIS etc.).


But does it offer the new TimescaleDB 2.0 with all the new features? They changed their license and made all enterprise features free, but only under their Community License (for self-hosting). Cloud providers are only allowed to use the fully open-source, Apache-licensed version of the extension, which is missing some parts.


The version of TimescaleDB offered by DigitalOcean is Apache-licensed, which is somewhat limited compared to the community-licensed version.


Yes. A few years ago, I was all-in for the cloud. Today, we have way too many examples of cloud providers shutting down people they disagree with, for reasons that usually have nothing to do with terms of service. Given the rapidly expanding judgment of speech that is deemed "unacceptable", this is a real problem for everyone, not just conservative or free-speech sites.

One example of many: America's Frontline Doctors, famous for their dissemination of factual information about Hydroxychloroquine and Ivermectin for Covid treatment, had their hosting pulled by AWS just a few weeks back, despite the ever-growing evidence that their stance is scientifically supported. (And I don't care if they were lying their butts off, AWS still shouldn't have pulled their hosting.)

Cloud services offer awesome leverage, which is a huge advantage, but anyone relying on them for their business or organization to operate is being completely irresponsible.


Completely agree. I am so disheartened by how co-workers don't seem to take any issue with the AWS service suspensions. They're just like "oh well, it won't ever happen to us, and they must have deserved it". My new rule when designing any system now is: no cloud vendor lock-in allowed.


We host our own CockroachDB cluster on 3 baremetal machines.

We also host 2 separate PostgreSQL instances, each with an asynchronous replica (which serves some of the read queries) and with Barman for PITR.

The PG instances used to be on RDS. The performance difference (and cost savings) is... at least 10x.

We have no devops/sysadmins.


I'm not sure whether you are asking whether people use Database-as-a-service instead of installing the DB themselves or whether you are asking about on-premise vs cloud Infrastructure-as-a-service.

As many have noted, DaaS is very convenient, but it's not always completely flexible with configuration, and the pricing is not comparable, since there is usually a minimum price even if you only want 1 table with 10 rows, so it won't scale in the same way.

For all of my home and work projects, we have servers hosted in the cloud to get better internet bandwidth, and we install and run our own SQL Server and MySQL instances. A bare VM is pretty cheap in the cloud, and installing databases is fairly easy, although I have never personally set up clustering or failover in MySQL or SQL Server; I think that is quite involved.


I embed an SQLite3 DB into the application, which self-hosts the DB as well as its own internet/web server. The application syncs with other copies of itself running in other locations. It hosts its own DB, its own server, and has a RAM footprint of under 2MB before the DB is loaded into the memory-mapped backing file. The application (video security) is screamingly fast, cross-platform, and 100% self-contained. Works fine on an air-gapped network. Its biggest issue is that the hardware footprint and expense to operate are so much lower than every other option that it makes people suspicious. That suspicion should be directed at the crappy other options, so bloated they are a sad, expensive joke.


Yes, I self-host for my infrastructure: MariaDB and PostgreSQL. I've been self-hosting MySQL and Postgres for nearly 15 years now. But it's not something I recommend my clients do. You need to be really careful to avoid footguns like the one that took down NewsBlur. For example, I don't run my DBs in containers, although I run my applications within containers.


Makes sense.

How do you handle the firewall? Just open up access to the source IP of wherever the application is hosted?


Yes, limit connections in the firewall. Ideally you are only listening on a private network anyway; better yet, only listen on loopback or a local socket. But still firewall it. You can also configure the database to only accept connections from a particular source. Do that too.


Thanks for clarifying.

> you are only listening on a private network anyway

What if I host my app somewhere other than where the DB is?

> better yet, only listen on loopback or a local socket

For that, the app and DB need to run on the same server, don't they?


What are the downsides of running DB in containers?


I wonder if it is a good deal or not, but I host my own Virtuoso instance (an open source DB for graph data) with a 200GB .db file, plus an Elasticsearch instance with roughly 500GB of data, on an OVH dedicated instance with 128GB RAM. All that for something like 1100€/year.

Would a hosted service be a better idea, in your opinion? (That machine also has a LAMP stack for some of my web stuff.)


I used WordPress as my datastore for a while on personal projects. I could wrestle it mostly into shape and use the built-in REST API. WP is my day job so I know it well and like it, but I'm not a backender, so it was a bit clunky and time-consuming to get things the way I wanted.

Then I needed to host that WP instance somewhere. That was also a pain.

I decided last year to try Node/Express/Mongo for my backend instead and stay fully in JS land. But now I need to host a Mongo DB. Instead of that, I tried Atlas for hosted Mongo, but... it feels like it's from corporate America circa 2004.

Then came Supabase.

I randomly came across it, gave it a try, and it's heaven. It's absolutely dead simple to get working (for my simple needs). It can work as a regular DB or as a realtime thing like Firebase.

Strongly recommend trying Supabase.


+1 for Supabase. Currently playing with Phoenix Framework and using Supabase's PostgreSQL offering. Just needed to enable session pooling and the CITEXT extension, but otherwise it all worked fine. Very price-competitive with many of the other DBaaS offerings.
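
For reference, the CITEXT part is a single statement once you can run SQL against the instance (plain Postgres, nothing Supabase-specific):

  create extension if not exists citext;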


Supabase cofounder here. I just want to give my thanks for your support. It’s always nice coming across a comment like this in the wild - makes all the hard work worth it.


Hi, I hope it's okay to ask you here.

I am looking at Supabase because someone recommended it to me since it uses Postgres.

My current backend uses Postgres and PostGIS, and I was looking for a realtime geospatial service. I'm wondering, does Supabase make it easy for me to do realtime geospatial querying, something like opening a listener for a radius (10km) around the user and having it continually update the results around me? I see that you have the PostGIS extension available, so can I do the thing I mentioned?

I'm using Firebase currently, with GeoFire, for the realtime aspect of my app. It has limitations, like not being able to do ordering or additional filtering, because of the GeoFire approach.

Please let me know, and good luck with Supabase !


Your use-case is a bit novel, so it might be worth asking in our forum to make sure I haven't misunderstood your requirements: https://github.com/supabase/supabase/discussions

I think we have all the pieces in place for you to use:

  - a Postgres database with PostGIS
  - a Realtime listener for listening to database changes

I don't know if you need realtime though if you are constantly sending the user's location to the database. It might look something like this:

  - Turn on PostGIS
  - Create a Postgres function which takes in the user's location and calculates radius results, e.g. get_objects(lat text, lng text, radius smallint)
  - Call this function from the client every XX seconds using the client[0]: `supabase.rpc('get_objects', { lat, lng, radius: 10 })`
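
A rough sketch of what that function could look like, assuming a hypothetical `places` table with a PostGIS geometry column named `location` (placeholder names, adjust types to taste):

  create or replace function get_objects(lat float8, lng float8, radius float8)
  returns setof places
  language sql stable as $$
    select *
    from places
    where ST_DWithin(
      location::geography,
      ST_SetSRID(ST_MakePoint(lng, lat), 4326)::geography,
      radius  -- meters, because of the geography cast
    );
  $$;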

Hope that gives you a few ideas!

[0] RPC: https://supabase.io/docs/reference/javascript/rpc


Thanks! That's awesome. I made a lot of progress yesterday with supabase, setting up some tables and auth :D

I really like it, and I'm going to keep maintaining a secondary project with it just to keep up with all the good work. Seriously, all the best!


Only issue with Supabase so far is no HA


For personal stuff, I tend to keep it about as simple as possible with an HSQLDB instance hosted in-process. (All JVM/Clojure)

I use a small library that encapsulates the use-case (including some schema migration stuff that should probably be replaced with flyway).

https://github.com/mschaef/sql-file

There's a lot that can be said about this approach, both for and against, but I find that it brings a lot of power, is easy to set up, and generally stays out of the way. Given that the fundamental abstraction presented to user code is essentially SQL with connection pooling, there are also good escape strategies to get to architectures that would support however much capacity I'm ever likely to need.


Self-hosting MariaDB with multi-master replication. Works pretty well for my cases. Have a friend who was surprised by the AWS pricing changes on Aurora (they charge per query now instead of bandwidth/usage?)


We ran on this setup for a couple of years. We even migrated from Linode to AWS EC2 instances (that was a fun problem!). To start with, Galera was okay, but as our traffic picked up things became sketchy.

As traffic grew we'd encounter ever more outages. It seemed that some percentage of requests between nodes was being dropped (despite AWS's insistence to the contrary) and the nodes couldn't gracefully recover. With the rapid growth and a tiny team, we sadly didn't have enough time to fully get to the bottom of it.

The final straw for Galera for us was when we started exploring multi-region replication (us-east-1 and Singapore). The bandwidth between regions was just not high enough to support a cluster. Aurora offered all that and more (though MariaDB to MySQL was a bit fiddly). Since migrating, life (and uptime) has improved immeasurably.


multi-master with conflict resolution is non-trivial.

Do you have automated failover set up?

Also, didn't know about Aurora's per query pricing. That is crazy.


It's been a while since I ran MySQL in prod, but a multi-master setup where all writes go to one of them (and other is set to read-only, and your grants respect read-only) is super operable. If you have your config layer key off of read-only status and you kill existing client connections when you switch from read-write to read-only, it's a pretty small effort to switch masters.

I tend to leave the decision to switch masters to humans, and just automate the process. Unless you have bulletproof conflict resolution/reconciliation, automating failover is inviting a conflict that leads to major pain and a large effort to fix. Better to have a few minutes of write downtime while waiting for someone to make the decision (which includes making sure the dead host is truly dead and not just split-brained).
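
On the MySQL side, the switch itself is only a couple of statements. A sketch, assuming MySQL 5.7+ (MariaDB only has plain read_only); the connection id is a placeholder you'd read from SHOW PROCESSLIST:

  -- old master: stop accepting writes (super_read_only also blocks SUPER users)
  SET GLOBAL super_read_only = ON;

  -- kick existing clients so they reconnect and see the read-only status
  KILL 42;

  -- new master: open it up for writes
  SET GLOBAL super_read_only = OFF;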


Sorry, I meant multi-source replication (different masters, different DBs to same slave(s)).


The idea of "outsourcing blame" mentioned in several comments seems really weird to me: If I make the choice to outsource something, and whoever I outsourced it to fucks up, I'm still the one who made the decision to outsource it. The same goes for outsourcing to someone who does a better job than I could have: then that was a great idea, and yay for me.


It's always good to have a scapegoat. A lot of Open Source companies that sell support services are in many cases really selling themselves as a blameable party.


"Nobody ever got fired for buying IBM".


Right now we (team of 12, 1 devops engineer) use MongoDB Atlas. While MongoDB/NoSQL itself really isn't giving us any value, and I really wish PostgreSQL had been picked for our relational data, I must admit that Atlas really is a nice product.

Hosting is not only about speed and price. We use Atlas (= hosted MongoDB) because we get that nice dashboard with all those statistics and performance-tuning hints. Also, if we screw something up, Mongo engineers are available immediately to help us. That was nice when we broke production because of index creation in the foreground (which PyMongo defaults to, even though MongoDB itself defaults to background creation). We consulted them about tuning a frequently running "slow" query, and that helped.

In short: hosted solution for support and less maintenance. Backups are arranged. We can choose which version to run on and upgrades are also arranged.

Question: is there a comparable service for postgresql like mongodb atlas?


"Question: is there a comparable service for postgresql like mongodb atlas?"

So many options. Amazon RDS and Google Cloud SQL both offer PostgreSQL. Heroku PostgreSQL has been a solid option for years. Crunchy Data's Crunchy Bridge is a recent offering with serious talent behind it.


But I want support like Atlas gives me :) Amazon and Google don't do that. I know I can get hosted solutions, but that really is not the same. EnterpriseDB maybe?


Crunchy Bridge might be what you're after then.


Thanks!


Makes sense.

> Question: is there a comparable service for postgresql like mongodb atlas?

There is no single company behind Postgres, so the big public cloud providers each offer their own hosted solution.


We're hosting Postgres on HashiCorp Nomad and ClickHouse on a separate VM for Pirsch [0]. The Postgres DB is only a few KB (maybe a few MB now, I haven't checked in a while), as it is only used for user accounts, settings, and some configuration. It doesn't do much, so it's doing OK in the cluster using a host volume on one of the machines. ClickHouse uses more storage (don't know how much right now, but it should be less than 100MB) and resources, and therefore lives on its own VM.

The main reason we self-host is privacy and cost. Postgres costs almost nothing, because it's part of the cluster we require anyway (also self-hosted), and ClickHouse can be scaled as needed. Hetzner has some really cheap VMs; our whole setup, including the cluster, costs about 45€ a month.

[0] https://pirsch.io/


No. Unless your business is hosting databases for people, your time can be better spent delivering real business value. Draw yourself a Wardley Map of the services in your product and you will see that the place you should be spending time is delivering features the customer values, not building copies of commodity services.


Self-host for me.

You can use a cheap virtual server + attached storage. In my mind, it's not as difficult as people would have you believe (until you hit a scaling issue and want horizontal scaling, for example).

I also teach how to do it in my upcoming book[0] where I even have a scripted demo to create your first cluster including SSL, SELinux, attached storage,...[1].

For work stuff, I would just use managed offering in the cloud the company already has. So far, that was AWS and Azure.

[0] https://gumroad.com/l/deploymentfromscratch
[1] https://gist.github.com/strzibny/4f38345317a4d0866a35ede5aba...


Used to do this for myself and it worked great for over a decade. But because it was a decade old I had never made it easy to replicate. It was a pet.

I know better now, but also all of those other things are easier when they’re a terraform config than when they’re manually managed. I use RDS now.


We don't; the maintenance cost in man-hours was way too high, and replacing it with AWS Aurora made it at least as reliable with a lot less overhead.

On the plus side, we can do ad hoc tests and experiments by creating a new Aurora cluster from a recent snapshot and trying things out.


I used to self-host and moved to RDS (Postgres); haven't looked back since. It's one thing to just run a DB with Docker, but it's much different when you are talking about version upgrades, log management, snapshot & restore, etc.


I never used a managed database.

For smaller projects it's too expensive. I just deploy a dockerized database next to the application.

For bigger projects every customer so far wanted to have the data physically in their data center. So we just installed databases on VMs.


Yes, but we took it 1 step further and put the database inside the application.

Sounds crazy until you learn that you can expose user-defined functions to SQL and eliminate all network overhead when it runs in the same process. SQLite operations are effectively a direct method invocation across a DLL boundary. If you want queries to reliably resolve within microseconds, this is a great path to go down.
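
Illustrative only, with hypothetical names: once the host app registers a UDF via sqlite3_create_function, a query like this resolves entirely in-process:

  -- my_score() lives in the host binary; no network hop, no IPC
  SELECT id, my_score(features) AS score
  FROM events
  ORDER BY score DESC
  LIMIT 10;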

Not for every application, but it fits ours extremely well. Deploying our software to customer environments is trivial because we only have to worry about the one binary image. No external databases, Docker hosts, etc. need to be installed.


Yup, self-hosted MariaDB/Galera clusters, postgres clusters (many), rabbitmq, redis, and many others. Self-host everything, due to legal/governmental requirements, and also because it's usually cheaper and faster.


I used to deploy apps using Google App Engine and Datastore. For most projects, I now just use AWS EC2 with EBS and self-managed Postgres or SQLite.

In my experience, it is rare to have 100+ TPS applications. As for FAANG-scale infra: YAGNI!


Interesting. Haven't heard of many using Datastore.

How are you managing (a) monitoring and (b) backups for the self-managed setup?


Yes. Our stuff won’t fit on any cloud instances for a reasonable amount of money.


This decision is easy: if you or your team is comfortable with the care and feeding of the DB and knows how to harden and monitor the thing, save the money.

If y'all don't want to be on pager duty, though, pay someone else to manage it.


For work projects: no, I haven't in many years, since 2016 IIRC. For personal projects: yes. I have plenty of hardware resources to spare, so it costs as much as the electricity needed to run the machines (which run regardless). That said, backup dumps go to a GCP bucket, since that's far less likely to fail than one or two drives shoved in the basement, and storage itself is incredibly cheap. Besides, personal projects in my case are not a source of income, so cutting costs as much as possible is the sensible thing to do.


We self host on an IBM DB2 database. 4 hours unscheduled downtime in about 20 years. It was almost the only thing unaffected when we got hit by ransomware last year.


Interestingly, there are two stages of any project when self-hosting is way more efficient.

First is an MVP stage, when you do it on a single VPS for $5/month + $1 for backups.

Second is the multifold scaling stage, when you rent or even buy a couple of cabinets of bare-metal hardware.

BTW, AWS Postgres does not support master-master replication, which makes no-downtime migration pretty hard. Just remember those small quirks that make vendor lock-in stronger.


I'm hosting my own MySQL, Mongo, and Elastic clusters. At the beginning it takes more time to set up than cloud providers' solutions, and it requires more knowledge and tooling to do some ops (upgrades, backups, etc.). But you know which version of the DB you are running and how it is configured. Additionally, cloud providers' own solutions bind you to one of them.


Agree. It's a trade-off. If one knows how to do a bit of ops then it's feasible to self-host.

What's the scale of the application(s) that use these data stores? How do you monitor these clusters?


Standard Grafana + Prometheus + node_exporter + a datastore-specific exporter


Hmmm, it varies: corporate stuff is on cloud vendors and managed; personal or hobby projects are unmanaged.


A lot of big companies do. The large UK utility I work for has dozens of large production Oracle DBs. I'm sure this is also the case with SQL Server / Postgres / DB2(!). The tide is turning, though, and for new systems/projects without strict regulatory requirements I'm sure cloud will become the primary choice.


The company I work for self-hosts our databases. We self-host HBase, Elasticsearch, MySQL with Vitess, Kafka, and a few other things. Our scale is large enough that it makes a lot more sense to pay for engineers rather than to pay for a service.

I work in the Data Infrastructure group and am more than happy to answer any questions about it.


Thanks. How do you handle (a) TLS certificates, and (b) anything other than firewall restrictions for DB servers?


For bootstrapped projects without PMF (which is what my side projects are), I self-host.

Postgres hot-hot with 2 servers.

For clients, I always recommend PaaS.


Self-hosting PSQL via a Docker container + Ansible. Also hosting MongoDB, Elasticsearch, and Redis instances (for sessions & caching, separate instances).

Might consider researching pg_bouncer later (but it's really not needed right now).

Pricing and support for vendor solutions are too bad for a small-scale app.


> Pricing and support for vendor solutions are too bad for a small-scale app.

In-line with my experience too.

What are some of the hard things that you learned self-hosting these many services?


PSQL no-downtime upgrade (e.g. 12 to 13) via logical replication.

So difficult that I think it's worth writing an article about it: https://dev.to/pikachuexe/postgresql-logical-replication-for...
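
The core of it is two statements (a sketch with placeholder names):

  -- on the old (v12) primary:
  CREATE PUBLICATION upgrade_pub FOR ALL TABLES;

  -- on the new (v13) server, after restoring the schema:
  CREATE SUBSCRIPTION upgrade_sub
    CONNECTION 'host=old-primary dbname=app user=replicator'
    PUBLICATION upgrade_pub;

The hard parts are everything around that: logical replication doesn't carry DDL or sequence state, so those need handling at cutover.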

But staying up to date with the latest version is possible and relatively easy with Docker: update the Dockerfile, build and push, then run the deploy via an Ansible playbook.


I am self-hosting Postgres, containerized, on the server, with automatic encrypted backups to S3.

Besides this I also run a self-hosted, production-grade Elasticsearch cluster which I manage via Ansible.

Both of these would be insanely expensive if I chose any of the managed solutions from AWS or GCP.


For anything in prod, not anymore.

For anything in the big clouds, also not anymore.

For everything else: "it depends", but hyperlocal and datacenter stuff is generally a mix of self-hosted and third-party-managed-self-hosted.


If portability from one cloud to another is important to you then self-hosting your database is the only way to go. I use Kubernetes, and have MongoDB running in a docker container.


For Postgres, if you self-host you can create ad hoc native extensions. Most of the time it's not needed, but I've seen 2-4x speedups in critical queries.


Yes, self-hosting personally, and also large mission-critical enterprise systems at work, primarily based on PostgreSQL self-hosted on robust VM infrastructure.


Yes, in the sense of running a container in the cloud. The security requirements for what I work on mean it's pretty much impossible to do it any other way.


We do, simply because it's cheaper and there is not much to take care of with our workload.

We are running MariaDB on a Debian VM in Azure and using both VM backups and mariabackup.

Works fine.


We have a few independent systems. Each runs on a two core VM with room to spare, so why not run the db server there too? It literally costs nothing.


Please enlighten me. I run a commercial web application (Spring + MySQL) from a Linode VPS. Why would I want to host my database elsewhere?


Yes. We use MySQL setups, Percona XtraDB Cluster, and ClickHouse servers, and it's OK to run them ourselves because we work on bare-metal servers.


Where does one start when wanting to self-host a DB?


Look up the various posts praising SQLite3. It is super easy to use. So simple I jokingly suspect a conspiracy as to why it is not the dominant DB solution everywhere.

I work in C++ and have made a super-butt-simple wrapper for SQLite3, available as an open-source lib: https://github.com/bsenftner/kvs


Yes, we have multiple physical and virtual servers. Why? Because I don’t like the added abstraction from hosted solutions, and costs.


Yes, of course. It’s not that scary or difficult.


I do it for my own projects. For work, where cash is not a constraint I use managed options (mostly AWS RDS)


I use SQLite with Litestream (litestream.io) for backups on personal hobby projects.
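
The Litestream config for that is tiny; a sketch with placeholder path and bucket:

  # litestream.yml
  dbs:
    - path: /data/app.db
      replicas:
        - url: s3://my-bucket/app.db

Then `litestream replicate` runs alongside the app and continuously ships WAL changes to S3.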

At work I would always go with a managed DB.


Do you use managed hosting as well? How do you make sure there is only one server at a time?


Yes, I self host my databases. Gives me control to choose my own backup mechanism and costs much less.


You'll pay much more using cloud-hosted DBs vs. bare metal. Cloud-hosted bills you by metering IOPS.


Yup, still self host the DB as I have been doing for over 20 years. Have 40GB stored in MariaDB.


For production, I use a managed service specific to MongoDB, which is Atlas.


We self-host a postgres cluster in k8s (AKS).

The main reason being that the Azure postgres offering was not fit for (our) purpose.

We did start with the Azure offering, and our default is generally to use the cloud offering.


I'm also interested. We are currently using the managed PG server, where we create multiple databases inside. The managed PG database does not support private endpoints, if I recall, and it gets pretty expensive.

Also, a single managed server doesn't scale well.


Could you please elaborate on what the drawbacks of the Azure Postgres offering are?

I am currently evaluating the same choice: Azure Postgres vs. self-hosting in my k8s cluster.


Not the parent, but at {work} we made a similar move, from Azure managed Postgres to self-managed Postgres.

I elaborated a little on why here: https://news.ycombinator.com/item?id=24607664

but it boils down to:

  - enormous performance difference
  - advanced/fine-grained user roles (+ default permissions, not settable without superuser)
  - use of extensions (productivity boosts for us come from Oracle FDW, TDS FDW, TimescaleDB)
  - private network endpoints
  - lower cost


Every single one! The cloud is a trap.


Yep, all the time.





