
PostgreSQL DBaaS vendor comparison and calculator - barnabask
https://barnabas.me/articles/postgres-dbaas.html
======
davb
I recently used Digital Ocean managed PostgreSQL 11 and was pleasantly
surprised. Having used Amazon RDS in the past, I found DO's offering to be
simpler and faster to provision. I did have an issue whereby the password they
showed for a newly created user (on the Web UI) didn't work and I had to reset
the password to generate a new one, but otherwise it was pretty reliable. The
documentation could do with some expansion (the cluster has some default roles
that DO creates, but these don't seem to be documented anywhere) and it would be
great to see wider extension support (I considered trialling pgaudit but it
wasn't supported). Overall we were really pleased.

We're a small team without a dedicated DBA at this point, so outsourcing
management of our database infrastructure was a no-brainer. I was more keen on
managed vanilla PostgreSQL than something like Aurora (where the
implementation could stray far enough from the original that it would be more
difficult to migrate away should the need arise).

~~~
scarface74
Really? The lock-in boogeyman...

If you’re using standard Postgres and reading the standard Postgres
documentation, how would you even know about any Amazon extensions?

Does DO give you data storage across three availability zones? Point in time
backups? Autoscaling synchronous read replicas?

~~~
davb
I'm not sure if DO's offering has the feature parity you need, but it works
really well for us, at our scale.

Having done many migrations between technologies over the years, I think it's
prudent to have some level of skepticism when it comes to "forked but
compatible" systems. I don't consider proprietary lock-in to be a boogeyman.
That's not to say all proprietary technologies are bad (they're not! We use a
mix of proprietary and open tech), but having an escape hatch is one of our
criteria for adopting closed tech, especially cloud services, where prices
aren't fixed and are subject to future increases.

~~~
scarface74
If you are interacting with the database using the standard Postgres drivers
using the standard Postgres syntax why does it matter what’s going on under
the hood?

~~~
merlinsbrain
I don’t know of a single non-trivial situation where you can ASSUME that 2
implementations of the same spec will behave exactly the same in all possible
scenarios.

It’s possible your error budget accounts for this potential difference; the
parent’s error budget seems to have no room for the possibility of edge-case
bugs/divergence in the implementation of their database of choice.

~~~
scarface74
They start with the same open source code. The main difference between
Postgres and Aurora/Postgres is the storage engine and optional integration
with AWS IAM and S3 extensions. If AWS didn’t store the data in a way that was
cross-compatible with Postgres, there would be a major outcry. But he is using
DO. I doubt that his requirements are those of a large enterprise.

I don’t know of one large enterprise that you could go to and say we want to
use Digital Ocean.

------
samcheng
In my opinion, for DBaaS, the number of cores, amount of RAM, and even I/O
performance per dollar are secondary considerations. After all, for any non-
hobby application, the data are significantly more valuable than the hosting
costs.

I would be more interested in a comparison of security features, backup
infrastructure, replication tooling, failover procedures, version upgrades,
etc.

~~~
massaman_yams
Definitely. I'd also look at pg version support (GCP is still on 9.6) and
extension support.

------
bgentry
Do these estimates account for some amount of I/O costs? For example with
Heroku Postgres the instances use provisioned IOPS which is bundled into the
cost, but if you’re running on RDS your cost may vary considerably depending
on actual usage or whether you provision the same amount of IOPS.

~~~
barnabask
The calculator doesn't factor in IOPS, and that's good to know, thanks. I just
wanted a blunt tool to get a feel for these vendors' price points using the
common denominators of memory, CPU, and storage. I could be wrong.
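In code, that blunt comparison is roughly the sketch below. Every per-unit rate is a hypothetical placeholder rather than a real vendor price, and IOPS, network, and backup costs are deliberately ignored:

```python
# Blunt monthly-cost estimate from the three common denominators only.
# All per-unit rates are hypothetical placeholders, not real vendor pricing.
RATES = {
    "vendor_a": {"vcpu": 10.00, "ram_gb": 5.00, "storage_gb": 0.10},
    "vendor_b": {"vcpu": 12.00, "ram_gb": 4.00, "storage_gb": 0.15},
}

def monthly_cost(vendor, vcpu, ram_gb, storage_gb):
    """Sum the three resource line items; ignores IOPS, network, backups."""
    r = RATES[vendor]
    return vcpu * r["vcpu"] + ram_gb * r["ram_gb"] + storage_gb * r["storage_gb"]

for vendor in RATES:
    print(vendor, monthly_cost(vendor, vcpu=2, ram_gb=4, storage_gb=50))
```

It's only a first-pass filter, but it does surface the price-point differences at a glance.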

------
bernardv
You might want to include www.elephantsql.com as well. They provide a level of
support big cloud providers don’t.

~~~
barnabask
Good suggestion, I'll work on it, thanks!

I'm trying to keep the comparison apples-to-apples, and I believe only their
dedicated instances ($50 and up) provide the PostGIS extension. That's kind of
a requirement for the projects I'm interested in, and it seems like a strange
omission.

[https://www.elephantsql.com/docs/faq.html#postGIS](https://www.elephantsql.com/docs/faq.html#postGIS)

Edit: just added ElephantSQL, the update should be out in a few minutes.

------
gingerlime
Wanted to also mention aiven.io. We've been using them for a while now and
we’re very happy. Prompt and knowledgeable support and friendly service. They
can deploy on pretty much any cloud and even migrate between them (I think).

Not affiliated in any way. Just a happy customer.

~~~
jonathanoliver
Aiven has been pretty cool for me to quickly set up clusters of things like
Kafka or Postgres among others to do various tests with. I have really enjoyed
them.

------
lincolnq
Do any of these services provide point-in-time backup recovery? It is very
easy to implement with WAL archiving on my own deployment, but I haven’t found
any hosted services that do it.
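Roughly, the self-hosted setup is just continuous archiving plus a restore target. The paths here are illustrative, and recovery.conf is the PostgreSQL-11-and-earlier mechanism:

```
# postgresql.conf -- enable continuous WAL archiving
wal_level = replica
archive_mode = on
archive_command = 'test ! -f /mnt/wal_archive/%f && cp %p /mnt/wal_archive/%f'

# recovery.conf -- restore a pg_basebackup copy to a point in time
restore_command = 'cp /mnt/wal_archive/%f %p'
recovery_target_time = '2019-04-01 12:00:00'
```

Take a base backup with pg_basebackup periodically, keep the archived WAL, and you can replay to any moment in between.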

~~~
TurningCanadian
Google Cloud has MySQL PITR: [https://cloud.google.com/sql/docs/mysql/backup-
recovery/rest...](https://cloud.google.com/sql/docs/mysql/backup-
recovery/restoring#pitr)

Oddly, it seems to be missing from their PostgreSQL offering, which is also
several versions behind.

[https://issuetracker.google.com/issues/71565188#comment73](https://issuetracker.google.com/issues/71565188#comment73)
mentions Aiven, which looks good to me.

~~~
theptip
I've been staying off Google's postgres offering for this reason; am I missing
something about Postgres replication methods, or is this a massive hole in
their offering?

------
parzivalm
I find it interesting how the author mentions $200/month for a side project.
Makes you think, everyone's side project budget is pretty different.

~~~
barnabask
For a hobby project that I wouldn't expect to monetize, $200 is too much, I
totally agree. For a side project that makes some passive income however, I
would hope to start out under $50 and scale up as needed.

------
ngrilly
Interesting post. But there are other things to consider in addition to the
price, like the supported versions of PostgreSQL, the supported PostgreSQL
extensions, and service availability during automatic updates.

To illustrate my comment, I like the Google Cloud ecosystem, but Google Cloud
SQL doesn't support PostgreSQL 10 and 11, and when using a single node, there
is a downtime during automatic updates. DigitalOcean Managed Databases are
more recent but, according to their documentation, these "issues" are solved.

~~~
brightball
Agreed. I really like a lot of the GCP experience, but Cloud SQL definitely
feels like a service they just added and forgot about.

There are a lot of factors that impact how you select a cloud provider, but
database offerings are always very high on the list.

Amazon clearly gets it. Digital Ocean became legitimate when they finally
added one.

Google had other offerings in the DB space that are compelling but their PG
offering needs a lot of work.

I was really hoping they would be the company that purchased Citus.

------
xenator
Moved all values to maximum and got exactly my current Hetzner configuration:
8 cores, 32 GB RAM, and 500 GB storage. I have around 500 GB of data, but real
storage is about 3 TB.

The difference is that the minimum price on DO is $480 per month. I pay 30
euros, plus some more for backup storage.

~~~
vbsteven
Can you tell me something about how you manage backups or point me to any
guides used for setting this up? I use the same Hetzner setup for some small
side projects and my backup strategy here is just a daily pg_dump and archive
on S3.
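Concretely it's little more than a cron entry like this (bucket and database names are made up, and cron requires % to be escaped):

```
# crontab: nightly custom-format dump streamed straight to S3
0 3 * * * pg_dump -Fc mydb | aws s3 cp - s3://my-backups/mydb-$(date +\%F).dump
```

The custom format (-Fc) is compressed and lets pg_restore pull out individual tables later.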

For larger projects I usually run Amazon RDS and don’t worry too much about
backups.

~~~
xenator
It is not very different from yours. Since all the data can easily be
redownloaded and processed again, I really only care about the code, which is
managed by GitLab on a different machine; I use GitLab's internal backup tool.

------
skyde
Very interesting, but again: will the same amount of cores and memory result
in the same number of transactions per second?

How can we easily know the optimal number of CPU cores and amount of memory
needed to achieve X transactions per second?

I know the benefit of memory depends on the cache hit rate, i.e. how large
your working set is. But assuming a 100% cache hit rate, how many cores do you
really need?

~~~
oskari
Will the same amount of cores result in the same tps? No.

I gave a talk on this subject at PostgresConf NYC a couple weeks ago, my
slides are here:
[https://postgresconf.org/conferences/2019/program/proposals/...](https://postgresconf.org/conferences/2019/program/proposals/transactions-
per-dollar-postgresql-in-virtual-and-bare-metal-clouds)

I'll follow up with a blog post in the near future.
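In the meantime, there's the back-of-envelope version: measure the CPU time one transaction burns (pgbench will tell you), then divide. Both numbers below are assumptions you'd have to replace with measurements:

```python
import math

def cores_needed(target_tps, cpu_ms_per_txn, max_util=0.7):
    """Rough core count for a CPU-bound (fully cached) workload.

    cpu_ms_per_txn should come from a real benchmark such as pgbench;
    the 70% utilization ceiling is an illustrative safety margin.
    """
    busy_cores = target_tps * (cpu_ms_per_txn / 1000.0)
    return math.ceil(busy_cores / max_util)

# e.g. 2000 tps at ~1 ms of CPU per transaction:
print(cores_needed(target_tps=2000, cpu_ms_per_txn=1.0))  # -> 3
```

Real clouds muddy this further: the same nominal vCPU can be a burstable share or a dedicated core, which is exactly why measured transactions per dollar differ across vendors.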

------
Graphguy
IBM has a relatively new managed PG offering with independent scaling of disk
and RAM. Neat feature to not have to rely on instance sizing:
[https://cloud.ibm.com/catalog/services/databases-for-
postgre...](https://cloud.ibm.com/catalog/services/databases-for-postgresql).

disclosure: I work @ IBM.

~~~
barnabask
Thanks Graphguy, I didn't know about that. I'll check it out. Finding out
about other options was part of my motivation for putting this out into the
world.

~~~
Graphguy
NP - I can submit a PR to your work tomorrow? Seems like a useful tool for
surveying the growing PG-aaS space!

------
nknealk
Comparing AWS T-type instances isn't necessarily 1:1 with some of the other
offerings, which have dedicated CPUs:

[https://aws.amazon.com/ec2/instance-
types/t3/](https://aws.amazon.com/ec2/instance-types/t3/)

------
ezekg
Heroku’s missing Standard 1 has always annoyed me, but it did save me the
migration effort moving from Standard 0 to Standard 2 right away, instead of
waiting a little longer before needing to upgrade yet again.

------
aantix
With all of these offerings, I really hate that it’s 2019 and I’m still having
to choose cores and RAM and storage.

I really just want a no-brainer solution that horizontally and vertically
scales whenever needed.

~~~
barnabask
I wonder if Amazon Aurora is what you’re looking for:
[https://aws.amazon.com/rds/aurora/](https://aws.amazon.com/rds/aurora/)

Does anyone have any real world experience with this, in comparison to these
more traditional DBaaS offerings? Not sure if it’s a practical contender at a
smaller scale.

~~~
scarface74
Serverless Aurora is dirt cheap for low usage scenarios.

[https://aws.amazon.com/rds/aurora/serverless/](https://aws.amazon.com/rds/aurora/serverless/)

------
durkie
Thanks for making this! I've got a PostGIS app that is starting to groan a bit
on a Heroku Standard-0 database, and I've been trying to weigh my options for
the next step.

------
EmilStenstrom
Strange bug: if I increase the memory in the calculator to 2 GB, and then go
back down to 1 GB, the AWS cost is stuck at $50.

~~~
tyingq
Offtopic, but as someone who hasn't done a lot of JS recently, debugging
something like this looks like a terrible job. I assume the nonsensical
function/object names are the result of some kind of packer? Do modern JS
folks have to recreate this sort of thing in a pre-packed environment to track
it down more easily?

~~~
andrenotgiant
The author published all the content and code on gitlab, here is the logic for
the pricing calculator:
[https://gitlab.com/barnabas/barnabas.gitlab.io/blob/master/d...](https://gitlab.com/barnabas/barnabas.gitlab.io/blob/master/docs/.vuepress/components/DbaasCalculator.vue)

~~~
tyingq
Ah, thanks. And the fix was:
[https://gitlab.com/barnabas/barnabas.gitlab.io/commit/f7fede...](https://gitlab.com/barnabas/barnabas.gitlab.io/commit/f7fedea54a3be8046df8d7e753cfb7f69fc870b9#289a33df87be2b903369ee3b4c88a22d9fc4a920_121_138)

------
paulmendoza
If you commit to RDS for 3 years, you can get the server for 60% off, which
changes the math a lot.

~~~
barnabask
I totally agree. But I wrestled with putting the AWS numbers at the 1-year
commitment level when none of the other vendors have that requirement. In the
end I decided that was the level of commitment I'd put toward a side project.

~~~
bdcravens
I think 1 year is reasonable. AWS upgrades often enough that it makes sense to
not lock yourself out of those performance increases.

------
devit
Why not just install PostgreSQL with apt on your server and use that?

Or if you need HA, use Kubernetes and deploy the Stolon Helm chart.

~~~
matthewmacleod
Because I don’t know what a “stolon helm chart” is, other than possibly some
kind of Norse relic, and in the amount of time it will take me to find out I
can have an easily-scalable, reliable, secure database in place using
something like RDS.

Sometimes—if you have the skill, knowledge, and resources—it will make sense
to run your own HA database cluster, with a good backup system, automatic
failover, and all the rest of the nice things you want. Other times, it would
be more expensive and less reliable to set up and maintain that infrastructure
versus a third-party service. There is no “one size fits all” solution to this
sort of problem.

------
bdcravens
Why stop at 500GB storage?

~~~
tda
Interesting also that for all providers except Azure, storage can only scale
together with CPU/RAM. Never understood why that is.

~~~
bdcravens
Are you talking about this tool, or the cloud providers? AWS RDS lets you pick
storage size independent of the compute resources.

------
nubela
What about MySQL?

