Hacker News
Fully managed PostgreSQL databases (digitalocean.com)
368 points by alexellisuk 38 days ago | 144 comments

Love this new offering. What I don’t love is they are charging for egress bandwidth ($0.01/GB), even in the same data center [1]. I can understand it for outbound to internet or other data centers, but this is hard to swallow for the same facility.

Hopefully DO will reconsider this!

It also makes me wonder if all access to these databases is via the public network. I’m guessing so.

[1] https://twitter.com/digitalocean/status/1096087824066101248

Charging for data transfer within the same DC is really unusual - AFAIK, none of the big cloud providers do this. Like you, I just can't understand the rationale behind this decision.

To clarify here and from the launch blog post [0].

> Ingress bandwidth is always free, and egress fees ($0.01/GB per month) will be waived for 2019.

In other words, we're not charging while we build this out.

0: https://blog.digitalocean.com/announcing-managed-databases-f...

Well, while I appreciate you coming here to clarify, and waiving charges for 2019 sounds nice, there’s nothing in that post that says those charges won’t come into effect for the long term.

I guess I personally would want more color on that whole post-2019 egress situation before committing to using the service. That’s just me, though!

It's not just you. Paying for egress within the same DC/vendor seems crazy to me.

Edit: It sounds like that might not be the case though. Eddiezane said "We do not plan to charge for private network / VPC traffic."

I appreciate the clarification, but this reads less as “DO is chill” and more as “wait for your surprise bill in a year”

Appreciate you keeping us honest. There are a few things I can say without making false promises.

Once we finish and roll out full VPC support, we believe the vast majority of use cases will not incur bandwidth charges, since most apps run in the same DC as their DBs.

We do not plan to charge for private network / VPC traffic.

We have not yet decided whether to extend the free egress public network bandwidth beyond Dec-2019, but whatever happens, we will give users time to plan for the change (say, 60 days).

If we decide not to extend it, users will incur $0.01/GB in all DO regions, which is one of the most competitive bandwidth fees in the industry.

> If we decide not to extend it, users will incur $0.01/GB in all DO regions, which is one of the most competitive bandwidth fees in the industry.

Hardly. Maybe if you're comparing yourself to the ridiculous prices of AWS, GCP or Azure but definitely not if you look at the rest of the industry.

I can get, assuming bandwidth is kept at a sub-optimal 10% utilization (which should definitely be optimizable by scaling your instances):

- Unlimited traffic with a 250Mbit/s port from OVH for $32/month, which comes to $0.003/GB

- Unlimited traffic with a 400Mbit/s port from Scaleway for 16 EUR/month, which comes to $0.0013/GB

- Overage traffic on any plan from Hetzner for 0.001 EUR/GB.
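For anyone who wants to sanity-check those effective rates, the arithmetic is simple; this sketch uses the prices and port speeds quoted above (the Scaleway figure assumes 16 EUR ≈ $18, which is an approximation):

```python
# Effective $/GB for a flat-rate port, assuming 10% average utilization.
def effective_cost_per_gb(price_usd_per_month, port_mbps, utilization=0.10):
    seconds_per_month = 30 * 24 * 3600          # ~one month
    # Mbit/s -> GB/month (decimal GB): bits / 8 bits-per-byte / 1e9 bytes-per-GB
    gb_per_month = port_mbps * 1e6 * utilization * seconds_per_month / 8 / 1e9
    return price_usd_per_month / gb_per_month

print(f"OVH 250Mbit/s @ $32/mo:      ${effective_cost_per_gb(32, 250):.4f}/GB")
print(f"Scaleway 400Mbit/s @ ~$18/mo: ${effective_cost_per_gb(18, 400):.4f}/GB")
```

At higher utilization the per-GB cost drops proportionally, which is why the 10% figure above is called sub-optimal.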

And then there are the Bandwidth Alliance members who give free egress to CloudFlare, which then charges nothing to most users: Data Space, Dreamhost managed hosting, Linode, Packet.net and Vapor.

If you need even more traffic, you can start to buy transit, which is dirt cheap.

Bandwidth costs are the number one reason I won't even look at most cloud providers.

>And then there are the Bandwidth Alliance members who give free egress to CloudFlare, which then charges nothing to most users: Data Space, Dreamhost managed hosting, Linode, Packet.net and Vapor.

You are stating that as if DO is not a member.


My words were "Bandwidth Alliance members who give free egress to CloudFlare".

DO is a member but they do not give free egress to CloudFlare (or anyone else).

Wow, I was under the impression all Alliance Member get free egress to Cloudflare.

So what is the point of DO being a member?

I still think the point is valid. It seems weird to pay egress fees within the same datacenter. It is understandable for egress to the public web, but for intra-datacenter transfer, it seems less necessary.

While it is nice that egress is not being charged in 2019, I imagine that most people looking for hosted DB solutions are looking at support and cost beyond a 10-month window.

They don't have private networking yet, so it's all public traffic which fits other provider models, although some don't have any bandwidth charges.

If you configure private networking, there are no bandwidth charges. However they don't have split-horizon DNS, which makes it a manual process.

They are cheaper in general though, so it's probably still cheaper than most alternatives with similar services.

As of Wed, Jul 11, 2018, Digital Ocean Support has said there is no HIPAA compliance[1]. I hope this has changed.

> DigitalOcean is very interested in HIPAA and has been exploring the requirements to become compliant. As of right now, we are not HIPAA compliant and unfortunately, we don't have a public ETA I can share with you. If DigitalOcean is still useful for segments of your infrastructure needs, we're happy to answer any additional questions you have about our platform, but at this time cannot provide a BAA for this purpose.

[1] https://www.digitalocean.com/community/questions/does-digita...

EDIT: I'm delighted to hear that DO will sign HIPAA agreements, but I'm unable to find any documentation of this on your website.

EDIT: Since I got down-voted for pointing out a fact from your website, I've added the link and the quote.

Hi! I'm the CSO over at DO. We will sign Business Associate Agreements and certainly believe we fulfill the Security and Privacy Rules (and well beyond).

Not sure why that was posted on our community site, but we'll get it fixed :).

(Verification: https://keybase.io/custos)

No downvotes intended (new HN account, so I can't even downvote yet :))! It was a good question and a great call out.

I'll talk to the team about getting a click-through BAA process in place, perhaps somewhere in the control panel. Right now they tend to be executed once our customer success team gets engaged.

I did post an update to the thread you linked.

That's good news, thanks.

> Free daily backups with point-in-time recovery. Your data is critical. That's why we ensure it's backed up every day. Restore data to any point within the previous seven days.

Seems very similar to Heroku's model at that price, which is actually pretty fair. I'm so happy to see DO doing this because there's always a part of me worried that if Heroku were to go out of business, change their pricing model, or start dipping in reliability, I'd be stuck spending a bunch more hours on dev ops instead of focusing on development.

More quality competition is great to keep Heroku on their toes (though I've had nothing but love for them).

They state (1) free daily backups and (2) restore to any point in time within the previous seven days.

But if they only create a backup once a day, I can only restore to points in time up to the last full backup. Did they mean they constantly back up the Postgres WAL files, so you effectively have backups only a few minutes old?

They are probably doing log shipping or even streaming replication:


All changes are sent to another machine, where they can be applied in the event of a failure of the primary node. The backup process allows you to truncate the logs up to the point of the backup.

If they didn't back up the WAL files, point-in-time recovery wouldn't work at all (not even between full backups).
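For context, continuous WAL archiving on a self-managed instance is only a few postgresql.conf settings. The archive path below is a placeholder, and this is a generic sketch, not DO's actual setup (which they haven't published):

```
# postgresql.conf -- continuous archiving for point-in-time recovery
wal_level = replica          # ('hot_standby' on 9.5 and older)
archive_mode = on
archive_command = 'test ! -f /mnt/wal-archive/%f && cp %p /mnt/wal-archive/%f'
```

With the archive plus a periodic base backup, you can replay to any point in time covered by the archived segments.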

Well, the tooling to do that is available open source [1][2][3]. Barman[1] is more suitable for on-premises, while wal-e[2] and wal-g[3] are more suitable for the cloud. The wal-e[2] actually comes from Heroku.

[1] https://www.pgbarman.org/

[2] https://github.com/wal-e/wal-e

[3] https://github.com/wal-g/wal-g

I highly recommend https://aiven.io/ as another alternative. They have some unique features like local SSDs, cross-cloud and cross-region replication and migration, VPC support, and service integrations.

ElephantSQL is apparently very good: https://www.elephantsql.com/plans.html

Heroku is part of Salesforce, doesn't seem like much risk of them going out of business.

There is always a risk of Salesforce shutting them down, though. Not that I think it's likely, but I wouldn't assume that having a large parent company means they will be around forever.

I really love seeing what DO is doing.

We use a lot of providers, and seeing DO making this available while immediately making the API available is really impressive.

A company that thinks about API first when enabling a new service (like Amazon) enables real developers.

In comparison, we use CircleCI and often you wait for months for features to be available through the API.

Have they fixed their IPv6 support yet?

Last I checked, they still issue a single IPv6 address to each droplet (e.g. a /128, not a /64) and silently block certain ports on IPv6, so no.

Each Droplet gets a /124 block, so 16 IPv6 addresses. Please see this page for more info: https://www.digitalocean.com/docs/networking/ipv6/overview/
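For anyone checking the prefix arithmetic, the stdlib makes it easy (this uses the 2001:db8:: documentation prefix as a stand-in, not a real DO allocation):

```python
import ipaddress

# A /124 leaves 4 host bits; a standard /64 leaves 64.
droplet = ipaddress.ip_network("2001:db8::/124")
standard = ipaddress.ip_network("2001:db8::/64")

print(droplet.num_addresses)   # 16
print(standard.num_addresses)  # 18446744073709551616
```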

Oh my bad. Point still stands.

Well, I guess they will continue not getting my money then.

Can someone explain the significance of this?

It's basically a non-standard use of IPv6, where the ideal is to issue a /64 to each endpoint. Personally I don't mind, but some people think they need 18,446,744,073,709,551,616 addresses per host.

The idea being with all those addresses you can create subrouters to manage your network topology. Unless you're doing some weird tunnelling setup through your VPS, I can't see why you'd need or want such a huge block.

People complain about only getting a /64 on their home routers.

Blocking certain outbound traffic is another matter: that really stinks.

Who do you use instead? I'm really disappointed by their IPv6 support and undocumented egress firewall rules, but found other providers to have their own set of problems. Would love suggestions.

Vultr has proper IPv6 and is the same price.

This is actually some of the best news I could have received. I’m a recovering NoSQL user who’s currently working on an app that I hope to eventually grow to scale... I just don’t have experience managing clusters manually, so I’d honestly be more than willing to pay for auto-management.

Why is everyone concerned with scaling databases and clustering before even launching?

DBs are great when everything works fine, but when they stop working fine for whatever reason, your hair is on fire and you end up having to learn a whole lot about your DB real quick... then you wish:

- You had learned and practiced this stuff ahead of time; or,

- Had paid someone to do it for you, so you can focus on your core business.

Focusing prematurely on scaling and replication is the exact opposite of "focusing on your core business". The only thing I always recommend in terms of databases is having a backup system in place. When need be, and you're one of the 1% that actually needs scaling, AWS, Google, and Azure are there with a warm welcome.

Making sure that scaling is practical for the platform(s) you choose is not the same as focusing your efforts on premature scaling.

I said, I "hope" to grow it to scale. Not my top concern ATM.

I have a Postgres instance doing 3k qps on a single-core server without breaking a sweat, at 9-10% CPU. Databases are underrated.

I wrote a blog post showing off the service here and just did a live demo capturing GitHub data into Postgres. Check it out for some screenshots and to find out more https://www.openfaas.com/blog/serverless-single-page-app/

Wow, I thought this was gonna be cheaper than Heroku Postgres by a lot, but really it’s only slightly cheaper in terms of CPU/mem, and actually more expensive in terms of storage. Would love to see someone benchmark the two offerings for comparisons, especially disk speed.

Heroku's ecosystem around their Postgres offering is pretty advanced. It will certainly take a lot of time for DO to even catch up to this.

Absolutely. Heroku Postgres is a marvel and I use it for all my projects. If DO isn't even competing on price, then I don't know what they are competing on.

Can you elaborate a bit more about that? What does "advance" mean in this case?

Right now our apps are deployed on Heroku but our Postgres is RDS. We didn't want to plan long-term on Heroku. And I don't even know if that makes sense.

So I've been always wondering about RDS vs. Heroku Postgres.

Hard to put into words except for "it just works". But let me try:

- The CLI / toolbelt is pretty amazing. You can do all sorts of analysis right in your terminal.

- Upgrading/crossgrading is pretty easy too.

- Attaching multiple, different apps is really nicely done.

- Extra, third party, services can tap into the logs and give you full insight into what the hell is going on.

The experience is really good, because there is no "experience". It gets out of your way and feels really robust and reliable.

Indeed; don't forget the follower databases (read replicas that can be upgraded to master).

RDS has read replicas and a process to promote to master that's just as easy as Heroku.

With Kubernetes and managed Postgres, under the wonderful Digital Ocean UX and product design, I’m keen to try this on personal projects.

That said, I do wonder who the pricing is targeted at.

Our database primary is around the same price as the highest spec DO is offering, but for that we get 3x the memory, 6x the CPU, and 2x the disk.

For some things there are definitely huge benefits to using a hosted product, Jenkins for example costs us a fortune in engineering time. Postgres though is fairly straightforward for a simple setup, requires little ongoing maintenance, and scales well to bigger boxes without much work.

I can see spending up to $100 a month on this, perhaps, but for the ~day it might take to set up Postgres on $50 of hardware with twice the performance, I can't see going much further than that. Equally, higher up the scale, the top end is not that high-performance for ~$2,500 a month. Point-in-time backups are fantastic, but I'm not sure how necessary they are for most customers over a standard Postgres hot-replica setup.

Eddie from the DigitalOcean DevRel team here. Over the moon to get this in everyone's hands. Here to answer what I can.

I have to say, I'm extremely disappointed by the pricing. It doesn't seem any cheaper than AWS's managed SQL offering (through Lightsail), nor cheaper than Cloud SQL.

I'm trying to run a side project at a low rate, which is usually why I go to DigitalOcean - it's much cheaper, than, say, spinning up a bunch of Heroku dynos. In fact, I just moved a project from Heroku to DO to go from a $14/mo hosting bill to $5/mo on the cheapest VPS.

My one problem has been Postgres - I don't want to self-manage a database. However, Cloud SQL is ~$9/mo at the cheapest, and RDS/Lightsail are both $15/mo at the cheapest. I'd really been hoping that DO would provide a lower-cost alternative.

I'm really sad that the pricing structure doesn't bring managed databases down to hobby tier. I don't even want a free offering; I'm happy to pay $5/mo for a gig of space to run in perpetuity, with restrictions on backup retention or something.

Right now, if I'm an actual startup or even bootstrapped company with money, I see _zero_ reason to use DO's offering over your more-established competitors, and as a hobbyist user, I can't justify spending money on it for a no-income side project.

At the point you care about backups and standby nodes, you're already sort of outside the cheapo hobby tier.

As for getting the $5/month option: Digitalocean Droplets have a 1-click Dokku install, and you can use https://github.com/dokku/dokku-postgres to instantly pop up a postgres instance.

That's kind of fair. I've been doing some thinking since posting this, and I've been sort of debating what a "reasonable $5/mo version" of this would be. The more I think about what managed database services usually entail, the more I concede that my original request probably isn't really a viable product (unless it was a "free tier"-style loss leader). Like, disk space and bandwidth aren't really the primary cost involved.

I think what I really want is managed backups that live outside of my box, and the ability to quickly restore from one if my database crashes or becomes unusable. I think automated failover to another node may not be a reasonable ask for such a cheap product.

Of course, at that point I could spin up a 1GB DigitalOcean volume for $0.10/mo and set up scripts to run `pg_dump` every couple of hours, clean the volume of older backups to free up space, and ideally a script to restore the database from a given backup. _That's all stuff I don't want to do_, but maybe someone's built a reusable set of scripts for it or something - that Dokku container is promising as a starting point, though I'm a little annoyed it only works with S3(-compatible) storage.
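That reusable script is mostly a dozen lines of cron-able shell. All names and paths below are hypothetical, the directory would point at the attached volume in real use, and the `pg_dump` line obviously needs a reachable database:

```shell
#!/bin/sh
# Dump-and-rotate sketch: keep compressed dumps for 7 days.
set -eu
BACKUP_DIR="${BACKUP_DIR:-/tmp/pg-backups}"   # e.g. the mounted DO volume
DB="${DB:-mydb}"
mkdir -p "$BACKUP_DIR"

# Take a new dump (custom format is compressed and supports selective restore).
if command -v pg_dump >/dev/null 2>&1; then
  pg_dump -Fc "$DB" > "$BACKUP_DIR/$DB-$(date +%Y%m%dT%H%M%S).dump"
fi

# Prune anything older than 7 days so the volume doesn't fill up.
find "$BACKUP_DIR" -name "$DB-*.dump" -mtime +7 -delete
```

Restore from a chosen dump would then be `pg_restore -d mydb <file>`.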

Dokku is a management platform kinda like a self-hosted Heroku.

I've used it instead of Flynn in the past with a lot of success.

Edit: If S3 is the problem, backup to a minio instance on your dokku (https://github.com/slypix/minio-dokku)

I suppose I could also use DO's Spaces, which is S3-compatible, though also has an annoying minimum (I do not need the 250 gigs you get for $5/mo) https://www.digitalocean.com/products/spaces/

> At the point you care about backups and standby nodes, you're already sort of outside the cheapo hobby tier

WTF? Why would anyone ever not care about backups?

Was about to chime in with the same... Dokku is great for the hobbyist/cheapish tiers. I wouldn't go below a $10/mo instance, but you can throw a surprising amount of work at it at the lower end.

Why would you be afraid to self-manage a database for a side project? It isn't very hard to install and run, especially if you don't need to tune the system for massive load.

When configuring a number of standby nodes > 1 what replication topology is used? Are {all,none,some} replicas synchronous?

Is the postgres super user available?

What is the list of supported extensions?

Can WAL (physical/streaming) replication be configured to a non-managed postgresql instance? I'm assuming logical replication slots should be supported.

Is there in-built streaming backups/point in time restore?

Any details you can share on how failover and general cluster management is performed?

Are version upgrades supported? Assuming that would use pg_upgrade but is there an option for downtime-less upgrades using logical replication?

I'll bet it's async replication with streaming. Anything else costs too much in complexity and performance. Of course the cost of such a setup is a small window where data loss is possible -- but if you're ok with this service you're already ok with data loss because the daily backups lose any add+delete operations you do in a day (unless you have triggers that copy the deleted data in a shadow table)

They have point-in-time restore which means they're archiving the WAL logs continuously. You don't lose an entire day's worth of changes.

Some of these are from our support playbook and I'm working on getting a few others.

> When configuring a number of standby nodes > 1 what replication topology is used? Are {all,none,some} replicas synchronous?

Trying to get an answer on this one.

> Is the postgres super user available?

At this time only our administrative users are superusers for the database. All other administrative tasks should be possible from the default "doadmin" user provided when setting up your database cluster.

You can see a list of current users in the database (including ours) with this command from the Postgres CLI: \du

If some administrative task isn't possible with the "doadmin" user just let us know and we can report that to our engineering team for review and potential future change. We can't promise anything immediately, but we can definitely look into changes long-term!

> What is the list of supported extensions?

address_standardizer, address_standardizer_data_us, btree_gin, btree_gist, chkpass, citext, cube, dblink, dict_int, earthdistance, fuzzystrmatch, hstore, intagg, intarray, isn, ltree, pg_buffercache, pg_partman, pg_stat_statements, pg_trgm, pgcrypto, pgrouting, pgrowlocks, pgstattuple, plcoffee, plls, plperl (PostgreSQL 9.5+), plv8, postgis, postgis_sfcgal, postgis_tiger_geocoder, postgis_topology, postgis_legacy (see note below), postgres_fdw, repack (PostgreSQL 10+), sslinfo, tablefunc, timescaledb, tsearch2, unaccent, uuid-ossp
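Assuming these behave like stock contrib modules, enabling one should be the usual pattern. A sketch using pg_stat_statements from the list (column names as of PostgreSQL 10/11):

```sql
-- e.g. in a session as the provided doadmin user:
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Then the five most expensive queries by cumulative time:
SELECT query, calls, total_time
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 5;
```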

> Can WAL (physical/streaming) replication be configured to a non-managed postgresql instance? I'm assuming logical replication slots should be supported.

No, not available.

> Is there in-built streaming backups/point in time restore?

Only daily backups are available with our managed database service at this time. We maintain 7 days worth of backups for each database cluster. Backups can be viewed and restored from the Cloud Control Panel by clicking on your cluster and going to the "Backups" sub-tab. This will restore your entire database cluster to that point in time.

While the backup frequency cannot be adjusted (to more often, weekly, or monthly), you can take point-in-time backups by creating a "fork" of your database cluster. The fork can be placed at a specific point in time (check your logs to find when the transaction that modified the data you want back happened), or at "now".

> Any details you can share on how failover and general cluster management is performed?

Nodes are monitored and failover happens automatically with minimal downtime if a node becomes unavailable. Some more info [0].

> Are version upgrades supported? Assuming that would use pg_upgrade but is there an option for downtime-less upgrades using logical replication?

I _think_ all updates may require powering off, but I will try to confirm.

0: https://www.digitalocean.com/docs/databases/resources/high-a...

I recommend adding these points to the website once you know them.

Especially not knowing whether replication is synchronous or asynchronous makes it impossible to design systems on it, as that decides whether you can lose some (likely small) amount of data on a failover, or whether the system guarantees nothing is lost.

I also don't understand another key thing:

On the Pricing page you write "Standby nodes with automated failovers". At the same time, the pricing table offers 0, 1 or 2 standby nodes. So what happens if I buy the offer with the 0 standby nodes, and a failure occurs? Is my data gone? Does DO replace the failed node, and how when there are no standbys?

Kamal from DigitalOcean here. With only a primary node and no standby nodes, a failure will cause a new replacement node to be created with its data being a combination of the latest backup and the write-ahead log. This will get you the most recent data possible. Check out this page for more info on auto failover and how it works with the different configurations: https://www.digitalocean.com/docs/databases/resources/high-a...

How is the WAL stored? If there is a problem with the storage, networking, or DC, you will not be able to get the WAL in order to bring up the DB in a new region. Also, the process of replaying WAL since last backup can take a very long time for a high traffic DB, during which time it is going to be completely unavailable.

The WAL and all backups are stored offsite and are completely handled on our end.

Re: availability, that’s right if you only have a primary node and no standby ones. Like manigandham said, there won’t be any downtime if you have a standby node.

That's what a standby server is for, so that it's up to date without replaying the WAL. Nothing unique to DigitalOcean about that.

Thanks, that answers the question on 0 standbys.

Do you also happen to know the answer on synchronous vs asynchronous replication?

No problem. It’s asynchronous

Thanks. Will you offer a synchronous version in the future?

Asynchronous means you can lose previously acknowledged writes when the primary node crashes, which rules out many use cases (for example, most things involving money).

And Postgres already offers synchronous replication modes.

No concrete plans for sync replication that I know of. The team is aware of the need for it but there's not much info to share at the moment.

This is super helpful. Agreed with other comments about making this available more readily on the DO Managed DB pages.

Having looked through the set of extensions, wal2json (https://github.com/eulerto/wal2json) is missing. That makes sense, since there's no way at the moment to get access to replication.

However, the use case for wal2json, beyond replication access, is that it's the best way I know of to get reliable change notifications from the database. I personally use it to pipe the JSON blobs into message queues to be consumed by applications.

As a short aside, I don't use PostgreSQL's NOTIFY/LISTEN because, as I understand it, you will not receive messages if there are no connections to the DB -- this is a show stopper for me.

AWS RDS allows this and Google's PostgreSQL does not. I've personally abandoned Google's version, partly because it seems to have stalled in terms of upgrades, and AWS for being way too expensive for what you get. Although it doesn't work for me at the moment, I'm hoping the DO option will support this in the not-so-distant future.

Just curious what version(s) of plv8 you're supporting.

Are the SSDs on these located in the servers that the DBs run on, or do they live on a separate SAN?

I ask, as at a previous job, we used to run our Postgres DBs on raw Droplets, and we'd get awesome performance on disk bandwidth (which we really needed), so much so that if we'd have moved to AWS we'd have had to pay very significant $$$ for provisioned IOPS to get the same bandwidth.

It'd be awesome if that same performance/price ratio was available with these new managed DBs.

Managed Databases are built on top of our core compute platform, which uses local SSDs. You should get that same awesome performance!

Do you have any plans for supporting logical decoding plug-ins? Would love if it was possible to enable change data capture out of Postgres on DO with Debezium (which currently supports wal2json and its own DecoderBufs logical decoding plug-in). Disclaimer: I'm the lead of Debezium.

What is the reason behind the limit of 22 connections for the small 1GB instance? I've noticed that PgBouncer can be configured automatically, but still standalone PG should be able to support more than that.

Each open connection is a fork that takes up ~10MB. There is actually a 25 connection limit (3 are reserved for maintenance) so that's ~250MB just on connections.

More info here [0].

0: https://www.digitalocean.com/docs/databases/how-to/postgresq...
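For what it's worth, the bundled PgBouncer is how that limit stays workable: many client connections share a small server-side pool. A self-managed equivalent is only a few lines of pgbouncer.ini (host and database names below are placeholders):

```
[databases]
defaultdb = host=db-host port=5432 dbname=defaultdb

[pgbouncer]
listen_port = 6432
pool_mode = transaction    ; connections are reused between transactions
default_pool_size = 20     ; ~20 real server connections behind the pool
max_client_conn = 500      ; clients connect to port 6432 instead of Postgres
```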

I'd just like to say I'm absolutely fine with your pricing. Seems totally fair. Thanks.

Are there any plans to open data centers in the South of the US (Texas, specifically)? We want to consider DO, but the high latency from the current ones is not an option.

Hey Eddie! If you guys could please take a serious look at split-horizon DNS, that'd be awesome! I can't wait for the MySQL launch, we'll be first in line

Does this have PostGIS support?

yes, see eddiezane's comment above

This looks pretty sweet.

With that said, I seriously think they're missing an opportunity by not offering GPUs. If DO offered a product similar to PaperSpace, basically a dead-simple GPU to connect to your notebook, I don't think small teams would need to look anywhere else for their cloud computing needs.

GPUs are on the product roadmap at DO.

oh - are they? I read this [1] blog post a couple weeks ago which has no mention of GPUs.

[1] https://blog.digitalocean.com/whats-new-for-2019/

Wow, they seem like an awesome company. Perhaps the perfect place for me to work. Thank you for mentioning them.

Full list of extensions and versions off of a live instance: https://gist.github.com/peterc/e4f7a288ed0eb7e4ffe2d8383a086...

Interesting to note that TimescaleDB is installed by default.

Thanks for the heads up, good news indeed!

DigitalOcean keeps knocking it out of the park with this and other offerings. I plan on migrating to their platform at some point, but wish they offered a free/hobbyist tier (something like what is offered by Heroku) for exploring ideas before committing to that price point.

The $5/mo is billed continuously by the hour, and you can store snapshots for $0.05/GB/mo. Basically, throw a few bucks on an account and you can do a lot of exploring so long as you shut down the droplets when you're not using them.

Personally I like the idea of putting up a bit of play money at first vs. the prospect of higher recurring charges for a site that doesn't sleep.

Ok, I do love this feature because it fits perfectly with the kubernetes offering. It's getting easier to manage things in Digital Ocean k8s.

I love DO.

However, in this case, it would cost me at least $800/mo. That $9,600/yr can buy a lot of hardware/upgrades.

If this $800/mo is for the 32GB setup with 1 standby, for around $4,500 you can outright buy 2 servers with similar specs (E3-1240 v6, 32GB DDR4, 2 x 960GB Micron 5100 SSDs), plus you get bare-metal performance.

If you have a rack set up already, even with colocation costs you will still pay less in your first year than with a cloud set up like this.

Obviously, this calculation doesn't include any engineering costs, but if you have several racks set up, chances are you have the manpower on hand, and you're going to save money compared to any cloud setup. It's all a matter of how much flexibility you require.

For most businesses, people cost way more than hardware.

Sure, if you run dedicated PostgreSQL hardware you also need someone to maintain it (probably).

Any chance of a cheaper, low-performance tier for hobby projects? I was hoping to see something at $5/mo like the lowest droplet tier.

Absolutely. I was hoping for a $5-10 offering for my personal stuff; the only traffic I have is me, but it'd be nice not to have to manage my own SQL DB. Unfortunately, the major cloud providers seem to only have free/low-cost plans for NoSQL databases; SQL DBs always seem to carry a significant cost.

I guess for hobby projects, you can run your DB on the 5 USD droplet.

Well yeah, of course. I was hoping they would offer something managed to compete with Heroku's free/$9 tiers, but this is the cost of three droplets and isn't worth it for me at the moment. I'll probably test it out for a bit though but it will be too expensive for hobby/throwaway projects.

As mentioned in another thread, may want to consider dokku with the postgres plugin.

Sounds cool, I'll check it out.

A managed Cloud SQL instance with 600MB RAM on GCP costs about $10.80 a month ($0.015 per hour), plus $0.17/GB/month for storage (say 5GB, about $1 extra).

So about $12 a month for continuous 24/7 use.

The goal is always to get the price down as low as we can ;).

Does that mean this is the lowest tier we will see?

It costs $100/mo minimum to get failover support, which is pretty high for a starter package. That's a high-end laptop a year for a low-performance database with one standby and a daily backup.

How does DO compare with Heroku and RDS in terms of price, AZs, APIs, etc.? And what is the vertical scaling process like? (I can't seem to find anything in a quick Google search.)

Love it. The experience for creating is great as usual (for DO).

I really like the built-in PgBouncer too - easy-to-configure connection pooling!


Not sure if I'm missing something here, but the pricing seems quite high (in relation to DO's other products and competition).

For example, if we take an 8GB RAM node, that's $120 p/m. Add in 2 standby nodes, and we add another $160 p/m. That equates to $280 per month.

AWS RDS, by comparison, is about $260 p/m for an rds.m5.large instance (also 8GB ram) on multi AZ. This can be reduced further via reserved instances.

Admittedly, the DO offering has more processing power, but all in all, I don't get this pricing at all. I was very excited about this, but the pricing is putting on the brakes.
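Putting the figures from this comment side by side (list prices as quoted; reserved-instance discounts and regional differences will change these):

```python
# Monthly figures quoted above -- list prices only, not current rates.
do_primary = 120          # 8 GB DO node
do_standbys = 2 * 80      # two standby nodes at $80 each
do_total = do_primary + do_standbys

rds_multi_az = 260        # approx. rds.m5.large, multi-AZ

print(f"DO: ${do_total}/mo vs RDS: ${rds_multi_az}/mo "
      f"(difference: ${do_total - rds_multi_az})")
```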

It is good to see cheap managed PostgreSQL gaining steam. Back in 2006, the startup I was with moved from PostgreSQL to MySQL because the support we were able to find was both expensive (300 USD/hr) and not satisfactory. Back then MySQL AB (this was before Sun) gave us a 10K two-year deal on three servers, and they had excellent response times and knowledgeable support. So for a long time I was biased against PostgreSQL because of support. It's time to put that to rest :)

I love DO. Personally I don't understand the pricing/appeal over getting a 5 dollar droplet and setting yourself up with automated daily backups on the droplet instance.

Well there's the standby nodes with automated failover, and the fact that maintenance and security are fully handled. For a lot of people that's well worth the extra $10 a month. Plus setting up daily backups is one thing, but verifying that it is done correctly and that they are easy to restore is almost always skipped "until it's needed", which of course is too late.
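To make the "verify your restores" point concrete, a self-managed setup might look something like this (db name, paths, and schedule are hypothetical):

```
# Hypothetical crontab: nightly custom-format dump, weekly restore check.
# Note the escaped % signs, which crontab otherwise treats specially.
0 3 * * *  pg_dump -Fc myapp > /var/backups/myapp-$(date +\%F).dump
0 4 * * 0  createdb restorecheck && pg_restore -d restorecheck /var/backups/myapp-$(date +\%F).dump && dropdb restorecheck
```

The second line is the part that "almost always gets skipped": it proves the dump actually restores, rather than just that a file exists.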

Sweet, but personally, I very much prefer to use a standardized approach like a Docker (or even K8S) PostgreSQL cluster rather than having to deal with a vendor API.

How do I see the detailed pricing of this offering if I don't have an account? I want to know what $15/mo gets me before I decide whether I want to sign up.

Can anyone give me an overview of why I'd want to consider looking at DO if I'm already comfortable with AWS? I don't really have any issues with RDS.

From my own subjective experience:

DO has huge reliability issues, particularly in NYC. Over the course of several months I had repeated brief network drops on random VMs (during EST business hours, lasting up to a couple minutes). Their object storage in NYC3 (only location at time I used the product) would constantly drop uploads of larger files (experienced in multiple locations behind different firewalls, not environmental) and was also down for several days at one point not too long ago.

Perhaps things have improved, or my experiences with roughly 20 VMs distributed between Toronto and NYC were abnormal given the overwhelmingly positive sentiment in this thread, but I figured I'd offer my $0.02, if only to play the contrarian.

A much lower price for your compute nodes to start with, and a less restrictive administration environment.

To me, this actually makes digital ocean a "real" platform. And it's still cost effective. Very exciting.

I've been a Linode customer for 10 years, but now this is too awesome. My next project will be hosted on DO.

Are there any big differences to this compared to Heroku’s Postgres? (Not that there need to be, just curious if there’s reason to prefer one.)

I’m still waiting for one of these providers to crack infinite scaling (similar to Aurora). Seamless patching is a good step though.

You can add additional DBs; I think in Heroku you can't. So you can do multi-tenant or host multiple projects.

It would be great if they allowed installing extensions like PipelineDB and Citus.

Yeah, come on DO, don't charge me for transfer within your own datacenter.

What is the orchestration layer for this? I saw a few DO people at the FoundationDB Summit, but that probably wasn’t related and you’re just using ZooKeeper ;)

Very exciting news! Any plans or ETA for MySQL?

Nick from DigitalOcean here; yes! It's at the bottom of the blog post:

>>> Our engineering team is working hard to bring you even more functionality for your databases in 2019. We plan to have additional engines such as Redis and MySQL, private networking with enhanced VPC, metrics, and alerting through Insights.

What's the benefit (to you) of MySQL over Postgres? I've moved completely over to PG, and am happy as a clam.

As a pretty basic user, it is mostly about adaptation costs regarding tooling, infra, and code. My current stack mostly includes MySQL as the default DB, so my applications and tools also default to MySQL, and for now I don't see a huge benefit since all I mainly do is CRUD. I am considering Postgres for some new stuff, but for the existing applications I don't see any benefit in moving for now.

Do these use dedicated vCPUs or shared vCPUs?

Shared. We're exploring using dedicated as part of our roadmap.

For Compute Droplets there are "standard droplets" using shared CPU and "optimized droplets" that can use 100% of a CPU. Then for the databases, can users use 100% of a CPU even on the $15 plan?

Is it true that Digital Ocean has no staging environment, and just deploys straight to production?

Can confirm a staging environment ;P. Have been hammering this there for a while.

There's been a staging environment for as far back as I can look (>4y), and it's been improving continuously, with a lot of investment put in. There's also extensive CI/CD and multiple layers of continuous testing in dev/stage/prod happening. Definitely not a straight deploy to production.

Do other cloud providers have staging environments? We just have our own VPSs on DO that play the role of staging.

I believe he's referring to a staging environment for changes on their side.

Does this have an SLA?

Using fancy words like egress does not change the fact you are gouging.
