DigitalOcean launches its container service (techcrunch.com)
765 points by neom on Dec 11, 2018 | 251 comments

DO's Kubernetes release is an example of why I am a big fan. As a sole developer, I can't afford high technical debt, but DO packages tech in a way I can manage. I hope they keep it up, and wish other services (here's looking at you, AWS) would package their services as well.

I am a mechanical engineer who dabbles in web development from time to time. I am forever indebted to DigitalOcean for creating a super easy platform for someone starting out with no clue about VPSes. I knew how to operate a Linux machine but hadn't the slightest idea how to host a website myself until I came across DigitalOcean and their LAMP/LEMP tutorials.

Once I was comfortable with DigitalOcean, I tried launching a VPS on AWS and holy crap, it was so insanely complicated. Within 10 minutes of creating an AWS account, I was out. I understand that there is nothing wrong with AWS - it is just not for me - but DigitalOcean has fulfilled my needs in the most perfect way, with a huge knowledge base and detailed tutorials.

DigitalOcean is absolutely incredible.

Hi! I'm a member of the Community team at DigitalOcean. I wanted to thank you for your kind words about our tutorials. This kind of feedback means a lot to us. We're glad we could help you get your website set up.

Just want to add on to this. Whenever I'm looking up a piece of software and I find a Digital Ocean tutorial on it, I know I'm in good hands.

Have yet to find a tutorial that wasn't great!

Amen. DO gets it when it comes to docs.

Your guides and tutorials are at the same level as Stack Overflow for figuring out problems.

"How do I fix this": Stack Overflow is the first hit.

"How do I do this": if DO isn't the first hit already, I scroll down to yours.

Thanks for the tutorials and for keeping them up to date. I'm probably not their target audience for the most part, but when I need to do something in an unfamiliar stack, [stack name] + digitalocean is usually my first search. Wish you guys had a few more professionally oriented products (think AWS/GCP) and no 'max 10 servers' kind of rules so I could use it.

> no ‘max 10 servers’ kind of rules

You can contact their Support to get that increased. Just guessing at the reason, but if there was no limit, what happens if someone hacks your account and spins up a 100,000 node cryptocurrency mining farm?

The same thing applies to AWS, and AWS doesn't have a '10 servers maximum' limit.

More than anything, it tells people about their target audience, which is indie development. That's fine, and it's a great market to be in. But if I have to spin up 17 servers across three continents in 24 hours, I can't really afford to deal with DigitalOcean's support under that kind of stress. This doesn't happen often, but when it does, it absolutely breaks you.

AWS most definitely has service limits that apply to all products, including EC2, for this exact reason (and to curb other abuse). In fact, the AWS limits are even more convoluted and can hit at random if not tracked. More details here: https://aws.amazon.com/ec2/faqs/#How_many_instances_can_I_ru...

Yeah, as I was building out some apps over the past year it was a game of ‘which account limit will I hit next’. Most of them require a support ticket to be raised, and justification.

Hey Rolleiflex - Thanks so much for being a DigitalOcean customer! We would be happy to increase your Droplet limit if you get in touch. Just visit the support link from your Cloud control panel to make the request or drop me a line directly (first name @).

Thanks! Zach, Director of Support, DigitalOcean

Hey Zach, thanks for chiming in. I moved on from DO to AWS a long time ago, but I appreciate the sentiment.

Maybe they should allow customers to set a higher default limit instead of 10, which is silly.

( I never knew there was such a low limit of 10 instances. )

For what it's worth, my account has a limit of 25 and I've never requested an increase. So I guess after some period of use and payment they trust you and increase your limit automatically?

I've been a DO customer for 5 years but I'm not sure when my droplet limit was increased.

I thought the limit was for your protection and you can get it increased just by contacting them?

I preemptively contacted Support to verify my bona fides - I think my cap now (self-requested) is something like 50 or so

AWS also has an initial soft limit on EC2 instances. I don't remember what it is, but it's <100.

Yeah, I’m sure they have — It’s just that AWS’ limits are a lot more compatible with a startup (vs an indie developer) than DigitalOcean’s.

FWIW, I find AWS limits confusing and seemingly random. Also, the fact that you can't limit total spending is _very_ unfriendly to (at least indie, as you point out) developers. I have no experience with DO though, maybe that will change with this offering.

Have you actually dealt with their support though? Your example of going from (seemingly) zero servers to 17 across 3 continents in 24 hours (indicating unforeseen absolutely incredible traction and growth) seems significantly less likely than getting a response from their support team increasing usage limits within the same timeframe.

Seriously though, your tutorials are probably the reason most people I know have used DO. You guys do an amazing job with those!

Got a personal Nextcloud instance going with DO and your documentation has been an amazing help.

Hats off to you and the rest of the team.

As a Linux sysadmin, I love coming across a tutorial from DigitalOcean when I'm searching for a howto, because I can always be sure it will be up to date, well written, and very complete. A big thank you to you and the rest of the team!

> DigitalOcean has fulfilled my needs in the most perfect way with a huge knowledge base and detailed tutorials.

Agreed. While I liked the price point and the UI, the tutorials were leagues ahead of everything else.

Your tutorials are simple and to the point, just the way they should be. I became a customer thanks to the tutorials, and I haven't looked back.

The Django -> Ubuntu tutorials are amazing. Thank you.

I'm a customer, but half the time I'm using your tutorials it's for home projects not my VPS.

Really great job!

I've run across DigitalOcean tutorials that helped me out many times. You all do great work, thanks.

Chiming in as well. You guys do solid work, and have untangled me more often than I can count.

Always come back to DO for the concisely written tutorials. Thumbs up for the good work.

Thank you. The tutorials have been very valuable over the years.

Keep up the good work!

I'm glad I'm not the only software engineer that can't figure out how to use AWS.

So far, DigitalOcean does everything that I need. I hope they can maintain this great developer experience.

AWS is much more structured to treat infrastructure like cattle rather than pets. DO blurs the line a bit; in spinning up a 'droplet' you're leaning more toward the 'pet' type of thinking. I'm super bullish on DO - they have the API to act like AWS (to a degree) but the UI to support small/individual teams.

For my day job we use Azure, and it gets the job done. But all my personal projects are on DigitalOcean; I love how clean and minimal everything is.

I bounced around hosting providers for years until I landed on DO. I have nothing but good things to say about them.

Agreed. DO's knowledge base changed the game for me. I went from never getting anything to work out when trying to build projects to having a plethora of tutorials on a wide range of topics that seem to always work out. DO's server service is awesome as well, but man I'm really really thankful for all their tutorials!

AWS can be complicated; that's why we use Terraform to spin up our infrastructure and link it all together.

Hey there! I'm from DigitalOcean and wanted to thank you for sharing your experience around using our platform and community. I loved reading your comment and shared it with our team, who in turn would like to show you some love. We'd love to hear from you at sammy[at]digitalocean.com!

I am also a fan and a customer, but did you notice how many times their tutorials appear in search results? Their SEO game is strong! That's how they got me.

AWS used to be easy, but over the last decade it's become a specialization. Every time I wander back to it, there's another layer of complexity in the way towards doing something simple.

I agree. It seems like a deliberate strategy. Amazon is trying to create a breed of highly paid AWS experts who will be keenly interested in promoting AWS, because their valued knowledge is provider-specific and not transferable. Similar to how Microsoft created all those MCPs who tried to push Microsoft tech everywhere regardless of how well it fit the task.

It's mostly just the natural evolution of catering to their #1 entity - large corps.

I don't think they care about the long tail.

The 'specialization' may just be an advantage, maybe not.

As we’re layering on praise, I’d like to pile on.

I’m a partner in a company in Puerto Rico. Last year immediately after Hurricane Maria hit Puerto Rico, I wrote all of our off-island service providers asking for any help they could provide. DO was one of the most generous responses. They donated 3 months of services based on our average billing.

We greatly appreciated it, and the nice note they sent me showed they’re not only great at providing a good product, but they’re eminently human as well.

Congrats on the launch to them.

I believe it's small things like this that separate the underdogs from the giants. The giants are good if you are a huge corp like Apple; for the little guys, DO is amazing.

Yep, you can find cheaper VPSes out there, but for reliability and ease of use DO is hard to beat. As for AWS, it's hard to beat for the scope of services it offers, but at the scale of a handful of DO droplets, it's just not worth the effort it takes in AWS.

Combine it with Terraform and you're winning the Internet.

I ran into some limitations with Terraform and at the time it didn't support Vultr, so I ended up writing my own provisioner. It goes a bit further with setting up DNS records as well and I rolled in some of my own Docker deploy stuff into it; although in retrospect I should have made that a different project.



Pointing this out... I've gotten a bit of basics working with vultr + terraform but it's not the most straightforward. I'm not the author, but an interested observer.

In my searches for vultr + terraform, your project never came up, but there seems to be some overlap or room for collaboration.

DO's prices are why I'm a fan.

I don't see why. I switched from DO to Vultr and then to Hetzner because DO's price was 4-5 times higher. Maybe DO is a bit more reliable, but when I can get five servers for the price of one, I can add a lot of redundancy...

Yes, if you are based in the EU, as Hetzner only has DCs in Germany and Finland.

Yes, but I think OVH has data centers in the US as well, if I'm not mistaken.

This I don't understand; there are a million cheaper VPS providers. I can understand choosing DO because it's hip and they have great tutorials, but price? Nope.

They hit a balance of price, simplicity, and the assurance that they will still be there tomorrow, though.

That and as a user of 12 different LowEndBox-style providers, DigitalOcean is only marginally more expensive, with way less downtime, faster support response, an easy to use API, integrations with everything, and even planned maintenance windows which are clearly communicated.

Most of the lower priced options are run on a more shoestring budget and get uptime measured in the 98s. Also, I find a lot more noisy neighbors with many LEB-style providers.

Perhaps OVH is worth a gander, I find it strikes a good balance between DO's pricing and the LowEndTalk providers, yet the uptime is still impressive.


OVH is horrendous for their dedicated offering, especially when trying to cancel.

Their webapp wouldn't work, they wanted me to do an API call to cancel, as they couldn't fix their UI.

Huh, I haven't had this issue on any of their brands. They tend to catch and fix most issues before I become aware on the dedicated machines I have with them.

FYI OVH is a key contributor to OpenStack, they've made quite a few contributions to Ceph, sponsored LetsEncrypt, and host most game servers (and even Wikileaks!).

Adding to this, I once bought their hosted container service, and when I contacted them they told me they no longer supported this product. How?! Why was I able to order it in the first place?!

I am a long-time user of OVH. I don't care about a fancy UI; I have hardly used it for anything except paying bills. I just wish they'd open a data center in India. That's the reason I was looking at DO; they have a DC in Bangalore.

1000x this. OVH only has a bad datacenter in Singapore. I tried it and left.

OVH needs to be in India and East Asia to interest me again.

Bandwidth and power are too expensive in most of Asia, OVH's business model is founded on building DCs next to cheap, plentiful power, then peering at the surrounding IXes for bandwidth. Most of the local IXes in Asia do not have local incumbent carriers participating, and those incumbents want hellish rates to peer.

Routing in Asia will remain crappy due to these peering problems, and volume wholesalers will be few and far between so long as power is spendy.

We are heavy users of GCP and AWS, but still use DO for bandwidth-heavy workloads.

AWS bandwidth charges are brutal, they are the secret billing item that kills products.

It can't make sense to host any kind of web anything when paying 10 cents / Gig egress charges.

If you have images at all ... no-go.

It's really odd, like they don't want your business because that's definitely one line item that always makes me keep my eye on the exit.

AWS has Lightsail.

I've been a user of DO's beta Kubernetes service, and it works well.

Though I would say the title of the linked article is a bit misleading. It is Kubernetes as a service, like EKS, GKE, and AKS.

But not a vanilla container service a la ECS, Fargate, the former Docker Cloud, etc.

Container Service as a Service

works in dev all the way down.

Great distinction. Either way it's nice to see DO expand its offerings. Love their service and the team.

Maybe this is a dumb question, but couldn't/shouldn't a KaaS (and other orchestration systems) just be a layer on top of "vanilla" CaaS?

CaaS-es (i.e. "things that present themselves as a Docker daemon or something like it") don't allow you to provision IaaS-level resources like VMs or disks, merely connect your containers to existing resources. When CaaS-es do allow you to provision stuff (like e.g. Hyper.sh does), they do it through a direct IaaS-level API that is separate from the functioning of the CaaS itself.

The major cloud providers' deployments of Kubernetes (and other server-side persistent-orchestrator systems, like the venerable CloudFormation) are deeply integrated into the cloud platform they're running on, such that the orchestrator itself can provision resources for a container to run on as part of deploying the container. This becomes important when elastically auto-scaling a container, because each container might need e.g. its own disk, and you can't create them ahead of time if you don't know how many you'll need.

This also means that, unlike a CaaS, k8s et al can manage the very cluster that k8s is running on, scaling it out to suit the size of the current/estimated workload.

Theoretically, you can bootstrap k8s on top of a vanilla CaaS—this is how minikube installs "using" your local Docker install, and this is how deployable PaaSes like Flynn and Deis work. But this approach doesn't supply k8s with the cloud-specific integration it needs in order to provision stuff. It might work if you're deploying against something with a standardized API like OpenStack; but none of the major cloud providers are compatible with such APIs, and so they need to build their own k8s plugins that call their IaaS-level APIs, to make k8s work on their clouds.

Or, to put all that another way: if there were standard IaaS-level APIs for k8s to hook into, Docker (and the CaaSes that either use or emulate it) would just hook into those APIs itself, and there would be no need for a higher orchestration layer.

tl;dr: a CaaS doesn't orchestrate the underlying infrastructure, whereas k8s's primary purpose is to provide a cloud-agnostic way to orchestrate containers and the infrastructure they run on.

Kubernetes has worked to not be coupled to Docker (http://cri-o.io / https://www.opencontainers.org/), so in a sense, it already is built on top of a "vanilla" container service.

Don't you need to set up network paths? And connect containers to each other..

Then set up/tear down in a specific order.. and if you have state, keep a subset of containers alive to avoid losing state (even if it's replicated).

Kubernetes is designed in a way where stateless pods (collections of containers sharing the same IP and identity) are decoupled from stateful ones. There are concepts designed into K8s that allow components to attempt to self-heal.

For example, a pod that requires a postgresql pod to connect to will fail and crash out. The scheduler will start a new one. If the postgresql pod is up by that point, then the rescheduled pod will no longer crash.

As far as network paths go, one of the really cool things about pods running inside a k8s cluster is that they can access any of the other pods, even if they are on a different node. However, pods typically reference services (such as postgresql) by DNS name. You specify the set of pods that belong to the service by label selector. This allows pods to come up, tear down, crash, or move to another node, while the service maintains a stable point of contact. It is quite brilliant, and other orchestrators quickly tried to copy it.
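The label-selector pattern described above can be sketched in a couple of manifests (all names hypothetical): the Service matches any pod labeled `app: postgres`, and other pods reach it at the stable in-cluster DNS name `postgres`, no matter which node the pod lands on.

```yaml
# Sketch only - names are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres        # the label the Service matches on
    spec:
      containers:
        - name: postgres
          image: postgres:11
          ports:
            - containerPort: 5432
---
apiVersion: v1
kind: Service
metadata:
  name: postgres             # resolvable in-cluster as "postgres"
spec:
  selector:
    app: postgres            # matches the pods above, wherever they land
  ports:
    - port: 5432
```

If the pod crashes and gets rescheduled on another node, the Service's endpoints update automatically; clients just keep dialing `postgres:5432`.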

Stateful workloads are still difficult. Each distributed stateful system has its own way of setup and teardown. What we will probably see are custom Operators designed for each distributed stateful system, coming out over the years.

Curious - what are some good alternatives to Fargate? I enjoy the ease of use, but it's more expensive than I'd like for low-to-no-traffic side projects.

GCP has Serverless Containers coming soon: https://cloudplatform.googleblog.com/2018/07/bringing-the-be...

Zeit V1 supports launching containers (but it isn't part of V2): https://zeit.co/docs/v1/getting-started/deployment/#docker-d...

Digital Ocean Droplet

AWS Micro Instance

AWS Lightsail

Linode Nanode

Shared Hosting


Tons of options for $5 a month.

Sorry, should've been more specific - looking for the same sort of 'provide a container and we take care of everything else' experience, but ideally for a cheaper price. I know I could get a micro instance and set up ECS on it, but it just seems like such a royal PITA...

Also like how my deploy script is basically 'build image; push image; aws ecs update-service'.

With my side projects that are running on a droplet, it feels like there's an incredible amount of additional setup that I need to do every time I add an additional project. Add the new site to the reverse proxy, setup a git server I can push to, set up post-receive hooks for the server, etc., etc.

I would look into the first option, DigitalOcean's droplets.

You deploy a docker container and they handle the rest

How do you do that? I only know the traditional "install OS and handle devops yourself" model.

Having said that, I got a cheap and rather beefy server on Hetzner and installed Dokku on it, and I couldn't be more satisfied. It's like having my own Heroku for my low-traffic side-projects, almost for free.

What makes ECS 'vanilla'? I thought it was just a bespoke competitor to K8s.

Maybe you're thinking of EKS? ECS is an extremely basic scheduler that's closer to Nomad.

EKS is no more a competitor to K8S than DOK8s is a competitor to K8s. The CNCF Conformance page[1] shows a link to a spreadsheet[2] which indicates there are currently 96 products by 82 different vendors, including 34 hosted platforms like EKS, which are all Kubernetes.

ECS on the other hand, is a so-called "vanilla" container service which provides its own abstractions and offers no suite for conformance, or compatibility with other vendors' offerings. I have not heard lots of people say great things about ECS. If I could say one nice thing, it's that there is probably less to learn about ECS than about Kubernetes.

[1]: https://www.cncf.io/certification/software-conformance/

[2]: https://docs.google.com/spreadsheets/d/1LxSqBzjOxfGx3cmtZ4Eb...

For me, ECS having less to learn was its main appeal. You get integration with AWS load balancers, giving zero-downtime deployment, and its API is very straightforward to automate. I set it up 2 years ago and have barely touched it. I evaluated K8s at the same time and after a full day was left completely confused about how to do the same thing, even if I'd spent a full week.

ECS definitely has some oddities, mainly in the task definition spec, which is mostly 1:1 with docker commands but with their own AWS stuff mixed in. Apart from that, its simplicity vs. K8s was its biggest drawcard. A lot has happened with K8s in two years, I'd imagine, so the same choice today might be a different story.

that is correct

Off topic in terms of the article itself, but I just wanted to give some love to DO. We are a very data- and processing-heavy startup and have been using DO for more than 2 years. We have not experienced any issues: super easy to manage, great performance, and, super important for us, predictable cost.

Their tutorials are also great. Half the time my googling about Ubuntu server stuff ends up on their pages.

I'm happy to see DO get some attention. I hope this means that really good hosting services like DO can still thrive in the age of AWS. DO seems to be doing well.

I agree. I like that Digital Ocean takes their time to get a new product offering right. It shows. Especially when you compare it to AWS, which we're in the process of moving away from.

Since I already had experience with Linux, the first time I used DigitalOcean, it worked intuitively the way I thought it should. And I think DO's documentation is some of the best. They seem to take documentation very seriously, which is important when it's late at night and you're trying to figure out how to do something.

Just wondering, which provider did you choose instead of AWS?

Digital Ocean. We compared AWS, Google Cloud Platform, and Digital Ocean. While the latter isn't an apples-to-apples comparison with the other two, we found that the price, ease of use, and reliability made it the best choice.

I'm not yet sure about support, since they don't offer any phone support. But it can't be worse than Amazon's, where I once literally had to yell at the support rep to stop talking because he just kept repeating himself, over and over, and wouldn't let me move on.

I agree. I've been using them for years now and their uptime and product usability has been awesome and significantly better than other hosts I've used in the past.

I'm a long time happy DO user, this is exciting.

I've been managing multiple docker apps (using docker-compose) on DO for years. Is there a guide I can use to transition my apps from docker-compose to k8s? I've dabbled in k8s, but am not an expert at all.

Any suggestions?

Kompose has been around for a little while; it essentially "compiles" your docker-compose.yml into K8s config files. However, Docker recently announced the ability to deploy directly from docker-compose: https://blog.docker.com/2018/12/simplifying-kubernetes-with-...

My understanding is that you'll still need to do some work, especially if you are building via compose instead of pointing to an image on a registry.
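As a sketch of what that work looks like (service name and registry are hypothetical): kompose can translate a compose service like the one below into a Deployment plus Service, but only because `image:` points at a registry the cluster can pull from. A local `build:` directive has no Kubernetes equivalent, so the image has to be built and pushed separately before the converted manifests are applied.

```yaml
# Hypothetical docker-compose.yml that converts cleanly.
version: "3"
services:
  web:
    image: registry.example.com/myapp:1.0   # must be pullable by the cluster
    ports:
      - "8080:80"
    environment:
      - NODE_ENV=production
```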

Yes, there is still a lot of work that requires a lot of knowledge about k8s. `kompose up` does not just work, but it seems like DO could make that a simple set of commands with better documentation.

I've read the replies.

I've found that kompose does not give me the "try before you buy" experience I was hoping for.

For example, another major cloud provider allows me to push up and deploy my container (stored in their registry) without even having docker installed on the client machine.

Of course, I would want to test and play with k8s before using it in production. But with kompose I still feel like I need to understand a lot about k8s. With docker compose, I have almost forgotten everything but `docker-compose build && docker-compose up`.

I think this is why Heroku was so popular. Just change `git push` to a push to Heroku and try it out.

Since at least a few people align with my comment by upvoting it, it seems like enough people would love an easy transition path from docker-compose to k8s. This would be a killer feature for DigitalOcean: create a new cluster, run a few well-documented commands, and your application is running inside DO on k8s. A guy can dream, right?

DO has always had the best documentation on just about everything technical; maybe this is just an opportunity to write up the steps? Those two documents so far, even the one from docker, still require a lot of extra reading for someone who has used docker compose for a long time.

The ability to go directly from docker-compose to Kubernetes (I believe kompose just compiles into k8s config files) was recently announced: https://blog.docker.com/2018/12/simplifying-kubernetes-with-...

Ditto - I would also be interested in a similar migration guide from docker-compose to k8s in DO.

DO is my favorite hosting service for sure.

Serious question: Is there an emerging cross-platform workflow language to just write stuff to run on any cloud/container hosting setup?

The idea would be to be portable, avoid vendor lock-in, and take advantage of price differences or quickly route around a system failure at one of the providers.

We rely on bash.

Each machine that's spun up is built from scratch via one command-line call. The first half of the process interacts with each hosting API (we rely on DigitalOcean, Linode, and Vultr primarily), to build a clean slate machine with all of the packages and libraries that we expect.

The second half of the process runs the actual build process, building the instance step-by-step on top of the clean slate, blissfully unaware of which hosting provider it lives on.

This model allows us to be portable and avoid vendor lock-in, and a cross-provider infrastructure lets us gracefully handle system failures while keeping costs down.

I made something similar and turned it into a service [1] focused on WordPress, but unfortunately there hasn't been as much interest as I thought there would be, though that could be due to my lack of marketing.

My goal was the same, to make hosting more portable with features like snapshotting and restoration of WP sites across servers and to even eventually expand beyond just servers, to bring in domain registrars and cloud storage to be able to move things around easier. For example: you have a site hosted on AWS EC2 with DNS at Namecheap and nightly backups at Dropbox and let's say the AWS Virginia region goes down. You create a new server in Digital Ocean and restore the snapshot from Dropbox and the linked DNS at Namecheap is auto updated.

The more I thought about this, though, I began to realize that maybe these features wouldn't be useful to the audience I wanted to target: people who want to grow out of shared hosting and have something reliable with fewer noisy neighbors, still more affordable than managed WP hosts, and lastly with more control (bring your own cloud/server provider).

[1]: http://pagefog.com

Some unsolicited feedback: your name is terrible (unrelated to your product in any way), and your website doesn't communicate the problem you say you're solving. All I get from your site's landing page is "wordpress hosting", which is not exactly uncommon; scrolling to the bottom shows me some cloud providers. It makes me think you just help people host WordPress in the cloud.

Wondering how what you did compares to cloud-init? https://cloudinit.readthedocs.io/en/latest/

Any chance you've got an open source version of this script that you could share?

No, unfortunately, open-sourcing it has been on my to-do list for an embarrassingly long time.

But, building one is easier than it sounds! Think of the problem in two parts. First, find a distro used by multiple providers (we're on CentOS) and craft one script that uses each provider's API to spin up a clean machine.

Once you have that done, it's a matter of understanding your own build process, writing a script that you'll pass into each instance on creation that will fetch your source control, install libraries, and put all the pieces together.

Lots of if/else, lots of curl, lots of yum, lots of jq, but all of it is really straightforward.

Also if your providers + OS support cloud-init, then you can express a fleet of instances which run this sort of script at boot time in something like Terraform pretty easily. Switching clouds becomes "uh... what does <provider> call their <size> instance again?"
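For reference, cloud-init user-data is just YAML that runs on first boot. A minimal sketch (the package list and repo URL are hypothetical placeholders) that installs tools and kicks off a provider-agnostic build step:

```yaml
#cloud-config
# Hypothetical user-data: most providers (DigitalOcean, AWS, etc.)
# accept a blob like this at instance-creation time, and cloud-init
# executes it on first boot - no provider-specific logic needed here.
packages:
  - git
  - jq
runcmd:
  - git clone https://example.com/our/app.git /opt/app   # placeholder repo
  - /opt/app/build.sh                                    # provider-agnostic build step
```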

Alternatively, pre-baking cloud images that have already run such a script and are ready to boot becomes pretty easy with a tool like Packer.

Though, as the underlying OS changes, you'll need something to validate your scripts' functionality against, and a tool that's a little more declarative might make them less fragile to those changes.

I started using this model in '15 and it's not fragile. At all. The less I'm relying on outside frameworks, the better I sleep at night.

Certainly understandable -- I'd prefer to keep things simple and just have some kind of validation in place rather than rely on an abstraction if I can get away with it.

Having maintained various automation over the course of the past decade and a half, I can say things do change around. Over the course of only a few years though, obviously you can stick to some LTS release of whatever you're using and be pretty confident that e.g. "some-package" does not get renamed to "some-package-version" or split into "some-core" and "some-utils", or have a package get upgraded to a version with some less-than-backward-compatible configuration options, etc.

Is this a cookie-cutter stack, or do you have different models? I'd be curious to hear more about your application stack.

There's nothing special about our stack. We have four different instance types (static, api, db, proxy), and we rely on a lot of the usual suspects: Apache, Tomcat, MySQL, Varnish, and HAProxy.

Why/what benefit?

Is there a good way to sandbox terraform configurations? I'm not directly involved (just hear the screaming) but everything I'm hearing is that making modifications is a test of willpower.

For us it's been about as transparent as a brick wall and I'm not clear if that's down to our bureaucracy or built into the design. Both are anathema to the goal of making complex deployments straightforward and self-describing (you can't manage something this complicated unless big parts of it are as obvious as can be).

The recommended way, at least for AWS, is to have multiple accounts. One for production, and then however many more for test and development. Separate accounts let you run TF changes and know you will not impact production.

TF can be tricky to grok at first especially if you don't have everything in TF. But, I couldn't imagine managing more than a server or 2 without it or something similar at this point. Once you get into VPCs, IAMs, etc..., some type of tool is really required.

I'm also a little confused about your transparency comment. IME, tf is very clear what it is going to do in a plan. The current state files are also just json, and easy to read/search if you're not sure about something.
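To the "state files are just json" point, here's a sketch: a made-up, minimal terraform.tfstate fragment (the resource names are invented), searched with nothing but standard tools. Real state files have the same top-level "resources" array, so plain text tools are enough to answer "what is TF managing?"

```shell
# Hypothetical minimal tfstate fragment; real files are larger but have
# the same shape, so grep/sed/jq all work for quick inspection.
cat > sample.tfstate <<'EOF'
{
  "version": 4,
  "resources": [
    {"mode": "managed", "type": "aws_instance", "name": "web"},
    {"mode": "managed", "type": "aws_security_group", "name": "web_sg"}
  ]
}
EOF
# List the resource types TF is tracking:
grep -o '"type": "[a-z_]*"' sample.tfstate
```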

Declarative syntax is notoriously hard to debug, especially for newbies.

As a general rule, if you're giving someone a tool that uses declarative syntax, you also need to provide them a private (not shared) sandbox in which to test out theories, try new things, and reproduce errors seen in production.

Since we don't have that, TF is pretty much the worst solution for our problems. Kube or even Docker Swarm would serve us much better.

Terraform requires separate config for each provider.

There's no way around that really. I wrote my own provisioner and each provider is so different:


Vultr allows floating IPs for IPv4 and IPv6, but Digital Ocean only has floating IPv4. Vultr will start a machine with a floating v4, but you have to add a floating v6 address (giving you two v6 addresses). Digital Ocean does the same thing with v4 (giving you two v4 addresses). They both have different network adapter names, so you've got to configure those per provider as well.

Terraform and my own thing help in easing the transition if you ever need to move, but modifications will still have to be made.

Yes, kubernetes

yeah, but your configs aren't fully portable between cloud providers when you look at things like LBs, storage, etc..

Yes and no. "type: LoadBalancer" works fine on almost every cloud, but various annotations need to be added, e.g. for SSL termination on an AWS load balancer. The annotations don't collide, though, so you can have a load balancer with both AWS and Google Cloud annotations, and it will work fine on either cloud.
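A sketch of the non-colliding-annotations point: a single Service manifest carrying an AWS-specific annotation. The annotation key is a real one understood by the AWS cloud provider integration; the certificate ARN and the app name are placeholders. On other clouds the annotation is simply ignored.

```shell
# One manifest, portable across clouds: "type: LoadBalancer" provisions
# whatever load balancer the current cloud offers; the AWS-specific
# annotation below only takes effect on AWS (the ARN is made up).
cat > svc.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: web
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:123456789012:certificate/example"
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 443
      targetPort: 8080
EOF
grep 'type: LoadBalancer' svc.yaml
```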

Volume classes are probably the best example of being cloud-specific, but this problem is solved by having a different volume class for each cloud provider, named the same, such that the deployment can always grab a disk regardless of which cloud it's living in.
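The same-name-per-cloud trick might look like this: two StorageClass manifests (using the standard in-tree provisioners for AWS EBS and GCE PD) that share the name "fast", so the claim that references them is identical on both clouds. The class name "fast" and the claim name are my own placeholders; apply only the class matching your cloud.

```shell
# Per-cloud StorageClass manifests sharing one name ("fast"):
cat > fast-aws.yaml <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
EOF
cat > fast-gce.yaml <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
EOF
# The claim is identical on both clouds; it only knows the common name:
cat > claim.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  storageClassName: fast
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
EOF
grep storageClassName claim.yaml
```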

They are. Kubernetes has abstractions at exactly the right layers (e.g. Service to create a load balancer) so that you can exchange configs between cloud providers.

There can of course be some difference in the capabilities that each cloud provider supports (e.g. not all load balancer implementations may support UDP) but the abstraction is definitely there.

I thought load balancers popped out the end of Services and it was plugins that handled the specific cloud environment? I'd say that still constitutes cross platform.

Some newer candidates besides Terraform are Portable Service Definitions and Crossplane.



Huh .. It only supports the big three providers. No Digital Ocean or Vultr.

Apache libcloud works well for more basic services (e.g. storage and compute) and is extending into container management services.

Terraform/Packer/Ansible is a popular combo.

So now that's managed K8S, managed databases, load balancers, a cloud firewall that's partially VPC like, object storage and block storage.

Curious what's next. Lambdas maybe?

We have a big roadmap for 2019. Queues are interesting and so are functions in general. Nothing to share today but those are items we are assessing for future roadmaps :)

Queues are indeed nice, but please consider modern solutions such as nats.io and others instead of "just" Kafka / RabbitMQ and similar.

Any chance you can bump me up in the queue for getting managed DB beta access?

Is application hosting (heroku, app engine, elb) on the roadmap?

You could use Hephy Workflow (you might have heard of Deis Workflow?) on DOK8S to do that.

This is an open-source fork of Deis Workflow.

We're hoping to add support for Digital Ocean's Kubernetes and ancillary services like storage in the next release. Preliminary testing indicates that it is very much possible. The only dependencies of a prod Workflow deployment are any compliant Kubernetes, load balancers, and S3-API-compatible storage, all of which are available on DO now.

The latest Hephy release does not yet support general S3-api compatibility with arbitrary S3 providers like DO, but DigitalOcean is our first target platform for expanding the supported offerings. The pull requests are open right now.

(Currently supported platforms include GCP/GKE, Amazon, and Azure AKS.)

[1]: https://github.com/teamhephy/workflow

[2]: https://web.teamhephy.com

Dokku is somewhat similar to all of them.

It's not multi-node though right?

I don't think so, at least not by default.

Ping me about queues!

queues please! :)

Managed databases? Is this a private beta?

You can request access: https://try.digitalocean.com/dbaas-beta/

It was announced last month, discussion was here: https://news.ycombinator.com/item?id=18294450

Functions, Kafka, Redis, task queues, Elasticsearch, auth, a Docker registry, cloud build.

DO has a managed database?

Managed queues? Lambdas?

Kafka, redis,...?

Seems pretty standard.

Redis and Kafka are both on the roadmap for the second half. We are starting with PostgreSQL in Q1 and then MySQL.

Good to see someone doing this in the correct order

Thx so much for PG first. It will be PG11, right? Also, what will be the billable aspects - just storage?

Cool, thanks for the quick answer.

I use DO for my personal website and pet projects and it works well.

However, I am curious if any medium to big-sized tech companies are using DO in production. As far as I know, everyone is using AWS, GCP or Azure. What's DO's target audience?


A pretty decent list of high-profile tech companies is on their customer page.

Ah nice. Didn't know about this page. Impressed that ghost is on the list.

I am kinda confused... On the one hand, most people here seem to be fans of the DO services and praise their simplicity; on the other hand, I see their page and wonder what they are offering...

The names of their services seem to be equally confusing as the AWS names. Yes, overall their portfolio is closer to the actual use-cases (as in 'I want to have a blog' -> they have an offer for that), but I am still wondering what a droplet is (looks somehow similar to a Virtual Private Server).

When Hetzner released their cloud service earlier this year, I tried it, loved it and still do. Sure they don't offer the same products (e.g. no S3/Spaces), but at least they use established technical terms instead of made-up marketing names you have to learn again for every new cloud host you want to try.

When you say "Hetzner Cloud", what exactly do you mean? I have checked their marketing pages and they only seem to offer IaaS - is this correct?

You are probably correct. Their product range is quite limited and probably qualifies as IaaS. But on the other hand, everything fits very nicely together (e.g. adding a backup plan for your servers is just a matter of a few clicks and if you don't like clicking through a Web interface: Their API is quite reasonable and easy to use too).

I hear great things about DO and I really want to try it out but DO doesn't accept payment from our country. The same $5 droplet costs $25 here. I really hope you guys expand to the developing countries.

Waiting on Japan region. Most major ones have presence in Japan.

I've been a huge fan of digital ocean ever since I started renting a 5 dollar vps several years ago.

Their UX is consistently easy to navigate, has great documentation, and looks great as well.

I may not be in the category of users that requires or needs many of the features they've released, but I'm consistently impressed by how easy it is for me as a non devops engineer to grok exactly what each new feature they release is.

This looks super neat, I don't have any need for kubernetes as a small time vps consumer, but always happy to see them move forward in this manner.

Usually, I'm never satisfied with products/services and always wonder how they managed to screw up. To counter this behaviour I created a list with things which just work and I have nothing to complain about. DO is on this short list.

As someone who got the k8s invite and has been experimenting with it on DO, I just want to say that I like it, and this was the main reason I decided to stay instead of leaving for GCP.

Meanwhile, Linode fails to innovate.

Well, Linode fails at basic security, so...

They're one of the few with correctly routed IPv6, though they do a poor job of fighting spam on their subnets.

Like how Digital Ocean failed at scrubbing disks before giving them to new customers, revealing private data?


Can I have more info on this?

Basic google search will show lots. Short version is they've been hacked notably at least once.


They are hiring for Kubernetes engineers https://linode.breezy.hr/p/7d3abe9bafd501-senior-software-en...

They've been pretty rock solid with good performance. They got their block storage out as well.

There is not any innovation here. It should be: "Linode fails to start cloning amazon, yet.."

Man, judging from your other comments, you must really have a bone to pick with DigitalOcean.

For one, AWS wasn't the first cloud provider to offer managed kubernetes. Two, every major cloud provider pretty much offers some sort of k8s offering. Third, EVERY cloud provider is trying to play catch up to AWS, that's not specific to DigitalOcean.

"Linode fails to offer basic cloud products that every other cloud provider has." FTFY

"Linode continues to be afraid of success"

Do I understand correctly that they provide the manager nodes for free?

Yes, this is the pricing model for everyone except EKS/AWS as I understand it. Manager nodes are bundled with whatever you spend on your worker nodes.

Google has gone so far with GKE as to offer HA masters distributed across availability zones at no extra cost. (Announced, if I remember correctly, on the day that Amazon announced EKS general availability, which is priced at $250/mo base cost before you even get around to spending anything on worker nodes.)

Eddie from DigitalOcean here.

Just want to call out that our worker node pricing is the same as our Droplets (servers). There is no price markup on using our managed service. In fact it's cheaper than deploying it yourself on DO because you don't have to pay for the master node.

Yes! Hi Eddie, I'm Kingdon we met at RailsConf :D

I've been using Kops with Digital Ocean for some time on-and-off, comparing it to the new managed offering which I've been using in limited release, and it works great (either way).

The main disadvantage of Kops (besides being alpha-only and not managed) is that I pay for all of the nodes I use, masters included. It should be clear that managed k8s offers direct cost savings pretty much everywhere it's offered.

(It would be clear, if AWS was not currently leading the broader market and offering EKS with a price model basically contradicting every other vendor's.)

I think it's more like $100/month base, but still infinitely more expensive than any other provider :(

Close: $0.20/hr * 24 hr * 30 days is $144.

This is a huge number compared to the competition, but also a rounding error in the monthly infrastructure expenses of Amazon's target market here.

I mean, to be fair, that's a really reasonable price for a HA cluster. If you ignore the pricing models of literally all of the competitors' offerings.

So good for a startup, but you're better off using docker-compose if you're hosting your personal project.

This is exactly what they want you to conclude.

You can have a Kubernetes cluster for about $15/mo for your personal project on GKE, if you can cope with several f1-micro or a single g1-small instance hosting your workloads. That's the cost of the nodes, and that's the all-in price. Prices scale up linearly for greater capacity, just add more nodes. (Then of course I guess networking, traffic, and additional storage can also add to the costs...)

If you are comfortable with Kubernetes, you should not be priced out of the market, even for hobbyist projects; the ecosystem is too valuable. I keep saying that Amazon really does not want their customers to use Kubernetes, and it shows in their market offerings. Only Amazon charges this premium for managed clusters, and they don't even seem to recommend using it in the keynote talks I've heard mentioning EKS. "Unless you know you need Kubernetes" is a great way to stop the discussion about adopting new tech.

If you are not already comfortable with Kubernetes, then the primary obstacle to your using K8S is that. The cluster pricing issue is a problem for people who are hyper-focused on Amazon, only.

The only problem with the GKE solution is load balancers :( It's weird that a single LB rule costs more than the rest of the kube cluster.

If you want to do this on the super cheap without provisioning any Load Balancers, you can also use your worker nodes as load balancers via Ingress. (FWIW, DigitalOcean charges for load balancers too, and you can avoid spending on them in the same way. I think they are cheaper though...)

The thing to look up is nginx-ingress settings for DaemonSet and HostNetwork mode. The settings to use might be slightly different on GKE. I can give you the one-liner I use to make it work on DO/Kops, here:

helm install stable/nginx-ingress \
  --name ingress \
  --namespace nginx-ingress \
  --set controller.hostNetwork=true \
  --set controller.daemonset.useHostPort=true \
  --set controller.kind=DaemonSet \
  --set controller.service.type=NodePort

That last setting about NodePort may be extraneous, I think you can skip it... actually now come to think of it, I think that is the part that prevents the ingress from provisioning a Load Balancer in front of itself.

Note of course, that there is a reason why (it is the default and) you may be inclined to purchase a load balancer, as doing it this way is fairly likely to turn out to be not only less reliable, but also super inconvenient in a lot of ways. Not "nasal demons" inconvenient, but...

hi, yes no charges for the managed / master nodes. This is Shiv from DO (I head Products there).

Can you give some insight on whether the master nodes are tiered and if so, how they are tiered. My DO master node doesn't respond to commands as quickly as GKE's but I don't know if that's because it's dependent on the tier I chose for the nodes in the node pool.

Eddie from DigitalOcean here.

Master node tiering is something on our roadmap for the future and not currently implemented.


Love Digital Ocean! I've been a member since early '13 and I still use it monthly to host my projects as an open-source developer.

I love everything from their clean design to their great tutorials and ease of use for everything VPS-related.

I hope the best for them.

I’m just sitting down to do a new startup, but I’ve been out of the devops game for a few years. I feel behind.

What tools/platforms/hosts should I use?

The system will be your standard API+Database+Event bus+workers. I’m a fan of digital ocean and I’ve never bothered to learn AWS (besides S3). I’m very familiar with docker compose, but I’ve never gone deeper than that.

This is a first year startup, we aren’t cost constrained but we are extremely time sensitive.

Should I use Kubernetes? Or Is there something easier that will better serve us the first year?

I think it kind of depends on what you're familiar with. If you're a rails dev and you can build something crazy fast with rails vs anything else I'd just do that. If you're used to working in event-driven microservices environments, then do that. I'm working in an environment using node microservices, mysql, rabbitmq, with k8s and it works really well. I wouldn't say we're _faster_ because of k8s, but k8s really helps us move quickly once we get a service deployed to the cluster.

I'm also working on a start up, and chose to start with heroku and a PHP monolith (with a handful of microservices to do some of the heavy lifting) because those are the things that allow me to move fast. If we ever make some money and the product does find market fit, we'd probably move to something like k8s, but it definitely isn't a part of the early stages for us. YMMV /shrug

Checkout zeit.co or apex/up. Both very easy and great for hosting many parts of your apps. Just no database.

Hopefully they offer easy upgrades and high availability. I always loved the simplicity of their services. Would also love a way to deploy 1 app to multiple locations with ease

Sorry if off-topic: How does DO compare to Linode? I have lots of experience with Linode but since I hear good things about DO I would love to try it out.

In my (personal) experience, you trade Linode's customer service for DigitalOcean's SSD speed.

Neither has very good CPU stats.


Where does it say Linode's performance isn't good? Any source that DO has better disk speed?

It is good and all that they provide more services, but why can't they provide the bread and butter of IaaS: virtual networking (aka a VPC), i.e. the ability to set up a virtual router and other nodes inside a private network. We are currently a DO customer and have needed to hack around this limitation for quite a while now; it's the main reason we want to switch away.

DO's "private" networking was not even truly "private" previously as it was shared among its customers. Only recently did it get to the point that the "private" network is separated from the rest. Anyway, even the new "private" network does not allow for something like installing a custom DHCP server and configuring custom subnet for the nodes inside. One of the most common use cases is to route outbound traffic from all the nodes inside a private network through a public gateway and DO's current configuration does not allow that.

They are by far the simplest cloud platform one can use. They truly honor the best principle (KISS) in the software development community.

I wish DO implement something like Upcloud [1] flexible plan.

I could get 20 cores, 20 GB RAM, and 50 GB SSD for $250/month or $0.35/hr. This truly allows you to scale up and scale out with full flexibility.


i wish they would launch gpu instances :(

We are working on this but don't have a date to share today. I know many of our customers want it, and we may be able to offer it in late 2019.

How do people do HA with DO? Coming from the AWS world I’m used to running in 3 or more AZs.

I used to view Digital Ocean as kind of a play toy, good for experimenting and not much else, but these days they're a key player for sure and they've been a super reliable VPS host. Can't wait to try out some container stuff.

Is Kubernetes now accepted as a way to deploy reproducible single servers, or is it for projects at scale?

I've worked on the assumption it's for clusters (10+) but if DO now support it - an alternative to puppet/ansible?

with the complete control plane managed by DO it gets more appealing for projects with smaller scale.

personally, for single servers i still use just plain docker or docker-compose.

DO is great; I like it very much for personal projects. But it's worth giving Google Cloud a try, as it reminds me there's a cutting-edge cloud service out there, just in case I need it someday.

Hmm, now all it needs is just a comprehensive tutorial for somebody who ignored the whole container fuss so far (happy with Ansible). How to get from 0 to 100 to use Kubernetes?

It's a hard nut to crack. What I've done myself is jump into any book published about Kubernetes and do some online training through a couple of different MOOCs. This may be a good starting point:



> A curated list for awesome kubernetes

I've seen this list before and it is super comprehensive. Thanks for linking it; I need more like this for my "extreme breadth of choices" slide, when I present to my coworkers who are not using k8s yet, to emphasize how many choices there actually are.

"How do you eat an elephant? One spoonful at a time..."

Having the workers on public droplets is very inconvenient. Are there plans to put them in private networking? (VPC / private subnet in AWS terms)

Folks who have tried this: how does it compare to GKE in terms of ease of use? (my impression is GKE has the best offering of all the public clouds)

Maybe I'm too cheap but I don't see an option for 5/mo nodes in any DC, they're starting at 10 or 15.

Hmm! I think this has changed since the beta.

Why not try a cluster with a smaller scaling group? You can create a cluster with only one node in it, but what is it that you're trying to run on top of Kubernetes? In my experience with growing clusters, you probably want to scale up the size of each individual node before you scale up the number of nodes in your cluster. (You might even find that you really need only one big node, say for your databases, and want to build a heterogeneous cluster with an autoscaling group of little nodes and that one big node. That's a possibility with node pools on DO K8s.)

An ideal cluster size for me is probably 5 nodes with ~8-16GB RAM each. You could make it still worthwhile to do the cluster thing with probably only 2 nodes at ~1-2GB each, but that'd be pushing it.

I am practiced at making clusters cheap; I once published an article on the Deis blog about how to deploy the Deis v1 PaaS in a highly available fashion for as cheap as possible.

Many of those lessons, from nearly a year of research I did on the topic prior to publishing, still apply on modern Kubernetes clusters; but many don't, and still others are out the window completely in these managed environments, where it now seems possible to get pretty much the same kind of "High Availability" I was aiming for, but much cheaper and with better guarantees.

For instance, since you are not running etcd for yourself (it runs under the hood, on the management plane) there is no specific rule that says you must have at a minimum 3 or preferably 5 nodes to keep a stable cluster anymore. This was the basics of learning to wield CoreOS and Fleet 101!

Consensus is handled on the masters, and that consensus is subject to split-brain problems, so this knowledge is still important, but you don't need to apply it yourself. In many basic clusters on managed systems like GKE and DOK8s, this knowledge is practically reliquary. Two nodes may ensure that one is there to pick up the slack when the other has a fault, exactly how you'd imagine it should work without a Computer Science degree. And with two nodes, since you'll probably never see a fault like that, and the whole environment is self-healing, even if one happens on your watch you might never have to know about it.
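For reference, the 3-or-5 figure mentioned above falls out of quorum arithmetic: an n-member etcd cluster stays available as long as a majority (floor(n/2) + 1) of members are up, so it tolerates floor((n-1)/2) failures. A quick sketch:

```shell
# Fault tolerance per cluster size. Note that even sizes buy no extra
# tolerance over the next-smallest odd size, which is why the advice is
# always 3 or 5 members, never 2 or 4.
for n in 1 2 3 4 5; do
  echo "members=$n quorum=$(( n / 2 + 1 )) tolerated_failures=$(( (n - 1) / 2 ))"
done
```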

I noticed this as well. I think they are probably still evaluating where to start it at. In all honesty $10 nodes are very fair. I had a semi-poor experience with $5 nodes (for masters, at least) when I used kubeadm on DO. The $15 2cpu/2gb is probably the sweet spot for this. Although the $5 would be nice to start for just messing with some workers.

Every other service has had containers for years. I'm not sure this is an exciting product announcement.

Great job! Maybe now they'll have time to adjust their ssh-keygen documentation to fix their password cracking vulnerability. (https://latacora.micro.blog/2018/08/03/the-default-openssh.h...)

Yeah, how dare a company do two things at once!

Who cares about security? Containers! Containers! Containers!

DigitalOcean has always been behind the curve of these things. All of the other big cloud platforms have had this for at least a few months.

I haven't actually seen anything 'new' come out of DigitalOcean in years.

Their homepage even says "The simplest cloud platform for developers & teams". Their goal is simplicity, not bleeding edge.

and that's just one of the many reasons they're so awesome.

Were they ever 'new' in anything? I use them for small personal projects (they've gotten a lot more stable recently). I never thought of them as an innovative cloud provider but just one that was cheap and easy.

Same here. IME, DO is a great choice for many projects, because it's inexpensive, straightforward and reliable. I wouldn't trade any of those 3 attributes for "innovative".

You remember a few years back when "performance is a feature"?

Well, sanity is also a feature. When none of your competitors are doing it, then it starts to look like innovation.

I mean, the transparent fee structure alone is why I push dabblers toward DO. I'm looking at you, AWS.

To me they are closer to the classic hosting companies, the ones where you can get a "Virtual Private Server" for $20/month rather than "Some Ether" for $0.0000001 / weird unit.

There was a window where they were a nice option for small SSD backed instances when EC2 was still doing low performance spinning disks for their bottom tier.

They were the first SSD-only host.

Like many others, I'm not sure DO's USP is being at the cutting edge. I like them because I get a decent amount of control, I have found their support to be quite responsive, and their products have always been very stable for me.

Offering these things without exorbitant egress charges is 'new' enough for me.

This ^^. Just price the bandwidth DO throws in with its instances at AWS rates. Plus I know exactly what my bill will be next month :)

Echoing @gtf21 and @chrisweekly: in short, us taking out the complexity in using and scaling with cloud capabilities and making learning easy for all developers has been our innovation. We don't always have to be the first to launch to add value to millions of developers around the world :)

Ya'll have the best tutorials and guides around, I've gone from embedded C Dev to full stack, and DO's guides have helped immensely.

I'm working on migrating a large part of my stack over to DO, and I'll be much happier when I'm done! :)

This would be nice if the company didn't ignore all of the spam which comes from their network and the spamvertized sites they host. They deliberately ignore reports sent to their abuse address and attempt to avoid responsibility by making people who want to report abuse jump through hoops to break down spam and submit it in a web form.

Companies which protect spammers will never get any business from me, plus their email reputation is already pretty crappy, so why would I ever want to run containers on their networks?

Digitalocean also should produce and sell chairs, sofas, yoghurt, car tires etc...

Thanks for the feedback, but we aren't Amazon =]

- DigitalOcean cofounder

Love you guys, I've had a private VM for years, at a good price, with great availability, that does exactly what I need. I also really like the firewall I can adjust through the web interface.


Appreciate the feedback, love and loyalty :)

Even Amazon don't do that, they get the working class to do that, they're the middle men.

But you started to compete with your customers. Good luck.

-DigitalOcean ex-customer

Did you try to launch your own cloud offering on someone else's cloud? Did you expect the other cloud would never expand their offering?

I doubt they ever tried to compete with you, and probably didn't even know you were doing something similar. You were just able to come to market before they felt they were ready with a similar product.

This is implying that you or others were creating managed products using Droplets? How is it DigitalOcean's fault if they want to create more vertically integrated products using their own technology?

digitalocean picked a niche and are doing it really well. that doesn't mean they have to expand into other areas.

Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact