Once I was comfortable with DigitalOcean, I tried launching a VPS on AWS and holy crap, it was so insanely complicated. Within 10 minutes of creating an AWS account, I was out. I understand that there is nothing wrong with AWS; it just isn't for me. DigitalOcean has fulfilled my needs perfectly, with a huge knowledge base and detailed tutorials.
DigitalOcean is absolutely incredible.
Have yet to find a tutorial that wasn't great!
How do I fix this: Stack Overflow is the first hit.
How do I do this: if DO isn't already the first hit, I scroll down until I find yours.
You can contact their Support to get that increased. Just guessing at the reason, but if there was no limit, what happens if someone hacks your account and spins up a 100,000 node cryptocurrency mining farm?
More than anything, it tells you about their target audience, which is indie development. That's fine, and it's a great market to be in. But in the case where I have to spin up 17 servers in 24 hours across three continents, I can't really afford to deal with DigitalOcean's support under that kind of stress. This doesn't happen often, but when it does, it absolutely breaks you.
Director of Support, DigitalOcean
(I never knew there was such a low limit of 10 instances.)
I've been a DO customer for 5 years but I'm not sure when my droplet limit was increased.
Hats off to you and the rest of the team.
Agreed. While I liked the price point and the UI, the tutorials were leagues ahead of everything else.
Really great job!
So far, DigitalOcean does everything that I need. I hope they can maintain this great developer experience.
I don't think they care about the long tail.
The 'specialization' may well be an advantage, or maybe not.
I'm a partner in a company in Puerto Rico. Last year, immediately after Hurricane Maria hit, I wrote to all of our off-island service providers asking for any help they could provide. DO sent one of the most generous responses. They donated 3 months of services based on our average billing.
We greatly appreciated it, and the nice note they sent me showed they’re not only great at providing a good product, but they’re eminently human as well.
Congrats on the launch to them.
Pointing this out... I've gotten the basics working with vultr + terraform, but it's not the most straightforward. I'm not the author, just an interested observer.
In my searches for vultr + terraform, your project never came up, but there seems to be some overlap or room for collaboration.
Most of the lower priced options are run on a more shoestring budget and get uptime measured in the 98s. Also, I find a lot more noisy neighbors with many LEB-style providers.
Their web app wouldn't work; they wanted me to make an API call to cancel, as they couldn't fix their UI.
FYI OVH is a key contributor to OpenStack, they've made quite a few contributions to Ceph, sponsored LetsEncrypt, and host most game servers (and even Wikileaks!).
OVH needs to be in India and East Asia to interest me again.
Routing in Asia will remain crappy due to these peering problems, and volume wholesalers will be few and far between so long as power is spendy.
It can't make sense to host any kind of web service when you're paying 10 cents/GB in egress charges.
If you serve images at all... no-go.
It's really odd, like they don't want your business; that's definitely one line item that always makes me keep my eye on the exit.
Though I would say the title of the linked article is a bit misleading. It is Kubernetes as a service, like EKS, GKE, and AKS, but not a vanilla container service à la ECS, Fargate, or the former Docker Cloud.
The major cloud providers' deployments of Kubernetes (and other server-side persistent-orchestrator systems, like the venerable CloudFormation) are deeply integrated into the cloud platform they're running on, such that the orchestrator itself can provision resources for a container to run on as part of deploying the container. This becomes important when elastically auto-scaling a container, because each container might need e.g. its own disk, and you can't create them ahead of time if you don't know how many you'll need.
This also means that, unlike a CaaS, k8s et al can manage the very cluster that k8s is running on, scaling it out to suit the size of the current/estimated workload.
Theoretically, you can bootstrap k8s on top of a vanilla CaaS—this is how minikube installs "using" your local Docker install, and this is how deployable PaaSes like Flynn and Deis work. But this approach doesn't supply k8s with the cloud-specific integration it needs in order to provision stuff. It might work if you're deploying against something with a standardized API like OpenStack; but none of the major cloud providers are compatible with such APIs, and so they need to build their own k8s plugins that call their IaaS-level APIs, to make k8s work on their clouds.
Or, to put all that another way: if there were standard IaaS-level APIs for k8s to hook into, Docker (and the CaaSes that either use or emulate it) would just hook into those APIs itself, and there would be no need for a higher orchestration layer.
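To make the disk example concrete, here's a minimal sketch of that elastic provisioning, assuming DO's managed offering ("do-block-storage" is the storage class DOKS ships with; the claim name is made up). Applying this claim causes the cloud integration to create a brand-new block storage volume on demand, rather than needing one provisioned ahead of time:

# A PVC that triggers dynamic provisioning through the cloud's volume plugin.
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: scratch-disk
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: do-block-storage
  resources:
    requests:
      storage: 5Gi
EOF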
Then set up/tear down in a specific order... and if you have state, keep a subset of containers alive to avoid losing state (even if it's replicated).
For example, a pod that requires a postgresql pod to connect to will fail and crash out. The scheduler will start a new one. If the postgresql pod is up by that point, then the rescheduled pod will no longer crash.
As far as the network paths, one of the really cool things about pods running inside a k8s cluster is that they can access any of the other pods, even on a different node. However, pods typically reference services (such as postgresql) by DNS name. You specify the set of pods that belong to the service by label selector. This allows pods to come up, tear down, crash, or move to another node while the service maintains a stable point of contact. It is quite brilliant, and other orchestrators quickly tried to copy it.
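As a rough sketch of that pattern (all names here are illustrative): the Service selects its backing pods by label, and everything else just uses its DNS name.

# A Service that fronts whatever pods currently carry the app=postgresql label.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: postgresql
spec:
  selector:
    app: postgresql   # any pod with this label becomes a backend
  ports:
    - port: 5432
EOF
# Clients connect to "postgresql" by DNS; pods can crash, reschedule, or land
# on another node, and the point of contact never changes.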
Stateful workloads are still difficult. Each distributed stateful system has its own way of setup and teardown. What we will probably see are custom Operators designed for each distributed stateful system, coming out over the years.
Zeit V1 supports launching containers (but this isn't part of V2): https://zeit.co/docs/v1/getting-started/deployment/#docker-d...
AWS Micro Instance
Tons of options for $5 a month.
I also like how my deploy script is basically 'build image; push image; aws ecs update-service'.
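For the curious, that cycle looks roughly like this (registry, cluster, and service names are placeholders; it assumes the task definition points at the :latest tag and that you're already logged in to ECR):

docker build -t myapp .
docker tag myapp 123456789.dkr.ecr.us-east-1.amazonaws.com/myapp:latest
docker push 123456789.dkr.ecr.us-east-1.amazonaws.com/myapp:latest
# Tell ECS to roll the service onto the freshly pushed image
aws ecs update-service --cluster my-cluster --service myapp --force-new-deployment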
With my side projects that are running on a droplet, it feels like there's an incredible amount of additional setup every time I add a project: add the new site to the reverse proxy, set up a git server I can push to, set up post-receive hooks for the server, etc., etc.
You deploy a docker container and they handle the rest.
Having said that, I got a cheap and rather beefy server on Hetzner and installed Dokku on it, and I couldn't be more satisfied. It's like having my own Heroku for my low-traffic side-projects, almost for free.
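If anyone wants to try the same setup, the day-to-day flow is roughly this (app and host names are hypothetical):

# One time, on the server: create the app
dokku apps:create myproject
# On your machine: add the Dokku remote, then every deploy is a plain git push
git remote add dokku dokku@my-hetzner-box:myproject
git push dokku master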
ECS, on the other hand, is a so-called "vanilla" container service which provides its own abstractions and offers no conformance suite or compatibility with other vendors' offerings. I have not heard lots of people say great things about ECS. If I could say one nice thing, it's that there is probably less to learn about ECS than about Kubernetes.
ECS definitely has some oddities, mainly in the task definition spec, which is mostly a 1:1 with docker commands but with their own AWS stuff mixed in. Apart from that, the simplicity vs K8s was its biggest drawcard. A lot has happened with K8s in two years, I'd imagine, so the same choice today might be a different story.
I'm not yet sure about support, since they don't offer any phone support. But it can't be worse than Amazon's, where I once literally had to yell at the support rep to stop talking because he just kept repeating himself, over and over, and wouldn't let me move on.
I've been managing multiple docker apps (using docker-compose) on DO for years. Is there a guide I can use to transition my apps from docker-compose to k8s? I've dabbled in k8s, but am not an expert at all.
My understanding is that you'll still need to do some work, especially if you are building via compose instead of pointing to an image on a registry.
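Roughly, the kompose path looks like this; it translates the compose file mechanically, but you'll still want to read the generated manifests to understand what you deployed:

# Translate an existing compose file into k8s manifests...
kompose convert -f docker-compose.yml
# ...which emits <service>-deployment.yaml / <service>-service.yaml files
kubectl apply -f .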
I've found that kompose does not give me the "try before you buy" experience I was hoping for.
For example, another major cloud provider allows me to push up and deploy my container (stored in their registry) without even having docker installed on the client machine.
Of course, I would want to test and play with k8s before I used it in production. But with kompose I still feel like I need to understand a lot about k8s. With docker compose, I have almost forgotten everything but docker-compose build && docker-compose up.
I think this is why Heroku was so popular. Just change git push to git push heroku and try it out.
Since at least a few people align with my comment by up voting it, it seems like enough people would love an easy transition path from docker compose to k8s. This would be a killer feature for Digital Ocean: create a new cluster and then run a few well documented commands and your application is running inside DO on k8s. A guy can dream, right?
DO has always had the best documentation on just about everything technical; maybe this is just an opportunity to write up the steps? Those two documents so far, even the one from docker, still require a lot of extra reading for someone who has used docker compose for a long time.
The idea would be to be portable, avoid vendor lock-in, and take advantage of price differences or quickly route around a system failure in one of the providers.
Each machine that's spun up is built from scratch via one command-line call. The first half of the process interacts with each hosting API (we rely on DigitalOcean, Linode, and Vultr primarily), to build a clean slate machine with all of the packages and libraries that we expect.
The second half of the process runs the actual build process, building the instance step-by-step on top of the clean slate, blissfully unaware of which hosting provider it lives on.
This model allows us to be portable and avoid vendor lock-in, and a cross-provider infrastructure lets us gracefully handle system failures while keeping costs down.
My goal was the same: to make hosting more portable, with features like snapshotting and restoring WP sites across servers, and to eventually expand beyond servers to domain registrars and cloud storage, making it easier to move things around. For example: you have a site hosted on AWS EC2 with DNS at Namecheap and nightly backups at Dropbox, and let's say the AWS Virginia region goes down. You create a new server on Digital Ocean, restore the snapshot from Dropbox, and the linked DNS at Namecheap is auto-updated.
The more I thought about this, though, the more I began to realize that maybe these features wouldn't be useful to the audience I wanted to target: people who wanted to grow out of shared hosting into something reliable with fewer noisy neighbors, still more affordable than managed WP hosts, and lastly with more control (bring your own cloud/server provider).
But, building one is easier than it sounds! Think of the problem in two parts. First, find a distro used by multiple providers (we're on CentOS) and craft one script that uses each provider's API to spin up a clean machine.
Once you have that done, it's a matter of understanding your own build process, writing a script that you'll pass into each instance on creation that will fetch your source control, install libraries, and put all the pieces together.
Lots of if/else, lots of curl, lots of yum, lots of jq, but all of it is really straightforward.
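As an illustration of the first half, here's roughly what the DigitalOcean branch looks like; the token, slugs, and script path are placeholders, and Linode and Vultr each get an equivalent branch against their own APIs:

# Create a clean-slate droplet, handing it the build script as user data.
droplet_id=$(curl -s -X POST "https://api.digitalocean.com/v2/droplets" \
  -H "Authorization: Bearer $DO_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"name":"build-1","region":"nyc3","size":"s-1vcpu-1gb",
       "image":"centos-7-x64","user_data":"#!/bin/sh\n/opt/build.sh"}' \
  | jq -r '.droplet.id')
echo "created droplet $droplet_id"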
Alternatively, pre-baking cloud images that have already run such a script and are ready to boot becomes pretty easy with a tool like Packer.
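A minimal template for that approach might look like the following sketch (token, slugs, and script path are again placeholders):

# Bake a reusable DO snapshot from the same build script.
cat > template.json <<'EOF'
{
  "builders": [{
    "type": "digitalocean",
    "api_token": "YOUR_DO_TOKEN",
    "image": "centos-7-x64",
    "region": "nyc3",
    "size": "s-1vcpu-1gb",
    "snapshot_name": "app-base-{{timestamp}}"
  }],
  "provisioners": [{ "type": "shell", "script": "./build.sh" }]
}
EOF
packer build template.json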
Though, as the underlying OS changes, you'll need something to validate your scripts' functionality against, and a tool that's a little more declarative might make them less fragile to those changes.
Having maintained various automation over the course of the past decade and a half, I can say things do change around. Over the course of only a few years though, obviously you can stick to some LTS release of whatever you're using and be pretty confident that e.g. "some-package" does not get renamed to "some-package-version" or split into "some-core" and "some-utils", or have a package get upgraded to a version with some less-than-backward-compatible configuration options, etc.
For us it's been about as transparent as a brick wall and I'm not clear if that's down to our bureaucracy or built into the design. Both are anathema to the goal of making complex deployments straightforward and self-describing (you can't manage something this complicated unless big parts of it are as obvious as can be).
TF can be tricky to grok at first, especially if you don't have everything in TF. But I couldn't imagine managing more than a server or two without it or something similar at this point. Once you get into VPCs, IAM, etc., some type of tool is really required.
I'm also a little confused about your transparency comment. IME, TF is very clear about what it is going to do in a plan. The current state files are also just JSON, and easy to read/search if you're not sure about something.
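To the transparency point, the day-to-day loop makes every pending change explicit before anything is touched, and you can always dig through the state yourself:

terraform plan -out=planned.tfplan   # lists every create/change/destroy up front
terraform apply planned.tfplan       # applies exactly what the plan showed
jq . terraform.tfstate | less        # the state is plain JSON, greppable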
As a general rule, if you're giving someone a tool that uses declarative syntax, you also need to provide them a private (not shared) sandbox in which to test out theories, try new things, and reproduce errors seen in production.
Since we don't have that, TF is pretty much the worst solution for our problems. Kube or even Docker Swarm would serve us much better.
Vultr allows floating IPs for IPv4 and IPv6, but Digital Ocean only has floating IPv4. Vultr will start a machine with a floating v4, but you have to add a floating v6 address (giving you two v6 addresses). Digital Ocean does the same thing with v4 (giving you two v4 addresses). They both have different network adapter names, so you've got to configure those per provider as well.
Terraform and my own thing help in easing the transition if you ever need to move, but modifications will still have to be made.
Volume classes are probably the best example of being cloud-specific, but this problem is solved by having a different volume class for each cloud provider, named the same, such that the deployment can always grab a disk regardless of which cloud it's living in.
There can of course be some difference in the capabilities that each cloud provider supports (e.g. not all load balancer implementations may support UDP) but the abstraction is definitely there.
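A sketch of that convention, with the same class name backed by a different provisioner on each cloud (treat the specifics as illustrative):

# On DO: a class named "standard" backed by DO block storage.
kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: com.digitalocean.csi.dobs
EOF
# On GKE, "standard" would instead point at kubernetes.io/gce-pd, so the same
# PersistentVolumeClaim works unchanged on either cloud.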
Curious what's next. Lambdas maybe?
This is an open-source fork of Deis Workflow.
We're hoping to add support for DigitalOcean's Kubernetes and ancillary services like storage in the next release. Preliminary testing indicates that it is very much possible. The only dependencies of a prod Workflow deployment are a compliant Kubernetes, load balancers, and S3-API-compatible storage, all of which are available on DO now.
The latest Hephy release does not yet support general S3-API compatibility with arbitrary S3 providers like DO, but DigitalOcean is our first target platform for expanding the supported offerings. The pull requests are open right now.
(Currently supported platforms include GCP/GKE, Amazon, and Azure AKS.)
It was announced last month, discussion was here: https://news.ycombinator.com/item?id=18294450
Seems pretty standard.
However, I am curious if any medium to big-sized tech companies are using DO in production. As far as I know, everyone is using AWS, GCP or Azure. What's DO's target audience?
A pretty decent list of high-profile tech companies is on their customer page.
The names of their services seem just as confusing as the AWS names. Yes, overall their portfolio is closer to the actual use cases (as in 'I want to have a blog' -> they have an offer for that), but I am still wondering what a droplet is (it looks somewhat similar to a Virtual Private Server).
When Hetzner released their cloud service earlier this year, I tried it, loved it, and still do. Sure, they don't offer the same products (e.g. no S3/Spaces), but at least they use established technical terms instead of made-up marketing names you have to learn again for every new cloud host you want to try.
Their UX is consistently easy to navigate, well documented, and great-looking as well.
I may not be in the category of users that requires or needs many of the features they've released, but I'm consistently impressed by how easy it is for me, as a non-devops engineer, to grok exactly what each new feature they release is.
This looks super neat. I don't have any need for Kubernetes as a small-time VPS consumer, but I'm always happy to see them move forward in this manner.
For one, AWS wasn't the first cloud provider to offer managed Kubernetes. Two, pretty much every major cloud provider offers some sort of k8s product. Three, EVERY cloud provider is trying to play catch-up to AWS; that's not specific to DigitalOcean.
"Linode fails to offer basic cloud products that every other cloud provider has." FTFY
Google has gone so far with GKE as to offer HA masters distributed across availability zones at no extra cost. (On the day that Amazon announced EKS general availability, if I remember correctly, which is priced at $250/mo base cost, before you even get around to spend anything on worker nodes.)
Just want to call out that our worker node pricing is the same as our Droplets (servers). There is no price markup on using our managed service. In fact it's cheaper than deploying it yourself on DO because you don't have to pay for the master node.
I've been using Kops with Digital Ocean for some time on-and-off, comparing it to the new managed offering which I've been using in limited release, and it works great (either way).
The main disadvantage of Kops (besides being alpha-only on DO, and not managed) is that I pay for all of the nodes I use. It should be clear that managed k8s offers a direct cost savings pretty much everywhere it's offered.
(It would be clear, if AWS was not currently leading the broader market and offering EKS with a price model basically contradicting every other vendor's.)
This is a huge number compared to the competition, but also a rounding error when it comes to the monthly infrastructure spend of Amazon's target market here.
I mean, to be fair, that's a really reasonable price for a HA cluster. If you ignore the pricing models of literally all of the competitors' offerings.
You can have a Kubernetes cluster for about $15/mo for your personal project on GKE, if you can cope with several f1-micro instances or a single g1-small instance hosting your workloads. That's the cost of the nodes, and that's the all-in price. Prices scale up linearly for greater capacity; just add more nodes. (Then of course networking, traffic, and additional storage can also add to the costs...)
If you are comfortable with Kubernetes, you should not be priced out of the market, even for hobbyist projects; the ecosystem is too valuable. I keep saying that Amazon really does not want their customers to use Kubernetes, and it shows in their market offerings. Only Amazon charges this premium for managed clusters, and they don't even seem to recommend using it in the keynote talks I've heard mentioning EKS. "Unless you know you need Kubernetes" is a great way to stop the discussion about adopting new tech.
If you are not already comfortable with Kubernetes, then the primary obstacle to your using K8S is that. The cluster pricing issue is a problem only for people who are hyper-focused on Amazon.
The thing to look up is nginx-ingress settings for DaemonSet and HostNetwork mode. The settings to use might be slightly different on GKE. I can give you the one-liner I use to make it work on DO/Kops, here:
helm install stable/nginx-ingress \
  --name ingress --namespace nginx-ingress \
  --set controller.kind=DaemonSet \
  --set controller.hostNetwork=true \
  --set controller.service.type=NodePort
That last setting about NodePort may be extraneous, I think you can skip it... actually now come to think of it, I think that is the part that prevents the ingress from provisioning a Load Balancer in front of itself.
Note, of course, that there is a reason why (it is the default, and) you may be inclined to purchase a load balancer: doing it this way is fairly likely to turn out to be not only less reliable, but also super inconvenient in a lot of ways. Not "nasal demons" inconvenient, but...
Master node tiering is something on our roadmap for the future and not currently implemented.
I love everything from their clean design to their great tutorials and ease of use for everything VPS-related.
I hope the best for them.
What tools/platforms/hosts should I use?
The system will be your standard API+Database+Event bus+workers. I’m a fan of digital ocean and I’ve never bothered to learn AWS (besides S3). I’m very familiar with docker compose, but I’ve never gone deeper than that.
This is a first year startup, we aren’t cost constrained but we are extremely time sensitive.
Should I use Kubernetes? Or Is there something easier that will better serve us the first year?
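For concreteness, the kind of stack I can already stand up with compose looks something like this (service names are placeholders):

cat > docker-compose.yml <<'EOF'
version: "3"
services:
  api:
    build: ./api
    ports: ["8080:8080"]
    depends_on: [db, bus]
  db:
    image: postgres:10
  bus:
    image: rabbitmq:3
  worker:
    build: ./worker
    depends_on: [bus]
EOF
docker-compose up -d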
I'm also working on a startup, and chose to start with heroku and a PHP monolith (with a handful of microservices to do some of the heavy lifting) because those are the things that allow me to move fast. If we ever make some money and the product finds market fit, we'd probably move to something like k8s, but it definitely isn't part of the early stages for us. YMMV /shrug
Neither has very good CPU stats.
I could get 20 cores, 20 GB RAM, and a 50 GB SSD for $250/month or $0.35/hr. This truly allows you to scale up and scale out with full flexibility.
I've worked on the assumption it's for clusters (10+), but if DO now supports it, is it an alternative to puppet/ansible?
Personally, for single servers I still use just plain docker or docker-compose.
I've seen this list before and it is super comprehensive. Thanks for linking it; I need more like this for my "extreme breadth of choices" slide, when I present to my coworkers who are not using k8s yet, to emphasize how many choices there actually are.
Why not try a cluster with a smaller scaling group? You can create a cluster with only one node in it, but what is it that you are trying to do on top of your Kubernetes? In my experience with growing clusters, you probably want to scale your per-each individual node size up before you want to scale up the number of nodes in your cluster. (You might even find that you really need only one big node, say for your databases, and want to build a heterogeneous cluster with an autoscaling group of little nodes and that one big node. That's a possibility with node pools on DO K8s.)
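If you want to experiment with that heterogeneous layout, it can be sketched at cluster-creation time with doctl (sizes and counts here are illustrative):

# Hypothetical mixed cluster: one big node plus a pool of small ones.
doctl kubernetes cluster create my-cluster \
  --node-pool "name=big;size=s-4vcpu-8gb;count=1" \
  --node-pool "name=small;size=s-1vcpu-2gb;count=3"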
An ideal cluster size for me is probably 5 nodes with ~8-16GB RAM each. You could make it still worthwhile to do the cluster thing with probably only 2 nodes at ~1-2GB each, but that'd be pushing it.
I am practiced at making clusters cheap; I was actually once published on the Deis blog, with an article about how to deploy the Deis v1 PaaS in a highly available fashion as cheaply as possible.
Many of those lessons, from nearly a year of research I did on the topic prior to publishing, still apply to modern Kubernetes clusters; but many of them don't, and still others are out the window completely on these managed environments, where it now seems possible to get pretty much the same idea of "High Availability" I was aiming for, but much cheaper and with better guarantees.
For instance, since you are not running etcd yourself (it runs under the hood, on the management plane), there is no longer a specific rule that says you must have a minimum of 3, or preferably 5, nodes to keep a stable cluster. That was the basics of CoreOS and Fleet 101!
Consensus is handled on the masters, and that consensus is subject to split-brain problems, so this knowledge is still important, but you don't need to apply it yourself. In basic clusters on managed systems like GKE and DOK8s, this knowledge is practically a relic! Two nodes may be enough to ensure that one is there to pick up the slack when the other has a fault, exactly how you'd imagine it should work without a Computer Science degree. And with two nodes, since you'll probably never see a fault like that, and the whole environment is self-healing even if one happens on your watch, you might never even have to know about it.
I haven't actually seen anything 'new' come out of DigitalOcean in years.
Well, sanity is also a feature. When none of your competitors are doing it, then it starts to look like innovation.
I mean, the transparent fee structure alone is why I push dabblers toward DO. I'm looking at you, AWS.
I'm working on migrating a large part of my stack over to DO, and I'll be much happier when I'm done! :)
Companies which protect spammers will never get any business from me, plus their email reputation is already pretty crappy, so why would I ever want to run containers on their networks?
- DigitalOcean cofounder
I doubt they ever tried to compete with you, and probably didn't even know you were doing something similar. You were just able to come to market before they felt they were ready with a similar product.