
DigitalOcean Introduces Kubernetes Product
https://www.digitalocean.com/press/releases/digitalocean-introduces-kubernetes-product/
======
jorams
This is a great development, but I'll have to wait and see how reliable it
actually is. I've had a few droplets running with them over the years, and
that has been rock solid (years of uptime on one droplet, no problems
whatsoever), but we recently started using Spaces for a commercial product and
it has been a catastrophe. There are connectivity issues leaving the service
mostly unavailable on a regular basis, and the status updates about it aren't
particularly timely.

While trying to migrate away to GCS, synchronizing data (using gsutil) has
proven practically impossible. The API is incredibly slow to list objects and
occasionally responds with nonsensical errors.

(Every once in a while a random "403 None" appears, causing gsutil to abort.
We could probably work around that by modifying gsutil to treat 403 as retry-
able, but since overall performance is so awful and we can regenerate most
data, we decided to give up.)
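(The workaround mentioned above — treating the spurious 403s as retryable — can be sketched without patching gsutil at all, by wrapping each call in a generic retry helper. `TransientHTTPError` and `with_retries` are hypothetical names for illustration only, not part of gsutil:

```python
import time

class TransientHTTPError(Exception):
    """Stand-in for an HTTP error carrying a status code."""
    def __init__(self, status):
        super().__init__(f"HTTP {status}")
        self.status = status

def with_retries(fn, retryable=(403,), attempts=5, base_delay=0.5):
    """Call fn(), retrying with exponential backoff on retryable statuses."""
    for attempt in range(attempts):
        try:
            return fn()
        except TransientHTTPError as e:
            # Give up on non-retryable statuses or after the last attempt.
            if e.status not in retryable or attempt == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)
```

Whether that's worth doing depends on how often the API actually recovers on retry, which in our case it mostly didn't.)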

~~~
newsat13
Yeah, DO Spaces is all-around awful. Deleting is extremely slow as well. We
had to write special code because DO cannot delete 1000 objects at a time
(the API call takes around 2 minutes to succeed, if it succeeds at all), to
the point that we resorted to deleting entire buckets. The UI also keeps
crashing when there are many objects :(
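For what it's worth, the S3 DeleteObjects API that Spaces implements caps each request at 1000 keys, so bulk deletes have to be chunked client-side regardless of provider. A minimal sketch — the `batched` helper and the bucket/endpoint names are illustrative, not a real account:

```python
def batched(keys, size=1000):
    """Yield successive batches of at most `size` keys.

    DeleteObjects accepts at most 1000 keys per request, so a bulk
    delete is just this chunking plus one API call per batch.
    """
    for i in range(0, len(keys), size):
        yield keys[i:i + size]

# Hypothetical usage against a Spaces bucket via boto3 (not run here;
# requires credentials and a real bucket):
#
# import boto3
# s3 = boto3.client("s3", endpoint_url="https://nyc3.digitaloceanspaces.com")
# for batch in batched(all_keys):
#     s3.delete_objects(
#         Bucket="my-bucket",
#         Delete={"Objects": [{"Key": k} for k in batch]},
#     )
```

The slowness described above is about how long each of those calls takes on Spaces, not the chunking itself.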

~~~
tedmiston
I recently had to delete a multi-TB S3 bucket and learned that S3 isn't great
at deleting tons of files either. The AWS Console just hangs forever. I let it
go for hours before finding another solution.

~~~
anderiv
It sounds like you’ve already resolved this, but for the benefit of any others
who stumble upon this: my solution for deleting a large bucket is to set a
lifecycle rule with a short TTL, after which the objects are deleted.

Set that rule, and come back to a beautifully empty bucket 24 hours later,
after Amazon’s gnomes have taken care of the issue for you.
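For anyone who wants to script this, the rule is just a one-day expiration applied to every object. A sketch of the payload, with boto3 and the bucket name as assumptions:

```python
# One lifecycle rule that expires every object after one day.
expire_everything = {
    "Rules": [
        {
            "ID": "expire-all",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},   # empty prefix matches all objects
            "Expiration": {"Days": 1},  # shortest TTL the API allows
        }
    ]
}

# Hypothetical application via boto3 (requires credentials; not run here):
#
# import boto3
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="my-big-bucket",
#     LifecycleConfiguration=expire_everything,
# )
```

Once the objects have expired you can delete the (now empty) bucket normally.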

~~~
tedmiston
That's what I ended up with as well.

------
flossball
Pricing is key.

Right now I spin up GKE clusters on GCE, where the master and version
migrations are free, and I only pay for two adequate non-preemptible nodes.
The rest scale and preempt fairly cheaply as needed.

~~~
eddiezane
Eddie from DigitalOcean here.

You'll only need to pay for your worker nodes and we'll handle upgrades for
you.

~~~
Operyl
Then why does your marketing for early access say "get a free cluster"? Is
that implying that Digital Ocean will pay for the worker nodes for early
adopters?

EDIT:
[https://www.digitalocean.com/products/kubernetes/](https://www.digitalocean.com/products/kubernetes/)
"Sign up for early access and receive a free Kubernetes cluster through
September 2018."

~~~
mejamiewilson
Jamie from DigitalOcean here. Yes, users won’t pay for their workers, block
storage volumes, or load balancers during early access, through the end of
September 2018.

~~~
ericpauley
I've found DO load balancers cannot reliably handle tls termination over 100
connections per second, and fail completely above around 300/s. Are there
plans to make the load balancers more robust as part of this change? We had to
switch to DNS load balancing because DO's solution simply could not scale.

~~~
mejamiewilson
Hey Eric, load balancers are getting an upgrade in the near future. Keep your
eyes peeled this week!

~~~
ericpauley
That is great to hear! A few things I'd really like to see:

* Ability to retrieve host health status in API

* Better throughput guarantees, especially with TLS

* Ability to serve from unhealthy nodes if all nodes are unhealthy

* Load Balancer Health Monitoring

------
minxomat
Super nice. Been running k8s on DO using Rancher, but a native solution will
be really awesome.

I'd like to see it as tightly integrated into e.g. GitLab like GKE is.

~~~
eganist
For the curious: k8s is a specific kind of contraction -- a
[https://en.wikipedia.org/wiki/Numeronym](https://en.wikipedia.org/wiki/Numeronym).
Other examples include i18n = internationalization.
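The pattern is simple enough to sketch: keep the first and last letters and replace everything in between with its letter count.

```python
def numeronym(word):
    """Abbreviate a word by replacing its interior letters with their count."""
    if len(word) <= 3:
        return word  # too short to abbreviate
    return f"{word[0]}{len(word) - 2}{word[-1]}"
```

So `numeronym("kubernetes")` gives `"k8s"` and `numeronym("internationalization")` gives `"i18n"`.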

------
ksajadi
Sweet! Skycap also released the container delivery pipeline today. Can’t wait
to use them together with DO Kubernetes. [https://blog.cloud66.com/deploy-
your-applications-to-any-kub...](https://blog.cloud66.com/deploy-your-
applications-to-any-kubernetes-cluster/)

------
jasonrhaas
Good for them! They beat AWS to it. Amazon's managed Kubernetes service is in
the preview stage but should also be launched soon.

How do people like Kubernetes as a production ready solution to deploy
containers? I've been using Docker for a while now, just starting to mess with
k8s.

~~~
benatkin
> Amazon's managed Kubernetes service is in the preview stage but should also
> be launched soon.

Same as DO?

"DigitalOcean Kubernetes will be available through an early access program
starting in June with general availability planned for later this year."

I'm not sure which will be first.

~~~
tedmiston
I had originally heard Q2 for public launch of EKS Fargate, but officially
Amazon is just saying:

> * AWS Fargate support for Amazon EKS will be available in 2018.

[https://aws.amazon.com/fargate/](https://aws.amazon.com/fargate/)

------
joshuatalb
I have some questions, mainly around networking and the like. Is there
somebody from DO that I can get in touch with directly? Thanks.

~~~
al3xnull
Second on the network. My concern is around inter-node traffic and if it's
segmented or like their current private networking.

~~~
mejamiewilson
Hi, Jamie from DigitalOcean here. We will have VPC support on DigitalOcean by
the time we go live. But if you want to talk in more detail, my email is
jwilson@digitalocean.com

------
geku
We launched [https://www.KubeBox.com](https://www.KubeBox.com) beta today. You
get a fully managed cluster, control plane and nodes. And you can start with a
single node cluster with 8GB RAM and 2vCPU for $36/month. Additionally you can
get Rancher auto-installed for managing projects, users, groups, permissions
and workloads.

We are in early beta but if you are interested, please sign up and we will
activate your account asap.

If you want to talk, we are at KubeCon Europe, contact @geku on Twitter.

------
manishsharan
@Linode -- you guys seeing this ? Don't make me migrate away after 2 years.

~~~
Operyl
If you enjoy having hypervisors disappear for 12 hours without notice, go
ahead.

Until then, I'd say Linode is your better bet :).

EDIT: A little more information, I had two VMs go offline abruptly around 1am
one night. It took 3 hours for Digital Ocean to even acknowledge a problem
existed (I had opened a ticket), and that was only after I started poking
their twitter account. It was at least 12 hours before they brought it back
online, and it was never acknowledged in any mass ticket. If you are unlucky
enough, the same thing can happen to you. This is my second experience of
such an outage at Digital Ocean and is, as a result, the reason I still only
use DO as a testbed and nothing more.

EDIT2: Another pretty bad example of Digital Ocean:
[https://status.digitalocean.com/incidents/8sk3mbgp6jgl](https://status.digitalocean.com/incidents/8sk3mbgp6jgl).

~~~
gravyboat
I'm actually looking at migrating away from Digital Ocean. I had a recent
incident that took over a week to solve with 48+ hours between tickets where
support wasn't even reading the previous ticket. They claimed that someone
looked at the hypervisor the system was on and found nothing, but as soon as
that occurred my issue was resolved. One of the worst support experiences I've
ever had where I was asked for the same information multiple times after
waiting days to hear back, and was even asked if my issue started during an
outage that was days after my initial report. Completely unacceptable.

~~~
zacharybk
Hey gravyboat - I'm Zach, Director of Support here at DigitalOcean. I'm very
sorry that we didn't provide helpful or timely support. This certainly isn't
the type of experience that's typical, and definitely not what we design for.

Can you do me a favor and shoot me an email so that I can investigate further?
zach@digitalocean.com

Please see this as my personal commitment to our entire userbase that I'm
happy to hear from you as well if your experience was not perfect.

~~~
gravyboat
Sure I've sent you a follow up email. Thanks for the response.

------
ksec
So, Kubernetes basically won? Nomad and Swarm don't stand a chance, but what
about Mesos or DC/OS?

~~~
cytzol
Kubernetes is the Git of container orchestration. It doesn't matter if any
other products are better when k8s has, like, 99% of the mindshare!

~~~
ksec
Sigh.

You like BSD? Let the world have Linux.

You like Hg? Let the world have Git.

You like DC/OS? Let the world have Kubernetes.

But they all lost.

------
timwis
Does digital ocean have any equivalent of aws' RDS? Or do I have to manage my
own database server?

~~~
mejamiewilson
Jamie from DigitalOcean here. A database as a service offering is on our
roadmap and in discovery, and we hope to have more information about this in
the near future.

------
devmunchies
Fantastic. Any plans for a managed DB offering (Postgres) so the whole stack
can be managed?

------
segmondy
Good, I have been debating between running my own k8s on DO vs GKE. I'm glad I
don't have to build my own cluster. I think I'm going to do both for now tho.
If DO is mature and stable I'll kill the GKE cluster.

------
whitepoplar
My only wish is for DO to let customers bring their own IP blocks. Vultr
offers this, but seems less robust to run a business on.

~~~
ryanworl
Check out packet.net if you haven’t already, they offer that and some other
networking features you don’t see typically from VPS providers.

~~~
whitepoplar
Packet is nice, but way more expensive than DO.

------
vegardx
How are you guys sorting out networking?

~~~
mejamiewilson
Jamie from DigitalOcean here. VPC is coming to DigitalOcean and the cluster
will live within a VPC. Past that there are a lot of details! Is there
something specific you're interested in?

------
Rotareti
I'm currently running multiple Kubernetes clusters using StackPoint in
combination with DigitalOcean. This has been working very well. Could someone
tell me how the new DO Kubernetes service compares to StackPoint?

~~~
yebyen
You're paying for master nodes with StackPoint, right? Each droplet you start
has the same cost structure whether it's running your "user-land" slave
workloads or is there just to run your cluster.

The big payoff (for small clusters like mine, anyway) is that masters won't
be charged for, as with the other managed Kubernetes offerings from Azure and
Google. I don't know enough about StackPoint to compare it to a service I
haven't even seen in beta yet, but I can tell you that much.

I know that StackPoint is supposed to be "like a managed" experience. Maybe
one of the DigitalOcean guys who has been responding in this thread can speak
to the technical details of the new offering.

~~~
Rotareti
> You're paying for master nodes with StackPoint, right?

No, you pay a monthly subscription (starting at $50/month). The service
allows you to create/update clusters easily. I'm not sure, but I think you
can create as many clusters as you want with a $50 subscription (at least I
never hit a limit). The procedure to create a new cluster looks something
like this, if you use the web interface:

* click "add cluster"

* select cloud provider (DO, AWS, GKE, etc.)

* configure master nodes. E.g.: 2 master nodes @ 2 GB RAM, running in region NYC1.

* configure worker nodes. (same procedure as with master nodes)

* submit

If you choose DO, you get a cluster that works with DO load-balancers, DO
block-storage, etc out of the box.

If a new version of Kubernetes is released, you can hit the "update cluster"
button.

They have an API for all the stuff too.

I chose StackPoint in combination with DO because it felt the least bloated
and the least locked-in.

Now that DO is introducing a Kubernetes service, I imagine I won't need the
StackPoint subscription any more.

~~~
yebyen
That's very interesting, thanks for sharing. $50/mo is a bit much for a
hobbyist, but not much if you're operating at any serious kind of scale!

But I mean, in addition to the StackPoint subscription, you do also pay for
the master node droplets when you use it, as well as paying for the worker
droplets, right? You won't be paying for those masters anymore with the
managed offering, from any of the cloud vendors I've heard of announcing a
managed offering. I have to imagine this is because they can do (or plan to
do) multi-tenant APIs under the hood.

(Even if you get a pool of worker nodes and the pool is on machines that are
exclusively yours, it seems unlikely that your constellation of masters is
ever going to be exclusively yours unless your bill says "dedicated masters"
and you've paid something for it... and that's fine, as long as it's done
right! I obviously can't afford to give myself as many masters as a multi-
tenant system can allocate a share on for me. We will all wind up getting more
resilient systems out of the deal, and for much cheaper, in this arrangement I
think.)

I'm definitely signing up for this preview, I hope it will include an API for
creating/upgrading/tearing down clusters! I can't imagine it will do anything
but obsolete StackPoint for DigitalOcean customers.

Then again, maybe the bigger value provided by StackPoint is actually that you
can take this K8S cluster orchestrator with you to a different cloud if you
need to move. It is obviously going to be a harder sell though, when all of
the major vendors are coming out with their own managed k8s offerings that
enable cost savings. Next to $50/mo, enough masters to make your cluster
resilient against localized failures on a 24/7 basis are... pretty costly,
right?

It's really going to come down to, are the managed offerings as good, better,
etc than the ones you can install yourself with a tool like kops (or are they
as good as the ones that a service such as StackPoint can help you install for
yourself?)

I wonder, did you try installing Kubernetes for yourself before you tried
StackPoint? If so, what distro(s) did you try and which ones did or didn't
make the cut?

~~~
Rotareti
_> But I mean, in addition to the StackPoint subscription, you do also pay for
the master node droplets when you use it, as well as paying for the worker
droplets, right?_

Yes, you pay for all of them, but in return you get full control of the
entire cluster.

 _> You won't be paying for those masters anymore with the managed offering,
from any of the cloud vendors I've heard of announcing a managed offering._

Interesting, I didn't know about that. Not sure if I prefer this though. Might
be another "surface" for the cloud providers to lock you in.

 _> It's really going to come down to, are the managed offerings as good,
better, etc than the ones you can install yourself with a tool like kops (or
are they as good as the ones that a service such as StackPoint can help you
install for yourself?)_

I guess things around kubernetes will slow down soon (hopefully) and I'll
probably switch to something like kops/playbooks/etc. But right now things are
still moving too fast for my taste, so I'm happy to abstract away as much as
possible.

 _> I wonder, did you try installing Kubernetes for yourself before you tried
StackPoint? If so, what distro(s) did you try and which ones did or didn't
make the cut?_

Yes, I experimented with different approaches for Kubernetes, Openshift and
Rancher and I tested several cloud providers. In the end I found it wasn't
worth the effort to learn and configure the whole thing from the ground up,
since everything was constantly changing, like I said. Even if you have your
cluster ready there is still _a lot_ of work to be done for the deployment
pipelines, cluster backups, etc.. For now I'm happy that creating/destroying a
cluster is a matter of hitting a button, but I'm also excited to see what the
future brings. Kubernetes is definitely one of the most amazing projects I've
come across so far.

~~~
yebyen
> > You won't be paying for those masters anymore with the managed offering,
> from any of the cloud vendors I've heard of announcing a managed offering.

> Interesting, I didn't know about that. Not sure if I prefer this though.
> Might be another "surface" for the cloud providers to lock you in.

If you want a serious HA/FT Kubernetes cluster that is spread across
availability zones and resilient against failures in any single AZ, and you
don't have something like StackPoint or a managed K8s offering to configure
it for you, there is a pretty serious amount of work (and a decent number of
machines) required to get your cluster there.

That being said, I don't know how many "hosted, managed" K8S offerings there
really are in GA right now to compare.

I'm counting GKE on GCP, AKS on Azure, IBM's new managed k8s offering, AWS/EKS
(which is still in preview) and Digital Ocean's offering announced yesterday
(which is still pre-beta.) As far as I know, all of those offerings will give
you as many masters as you need to make a resilient cluster for free, and you
only pay for the workers.

(Except for the offerings that are in preview mode, then I guess you just
don't pay for any of it for now...)

> Platform - Certified Kubernetes - Hosted (21)

I guess there are also quite a few I haven't looked at yet. Those are just the
platforms with hosted offerings.

[https://www.cncf.io/certification/software-
conformance/](https://www.cncf.io/certification/software-conformance/)

I personally used kubeadm for my toy-sized single node cluster, and it's
great, but I'm also still on 1.5!

------
mark_l_watson
I have no experience setting up Kubernetes so at the start of the year I
looked at setting up a cluster on AWS. There were lots of extra expenses for a
small ‘learning cluster’.

I just signed up for early DO access - can’t wait!

------
KenCochrane
Too bad they didn't list the pricing, it would be nice to know how much it
will cost, once released.

~~~
mejamiewilson
Hey KenCochrane, I’m the Product Manager on this product at DigitalOcean.
VonGuard is right, you only pay for the worker nodes (based on our Droplet
pricing, there’s no premium) and we take care of the master. Our standard
pricing lives here:
[https://www.digitalocean.com/pricing](https://www.digitalocean.com/pricing)

~~~
plokiju21
Right now, GKE charges $18/month for a load balancer on top of node costs,
which is costly for small scale/personal projects. Will DigitalOcean have
anything similar?

~~~
mejamiewilson
Jamie from DigitalOcean here. Currently we'll deploy our DigitalOcean Load
Balancer on your behalf, which is $20 a month, but we are also investigating
other options. If you have any thoughts on how this should work, or what
specifically you'd be looking for, I'd love to hear them.

~~~
lucasyvas
Speaking personally, I'd rather opt out of the Load Balancer altogether and
instead have a floating IP automatically set up across the workers. Ingresses
are easy enough to set up so that would complete the picture.

I think having the Load Balancer option is important for simplicity, but I
feel a lot of DO customers (such as myself) opt to use DO for optimizing cost
as well. It's a balance.

------
eulid55
Sounds like DO is going to have the cheapest k8s service out there!

------
technofiend
For some free lab time until the new commercial offerings arrive, try Play
with Kubernetes: [https://labs.play-with-k8s.com/](https://labs.play-with-k8s.com/)

