Run Kubernetes on top of DC/OS (mesosphere.com)
108 points by tobilg on Sept 6, 2017 | 35 comments

I recently reached out to Mesosphere to better understand the value and options available to my company if we switched to Mesos (and thus Mesosphere). We currently run Kubernetes across all major cloud providers and we'd like some help.

I was startled by how dogmatic they are about their pricing model: per node, annually. Our entire business model is running dynamically scaling data pipelines for companies. We spin nodes up and down programmatically based on a pipeline's history and a dozen other factors.

I had a lot of back and forth with them, eventually reaching their CFO, and they just could not understand the concept of a "node hour" or anything of the like. Eventually it was determined we would pay the per-node annual price for our "average" usage (a completely different story, but they calculated that about as wrong as you could have), and we went our separate ways.

The product seems really cool, but as someone who runs a company that easily spins up thousands of nodes in the middle of the night (for a grand total of $40/hr on GKE...), this pricing model just seems antiquated. I imagine that most companies will be driving their usage toward this kind of elasticity in due time. I really hope to see the industry change.

Had an interesting pricing experience with them a couple of years ago. We were happy with our Mesos/Marathon cluster of ~100-200 machines but needed some acute support, and we were backed by the same investment fund, DCVC. Got on the phone with... sales... No problem; you're a friend of ours because of DCVC; we'll get you special, friendly pricing on DC/OS...

Got an email a few hours later: $100,000 per year. I was fairly puzzled.

Two hours later, I got another email correcting the prior price: the special, friendly price was now $150,000 per year.

We migrated to AWS ECS and all was well.

A lot of software vendors charge $X,000 / server / year in subscription fees.

Red Hat charges > $10,000 per physical server for OpenShift, and Pivotal often gets seven figures for hundreds of nodes. Remember just how much VMware costs...

I was examining a Mesosphere deployment a few years ago and also found the pricing model to be difficult in a cloud environment, but the salesperson I spoke with seemed more clueful than your description.

We chose to go otherwise for a variety of reasons.

Today, I think I would probably suggest that a Serious Company with a fully staffed ops/SRE/devops team run a pure Mesos play for the data systems, with k8s on top or side by side for web systems.

> Mesos (and thus Mesosphere)

Mesos is an open source project. Would you consider any other vendors besides Mesosphere?

Mesos and Marathon are open source, but there are features that the enterprise offering has (in addition to support) that the open source one does not.

Vendor wouldn't matter if someone needed those "enterprise" features.

DC/OS is also open source. What are your concerns with it?

Mesos is an Apache project. (DC/OS is not.) Apache projects are required to be vendor-neutral. That's part of the covenant under which contributors participate.

The Apache governance model provides opportunities for competing vendors, so besides Mesosphere, the market is open for other players to provide Mesos support.

Some components aren't open source. Still waiting for the new HDFS package to be released; see https://github.com/mesosphere/hdfs-deprecated/blob/master/RE...

Yes, the package is available, but as far as I can see it's not open source yet.

Excellent. Thanks!

Disclosure: I work for Pivotal, which competes with Mesosphere in this arena. Read with discretion. If shilling persists, consult your techcrunch.

With that out of the way: you should try Kubo.

Google & Pivotal have been working on Kubo[0] to make the management of Kubernetes easier by using BOSH as the deployment/update/repair system. You make deployments by editing a YAML file. Or, better yet, by having a tool edit the file for you as part of a structured CI/CD pipeline (I've seen both, and I am a fan of the latter).
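For context, a BOSH deployment manifest is just declarative YAML. A minimal sketch might look like the following (the release, job, and network names here are illustrative, not taken from an actual Kubo release):

```yaml
# Sketch of a BOSH v2-style deployment manifest (names/versions are illustrative)
name: kubo-example

releases:
- name: kubo              # hypothetical release name
  version: latest

stemcells:
- alias: default
  os: ubuntu-trusty
  version: latest

instance_groups:
- name: master
  instances: 1
  azs: [z1]
  jobs:
  - name: kubernetes-master   # hypothetical job name from the release
    release: kubo
  vm_type: default
  stemcell: default
  networks:
  - name: default

update:                    # how BOSH rolls out changes: canary first, one VM at a time
  canaries: 1
  max_in_flight: 1
  canary_watch_time: 30000
  update_watch_time: 30000
```

Deploying or updating is then `bosh -d kubo-example deploy manifest.yml`; BOSH diffs the manifest against the running deployment and converges the VMs to match.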

BOSH works at the IaaS layer. Insofar as you mean "spin up and down VMs", Kubo is well-suited for that exact problem. You also get a bunch of upgrade and self-healing stuff for free.

Pivotal is also adding a commercial offering based on Kubo, PKS[1], which has been built with Google[2] and VMware[3]. You might not need it for your exact use case; the sweet spot is Pivotal Cloud Foundry users who want a relatively seamless side-by-side integration between CF and k8s. There's a bunch of value-added features (Harbor, GCP service brokers), of course, plus the whole throat-to-choke thing people like to pay for.

The pricing model isn't set in stone, but I suspect it will settle in the vicinity of what you have in mind. Our DNA is charging per-instance for Cloud Foundry apps, rather than worrying about sockets or RAM or other metrics which only poorly correlate to user value.

Anyway, I can hook you up with Kubo or PKS folk. Email me: jchester@pivotal.io

[0] https://github.com/cloudfoundry-incubator/kubo-deployment

[1] https://pivotal.io/pks

[2] https://www.blog.google/topics/google-cloud/vmware-and-pivot...

[3] https://blogs.vmware.com/cloudnative/2017/08/29/vmware-pivot...

I'm gonna check Kubo out, 'cause it's REALLY relevant to me in the coming months. Can you comment on why Kubo might be preferable to something like kops?

You are probably aware of this, but people are starting to get a little fatigued by the amount of automated ops/deploy tooling out there for k8s. Don't get me wrong, this is an area that needs improvement, so competition is totally necessary, but it's starting to become a bit difficult to navigate the landscape, or even keep up with the best choices.

> Can you comment any why kubo might be preferable to something like kops?

I'm unqualified to give a fair comparison, as I'm only skim-the-website familiar with kops. BOSH comes up much more frequently in my work. Most teams working on Cloud Foundry use it in some way, even if only to manage Concourse.

The main advantage that BOSH has over any of the others is maturity and production experience with large, stateful, distributed systems (the first release was in 2010). It got the original abstractions right in a way that the alternatives of the time didn't. Chef et al. are pet-builders: they excel at wrangling a single server into the target state you want.

BOSH instead says: why do you care about single servers? You're building a distributed system. If components drift or break, replace them with a clean image which was built from source.

As an example of production use, at Pivotal we use it to manage PWS. The Cloudops team uses BOSH to roll out changes to a system running tens of thousands of apps from thousands of users and companies.

In general, unless we land on a bug in the code rolled out, nobody ever notices. When we do have a bug, we can roll it back pretty easily.

BOSH isn't constrained to deploying Kubernetes. It was originally developed for Cloud Foundry and since then people have packaged up all manner of systems for it. We support some ourselves. For example, we have releases for RabbitMQ, MySQL and so on.

These can get used by on-demand service brokers too. Say an app developer wants a private RabbitMQ cluster. They tell the service broker to create a service, it has BOSH setup and monitor a brand new cluster, when it's done the dev can bind it to their app with a single command. Bing bang boom, totally self-service services. Nobody needs to fill out a risk form, file a ticket or pester their inside connection in ops.
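In cf CLI terms, the self-service flow described above is only a couple of commands (the service offering and plan names below are illustrative; they depend on what the broker registers in your marketplace):

```shell
# Ask the broker for a dedicated RabbitMQ cluster; behind the scenes,
# BOSH provisions and monitors the new VMs.
cf create-service rabbitmq dedicated my-rabbit   # "rabbitmq"/"dedicated" are hypothetical names

# Once provisioned, bind the service to the app; credentials appear in
# the app's VCAP_SERVICES environment.
cf bind-service my-app my-rabbit

# Restage so the app picks up the new binding.
cf restage my-app
```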

One last advantage for Kubo and BOSH generally is that Google has assigned full-time Googlers to both of them in multiple locations, working in pairs alongside Pivots. We've also become closely engaged with Google's new CRE program, and it's been a really great learning experience for us.

This makes sense if DC/OS is targeted exclusively at on-prem.

I was confused by this passage:

>"Running Kubernetes on DC/OS allows you to run different types of workloads (more explicitly, both the stateless and stateful components that make up most modern applications) on the same infrastructure."

Can someone answer: how does running Kubernetes on top of DC/OS help you run stateful apps on Kubernetes?

Or is the meaning that DC/OS is better for running stateful services, and then you can use K8s to run your stateless services?

It's been a while since I've used Mesos; is the path for running stateful services on it complete and very compelling now?


Currently running stateful apps on a DC/OS cluster + HA DBs. Fairly straightforward to get stateful working now, + there are libs to migrate data around to chase your apps/services when/if Mesos/Marathon relocates them (e.g. after a service restart/crash), if you need your data co-located with a managed app/service.

The thing I'm most interested in exploring with K8+DC/OS is having DC/OS manage a couple of K8 instances so we can isolate 'virtual' clusters for various envs/apps. I suppose you could do this already with Marathon (the DC/OS built-in container manager), but we're not. Psyched to benefit from the K8 community + have the underlying DC/OS VM control/management plane.

Thanks, can you elaborate on the libs available to "migrate your data around"? Does Mesosphere reschedule your DB to another node that has an equivalent persistent and reserved storage (SSD etc.) volume configured on it?

Are you using Portworx for this?

> "can you elaborate on the libs available to 'migrate your data around'? ... Are you using Portworx for this?"

The tool I had in mind was 'REX-Ray'[https://mesosphere.github.io/marathon/docs/external-volumes....].

That said, we're not actually doing the 'chasing db' config. Instead we run an HA Neo4j DB deployment as a Marathon service pegged to a handful of nodes, each with local persistent volumes allocated to Neo4j. I.e. we can allocate a % of a node's resources to 'static' Neo4j deploys, and then let Marathon dynamically manage any remaining free resources on the nodes. Our other services then use the Marathon service DNS to look up the Neo4j service for read/write.
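For anyone curious what the 'chasing' setup looks like, a Marathon app definition can declare an external volume that a Docker volume driver like REX-Ray provisions and reattaches wherever the task lands. A rough sketch (app, image, and volume names are illustrative):

```json
{
  "id": "/postgres",
  "instances": 1,
  "cpus": 1,
  "mem": 1024,
  "container": {
    "type": "DOCKER",
    "docker": { "image": "postgres:9.6" },
    "volumes": [
      {
        "containerPath": "/var/lib/postgresql/data",
        "external": {
          "name": "pgdata",
          "provider": "dvdi",
          "options": { "dvdi/driver": "rexray" }
        },
        "mode": "RW"
      }
    ]
  },
  "upgradeStrategy": { "minimumHealthCapacity": 0, "maximumOverCapacity": 0 }
}
```

The zeroed-out upgrade strategy matters for single-writer stateful apps: it forces Marathon to stop the old task before starting a replacement, so the volume is free to follow the task to its new node.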

Portworx looks cool too -- will need to investigate.

Also, the DC/OS documentation is quite good in general if you're looking to dig in on this: https://dcos.io/docs/1.9/storage/

> "Does Mesosphere reschedule your DB to another node that has an equivalent persistent and reserved storage(SSD etc.) volume configured on it?"

Yes/it can, but in that config you're booting up new/empty storage volumes. Obviously not what you want for many core persistence requirements, though great for caches. We'll probably opt for this config near-term for our web-server SSR cache.

Michael from Portworx here. Thanks for the shout-out. For some context, we just announced a partnership with Mesosphere today to help accelerate adoption of DC/OS for stateful services[0], in fact. We handle the automation of all the state management mentioned above, not just volume provisioning. Our customers include big companies like GE and Dreamworks, but also a lot of smaller companies. You can use PX-Dev[1] for free up to 3 nodes. Would love feedback.

[0] http://m.marketwired.com/press-release/mesosphere-partners-w...

[1] https://docs.portworx.com

Cool -- We're a small operation atm so will take a look at that dev tier. For more context, we also do some block storage off-cluster in GCE.

Thanks for the explanation. I've heard of REX-Ray, but I thought that was a vendor-specific solution (EMC). Maybe that has changed?

>"We'll probably opt for this config near-term for our web-server SSR cache."

What is an SSR cache?

Yes, REX-Ray is EMC-specific. Take a look at Robin Systems (https://robinsystems.com/) for stateful containers. They have examples of running Hadoop, Cassandra, MongoDB, etc. all on commodity hardware.

That's not really accurate, I think. REX-Ray also supports non-EMC solutions: https://github.com/codedellemc/rexray#storage-provider-suppo...

Mesosphere co-founder here.

This is correct: "DC/OS is better for running stateful services and then you can use K8 to run your stateless services"

Data services run directly on DC/OS via application-aware schedulers. They have the operational logic for how to bring up say a Cassandra cluster correctly, how to upgrade it to a new version without breaking it, change config, scale up, etc. All things you usually have to figure out yourself. When you run k8s on DC/OS you get these same benefits.
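Concretely, those application-aware schedulers ship as DC/OS packages, so standing up a data service is a one-liner from the DC/OS CLI (the keys inside the options file are package-specific; the filename here is illustrative):

```shell
# Install the Cassandra framework; its scheduler then handles node placement,
# config changes, and rolling upgrades for the cluster it manages.
dcos package install cassandra

# Customizations go through a JSON options file:
dcos package install cassandra --options=cassandra-options.json
```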

Thanks for the clarification.

Could anyone point out the boundary between the open-source Mesos / Marathon stack, and the commercial DC/OS? The official documentation (https://docs.mesosphere.com/1.9/overview/what-is-dcos/) doesn't make it clear. What capabilities does DC/OS have that are not available in the open-source portion of the stack?

> As a distributed system, DC/OS includes a group of agent nodes that are coordinated by a group of master nodes. Like other distributed systems, several of the components running on the master nodes perform leader election with their peers.

This just sounds like vanilla Mesos masters and slaves. DC/OS "runs on top of" that, but what is it actually doing? Is DC/OS another service that just enables the Mesos masters to initially discover and replace each other? Could it not be replaced with ZooKeeper or Consul? That seems like a small piece of the puzzle; what would make it the one expensive thing, while the rest of the system is free? Is the overall stack actually shareware, rather than a community of independent open-source services?

The reason I am so curious is that after running a small demo a year ago, the Mesos stack looked really promising. The only thing holding me back from proposing a large-scale trial, for comparison to our sprawling Heroku/ECS/EBS setup, was the feeling that I was not understanding a crucial part of the architecture, and not understanding the pricing, if that is even applicable (I couldn't find price info anywhere! How do I quantify that part?)

As I understand it, DC/OS is essentially a nicer interface for Mesos/Marathon with baked-in monitoring, service discovery, security add-ons, and a "package manager"-esque framework manager.

DC/OS is actually open source (I think), so I don't think it has any capabilities you couldn't get for free.

DC/OS is a distribution of Mesos similar to how Red Hat is a distribution of Linux.

It includes additional open-source and closed-source components that make running/managing a Mesos cluster and Mesos frameworks (such as Marathon, Spark, Cassandra, etc) easier.

The community edition is free to use.

>"Kubernetes on top of Mesos through DC/OS more closely matches Google’s own architecture; where Kubernetes is a service running within VMs that are managed by Google’s proprietary Borg platform."

My understanding was that Google runs containers in a VM for security. Mesos uses the Docker container executor, not a VM. How does this more closely resemble Google's Borg/VM model?

Mesosphere co-founder here.

You're correct that GCP runs k8s in VMs; DC/OS doesn't. What's similar is that there's a resource manager underneath: Borg for GCP, Mesos for DC/OS. They serve similar purposes, like resource management, isolation, and operating the services on top.

>"What's similar is that there's a resource manager underneath - Borg for GCP, Mesos for DC/OS."

Maybe I don't fully understand DC/OS then. I was under the impression that DC/OS was simply a Mesos distro from Mesosphere. But your comment makes me think that either my understanding is incorrect or DC/OS has become something more than a Mesos distro. Could you elaborate? Thanks.

Excellent, was asking about this a while ago [1], looking forward to trying this out on Azure.

[1] https://news.ycombinator.com/item?id=14907878
