Ask HN: Why is Docker so popular in comparison to other containerization tech?
144 points by justaj 11 months ago | 103 comments
Has it only been the first mover effect or are there (decisive) features that other technologies don't have?



They have focused relentlessly on making it easy to use for everyday development. Solaris Zones, BSD Jails, and LXC were around first, but were (relatively) arcane and finicky, and no one ever looked at bringing them to non-Linux (or non-BSD, for Jails) dev laptops. OTOH, Docker made a good (enough) implementation and went for developers, developers, developers. They brought it to Mac, to Windows; they engaged a wide audience and mostly listened.

Focusing on day-to-day development for users came with tradeoffs -- ops-oriented features came much later, getting serious about security came later, etc. These are all important things... but less so than ease of use and adoption.

The pattern is not new; the same thing happened with Rails -- the concepts in Rails were not novel, but the extreme ease of getting started and getting to something useful really fast was the killer feature (to get it going). After it got a big ecosystem, it got the quality aspect (mostly by merging with Merb).

The lesson: if you want wide adoption, build something good and reduce every barrier to adoption you encounter. The most important problems to solve are the ones getting in the way of people adopting it, not the ones reported by people already using it. You'll need to solve those too, but solve adoption first.


Astute. I was trying to use jails for development in kind of a proto-Docker setup, where my apps each ran in their own BSD jail. I struggled to put the pieces together to set up the jails, share a “base jail” with ZFS, forward ports, etc. I had scripts for messing with my pf rules to add port forwards, other scripts to update/apply config inside a jail... it was more frustrating to automate creating and destroying “immutable” jails than to maintain a pet per app I developed... until I messed up an environment and just abandoned the project!

I think we’ll see the same sort of thing happening with FaaS frameworks, or any of the layers building on top of Swarm/Kubernetes/Mesos.


Yeah, if you want to use jails or any container-type technology on your own to use for application deployments, then you have to write a ton of scripts for creating, managing, and destroying them. I'd rather use someone's pre-existing set of scripts for that. And I am. It's called Docker.

To me, asking why people are using Docker instead of plain jails combined with a series of other technologies is like asking why people make buildings out of wooden planks instead of trees. The question seems a bit wrong, because Docker is just built on existing container technologies. And I'd rather save a few steps. Planks are the exact shape and size that I want to deal with when building an application anyway.


Really? I've found iocage to work very well, and combined with Ansible's jail support, creating new jails is pretty darn easy. In terms of networking, I created another loopback interface and assign each jail its own IP at creation time. It's not particularly elegant, but it simplifies ipfw rules.

I've automated doing some updates with jenkins+ansible+git, but overall maintaining them is a bit tough because Ansible's pkgng support is lacking.
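For the curious, the Ansible side of this can be fairly compact. A sketch (the host name, jail, and package are made-up examples; assumes the community.general collection is installed and the jail already exists):

```yaml
# Run tasks inside an existing FreeBSD jail via Ansible's jail connection plugin
- hosts: webjail
  connection: community.general.jail
  tasks:
    - name: Install nginx inside the jail with pkgng
      community.general.pkgng:
        name: nginx
        state: present
```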


What you're describing is exactly the reason people use Docker. No need to use Ansible, or to manage interfaces if you don't want to; it just works out of the box, and you just run shell commands to configure it.


I'd argue no. Dockerfiles are riddled with gotchas and limitations. In this case, using Ansible initially is more akin to using Packer to build an image. For ongoing maintenance it's because I maintain state in the jails.

In terms of managing interfaces, iocage does that for you. The IP address has to be assigned manually, but that would be trivial to automate. A couple of IPFW rules work just fine to NAT the loopback aliases for a dynamic set of jails.
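The IPFW part really is just a couple of rules. A sketch of what I mean, where the 10.0.0.0/24 subnet and the em0 external interface are assumptions about the setup, not universal values:

```shell
# NAT the jail network (loopback aliases in 10.0.0.0/24) out through em0
ipfw -q nat 1 config if em0
ipfw -q add 100 nat 1 ip from 10.0.0.0/24 to any out via em0
ipfw -q add 200 nat 1 ip from any to any in via em0
```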


You sound like that comment on the Dropbox HN annoucement page: https://news.ycombinator.com/item?id=9224

You've just added at least 2 more concepts to the thing you have to use: iocage and ansible. On top of the jails thing. That's the exact opposite of "easy to adopt". Especially since most people that use something like Docker don't care about or want to learn Docker itself; they just want the benefits of using containers.


That's a bit like saying that Docker is one more thing on top of LXC that you have to learn.

jails : iocage :: lxc : docker

ansible : iocage :: packer : docker

The workflow and terminology are somewhat different, but that doesn't make it inherently more difficult to adopt. The "slick" DSL that Dockerfiles provide becomes quite limiting in short order, especially as each line bloats the eventual Docker image quite significantly. Case in point: you can't set file ownership when you copy files into the image, so that extra step adds that much more bloat.


> Case in point, you can't set file ownership when you copy files into the image so that extra step adds that much more bloat.

I think you could solve that problem easily with a multi-stage Dockerfile.

   FROM alpine:latest
   COPY . /app
   RUN chown -R nobody:nobody /app && chmod -R 755 /app
   FROM alpine:latest
   COPY --from=0 /app /app
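(For what it's worth, Docker 17.09+ also added a --chown flag to COPY, which sidesteps the extra layer without needing a second stage at all:

```dockerfile
FROM alpine:latest
# ownership is set at copy time, so no separate chown layer is created
COPY --chown=nobody:nobody . /app
```

though chmod still needs a RUN step.)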


That looks suspiciously like an additional layer beyond the copy step to me (either that or a terrifically non-intuitive syntax).


You only have to learn about Docker...

Regarding your later edit about bloat, well, it seems most people don't really care about that.


You only have to learn about iocage… (although even with Docker you should have at least a little knowledge of how things work under the hood, for when things invariably break).


I was curious about iocage, is it this thing? https://github.com/iocage/iocage

It's FreeBSD only, why are we even having this discussion? :)


Take a look at the post I originally replied to. It was commenting on docker versus the alternatives.


What do you mean, in reference to layers on kubernetes, mesos, etc?


> making it easy to use for every day development

Can you elaborate here? I've started using Docker for development recently (pretty much just to avoid local MySQL troubles in a Rails project) - we use a docker-compose file to start an official MySQL image and the local Rails repo.

Seemed easy enough, until I realized I had to restart my container every time my code changed. Then I had to integrate docker-sync, which is a great library but not official. I'm still surprised code sync is not included out of the box with Docker yet - am I missing something?


Yes. Volumes: https://docs.docker.com/engine/admin/volumes/volumes/

Mount a directory from your local filesystem as a volume and file changes within that directory will appear immediately in the container.
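In a docker-compose file this is just a couple of lines (the service name and container path here are made up for illustration):

```yaml
services:
  web:
    build: .
    volumes:
      - .:/app   # bind-mount the project dir; host edits appear in the container immediately
```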


This even works when one uses docker build to compile things. Just have a shell script that, after the build, copies the compilation result to a local directory mounted as a volume in the Docker container.


Why not compile inside the volume already?


This is what I used before docker build started to support multiple stages. That is, I had a shell script that ran an image with a compiler that put the executable into a local directory. Then a normal docker build would pick up the executable into the final image. But this did not take advantage of caching with docker build and made the infrastructure more complex.

Dockerfiles with multiple stages addressed that nicely. Now a single docker build command takes care of everything and generates production-ready images. A shell script is only necessary during development, to extract some files from the image to avoid restarting an already running container.
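A minimal sketch of that multi-stage pattern, assuming a single-file Go program (all names here are illustrative):

```dockerfile
# Stage 1: build in an image that has the compiler toolchain
FROM golang AS build
WORKDIR /src
COPY main.go .
RUN go build -o /out/server main.go

# Stage 2: the production image only receives the compiled binary
FROM alpine:latest
COPY --from=build /out/server /usr/local/bin/server
CMD ["server"]
```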


my understanding is that volumes are more for persisting data on your host's filesystem, right?

I'm talking more about having code you write automatically sync'ed inside the container - do volumes offer that?


Yes, just mount your /../../../Projects/MyProject onto /mnt/MyProject in the docker container and you'll have your current project files there all the time.


omg this sounds awesome, will try, thanks a lot!


Just a heads up to save you time down the road:

Docker works well for small deployments and extremely large deployments, but it is badly mismatched with bringing your average three-tier app into a medium-sized cluster, because Swarm doesn't do routing well for that, and because the underlying assumption is that each service is perfectly identical across each node. So even having a cluster ID to decide who's the master in a NoSQL deployment is extra awkward.


Others have already mentioned shared volumes; here's a commented little command-line demo of the principle:

https://github.com/DHager/docker-php-library-demo

It should be easy enough to achieve the same results with your docker-compose configuration.


Code reload in a container has always been pretty painful. I don’t use Docker for the application I’m developing on, relying on virtualenv (in your case RVM) to keep a clean, stable, and contained environment. I use Docker for the dependencies (which is a huge boon) and when I’m done developing, I package the app into a Docker container so I can easily distribute it.


Why is code reload painful? Just mount a volume from the code you're editing in the host.


If you’re using Docker on Windows, bind mount propagation isn’t a thing. Adding a new file (or maybe a folder) on the host machine means you have to restart docker.

This is especially bad in node development when you are trying to work on your npm dependencies.


Are you sure your information is up to date? 'Cause I'm pretty sure that's not an issue anymore.


Docker is easy enough that I can get the designers up and running with the app. Now my coworkers are excited for more projects to be set up this way.

Other container solutions don't allow me to work as effectively with the team.


- It is the complete solution: the image format is easy to understand and widely agreed upon, and they also offer an image registry (hosted by them, or you can host it yourself)

- Great UX: Docker Inc is probably one of the few companies that provide a great developer experience; they have a natural skill for it. Another would be HashiCorp's tools. These companies understand developers well. When a tool naturally speaks the same language as you, you fall in love with it.

- They have a great development story: LXC, jails, zones... you can't get them to work on Windows or Mac. But Docker for [Mac|Windows] offers a complete development story (you don't have to be on Linux to develop Linux containers).

- Solid image ecosystem: They paid external parties to keep the official images on Docker Hub (like mysql, nginx) solid, i.e. up-to-date and free of critical security vulnerabilities. I don't know of a comparably solid base-image ecosystem for any other containerization technology.

- Big companies got behind it: This is a bit backwards, but once Docker got popular, companies like Red Hat, IBM, Microsoft, and Google started to contribute to it early on. This brought a lot of mindshare and publicity to the project (and disagreements).

- They've got a solid PR machine: They do great community management. The Docker Captains program is a solid one. They've had many DockerCons and they're all pretty good (although one could argue the past few got kind of corporate/enterprise-y, maybe even boring). But they had a few good ones.

- The tech has merit: "docker run", "docker ps", "docker rm" - the whole docker CLI is very intuitive overall. This is a bit of a supporting argument for UX, but it hides a lot of the details of how the docker-engine works.


Indeed, Docker was not perfect, but it was easier to use than LXC at the time it got popular. There is a certain barrier of usability (cost) and usefulness (reward) that must be passed so that hype (natural or induced) can do the rest. Looking at it from a reward/cost perspective, this is how Docker appealed to me:

Rewards:

- Hygienic build & deployment

- Being able to specify your requirements down to the OS lib versions

- Lower barrier to deployment (easy for users to try out my app and for me to try theirs)

Costs (mostly time):

- LXC was something that I understood, but couldn't make good enough use of for it to be worth the time spent. Docker, on the other hand, required less time to do something useful (e.g. pull and run an image, turn an app into an image).

Time to learn a new tech depends on its complexity (concepts and relationships) and how it is revealed to the user (docs, tutorials, etc). Willingness to learn the complexity depends on the distribution of rewards along the time axis. I think the Docker guys did a great job improving the latter.

> But Docker for [Mac|Windows] offers a complete development story

Docker on Windows has been a nightmare for me (crashes all the time) and I have abandoned it altogether. I'd happily get rid of Windows too if it wasn't required in my work environment.


I wouldn't call Docker easier to use than, say, libvirt (which it had as a backend until they dumped it).

Docker setup is especially bad at employing custom images easily and composing small pieces of software instead of huge packages... at least that was my experience.


Docker was based on LXC and added a few things. The fact that they were well funded made outreach easier compared to the open source LXC project, and Docker became many people's first introduction to containers.

Once they got traction, they quickly dumped LXC around version 0.9 and created their own engine in Go. All the while misleading, and letting misconceptions and plain untruths about LXC being 'difficult', some 'low level tool' or 'bash scripts' spread unchecked by their ecosystem. So the model is not only to take open source tech, but to bash it in public and forget about giving anything back.

LXC is much simpler to use and understand both conceptually and in practical use. Docker is basically a hijack of an open source project supported by the SV crowd and the level of support and misinformation can be verified by HN posts in the last 3 years. Docker posts routinely clock 600-1000 votes in short periods of time and are full of marketing and misinformation.

That's history, but the culture continues. Docker and other related projects continue to liberally wrap open source projects while downplaying them, e.g. overlayfs, aufs, kernel namespaces, cgroups, Linux networking tools, nginx, consul, haproxy and more. Someone had to create those projects and make them usable. This would not be a bad thing if they actually made things simpler, but they keep adding more and more layers of complexity.

What incentive is there for people to create these high-value open source tools if VC-funded companies like Docker and corporate-funded projects like Kubernetes simply wrap them, downplay them, and hog the attention and funding? These kinds of projects basically cut down the tree they sit on for their own short-term benefit.


I can tell you what attracted me to Docker as an early adopter (while @Mailgun):

* Very good user and developer experience. Docker felt like git for containers. Docker run, docker ps and other commands are awesome.

* Very good and innovative build system. Dockerfiles are great and allow you to build complex containers and efficiently package software in a convenient way.

* Solving image distribution problems. Docker pull/push are great.

* A layering system enabling build and run optimization.

Overall, Docker not only provided a great UX, but combined all the patterns above into a very expressive system; great job by the team.


I strongly believe it's the ease of use factor. If you know nothing at all about containers, but have just installed docker, you can immediately "docker run httpd" and watch a web server install itself and do something useful in seconds.

That's really powerful, and the surprisingly diverse selection of pre-built containers on the Docker Hub makes it even easier to bootstrap your project and figure the details out later. I don't have the necessary expertise to compare it meaningfully to other container systems, so I can't speak to its implementation details, but the first-time user experience is more than enough to explain its popularity.


Just my cynical view, but I'm thinking it's because it's got a friendly-looking whale as a logo, and the developers took the approach of making it friendly to use for someone who doesn't understand the underlying, quite complex technology.

I was involved in making the decision about introducing containers to my workplace. My original recommendation was to just go with chroots, or buy into something that we could get vendor support for on RHEL6 (so not Docker), and when asked to explain how this would benefit our desired outcome, the Docker website basically had copy-pasteable words that did that job for me; chroots, however, just described the technical implementation of a chroot.

So eventually I just went "fuck it, we'll run unsupported and take the risk, then we'll move N thousands of servers to RHEL7, because I don't want to have to explain to managers how cgroups and Linux kernel constructs work". About a year later, most of the enterprise has a plan to move all their existing EL5/EL6 workloads to EL7 just in case they need to use Docker some time in the future.

Which, if I'm honest, I kind of regret, but it doesn't impact me that much so meh.


Which bank was it?


Some banks aren't doing this :p


Because they spent two hundred million dollars promoting it.


They raised the money after, and largely because, it had popular groundswell amongst developers.


I came from using Vagrant at a previous company and in my experience, Docker is just more lightweight, portable, and user friendly. Share a file/folder? Expose a port? Get an image and run a container? You can do it easily using the Docker CLI. If you want a UI tool that makes those tasks even easier, get Kitematic.

Also, Docker just has more development and maturity at this stage that it makes deployments easier.


I wrote something a few years ago about why docker matters: https://circleci.com/blog/it-really-is-the-future/ (excuse the slightly flippant tone). I think the answer to why it succeeded instead of the others is that it solved those problems (in particular the container format and the UX around it) much better than the alternatives.


Despite what everyone is saying, Docker is not great UX, not easy to use etc.

It has better UX and it's easier to use than the available alternatives.


Do you not think that if everyone is saying it has a great UX and it's easy to use then perhaps it does have a great UX and is easy to use?


We as developers are not spoilt for choice. The tools we use are universally crappy, horrible etc.

Any improvement is hailed as great UX/DX even if the improvement is only marginal. Or if the improvement only touches on certain aspects and totally ignores all other aspects.

Docker is an improvement over the cesspool of tools that existed in the space before it. Its UX is OK at best, and it is hailed as awesome because it just barely covers the basic needs of a developer: list a thing, build a thing, remove a thing. Outside of a handful of basic things it fails miserably and relies on multiple disparate hacks by desperate developers.

The only reason it's being praised is that all other tools are so much worse.


Oh. Another thing that plays in Docker's favour: it does improve, albeit slowly and incrementally.

Once again, it's only because everything else is worse and dev mindshare has been captured by Docker.


Chef Ramsay makes a cake, everyone has a slice and says it's the tastiest and best looking cake they've ever had.

Jamie Oliver comes along and says "No, no! You're wrong, it's not a tasty cake and it looks terrible, it's just tastier and prettier than you've had before".

Does the fact that everyone, other than Jamie Oliver, is claiming the cake to be incredibly tasty and the best looking not by default make the cake both tasty and great looking?

Surely the proof is in the eating?


>it's the tastiest and best looking cake *they've ever had*.

I've italicised the relevant part.

Let's say all the cakes they've ever had were baked dirt. Now Chef Ramsay has come along and baked soft dirt with sugar and sprinkles on top.

- Is it still dirt? Yes.

- Is it the tastiest and best looking cake they've ever had? Also yes.

- Is it "not a tasty cake and it looks terrible, just tastier and prettier than you've had before"? Also yes.

This also goes to show that arguments from analogy are inherently false, including mine ;)


>The only reason it's being praised is that all other tools are so much worse.

So you're saying docker is the best tool in this category?


It's a tool. Better than most others in some aspects.

Also, "Best tool" !== "amazing UX" etc. etc.


They nailed the developer experience which allowed so many people to build upon their interface.


The main reason is that the underlying Linux container tech has (especially in the past) such terrible usability. Try to use cgroups or other older tools for running your browser, for example.


Can you list some alternatives you’d like to see it compared to?


Sure:

- LXC

- Rkt

- *BSD jails


Isn’t it simple? It was the first and only easy interface to LXC, which I think was the first and only Linux containerization framework, and Linux is the most popular server OS.


I'm not even aware of any other container system that lets me run something like "docker run node" right now. So I vote for the infrastructure and hosted images which made it kind of like the GitHub of containers.


https://linuxcontainers.org/lxc/manpages/man1/lxc-execute.1.... -> lxc-execute is kind of similar to docker run, UI wise.

But it's not the same thing as the Docker "package".


This may be a good forum to ask. For an absolute newbie to Docker, but someone who has used LXC a few times in non-serious situations, what would be a good learning resource? Just the docs? Any good book? Any blog post series?


Read their docs, they're quite good.

Then just start using it. docker --help, look at the commands available, see what they do.

It truly is quite intuitive, and as you hit barriers just google for answers, which you will find (yay, popularity!).

Then create some Dockerfiles.

After a few weeks you should know more than enough to use it to cover local development needs. If you want to go to prod with it then that can get quite tricky, depending on your scale. But, baby steps :)
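When you get to the Dockerfile step, a first one can be tiny. A sketch (the image tag and the ./site directory are illustrative; ./site would be whatever local folder holds your HTML):

```dockerfile
FROM nginx:alpine
# serve a static site from the stock nginx docroot
COPY ./site /usr/share/nginx/html
```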


I would say UX. Things like this need lots of people to adopt it to hit critical mass.

UX lowers the friction level. It's more likely to be accepted by the simplest decision maker. This is why companies like Slack do so well.


Because it is robust and lightweight. It tackled all the facets of app isolation.


It most certainly does not. If you are using Docker containers for security or QoS you're doing it wrong, and I think even Docker the company would concur.

It's a fantastic packaging and deployment format, but it's not a universal solution.


Do you have any current sources for the security part?

Everything I read is along the lines of "this is not secure, we think, but do not have any specific examples. But be careful"



For me, the main reason to stick with Docker is the fast build. Other solutions just do not have anything that usable for creating images during development.


For me, because when I first heard about it my options were Chef/Puppet (with Vagrant)... so Docker was much, much simpler to get started with.

in the past years they've been adding a lot of complexity though... :meh:


It's not only a container tech, it's the container format and distribution.


What other containerization technologies?


- chroot

- Zones (Solaris)

- Jails (FreeBSD)

- OpenVZ (Linux)

- LXC (Linux)

- VServer (Linux)

- lmctfy (Linux)

- WPARs (AIX)

... and more; see https://en.wikipedia.org/wiki/Operating-system-level_virtual...


Docker images are a simple, quick-and-dirty packaging solution with all the pieces necessary (building, versioning, storage and deployment). I believe this is why it won.

I don't think any of those other options had anything similar. The alternatives, like building and deploying apps with native OS packages (e.g. .debs), were significantly more complicated and less portable. (It's not that Docker images are better than native packages; they're just doubtlessly easier, in general.)


Most people use LXC to run containers that run full operating systems, right? (Where it loads init/systemd and then processes... basically a full VM, but lightweight.)

The others are all more single-process isolated, but how many of them have the image building/hashing/layering system of Docker?

I personally wish Docker was more of a first class citizen on FreeBSD, but the official port was way out of date the last time I checked.


I'm not sure about the rest, but OpenVZ never got into the mainline kernel and required custom extensions, while Docker did not. That's probably a big contributing factor.


Being a noob at this, do any of those:

* have an image repo, image layering and versioning, so that the community can contribute and reuse existing stuff? If not, that's already a big downside

* configure networking by default so that you can access the container? This is very nice for usability


Probably should add systemd-nspawn to that list.

Regular systemd, which you probably need to know how to use anyway, can launch things in namespaces and even use common image formats. Very useful.
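To give a taste: per-machine settings live in .nspawn unit files. A sketch, assuming an OS tree has already been placed under /var/lib/machines/demo (the machine name is made up):

```ini
# /etc/systemd/nspawn/demo.nspawn
[Exec]
# run the container's own init as PID 1 instead of a single command
Boot=yes
```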


the best part of docker isn't the containerization tech but the networking IMO.


first mover advantage


It's got a good name, and was an early mover in the field.


lxc and jails are both good names (in fact, I think bsd jails is the best name of those).

lxc and jails both predate docker by quite a while.


Jails is an absolutely terrible name when you need to sell your company, rather than your friends, on using this technology.

"Pff, yeah right, no one rejects a technology just because the name has negative connotation in a different context", except that is literally what happens.


Ah, the cockroachdb argument. We're rapidly approaching the point where any meaningful names are already in use, and everything uses nonsense names anyway.


Nonsense names are great, like "MySQL" or "Kubernetes". Names that the people you're trying to sell it to recognize from normal language like "jails", "cockroachdb", not so much. If you're in a startup and you need to sell it to your CTO: whatever. If you're in a 2k+ employee company and you need to sell it to general management: you are not going to be using jails, or cockroachdbs, or anything else that makes non-tech people raise their eyebrows in the bad way when they hear/read them.


Kubernetes isn't a nonsense name. Κυβερνήτης is Ancient Greek for helmsman. Which is not unrelated to Docker.


If someone needs to know ancient Greek to understand a name, the name in modern English is almost guaranteed a nonsense name, and that's great.


Everyone keeps mentioning Windows support, but - and someone please correct me if I’m remembering wrong here - wasn’t it the new Microsoft with Nadella’s open source friendliness that went the extra mile to get Docker on Windows to happen?


"Docker for Windows" runs Linux containers in a Linux VM, with a thin Windows CLI and quite a nice install screen. It required no special help from Microsoft.

The product that runs Docker containers natively is called "Windows Server 2016", and this required a vast amount of work from Microsoft. Native Windows containers are not seen much in the wild.


> Native Windows containers are not seen much in the wild.

Cloud Foundry has them for Windows 2012, though they're quite leaky. Windows 2016's support is much better and the resultant containers have much of what you want in terms of proper process isolation and dense utilisation.

Disclosure: I work for Pivotal, we contribute to Cloud Foundry. The Windows container team sits two desks over from mine.


Actually, they've been on Windows 10 too for some time already.


You may be right but it doesn't matter who did it. It's still possibly a factor in its success.


Docker is primarily a packaging framework + a runtime mechanism.

It's like VM + hypervisor in one box.

Thinking of it this way, it's easy to understand why it's popular. It did things that no others had done.


It's not like that at all in any way.

You're using existing OS packages in your base (apk in Alpine or apt in Debian/Ubuntu) to pull in all your dependencies and then placing your app on top of that. (Or using the base go, python or ruby container and letting it use pip/bundle to pull in dependencies and put your app on top of that).

It only runs a single process, unlike a VM (or even an LXC container which is more similar to a VM).

Containers are (mostly) immutable, so you're typically not running security updates on the same thing that provides all your dependencies. You should rebuild your containers when there are security updates .. but people rarely scan for those or do that; relying more on isolation and cgroups to keep those containerized apps safe.

There's very little tooling within the container, so when things go wrong, you're either execing into a single container and apt-getting all the tools you need (which you gotta do each time you rebuild it) or you need to use something like sysdig.

It's popular because you can create an easy to replicate artifact that doesn't rely on a ton of system dependencies with a really basic (although kinda primitive) package building system (Dockerfile) and that is way more lightweight than a VM while providing a pretty heavy layer of isolation.
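To make the layering point above concrete, a typical Dockerfile looks something like this (a sketch; the Python base image and file names are assumptions for illustration):

```dockerfile
FROM python:3-alpine                  # base image supplies the OS userland and runtime
COPY requirements.txt .
RUN pip install -r requirements.txt   # dependency layer, cached until requirements change
COPY app.py .                         # app layer on top of the dependencies
CMD ["python", "app.py"]              # the single process the container runs
```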


Small correction: Docker containers can run any number of processes. For example the standard apache container runs five.


They can, but Docker-as-Docker won't support multiple processes. Instead you have to provide your own logic to spin up multiple processes.


It’s a packaging mechanism plus runtime, right? It’s just oriented towards containers...


My guesses: Docker has been around a while, is mature and is battle tested; it was a bit ahead of other options when critical adoption decisions were being made. I remember using LXC in Debian (and Ubuntu) around 2013 and hitting a lot of disturbing bugs, due to immaturity...


It's not mature and not battle-tested. There is barely anyone using it.

The few who do are walking on eggshells. It's hardly possible to find two deployments running the same ecosystem among all the distributions and commercial services: CentOS, Debian, Red Hat, CoreOS, Kubernetes, AWS ECS, Google container, Cloud Foundry, OpenShift...


Docker is popular because:

1. Containers are technologically better than virtual machines.

2. Docker added a popular cloud container image repo which is used for accessing container images.

3. This Docker cloud repo has morphed into a popular platform for selling/distributing application software which is built into container images.


I don't know of any other containerization technology except really small Linux distros that you can run in VirtualBox. Maybe that's why Docker is so popular. They marketed the hell out of it.


Docker is very popular because:

1. Containers are technologically superior to virtual machines.

2. Docker created a public cloud repo for sharing/selling container images.

3. The Docker cloud has become a very popular way of distributing application software which is built into container images.

Number 1 is a technological reason; numbers 2 and 3 are non-technical features which Docker added to containers and which have become popular for enterprise software.


Your premise is stated as objective fact and misses crucial nuance. There is by no means a consensus that Docker containers are "technologically superior" to virtual machines... just lighter.


Not even much lighter really. Compared to paravirt enhanced KVM at least. Maybe even Xen. Especially when you have cleancache and frontswap on.

I would like to see actual facts for the alleged performance benefits. :)


Oh, I very specifically said "lighter" and not faster, referencing only the distribution size. But it's not much lighter, even then.

It just makes no sense to me to throw away an actual virtual machine for a mere abstraction of one.


It depends on what you mean by Docker. As a container runtime, it adds a lot of conveniences, despite the fact that at the end of the day it all boils down to cgroups and namespaces.

If you mean as a container orchestrator, I have no idea. Honestly Mesos seems much better than Docker or Kube in this regard. It scales better, is more reliable, and is more flexible. The one downside as I see it is there’s a steeper learning curve.



