Focusing on day-to-day development for users came with tradeoffs -- ops-oriented features came much later, getting serious about security came later, etc. These are all important things... but less so than ease of use and adoption.
The pattern is not new; the same thing happened with Rails -- the concepts in Rails were not novel, but the extreme ease of getting started and getting to something useful really fast was the killer feature (to get it going). After it got a big ecosystem, it got the quality aspect (mostly by merging with Merb).
The lesson: if you want wide adoption, build something good and remove every barrier to adoption you encounter. The most important problems are the ones getting in the way of people adopting it, not the ones people already using it are having. You'll need to solve those too, but solve adoption first.
I think we’ll see the same sort of thing happening with FaaS frameworks, or any of the layers building on top of Swarm/Kubernetes/Mesos.
To me, asking why people are using Docker instead of plain jails combined with a series of other technologies is like asking why people make buildings out of wooden planks instead of trees. The question seems a bit wrong, because Docker is just built on existing container technologies. And I'd rather save a few steps. Planks are the exact shape and size that I want to deal with when building an application anyway.
I've automated doing some updates with jenkins+ansible+git, but overall maintaining them is a bit tough because ansible's pkgng support is lacking.
In terms of managing interfaces, iocage does that for you. The IP address has to be assigned manually, but that would be trivial to automate. A couple of IPFW rules work just fine to NAT the loopback aliases for a dynamic set of jails.
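Roughly what I have in mind, as an untested sketch (the jail name, the lo1 alias range, and em0 as the outside interface are all just placeholders):

    # give the jail a loopback alias via an iocage property
    iocage set ip4_addr="lo1|10.0.0.10/24" myjail
    # NAT that range out of the external interface with ipfw's in-kernel NAT
    ipfw -q nat 1 config if em0 same_ports
    ipfw -q add 100 nat 1 ip from 10.0.0.0/24 to any out via em0
    ipfw -q add 200 nat 1 ip from any to any in via em0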
You've just added at least 2 more concepts to the thing you have to use: iocage and ansible, on top of the jails thing. That's the exact opposite of "easy to adopt". Especially since most people who use something like Docker don't actually care about or want to learn Docker itself; they just want the benefits of using containers.
jails : iocage :: lxc : docker
ansible : iocage :: packer : docker
The workflow and terminology are somewhat different, but that doesn't make it inherently more difficult to adopt. The "slick" DSL that Dockerfiles provide becomes quite limiting in short order, especially as each line bloats the eventual Docker image quite significantly. Case in point: you can't set file ownership when you copy files into the image, so that extra step adds that much more bloat.
I think you could solve that problem easily with multi-stage Dockerfiles: chown in the first stage, then have the final stage copy the already-owned tree across as a single layer, roughly:
    # stage 0: copy the code in and RUN chown on it as needed
    COPY . /app
    # final stage: pull the already-owned tree across as a single layer
    COPY --from=0 /app /app
Regarding your later edit about bloat, well, it seems most people don't really care about that.
It's FreeBSD only, why are we even having this discussion? :)
Can you elaborate here? I've started using Docker for development recently (pretty much just to avoid local MySQL troubles in a Rails project) - we use a docker-compose file to start an official MySQL image and the local Rails repo.
Seemed easy enough, until I realized I have to restart my container every time my code changed. Then I had to integrate docker-sync, which is a great library but not official. I'm still surprised code sync is not included out of the box from Docker yet - am I missing something?
Mount a directory from your local filesystem as a volume and file changes within that directory will appear immediately in the container.
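Something along these lines is usually all it takes (the image, paths, and command are just examples):

    # bind-mount the current checkout into the container; edits on the host
    # show up inside immediately, no image rebuild or container restart needed
    docker run --rm -it -v "$(pwd)":/app -w /app ruby:2.6 bash

(File watching through the Docker for Mac/Windows share can be slow, which is presumably why docker-sync exists, but for plain syncing the bind mount is enough.)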
A Dockerfile with multiple stages addressed that nicely. Now a single docker build command takes care of everything and generates production-ready images. A shell script is only necessary during development, to extract some files from the image so I can avoid restarting an already-running container.
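In case it's useful to anyone, the usual trick for the extraction part is docker create + docker cp instead of running the container (the image name and paths are made up):

    # create a stopped container from the built image, copy the artifacts out, clean up
    id=$(docker create myapp:latest)
    docker cp "$id":/app/build ./build
    docker rm -v "$id"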
I'm talking more about having code you write automatically sync'ed inside the container - do volumes offer that?
Docker works well for small deployments and extremely large deployments; it is extremely mismatched for bringing your average three-tier app into a medium-sized cluster, because Swarm doesn't do routing well for that and because the underlying assumption is that each service is perfectly identical across each node - so even having a cluster ID to decide who's master in a NoSQL deployment is extra weird.
It should be easy enough to achieve the same results with your docker-compose configuration.
This is especially bad in node development when you are trying to work on your npm dependencies.
Other container solutions don't allow me to work as effectively with the team.
- Great UX: Docker Inc is probably one of the few small companies that provide a great developer experience. They have a natural skill for it. Another would be HashiCorp with their tools. These companies understand developers well. When a tool naturally speaks the same language as you, you fall in love with it.
- They have a great development story: lxc, jails, zones, ... you can't get them to work on Windows or Mac. But Docker for [Mac|Windows] offers a complete development story (you don't have to be on Linux to develop Linux containers).
- Solid image ecosystem: They paid external parties to keep the official images on Docker Hub (like mysql, nginx) solid, i.e. up-to-date and free of critical security vulnerabilities. I don't know of a comparably solid base-image ecosystem in any other containerization ecosystem.
- Big companies got behind it: This is a bit backwards, but once Docker got popular, companies like Red Hat, IBM, Microsoft, and Google started to contribute to it early on. This brought a lot of mind share and publicity to the project (and disagreements).
- They've got a solid PR machine: They do great community management. The Docker Captains program is a solid one. They've had many DockerCons and they're all pretty good (although one could argue the past few got kind of corporate/enterprise-y, maybe even boring). But they had a few good ones.
- The tech has merit: "docker run", "docker ps", "docker rm" - the whole docker CLI is very intuitive overall. This is a bit of a supporting argument for UX, but it hides a lot of the details of how the docker-engine works.
- Hygienic build&deployment - being able to specify your requirements down to the OS lib versions
- Lower barrier to deployment (easy for users to try out my app and for me to try theirs)
Costs (mostly time):
- LXC was something that I understood, but couldn't make good enough use of to be worth the time spent. Docker on the other hand required less time to do something useful (e.g. pull and run an image, turn an app into an image).
Time to learn a new tech depends on its complexity (concepts and relationships) and how it is revealed to the user (docs, tutorials etc). Willingness to learn the complexity depends on the distribution of rewards along the time axis. I think Docker guys did a great job improving the latter.
> But Docker for [Mac|Windows] offers a complete development story
Docker on Windows has been a nightmare for me (crashes all the time) and I have abandoned it altogether. I'd happily get rid of Windows too if it wasn't required in my work environment.
Docker setup is especially bad at employing custom images easily and composing small pieces of software instead of huge packages... at least that was my experience.
Once they got traction they quickly dumped LXC around version 0.9 and created their own engine in Go. All the while they were misleading people and letting misconceptions and plain untruths -- that LXC was 'difficult', some 'low level tool', or 'bash scripts' -- spread unchecked through their ecosystem. So the model is not only to take open source tech, but to bash it in public and forget about giving anything back.
LXC is much simpler to use and understand both conceptually and in practical use. Docker is basically a hijack of an open source project supported by the SV crowd and the level of support and misinformation can be verified by HN posts in the last 3 years. Docker posts routinely clock 600-1000 votes in short periods of time and are full of marketing and misinformation.
That's history, but the culture continues. Docker and other related projects continue to liberally wrap open source projects while downplaying them, e.g. overlayfs, aufs, kernel namespaces, cgroups, Linux networking tools, nginx, consul, haproxy and more. Someone had to create those projects and make them usable. This would not be a bad thing if they actually made things simpler, but they keep adding more and more layers of complexity.
What incentive is there for people to create these high-value open source tools if VC-funded companies like Docker and corporate-funded projects like Kubernetes simply wrap them, downplay them and hog the attention and funding? These kinds of projects basically cut down the tree they sit on for their own short-term benefit.
* Very good user and developer experience. Docker felt like git for containers. Docker run, docker ps and other commands are awesome.
* Very good and innovative build system. Dockerfiles are great and allow you to build complex containers and efficiently package software in a convenient way.
* Solving image distribution problems. Docker pull/push are great.
* Layering system leading to build- and run-time optimizations.
Overall, Docker not only provided a great UX but combined all the patterns above into a very expressive system; great job by the team.
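As a quick illustration of how little ceremony those patterns need in practice (the registry and tag names here are made up):

    docker pull nginx:alpine                    # fetch an image and its layers
    docker run -d -p 8080:80 nginx:alpine       # run it with a published port
    docker ps                                   # see what's running
    docker tag nginx:alpine registry.example.com/team/nginx:alpine
    docker push registry.example.com/team/nginx:alpine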
That's really powerful, and the surprisingly diverse selection of pre-built containers on the Docker Hub makes it even easier to bootstrap your project and figure the details out later. I don't have the necessary expertise to compare it meaningfully to other container systems so I can't speak to its implementation details, but the first-time user experience is more than enough to explain its popularity.
I was involved in making the decision about introducing containers to my workplace. My original recommendation was to just go with chroots, or to buy into something we could get vendor support for on RHEL6 (so not Docker). When asked to explain how this would benefit our desired outcome, the Docker website basically had copy-pasteable words that did that job for me; chroots, however, just described the technical implementation of a chroot.
So eventually I just went "fuck it, we'll run unsupported and take the risk, then we'll move N thousand servers to RHEL7 because I don't want to have to explain to managers how cgroups and Linux kernel constructs work". About a year later most of the enterprise has a plan to move all their existing EL5/EL6 workloads to EL7 just in case they need to use Docker some time in the future.
Which, if I'm honest, I kind of regret, but it doesn't impact me that much so meh.
Also, Docker just has so much more development and maturity at this stage that it makes deployments easier.
It has better UX and it's easier to use than the available alternatives.
Any improvement is hailed as great UX/DX even if the improvement is only marginal. Or if the improvement only touches on certain aspects and totally ignores all other aspects.
Docker is an improvement over the cesspool of tools that existed in the space before it. Its UX is OK at best, and is hailed as awesome because it just barely covers the basic needs of a developer: list a thing, build a thing, remove a thing. Outside of a handful of basic things it fails miserably and relies on multiple disparate hacks by desperate developers.
The only reason it's being praised is that all other tools are so much worse.
Once again, it's only because everything else is worse + dev mindshare has been captured by Docker
Jamie Oliver comes along and says "No, no! You're wrong, it's not a tasty cake and it looks terrible, it's just tastier and prettier than you've had before".
Does the fact that everyone, other than Jamie Oliver, is claiming the cake to be incredibly tasty and the best looking not by default make the cake both tasty and great looking?
Surely the proof is in the eating?
I've italicised the relevant part.
Let's say all the cakes they've ever had were baked dirt. Now Chef Ramsay has come along and baked soft dirt with sugar and sprinkles on top.
- Is it still dirt? Yes.
- Is it the tastiest and best looking cake they've ever had? Also yes.
- Is it not a tasty cake and it looks terrible, it's just tastier and prettier than you've had before? Yes.
This also goes to show that arguments from analogy are inherently false, including mine ;)
So you're saying docker is the best tool in this category?
Also, "Best tool" !== "amazing UX" etc. etc.
- *BSD jails
But it's not the same thing as the Docker "package".
Then just start using it. docker --help, look at the commands available, see what they do.
It truly is quite intuitive and as you hit some barriers just google for answers, which you will find (yay, popularity!).
Then create some Dockerfiles.
After a few weeks you should know more than enough to use it to cover local development needs. If you want to go to prod with it then that can get quite tricky, depending on your scale. But, baby steps :)
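Concretely, the first session can be as small as this (nothing exotic, just the standard commands):

    docker --help               # see what's available
    docker run hello-world      # pulls a tiny test image and runs it
    docker ps -a                # list containers, including exited ones
    docker images               # list images pulled or built locally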
UX lowers the friction level. It's more likely to be accepted by the simplest decision maker. This is why companies like Slack do so well.
It's a fantastic packaging and deployment format, but it's not a universal solution.
Everything I read is along the lines of "this is not secure, we think, but do not have any specific examples. But be careful"
in the past years they've been adding a lot of complexity though... :meh:
- Zones (Solaris)
- Jails (FreeBSD)
- OpenVZ (Linux)
- LXC (Linux)
- VServer (Linux)
- lmctfy (Linux)
- WPARs (AIX)
... and more; see https://en.wikipedia.org/wiki/Operating-system-level_virtual...
Don't think any of those other options had anything similar. The alternatives, like building and deploying apps with native OS packages (e.g. .debs) were significantly more complicated and less portable. (While it's not that Docker images are better than native packages, they're doubtlessly generally easier.)
The others are all more single-process isolated, but how many others have the image building/hashing/layering system of Docker?
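If you want to see that layering in action, any public image will do, e.g.:

    docker history nginx:alpine       # one row per layer: the step that created it and its size
    docker image inspect --format '{{.RootFS.Layers}}' nginx:alpine   # the content-addressed layer digests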
I personally wish Docker was more of a first class citizen on FreeBSD, but the official port was way out of date the last time I checked.
* have an image repo, image layering and versioning so that the community can contribute and reuse existing stuff? If not, that's already a big downside
* do they configure networking by default so that you can access the container? This is very nice for usability
Regular systemd that you probably need to know how to use anyway can launch things in namespaces and even use common image formats. Very useful.
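For example, something like this (from memory and untested; the tarball URL and names are placeholders):

    # pull a root filesystem and boot it as a container, no Docker involved
    machinectl pull-tar https://example.com/rootfs.tar.xz mymachine
    systemd-nspawn -b -M mymachine
    # or wrap an ordinary command in its own namespaces via unit properties
    systemd-run -p PrivateNetwork=yes -p ProtectSystem=strict /usr/bin/myservice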
lxc and jails both predate docker by quite a while.
"Pff, yeah right, no one rejects a technology just because the name has negative connotation in a different context", except that is literally what happens.
The product that runs Docker containers natively is called "Windows Server 2016", and this required a vast amount of work from Microsoft. Native Windows containers are not seen much in the wild.
Cloud Foundry has them for Windows 2012, though they're quite leaky. Windows 2016's support is much better and the resultant containers have much of what you want in terms of proper process isolation and dense utilisation.
Disclosure: I work for Pivotal, we contribute to Cloud Foundry. The Windows container team sits two desks over from mine.
It's like VM + hypervisor in one box.
Thinking this way, it's easy to understand why it's popular. It did things that no others had done.
You're using existing OS packages in your base (apk in Alpine or apt in Debian/Ubuntu) to pull in all your dependencies and then placing your app on top of that. (Or using the base go, python or ruby container and letting it use pip/bundle to pull in dependencies and put your app on top of that).
It only runs a single process, unlike a VM (or even an LXC container which is more similar to a VM).
Containers are (mostly) immutable, so you're typically not running security updates on the same thing that provides all your dependencies. You should rebuild your containers when there are security updates... but people rarely scan for those or do that, relying more on isolation and cgroups to keep those containerized apps safe.
There's very little tooling within the container, so when things go wrong, you're either execing into a single container and apt-getting all the tools you need (which you gotta do each time you rebuild it) or you need to use something like sysdig.
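i.e. the debugging loop tends to look like this (the container name and tools are just examples):

    docker exec -it myapp bash        # shell into the running container
    # then, inside it, pull in your debugging tools... again:
    apt-get update && apt-get install -y curl strace
    # all of which disappears the next time the container is rebuilt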
It's popular because you can create an easy to replicate artifact that doesn't rely on a ton of system dependencies with a really basic (although kinda primitive) package building system (Dockerfile) and that is way more lightweight than a VM while providing a pretty heavy layer of isolation.
The few who do are walking on eggshells. It's hardly possible to find two experiments running the same ecosystem between all the distributions and the commercial services: CentOS, Debian, Red Hat, CoreOS, Kubernetes, AWS ECS, Google Container Engine, Cloud Foundry, OpenShift...
I would like to see actual facts for the alleged performance benefits. :)
It just makes no sense to me to throw away an actual virtual machine for a mere abstraction of one.
If you mean as a container orchestrator, I have no idea. Honestly Mesos seems much better than Docker or Kube in this regard. It scales better, is more reliable, and is more flexible. The one downside as I see it is there’s a steeper learning curve.