- Fully embrace Kubernetes for orchestration
- Drop Swarm
- Roll Docker Engine back to its pre-1.12 scope
- Get on board with standardizing the image format, now
- Stop fighting Google and instead let them help you succeed
A Docker distro of Kubernetes would do very well in enterprise on-prem or private cloud environments. They already have a great developer experience. Companies will pay for support on both.
Continuing to oppose Kubernetes risks damaging the significant brand equity they've accrued as containers in production become mainstream.
K8s makes Docker commoditized. For example, I am running a GKE k8s cluster and I only use Docker directly to take a Dockerfile and produce an image. I push images to Google's replacement for Docker Hub, Google Container Registry. I run things locally via minikube. If GKE used something custom to run images, I would not even notice. I am close to the point where switching from Docker to rkt would be barely noticeable.
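That commodity workflow is just a handful of commands; a sketch, where the project and image names are placeholders:

```shell
# Build the image -- the only step that touches Docker directly.
docker build -t gcr.io/my-project/my-app:v1 .

# Push to Google Container Registry instead of Docker Hub
# (the gcloud wrapper handles registry auth).
gcloud docker -- push gcr.io/my-project/my-app:v1

# Run it on the cluster (GKE or minikube); the runtime underneath is invisible.
kubectl run my-app --image=gcr.io/my-project/my-app:v1
```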
Some things like minikube are somewhat bleeding edge, so maybe they are worth describing.
Consider Vagrant -- like Docker, it's essentially a wrapper around other people's facilities and libraries. Mitchell and co appear to be trying to escape the trap by moving into nearby areas of real value, but there's no meaningful way to monetize Vagrant no matter how popular it is.
Then it is indeed unfortunate that Docker is building infrastructure code which is tainted by the game theory surrounding the need for a unicorn valuation. Maybe that's the problem, instead of Docker itself.
Well, the VCs lost. Presumably, the people at Docker -- including the people who would do fantastically well with an exit that also made VCs happy -- are drawing paychecks, in some cases substantial ones, and will have in many cases done tolerably well (rather than have "lost") if no VC-friendly exit ever occurs.
I feel like this is just another step into the corporate OSS world which is a far departure of what a lot of the original OSS architects envisioned.
Trying to monetize OSS often leads to rushed features and bad decisions like in most traditional non-OSS/enterprise products. If you strip off the polish and the fancy website, you should still have a usable, well documented thing that people need. That may also be a product you can sell support for or host, but it doesn't have to be.
Unfortunately, there aren't as many grants and not as much developer free time to go around anymore.
Many people believe that the spirit of the GNU Project is that you should not charge money for distributing copies of software, or that you should charge as little as possible—just enough to cover the cost. This is a misunderstanding.
Actually, we encourage people who redistribute free software to charge as much as they wish or can. If a license does not permit users to make copies and sell them, it is a nonfree license. If this seems surprising to you, please read on.
What they don't mention is that, when every recipient of the software is free to redistribute it and compete against you, as much as you can rapidly becomes zero (or, at most, just enough to cover the cost of the most-efficient distributor), since free competition drives prices down to the marginal production cost.
Companies make money charging for free software all the time, and I really wish we would stop having to go through this discussion every time that free software shows up as a point. Companies have been making money from free software for more than 20-30 years at this point, and none of it required taking away the freedom of their end-users.
It's charging for support (not a support license, since the support contract isn't a license when the software is actually delivered under a Free license), because that's an alternative to charging for software; practically, you can't charge for Free software, for the reason discussed in the GP.
> Companies make money charging for free software all the time
No, they don't. They often make money charging for ancillary services related to Free software, which their involvement in contributing to the Free software may have positioned them to provide at an advantage to competitors, but not many make money charging for Free software.
The industry calls them support licenses (since generally you only support X machines running your software -- though of course you can have alternative models), so that's what I'm going to call them.
As for not being able to charge for Free Software: it is true that unless you have some value-add (preinstalling, or burning to physical media), you're going to have trouble selling the bits that make up a piece of software. But then again, why is that model taken as being the "right model", with the free software model being the "alternative"? In fact, many proprietary companies have the same model (Oracle will charge you for support too). How is it a better model to buy a set of bits and that's all you get: no promise of updates, no support if something breaks, nothing other than the 1s and 0s that make up version X.Y of the binaries? In fact, I'm having trouble thinking of companies that have such a model, because it's so user-hostile (even for proprietary software).
OSS is a path bound to failure.
On the one hand, going OSS means the project will never have the resources because it ain't charging money. On the other hand, it will never be able to charge money because people don't want to pay for something that's advertised as free. That's a vicious circle.
The average non-crap piece of software takes astronomical effort to make, and it's getting harder and more complex over time. A bunch of guys working for free in their spare time just don't have the resources to keep up.
P.S. When in doubt, remember that the difference between the great pyramid of Kheops and Microsoft Windows, is that Windows took more effort to make.
Free software refers to freedom, not price. Companies have been making money selling free software for more than 20-30 years at this point (even free software that is developed in the open).
Philosophically speaking, maybe. In practice, though, free software is considered because of pricing, not freeness.
Bring a PAID piece of open-source software up for comparison in front of your directors and/or fellow engineers. I've done that many times, with subscriptions from $1000/year to $3M/year.
Even the most devoted open-source aficionado will quickly reveal that he doesn't care about open source. All he thinks about is pricing.
In addition, I work for a free software company -- everything I work on is done in the open and all of our customers have freedom when they use our software. So I really do practice what I preach, and encourage others to follow suit.
You're mixing quite a few things. "Open source" never meant that, and was always geared towards corporations. Free software does mean providing "free as in speech" alternatives to things, but it never meant stopping people from charging users (you having to "pay crap").
It may not justify the valuation they want.. which is another issue.
Then you'd better not contribute to anything that isn't exactly GPL.
TL;DR is that we have a lot of corporate open source now for infrastructure, but the golden age OSS end products people hoped for in the 90's/00's never really happened.
rkt is getting into Kubernetes as an alternative runtime engine:
Back then it was, "It's sad to see so much drama; Docker engine has much more traction." Remember all of that?
The story is now, "hey, rkt lets us test out our framework for supporting alternative runtime engine".
There's a shifting attitude there. Goodwill from the community is eroding. I don't know how long this window will remain open, but it won't remain open indefinitely.
Also, Mesos has started shipping its own container runtime.
This feels like an inflection point in the history of containers.
Actually, we're working on getting generic OCI runtimes supported (runC being an example of such a runtime). In principle, you should be able to swap out the OCI runtime and still get Kubernetes to work with that runtime. It's an exciting time. :D
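As a rough sketch of what "swapping in an OCI runtime" looks like at the lowest level, here is runC run by hand, assuming a root filesystem has already been extracted into rootfs/ (e.g. from an existing image):

```shell
# Set up an OCI bundle directory.
mkdir -p mycontainer/rootfs
cd mycontainer

# Generate a default OCI runtime spec (writes config.json).
runc spec

# Start the container; any OCI-compliant runtime could do the same
# with the same bundle.
sudo runc run mycontainer-id
```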
When I read the article announcing Docker v1.12, it was brimming with enthusiasm. That's a red flag; this rarely happens in our industry.
Working on open source is not like working in a startup; it's not like riding a roller coaster; it's more like riding the metro - You spend most of your time underground and when you finally arrive at your destination, the slow ascent up the escalator is rarely that exciting.
Some early data here: https://twitter.com/dberkholz/status/720686568671940608
They could never do that and expect to survive as a company, as it would be leaving money on the table, so to speak.
We are really concerned about 'writing letters from the future' to the development community and trying to sell people on a set of solutions that were designed for internet scale problems and that don't necessarily apply to the real problems that engineers have.
I spent a lot of time early on trying to figure out whether we wanted to compete with Docker, or embrace and support Docker. It was a pretty easy decision to make. We knew that Docker solved problems that we didn't have in the development domain, and Docker provided a neat set of experiences. We had 'better' container technology in the market (anyone remember LMCTFY?), but the magic was in how it was exposed to engineers; Docker caught lightning in a bottle. Creating a composable file system and delivering a really strong tool chain are the two obvious big things, but there were others. I remember saying 'we will end up with a Betamax to Docker's VHS' and I believe that remains true.
Having said that, there are a number of things that weren't obvious to people who weren't running containers in production and at scale. It is not obvious how judiciously scheduled containers can drive your resource utilization substantially north of 50% for mixed workloads. It isn't obvious how much trouble you can get into with port-mapping solutions at scale if you make the wrong decisions around network design. It isn't obvious that label-based solutions are the only practical way to add semantic meaning to running systems. It isn't obvious that you need to decentralize the control model (a la replication managers) to achieve better scale and robustness. It isn't obvious that containers are most powerful when you add co-deployment with sidecar modules and use them as resource-isolation frameworks (pods vs discrete containers), etc, etc, etc. The community would have got there eventually; we just figured we could help them get there quicker, given our work and having been burned by missing this the first time around with Borg.
Our ambition was to find a way to connect a decade of experience to the community, and to work in the open with the community to build something that solved our problems as much as outside developers' problems. Omega (Borg's successor) captured some great ideas, but it certainly wasn't going the way we all hoped. Kind of a classic second-system problem.
Please consider K8s a legitimate attempt to find a better way to build both internal Google systems and the next wave of cloud products in the open with the community. We are aware that we don't know everything and learned a lot by working with people like Clayton Coleman from Red Hat (and hundreds of other engineers) by building something in the open. I think k8s is far better than any system we could have built by ourselves. And in the end we only wrote a little over 50% of the system. Google has contributed, but I just don't see it as a Google system at this point.
Kubernetes came around exactly at the time when people were trying to use containers for something serious, and things like overlay networking, service discovery, orchestration, and configuration became issues. For us, k8s, already in its early (public) days, was a godsend.
Also, I can confirm: the k8s development process is incredibly open. For a project of this scale, being able to discuss even low-level technical details at length and seeing your wishes reflected in the code is pretty extraordinary. It will be interesting to learn how well this model will eventually scale in terms of developer participation; I figure the amount of management overhead (issue triaging, code reviews) is pretty staggering already.
The main thing that I think makes Kubernetes the perfect example of a modern free software community is the open design process and how the entire architecture of Kubernetes is open to modification. You don't see that with Docker, and you don't see it with many other large projects that have many contributors.
I myself would feel more confidence in K8s if there were usage of K8s outside of GKE, and if the dogfood being created were eaten by the rest of Google as well (or at least there were a publicized plan to make that happen). Otherwise it feels more like K8s is an experiment that Google folks are watching, to see what happens before the rest of Google adopts it.
Kubernetes, Mesos, etc. appear to be great orchestration tools for an organization with a few [or many] engineers dedicated to operations. They're not so great for a small team [or individuals] who are just trying to deploy some software.
As I see it, Swarm seeks to solve orchestration analogously to the way Docker seeks to solve containers. Before Docker, LXC was around, and the Googles of the world had the engineer-years on staff to make containers work. Docker came along and improved deployment for the ordinary CRUD-on-Rails developer who just wants to go home at night without worrying about the pager going off.
To put it another way, it looks to me like the intent of Swarm is to provide container orchestration for people who don't run a data center. Like Docker, it is an improvement for those scaling up toward clusters not down from the cloud.
None of which is to say that moving fast with Swarm isn't a business strategy at Docker. There's a whole lotta' hole in the container market, and part of that is because the other organizations currently supporting development of container orchestration tools have business interests at a much larger scale. Google doesn't see a business case for pushing Kubernetes toward the Gmail end of the ease-of-use spectrum.
The desire to fork is based on the needs of the cathedral not those in the bazaar.
In my opinion, the opposite is true about Kubernetes (though it may be true about the Mesos stack). It's a great way to reduce the need for dedicated ops engineers.
Once you have a Kubernetes cluster running, it's essentially a "platform as a service" for developers. They can deploy new pods and configure resources all with a single client. You generally no longer need to think about the servers themselves. You only need to think in terms of containers. No package management, no configuration management (in the Salt/Puppet/Ansible/Chef sense), no Unix shell access needed.
It's hugely liberating to treat your cluster as a bunch of containers on top of a virtualized scheduling substrate.
Kubernetes itself is very lightweight, and requires minimal administration. (If you're on GKE, K8s is managed for you.)
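As a sketch of that "platform as a service" feel (the image name here is a placeholder), the whole deploy/expose/scale cycle is a few kubectl calls:

```shell
# Deploy an app as a set of pods -- no SSH, no package manager.
kubectl run web --image=example/web:1.0 --replicas=3 --port=8080

# Expose it inside the cluster (or via a cloud load balancer with --type).
kubectl expose deployment web --port=80 --target-port=8080

# Scale without touching a single server.
kubectl scale deployment web --replicas=5
```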
Swarm Engine is very much knocking off Kubernetes' features. Swarm offers the convenience of building some orchestration primitives into Docker itself, whereas Kubernetes is layered on top of Docker. Other than that, they're trying to solve the same thing. I'd say Kubernetes' design is superior, however.
There aren't a lot of "Here's an API to throw up your containers and provision IPs" services out there. I see that Amazon has one (I avoid Amazon; they're the new Wal-Mart), but no one else really. I mean DigitalOcean provides CoreOS, so in theory you can provision one of those droplets and toss up some containers, but there's not a real API for "use this endpoint to deploy your container."
In the corporate world, yes: we have devops teams to try out and create provisioning systems that throw up Mesos or CoreOS or OpenStack clusters. Once they're up and your devs are using them, they're lower maintenance than provisioning individual servers for individual services. For home devs, it'd be nice to have a plug-and-forget for Docker containers (other than Amazon).
The other thing I still don't get about Docker: storage. MySQL containers have instructions for mounting volumes, but it doesn't feel like there's a good, standard way to handle storage. I'm sure that makes it more versatile, but like I said previously: plug-and-forget for home devs and startups. If you want a database as a service, get ready to shell out $80/month minimum for Amazon RDS, or create a set of scripts around your Docker containers that ensure you have slaves and/or backups.
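For reference, the usual (but ad hoc) volume approach those image instructions describe looks like this; the host path and password are placeholders, and backups/replication are still entirely up to you:

```shell
# Bind-mount a host directory so the data survives container restarts.
docker run -d --name db \
  -v /srv/mysql-data:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=changeme \
  mysql:5.7
```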
> a real API for "use this endpoint to deploy your container."
I've been using Triton since as early as possible and it's fantastic.
You can tag/label containers and they show up in the SDN DNS by tag (called CNS), no need for Consul or other service discovery.
I do still use Consul for many reasons but CNS is awesome for bootstrapping a cluster as well as public dns routing/failover.
Also true about Docker Swarm.
> There aren't a lot of "Here's an API to throw up your containers and provision IPs" services out there.
I strongly believe this (in particular, "hosted Kubernetes as a service") will start arriving soon. It's a great business opportunity.
> The other thing I still don't get about Docker: storage. [...]
Kubernetes solves this. For example, if you're on GCE or AWS, you can just declare that your container needs a certain amount of disk capacity, and it will automatically provision and mount a persistent block storage for you. Kubernetes handles the mounting/unmounting automatically. Kill a pod and it might arise on some other node, with the right volume mounted.
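A minimal sketch of such a declaration (the name and size are placeholders); with a dynamic provisioner on GCE or AWS, the claim is satisfied with a real disk automatically:

```shell
# Declare a claim for persistent storage; Kubernetes finds or provisions
# a matching volume and keeps it attached to whichever node runs the pod.
kubectl create -f - <<'EOF'
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: db-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
EOF
```

A pod that references `db-data` as a volume then gets the same disk mounted wherever it is scheduled.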
Either run a seperate vps for MySQL or use a hosted solution.
Containerization will get there, but I think putting some types of old-school apps in containers is a bad idea.
E.g. Redis, Beanstalk, and similar fit nicely into clusters and tolerate restarting nodes without much issue or downtime.
MySQL, if your install is large enough, is something you never want to go down. That pretty much goes against everything to do with containerization, and your developers also gain no benefit from it being in a container (apart from maybe running a small db in development only, where provisioning a VM would be too painful).
Maybe that's the difference between a product and a solution. I don't know.
Kubernetes admittedly takes a slightly different, modular, layered approach, whereas Swarm is simple to the point of naivety.
This simplicity is potentially a threat to its future ability to adapt to different requirements, whereas Kubernetes offers a separation of primitives that allows it to scale from "simple" to "complex".
For example, in Kubernetes, a deployment is a separate object from the pods themselves. You create a deployment, which in turn creates a replica set, which in turn creates or tears down pods as needed.
But you don't really need to work directly with replica sets or know exactly how they work, but they're there, and can be used outside of a deployment. If all you care about is deploying apps, then you only need to deal with the deployment.
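That layering is easy to see from the client (the image name is a placeholder): you create one object, and the lower layers appear, inspectable but rarely touched directly:

```shell
# Creating a deployment implicitly creates a replica set, which creates pods.
kubectl run web --image=example/web:1.0 --replicas=2

kubectl get deployments   # what you asked for
kubectl get replicasets   # created by the deployment
kubectl get pods          # created by the replica set
```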
Exactly. To me, that approach is what allowed Docker to make containers on the laptop ubiquitous. Kubernetes is unlikely to be the tool that takes schedulers to that universe. Swarm might and I think that's the goal.
I mean, I don't really want to care about pods and replica sets. They're obstacles between me and what I want, when what I want is more horsepower behind an app. For the same reasons, I'm probably better off with garbage collection than malloc. I've only got so many brain cells.
I'm sure Swarm has its place, though.
(1) Write your own custom deployer https://github.com/matsuri-rb/matsuri
(2) Write your own controllers https://deis.com/blog/2016/scheduling-your-kubernetes-pods-w...
Those two are just examples. My point is that if you need to dig in, you can. You can do this because those higher-level behaviors are built on a solid foundation of well-thought-out primitives.
What I expect to happen is that we will have a diverse set of things built on top of Kubernetes. Some of it will get folded into Kubernetes core. Some will not. I think the center of gravity for containers has been shifting towards Kubernetes.
My experience of K8s so far is that GKE is the happy path, and most of the demonstrations/documentation are focused on that use case. When you step off that path, you can either go for something scripted which does a lot of things in the background, or what seems like quite an involved manual setup (etcd, controller node setup, certificate setup, networking, etc.).
We _absolutely_ want to push Kubernetes to be "gmail-easy" - here's a recent demo:
If you don't feel like watching a video, here are the commands to get a fully production-ready cluster going:
master# apt-get install -y kubelet
master# kubeadm init master
Initializing kubernetes master... [done]
Cluster token: 73R2SIPM739TNZOA
Run the following command on machines you want to become nodes:
kubeadm join node --token=73R2SIPM739TNZOA <master-ip>
node# apt-get install -y kubelet
node# kubeadm join node --token=73R2SIPM739TNZOA <master-ip>
Initializing kubernetes node... [done]
Bootstrapping certificates... [done]
Joined node to cluster, see 'kubectl get nodes' on master.
If you have thoughts on how we should do it better, PLEASE join us! https://github.com/kubernetes/community/tree/master/sig-clus...
Obviously there's Kelsey Hightower's great Kubernetes the hard way tutorial, but even there some details are quite GKE specific.
It'd be nice to see some tutorials that make no assumptions about running on a specific cloud provider or using automated tools to bootstrap the cluster.
My suggestion is to design an interface for Kubernetes in ways less reflective of Conway's Law. That's how the needle moves from 'solution' toward 'product'. In essence, that's what I think 'Gmail easy' boils down to: Gmail has user stories that are based on people (much)+ less technically sophisticated than a fresh-from-school Google SRE.
I think that the call to action suggests why Docker is pursuing Swarm.
In a decade of container orchestration development, creating a good onboarding experience with the diverse demographic of Docker users has never been a priority within Google. If it had been a priority, Gmail-easy Kubernetes integration might have long since been a pull request from Google. That's what it says right on the label of the open-source software can.
The talk about forking Docker seems consistent with the absence of pull requests. It's just another business decision.
Deployment of a container to a node is roughly an atomic transaction: if it fails due to a network partition, server crash, etc., the container can simply be deployed to the node again. By comparison, a partially executed provisioning script can leave the target node in an arbitrary state, and what needs to happen next to bring the node online depends on when and how the deployment script failed, the nature of any partial deployment, and whatever state the server was in prior to deployment.
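A rough sketch of the difference (the service and image names are hypothetical): because each container deploy is all-or-nothing, the recovery logic collapses to a dumb retry loop, which no provisioning script can claim.

```shell
# Retry until the container lands; each attempt is all-or-nothing.
until docker run -d --name myservice example/myservice:1.0; do
  docker rm -f myservice 2>/dev/null || true  # clear any half-created container
  sleep 5
done
```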
But if you use your production deployment scripts for development deployments, then that problem has been dealt with one way or another.
We're still hesitant to deploy Docker in production, though, but it's on our list. For now we rebuild the production environment from the recipes in the Dockerfile and docker-compose.yml.
Before that we were using VMs with Vagrant. I never really liked how resource-intensive and slow they were. And without VMs, I also don't see the need for Vagrant anymore if I can use Docker directly.
So after 2 years of using docker I'm really very happy with our development setup.
It's just adding yet another layer to the stack, so another interface to learn. And if things don't work, you have to deal with the "low-level" Docker setup anyway.
Seriously, I've never looked back at Vagrant ever since.
But, that goes against the main tenet of hype-driven development. If it is on the front pages of HN, CTOs should quickly scoop up the goodies and force their team to rewrite their stack without understanding how the technology works. Things will break, but hey, they'd be able to brag to everyone at every single meetup about how they are using <latestthing>. Everyone will be impressed, and it also looks good on the resume.
/s (but I am only half-joking, this is what usually happens).
I also like using multiple vendors for a tool stack. Helps keep concerns separated.
For internal apps, that might be way better and cheaper than to run on AWS or GCP.
On the other hand, I don't think that roll-my-own shell scripts are going to solve the problem Swarm/Kubernetes/Mesos address -- scheduling -- as well as those tools will. The reason I think that is that many scheduling problems are NP-complete*, and dynamically optimizing a scheduler for a particular workload is a non-trivial algorithmic challenge even at the CS PhD level.
* Wikipedia lists five 'scheduling' problems: https://en.wikipedia.org/wiki/List_of_NP-complete_problems
I disagree. I worked with Kubernetes on a small team, and its core abstractions are brilliant. Brilliant enough that Docker Swarm is more or less copying the ideas from Kubernetes. Even so, Docker Swarm just isn't there yet.
> As I see it, Swarm seeks to solve orchestration analogously to the way Docker seeks to solve containers.
That's how I see it too. However, multi-node orchestration is MUCH harder. The Kubernetes folks know that. Setting up K8S from scratch on a multi-node setup is still difficult, and the community acknowledges it. Contrast that with the way Docker is handling it. Docker seems to be trying to sweep difficult issues under the rug. Look at the osxfs and host container threads on the Docker forum as examples. People had to ask the developers for more transparency before it started showing up.
I don't blame Docker for how they got to "sweep difficult issues under the rug", since that is part of their product design DNA. It's just that things are coming apart at the seams.
> Google doesn't see a business case for pushing Kubernetes toward the Gmail end of the ease of use spectrum.
That's correct. That is what GKE is for. On the other hand, that is also what Deis/Helm is for. Turning the project over to the Cloud Native Computing Foundation, listening to the developer community... there are a lot of things Docker can learn from the Kubernetes project when it comes to running an open-source project.
> They're [Kubernetes, Mesos, etc.] not so great for a small team [or individuals] who are just trying to deploy some software
Since the late 90's/early 00's, when Linux won the data center, most people have come to really enjoy the "we don't break userland" motto.
Once a kernel interface went live, it stayed that way, ugly spots and all. Containers are starting to become a fairly important part of IT/cloud infrastructure, easily comparable to the OS itself. Logically, those involved with maintenance would demand the same.
Yes, I'm aware Docker is more a control program for interacting with cgroups, setting quotas, installing packages, and isolating processes, not the OS itself. But it is an abstraction over the OS, hence for most developers it feels like part of the OS. So logically they'd demand it be as robust as the OS.
These days some new technologies become "trendy" (on HN and elsewhere) and start being used by developers. That in itself is great, but many of those developers are also young and inexperienced and do not realize that production systems have different requirements.
There are many symptoms: the docker situation, npm (need I say more), libraries like Semantic UI that can't be built in a CI environment from the command line (require user interaction). Even small things, like tools that change their behavior based on files in one of the parent directories (npm again, but not only), or the proliferation of fancy progress bars and useless drivel being spewed to the terminals, are symptoms. Those are tools designed by developers working on their laptops, for developers working on their laptops, and do not (at this stage) fit the requirements of a production server environment.
Maturity and stability are valuable traits in software, especially in larger systems.
We see these fads come and go, wait for the dust to settle and use what is actually mature to be deployed in real production environment.
It also helps that our customers don't sell software; as such, their view of what a technology stack is supposed to look like is quite different from what the typical Starbucks developer thinks about software.
Selling software to people who use software vs. those who develop software is a vastly different experience. So are the questions and rigor they'll put a consultancy agency through, as well as what they expect out of a product.
>We see these fads come and go, wait for the dust to settle and use what is actually mature to be deployed in real production environment.
Yup, nothing like Oracle SQL plus a Java app driven by Tomcat or Spring. Hell, you can even get an Oracle rep to appear in your sales pitch. The suits really like that.
I personally think using it was the right call regardless (the alternatives are worse in the long run), but it's such a massive upfront investment, I couldn't recommend it.
One thing though:
> tools that change their behavior based on files in one of the parent directories
You mean like git? I agree with your general sentiment but you're backing it up very weakly, with essentially irrelevant things. Terminal interactivity is not a bad thing; TUI software is built for humans as well as machines and well behaved components will detect when they are not in a tty.
> You mean like git?
In case of interactive tools like git it is something you would expect. But it came to me as a rather huge surprise that if you type "npm install" in a directory where you just unpacked sources, what will actually happen depends on what exists in the PARENT directory. Not to mention writing to the parent directory.
I am not alone in being caught off-guard by this: https://github.com/npm/npm/issues/2896
I'd say this is unacceptable in case of tools that are "make-like".
When I use git in a subdirectory of my project, it looks for .git/ up the directory tree until it finds it.
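That upward search is easy to observe with a throwaway repo:

```shell
tmp=$(mktemp -d) && cd "$tmp"
git init -q
mkdir -p deep/sub/dir && cd deep/sub/dir

# git walks up the tree until it finds .git/ -- three levels up, here.
git rev-parse --show-toplevel   # prints the repo root, not the cwd
```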
The state file is atrocious, a constant source of pain. It is hard to set up in the first place, it is hard to import resources into it, it is hard to rename resources within it.
Sometimes, resources don't canonicalize correctly and will always tell you there's something to modify, even though there isn't. Other times, resources don't destroy properly because you are using some untested settings within them.
"It's alpha software" is the best description I can give. It has a ton of rough edges.
The pros: It's decently fast, and a lot more workable than the alternatives (cloudformation or simply tracking stuff by hand). It supports more than just AWS, including fairly obscure stuff like Cloudflare DNS records. It's conceptually solid.
Don't say no to it outright, but you should know what you are getting into.
Btw, Terraform 0.7 comes with initial support for import and state manipulation commands.
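For reference, those 0.7 commands look like this (the resource addresses and instance ID are placeholders):

```shell
# Import an existing, hand-created AWS instance into the state file.
terraform import aws_instance.web i-0123456789abcdef0

# Rename a resource within the state without destroying it.
terraform state mv aws_instance.web aws_instance.frontend

# List what the state currently tracks.
terraform state list
```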
Also you mentioned that Terraform was better than the alternatives. Could I ask what other alternatives you looked at and why you ended up choosing Terraform over them?
The reason I ask is that I am also considering Terraform. However other providers also seem to support more than just AWS and are fast enough.
Use the AWS API and your config management API (Chef) to manage the state of the systems.
That is pretty much it right there.
I've yet to see another solution that beats it. Warts and all, it's the best I've seen.
"You old developers just don't get it! The young are building the future! We LIVE on the bleeding edge, man. Who cares if there's a few bugs, that's what you MAINTENANCE developers do, fix the bugs that are beneath me. Besides, I'm out in like six months anyway. On to my next gig."
There are two extremes. The one you point out leads to developers with lots of breadth and no depth. But the other is still the industry standard today: one that values depth in a particular (and possibly completely irrelevant outside of the company) domain.
I hope we keep moving towards the middle of the two.
Yes, this happens and yes it is very immature.
I think that's because the entire industry devalues both QA/testing and maintenance. Even in places that aren't the Valley these two roles are given a lower social status than other developer roles. The budgets and recognition for teams in both domains is typically much smaller than others, and QA especially is usually among the first on the chopping block during a retrenchment.
We as a whole have made these roles seem less valuable and less valued, so to me it's not surprising that folks don't want to do them.
If this went bad, you'd see something like MongoDB happen.
What did we see? There's still buttloads of software being written whose authors flat out refuse to support a sane database.
Also, it does seem a little odd to me to see people suggesting that Docker needs to be more stable and "boring" (from this article https://medium.com/@bob_48171/an-ode-to-boring-creating-open... referenced in the main link) to fit in with other projects in this space like Kubernetes, when most or all of these projects, Kubernetes included, are moving just as fast as each other...
I don't think people are suggesting that Docker Inc. should be boring. The criticism is more targeted towards Docker Engine, the container runtime.
Stability is more an attribute of mature technologies like virtual machines, which have been heavily used for quite a long time now...
> Spokespeople from ... CoreOS denied any knowledge of Docker-related talks.
I believe an independent project managed by a board of shareholders would be better for us all.
Also, rkt has had significant external contribution like the virtual machine isolation work from Intel. We have spun out independent projects like the Container Networking Interface (CNI) that is now used in Kubernetes, Cloud Foundry, and Mesos.
Stable and robust components are necessary to make this ecosystem work for people operating them. And we built rkt to help the entire ecosystem (and yes ourselves) be successful with that. The product strategy ends there. If a community of folks felt putting rkt into an independently managed foundation would help achieve that goal we would be happy to do that work with others.
But, I think the project has a good track record of working with others and focusing on a narrow feature set.
Whilst I'm more familiar with Docker than rkt, it seems that both projects are still changing quite quickly and have a lot of movement in terms of features being added and changed...
That said there were containerization efforts in this area in the past (e.g. LXC) which never really seemed to attract the same kind of traction as Docker.
Ultimately, if people want to create containers, it's entirely possible to do so with plain Linux commands and no upper-level software at all; it's just that Docker makes it much easier/nicer to achieve :)
It's absolutely possible (see: Bocker), but not a very productive way to go about things. It's much more productive to take an existing technology and make it better than to start from scratch (yet again), especially with permissive licenses like those used in Docker Engine.
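To make that concrete: on any Linux box you can see the kernel primitives that Docker (and toys like Bocker) wrap. Each entry under /proc/self/ns is a namespace of the current process; clone(2)/unshare(1) over these, plus chroot and cgroups, are the raw material of a container. A sketch, assuming a Linux system:

```shell
# List the namespaces (mnt, pid, net, uts, ipc, ...) the current
# process belongs to; a container is just a process given fresh ones.
ls /proc/self/ns
```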
Not sure I can think of a case where a fork has been successful without a decent percentage of the original developers supporting it.
nova-lxd adds lxd as a "hypervisor" choice for openstack, but it's a completely optional way to use it.
Try it on ubuntu right now: sudo apt-get install lxd; lxc list (yes, the client for lxd is called lxc, it's kind of confusing. Sorry.)
Disclaimer: I work for Canonical, but not on LXD (I use it daily outside of openstack though)
The tiny number of existing early adopters can be made to adopt something else one more time.
One of the reasons to avoid a fork is the confusion caused by two similar-but-not-quite-the-same implementations that slowly diverge over time.
rkt is unlikely to just copy the feature set of Docker, but they provide competition in implementing new ideas and as it's not intended to be identical there's less confusion.
I don't think a fork will help - I think a fork would make sense if there were concerns about Docker's level of 'openness' but I think that's not the problem here.
Forking/duplicating a technology whose main premise is being "a single consistent environment for running apps" sounds like a contradiction to me.
Ultimately, forking is what happens when the contributors can't work with the maintainer, and it seems pretty clear that Docker, Inc. can't act as responsible maintainers.
- Developers interested only in Docker Engine would consider using it for higher reliability, fewer breaking and rushed changes, and no force-bundling of things they don't want.
- If enough developers are using it, Docker Inc. would be compelled to maintain compatibility to avoid "mainstream" Docker ending up as the tool only for dev/CI.
So Unix philosophy?
That's not an indictment of systemd... it's just an example of how I'm not sure everyone has glommed on to the fact that, just as GNU isn't UNIX, neither is Linux.
Then we had the whole dot-com bust that freed up a whole lot of hardware to run LAMP stacks on, and things really got rolling.
Concerning composability and interoperability I'm with you, and I think it's a shame that Docker decided to build their own stack instead of improving LXC.
UNIX had this problem and look how long it took before things settled. Linux was the result. Maybe this is a good thing?
In the 90's we had the UNIX wars.
Nowadays we have the GNU/Linux wars.
Each distribution does its own thing and every disagreement leads to yet another fork.
What I do mind is how existing, well-established distros are being hijacked by starry-eyed techies rather than spinning off their own proof of concept that others can adopt if they want to or not.
Especially when said hijacking happens not overtly but subversively, by tightly integrating previously separate projects to create a TINA ("there is no alternative") situation.
I upvoted your question because as far as I can tell, you are the only person here to ask it.
The real sticking points were distro package managers and odd file placement differences (mostly regarding where to put the rc script to have things start on boot).
This meant you could not just put a package out there and tell some clueless web monkey to go curl it straight into production as root. And IMO that was a good thing.
The wide acceptance of systemd was an example of the convergence of the main distros. I didn't mean you to infer it was some sort of prerequisite.
If you're using CentOS 6 you're stuck with Docker 1.7. There are a lot of enterprise companies out there (I'm looking at you big banking) that aren't ready to move to CentOS / RHEL 7 and trying to get stable usage out of Docker 1.7 doesn't "Just Work" in my experience.
Anyone here use rkt with CentOS 6??
Whoa, citation needed! Not activating swarm features has the same probability of causing problems as relying on fork() to create a process. Maybe not, but I don't see Docker suddenly forcing everyone into using swarm. It seems unreasonable to even suggest this, and a bit of a scare tactic.
That said, it's a hard problem, and I certainly don't have the time to work on it myself; nor can my employer spare me to work on it either.
1. Who is running Docker in production?
2. If you are running Docker in production, what sort of money are you making with it?
For example: I ended up writing this to ask the Docker team for more transparency on this issue: https://forums.docker.com/t/file-access-in-mounted-volumes-e...
And they responded with an awesome reply addressing it: https://forums.docker.com/t/file-access-in-mounted-volumes-e... and it went a long way towards helping the community understand the issue and what to do about it.
However, there are also threads like these that ask about the same issue:
They kinda left the community in a limbo here, and quietly added a line in the documentation saying it won't happen. But without the transparency, we don't really know what's going on here.
Back then, with the rkt split, Docker's design was geared towards users so that there was as little friction as possible. It worked all right when it was just Docker Engine on Linux. Over time, due to differences between distros, you can see the container abstraction leaking here and there. Generally manageable.
In the 18 months since, it's become clearer that Docker is drunk on their own story. More and more of the leaks from the abstraction seem to be getting swept under the rug while they make a land grab for orchestration. Yet Docker is doing it in a way that sacrifices the goodwill of the community. It starts with the third-party vendor relationships, but as you can see from these forum posts, it is beginning to leak into the relationship with end users as well -- the developers.
There's still time to turn the (ahem) ship around. But a big part of what is driving Kubernetes's success isn't that its abstractions are brilliant; rather, that project is very transparent and communicative with the community about what it is doing and intending. I get that Docker is trying to do that with Docker Swarm, yet I think they missed a critical part of why and how Kubernetes gained so much traction so quickly.
Could you elaborate on that, please?
I'll be damned if I understand why someone would continue to cling on to broken software when there is a working alternative. Can someone explain this irrationality in terms which I can understand?