Docker reminds me a lot of the PKZIP utilities. For those who don't remember, back in the late 80s the PKZIP utilities became a kind of de facto standard on non-Unix systems for file compression and decompression. The creator of the utilities was a guy named Phil Katz, who meant to make money off the tools but, as was the fashion at the time, released them as basically feature-complete shareware.
Some people did register, and quite a few companies registered to maintain compliance so PKWare (the company) did make a bit of money, but most people didn't bother. Eventually the core functionality was simply built into modern Operating Systems and various compatible clones were released for everything under the sun.
Amazingly the company is still around (and even selling PKZIP!) https://www.pkware.com/pkzip
Katz turned out to be a tragic figure http://www.bbsdocumentary.com/library/CONTROVERSY/LAWSUITS/S...
But my point is, I know of many, many (MANY) people using Docker in development and deployment, and I know of nobody at all who's paying them money. I'm sure they exist - they make revenue from somewhere, presumably - but Docker is basically just critical infrastructure at this point, becoming an expected part of the OS, not a company.
It has Let's Encrypt built in, along with a few options for file storage, including AWS/gcloud.
Getting up and running with your own registry shouldn't really take longer than half an hour. The config docs for the registry image are really helpful with this too.
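For a rough idea of what that half hour looks like, a minimal single-node setup with the official registry:2 image and local filesystem storage is something like this (hostname and paths here are just placeholders; Let's Encrypt and S3/GCS storage are turned on through the registry's config file or REGISTRY_* environment variables as described in those docs):

    docker run -d --name registry --restart=always \
      -p 5000:5000 \
      -v /srv/registry:/var/lib/registry \
      registry:2

    # then from any machine that trusts the registry's TLS cert:
    docker tag myapp:latest registry.example.com:5000/myapp:latest
    docker push registry.example.com:5000/myapp:latest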
But for a company it's almost always cheaper to pay for the service.
That way you don't have to deal with maintenance, setting up redundancy, mirroring, etc.
That said, aws, google and quay are all competitors in this space.
For an example of just how good their image is:
I set my company's up almost a year ago and have had to deal with it about once.
The docs cover an HA setup (if you really need it - most won't), data backups aren't really needed if you're using a storage backend, and I just use the local file system - absolute worst case I have to wait half an hour to rebuild and push everything.
The downside risk of running your own is very low. The upside is a few bucks and you have your own private registry local to your network.
There are a lot of heavily used images out there with a bus factor of 1.
We also have a few issues about this, see https://gitlab.com/gitlab-org/gitlab-ce/issues/25322 and https://gitlab.com/gitlab-org/gitlab-ce/issues/20247.
I hope it helps!
I also contribute to Docker and other container technologies, and I cannot express in words how horrifically long the Dockerfile-based build and integration testing process takes. It takes about an hour in America, and more than double that in Australia.
If the creator of "ls" or "cd" asked for donations, would you even realize? If Docker asked for donations, would you realize?
All Docker images are glorified tar layers that are compressed with gzip. By definition, all users of Docker are users of gzip and tar. I understand what your point is (it's not as visible) but I don't agree with saying it's "core to more businesses".
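You can see this for yourself: docker save just streams out a tar archive whose contents are the layer tarballs plus some JSON metadata. For example, assuming you have the alpine image pulled locally:

    docker save alpine | tar -t   # lists manifest.json plus one layer.tar per layer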
Also, if someone is using Kubernetes they soon might not be aware (or even care) whether they're using Docker thanks to the CRI.
Definitely. I was a little unclear, I meant core as in visibility, which you pointed out.
Also, about Kubernetes: I can't comment on that because I've never used it (but I've heard of it). I think it's possible that there are a lot of businesses out there (such as the one I work for) that use Docker but not Kubernetes.
The more visible an open-source project is, the more likely people are to donate to it. I could definitely see companies supporting docker in the same way that they support the Linux foundation or various Linux distros.
Things are only valuable if they're scarce and in demand.
I'm old enough to remember PKZIP, ARJ and many other tools.
In all these years, there was only one employer that ever bothered to contribute anything back from what they got for free.
Does anyone have an insight into this?
Looks like Github's last valuation was at 2B USD. That also seems high, but I can understand this somewhat better as they have revenue, and seem to be much more widely used/accepted than Docker. In addition to that I can see how Github's social features are valuable, and how they might grow into other markets. I don't see this for Docker...
I think that addition to Linux paved the way for containers to enjoy wider adoption than was previously possible with other less popular container tech in OSes like Solaris (zones) or BSD (jails).
But my general point is that people talk about Linux as being out-of-touch when it comes to the history of containers, but if you look at the timeline that simply isn't true.
If containers impact VMware, it's because they are free and most of the revenue will not flow to docker.
Which is docker for vsphere.
If anything I'm surprised that their valuation isn't higher.
I've been using Docker since early 2014 and they haven't gotten a dime from me. If you count the storage and bandwidth on the public registry I've cost them money.
Where does the money go?
What you described is a services/support company, something Red Hat has worked really, really hard not to be over the years (to a debatable degree of success). Being there for support vs. OSS is not how you get great margins.
Anyway, at this level of the stack in prod settings brand won't take you that far.
That's simply not true. Having a support team they can call is, rightly or wrongly, a large part of the reason some companies consider buying Docker Enterprise Edition at all.
Exactly. Build your docker image with the free open source tools then push it onto Azure, for example. How does Docker the company see any revenue here? Or if you are running on-prem with DC/OS in a private cloud. I don't know anyone using Docker's own cloud, and why would you? They need to sell either services or an "Enterprise" version that is better than what you can do without them. I think containers are definitely the future, but I also see containers as being just a commodity, no more exciting that Makefiles and RPM are now. The money will be in running them, and Azure, AWS et al will have that stitched up.
Also, a lot of the value of Github is social. You can only get that (after a certain point) by paying Github money.
By contrast, you can get 100% of the value of the Docker community without paying a cent to Docker, Inc.
Wrong. The built-in Docker Swarm currently has the easiest UI for container orchestration (there is still stuff which could be done better, though). This, paired with sensible defaults and batteries included such as a load balancer, makes Docker the clear winner, and apparently nobody has been able to replicate the UX. I know k8s has a bigger market share but it is also way more complex.
Edit: Why the downvotes? Afraid that your k8s know-how will drop in market value? Please reply with valid counter arguments instead of this maddening silent downvoting. If I was wrong let me know where and why.
Honest answer: It's rude to open with the one-word sentence "Wrong." You compounded it by implying that anyone downvoting is doing so in bad faith. Neither is a good look. The rest of your comment was fine.
> Please reply with valid counter arguments
Sure! So, Kubernetes is Google's attempt to make real, full-strength Google-ish infrastructure "as simple as possible, but no simpler". This kind of infrastructure is really hard, so "as simple as possible" is still quite complicated. This makes k8s a pain in the ass to understand and use.
Docker swarm comes from the opposite end - it's dead simple to use, and seems to be aiming for the 80% use case. After all, most companies are not Google, and can work with a less complicated solution that offers a "Just Push Go" experience. The downside is that it's less flexible and less robust. (I also get the distinct sense that the engineering was rushed. But that can be fixed if it stays popular for long enough - eg I hear MySQL is decent these days.)
The potential problem, as kuschku is pointing out, is that the bigger, more Enterprise-y and more lucrative customers become, the more likely they are to want the power and robustness of Kubernetes. This presents an existential threat to Docker Inc. They could end up fully commoditised, building a vital platform that provides tons of value, but which they can't charge for because all the big support contracts go to Kubernetes Managed Services Providers or whatever.
Kubernetes is the magic sauce, not Docker.
Is Docker perfect? No. Swarm has been an absolute disaster. API stability has been dramatically undervalued. There is room for containers to grow, and their power is nowhere near tapped out.
There's still a lot of room for improvement in the containerization space, and Docker is going to drive that, whether the Kubernetes crowd likes it or not.
Kubernetes is amazing and gives us a real chance to have an operating system for the data center.
Kubernetes itself, despite years of investment, is still ridiculously difficult to install. It still has adoption that's at least two orders of magnitude below baseline Docker. The learning curve is still way higher than it should be.
Frankly, I hope that all involved get their heads out of their asses and build something great rather than continue to muscle in on each other. There is no reason for this pissing match given that it's the combination of Borg and Docker that makes solutions deployed on top of Kubernetes amazing.
You're also starting to see those startups realize how much money they're wasting on cloud services once they hit scale. It is EXTREMELY expensive to do cloud if you're even remotely efficient with your infrastructure unless you've got extremely bursty and unpredictable workloads.
Most don't. That is the value.
It's also a lot easier to optimize and save money in the cloud.
However, once you hit significant scale, the lessons from the operational experience of all of the major firms have been pretty consistent: it's cheaper to operate your own data centers than it is to outsource them.
From the perspective of a Fortune (checks list) 15 company: AWS saves us a fortune. No facilities costs all over the world with power/real estate/lawyers to handle local laws. No data center engineers across the globe. A solid discount on list. Consistent bills (thanks to judicious RI buys) and servers that are available in minutes - not weeks (have you ever seen enterprise IT ticketing practices?!)
If we had two orders of magnitude fewer employees/servers/locations AWS wouldn't make sense. But at this scale nothing else makes sense.
AWS was likely just a way to overhaul inefficiencies in a legacy IT org. Someone will be able to do the same thing in 5 years moving you from AWS back to self-hosted.
But some companies get so big, it actually makes sense to build their own network with their own custom tech and, yes, abandon the cloud. Amazon and Google and Microsoft can keep cloud prices low, thanks to economies of scale. But they aren't selling their services at cost. "Nobody is running a cloud business as a charity," says Dropbox vice president of engineering and ex-Facebooker Aditya Agarwal. "There is some margin somewhere." If you're big enough, you can save tremendous amounts of money by cutting out the cloud and all the other fat. Dropbox says it's now that big.
I think hybrid totally makes sense, where you move the most expensive part to bare metal with a limited scope of maintenance.
1 PB is $23k per month on S3. It's nothing. That's barely the costs of 1-2 employees in SV.
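(Back-of-the-envelope: S3 standard storage was on the order of $0.023/GB/month at the time, so 1 PB ≈ 1,000,000 GB × $0.023 ≈ $23,000/month, or roughly $276k/year.)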
The migration itself would take a lot more effort than one dude, even if there was a solution for completely free storage out there, the migration could only result in a huge net loss.
Bandwidth is almost always cheaper at a colo. Most compute instances are cheaper to buy and rack if you have continuous loads. Disk ... is tricky.
It is faster and has lower up front costs to get clouds up and running initially. For most startups who are going to fail, that's Good Enough(tm).
If, however, you continue existing for a while, the other things start to add up at much lower levels that you would expect. I'd say the crossover is around when you are spending about $15,000 per year. Your colo is about $10,000 of that per year and you can rack 5 new machines every year for the remaining $5K. That's not that much for an actual business.
Cloud is good for your initial startup and for bursty situations. Once you have continuous loads, you need to be moving to pulling stuff back to your own hardware.
It's a real myth that the cloud magically saves me a sysadmin. I have found almost exactly the opposite. Using the cloud effectively takes more time and more expertise. Debugging the cloud effectively takes WAY more expertise. Combine this with the fact that someone has to be able to architect a system to fit within the constraints of being on the cloud, and you're down extra employees.
The difference is in: "Eh, it's been down for 3 days? Sigh. Just reboot it." vs. "Um, that request hung for 93 seconds. Why?"
In the first case, the cloud is fine.
In the second case, someone is going to have to traipse over an enormous amount of systems (which you don't own and can't always instrument) and variables (some of which you aren't even aware of existing) to hunt it down. If, however, you can say "Pull that off the cloud into our own systems and keep an eye on it." you have made your debugging life a lot easier.
Of course, once you have the ability to do that, your team realizes and starts asking: "Given how much time we spend debugging issues with the cloud that aren't actually our fault, why are we on the cloud again?"
I always smile when that realization kicks in. Now I generally have to stop the team from pulling everything off the cloud. However, that's a much easier task.
For the hardware side of things every colo I've seen has a "remote hands" service.
AWS is also about 2749 times faster and more efficient, as measured by me the last time we ordered hardware on AWS and dell simultaneously.
Where I work this has been the standard for over a year now.
1. Git commit triggers CircleCI build and test phase
2. CircleCI deploy phase uploads the image to GKE
3. Google Container Engine stages the deployment for release
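To make that concrete, the deploy phase in step 2 usually boils down to a handful of commands along these lines (project, cluster, and image names here are hypothetical):

    # build and push the image to Google Container Registry
    docker build -t gcr.io/my-project/myapp:$CIRCLE_SHA1 .
    docker push gcr.io/my-project/myapp:$CIRCLE_SHA1

    # point the existing Kubernetes deployment at the new image
    gcloud container clusters get-credentials my-cluster --zone us-central1-a
    kubectl set image deployment/myapp myapp=gcr.io/my-project/myapp:$CIRCLE_SHA1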
Jenkins and the Jenkins slaves also run on docker, and are managed by Kontena. In fact, the whole platform/pipeline runs completely on Docker. We use Ansible to install Kontena/docker on new servers.
I haven't set up enough systems to have a strong opinion on this one myself, but those I know who definitely have seem to come down in favour of plain LXC and possibly LXD in most scenarios. Typically, their argument is that the features you probably want are there anyway, and so the extra weight of the Docker ecosystem now seems to introduce more irritations than it fixes.
Sometimes they seem to distinguish between hosting on infrastructure like AWS and setting up containers with colo or closer-to-home managed hosting. I don't understand the subtleties here; can anyone enlighten me about why they might go with Docker in one case but be quite strongly against it in the other?
In fact, the LXD people have a guide on how to run Docker containers inside LXD :)
What I'm not seeing is why -- from an objective, technical point of view -- you couldn't do almost any of the same things with plain LXC these days, perhaps with LXD on top if the vanilla UI for setting things up isn't sufficient.
I mean, Docker Hub and some nice UI tools are great and all, but I don't see a USP or a defensible position worth a billion dollar valuation in there. So what is really behind the confidence that investors at this level must surely have?
In the case of AWS, I can see the advantage, you're renting a resource you need, rather than buying it. I guess I just don't see how you extract similar amounts of revenue from Docker...
Redhat is valued at ~1.3B USD. Github, currently at ~2B USD. How do you justify 10B USD for Docker. Obviously you can (they did), I just don't understand how this is done well. Does anyone have a better insight here?
Disclaimer: working at RH
In that respect, Docker is much more traditional than SuSE or Red Hat, and indeed it would be much harder for anyone else to replicate RH's "miracle" nowadays.  And that's exactly because RH is already there and applying its business knowledge to Docker's field.
Redhat's market cap is $17 billion, so I'm not sure where you're getting that figure from. https://www.google.com/finance?q=NYSE:RHT
Because that's what Docker is: JEE for non-Java platforms.
Why would containers become the de facto rather than something like Cloud Foundry which abstracts it away entirely? Docker is just a slightly less messy version of the Puppet, Salt, Chef Devops BS with added complications around networking.
I don't have insight but I'm guessing the valuation comes from the prospects of enterprise sales.
Also, Docker (the company) isn't staying still. We have to assume they will evolve to add future upmarket services on top of Docker that companies will pay for. Investors would see the PowerPoint slides for those future products but we as outsiders don't.
Maybe an analogous situation would be the Red Hat multi-billion valuation even though Linux kernel itself is free and open source.
The risk of course is that the nobody will want to pay per-node and the community will just invest in the open source container ecosystem and replicate the Enterprise features with plugins and forks.
Still, the market might be big enough that they can become another Red Hat just based on support and stewardship revenue.
Docker is much more than dockerhub.
I'd guess the valuation is based on the orchestration and hosting solutions much more than the container engine.
They're competing with AWS and Google Cloud on many fronts. And Docker controlling the de facto standard only strengthens the argument.
Personally I wouldn't bet my money on their hosting solutions but I wouldn't ignore them either.
But so was docker to start with... Given that we're all using the docker cli, and coding automation against the docker remote API, they have a massive impact on a huge community.
Even if they aren't ahead right now, the game is still wide open.
They are likely not counting on it, as their returns are nearly guaranteed through the liquidation pref plus interest.
Employees of Docker just saw their chances of a big monetary exit cut dramatically with this funding round, since they are in last place.
It is an area that YC continues to be silent on, and it is a travesty of the startup world today.
1 - As has been mentioned before, it's not really 1.3 Billion. If they have liquidation preferences, the value is much less.
2 - If they come in very late, growth investors may be ok with 3X or 5X.
Containers are something else entirely. They'll probably be running on every device on the planet in one form or another in just a few years. Anytime you're on the forefront of something like that, there's money to be made.
Investors must be betting that containers are going to be the next Java, XML, or virtualization in enterprise computing, and that by controlling that technology, Docker positions themselves for extremely lucrative enterprise IT support contracts.
Usually they also look for there being a MUCH lower chance of going completely to 0 - more like 10-20% vs. well over 50%.
And usually there it's because of the fundamentals of the business are starting to show (margin, cost of acquiring customers, customer churn and upsell, etc.)
Sometimes IP or assets add value/valuation as well, though.
In Docker's case, there is also probably a feeling that the asset (control of "Docker") is worth hundreds of millions.
In CloudFlare's case, it is millions of sites as users, and the ability (mostly untapped, I think) to monetize the data from that.
My second reaction was incredulity at how ridiculous my first reaction was.
What you say makes sense, but then Docker is also coming 10-12 years late to the game, and until it's profitable and has money in the bank it's hard to do many expensive acquisitions.
OpenShift 3 (current) has no code in common with any of the older space. It was a totally new platform built on top of Kubernetes and docker.
On the other hand, I'm curious how much revenue Docker Hub is bringing in and where they plan on taking it. That model seems closer to Github. Will it be a newer way to discover new open-source or even proprietary images, like how devs use Github?
We were able to get the environments set up and the app running, but the networking is so slow as to be pretty much unusable. Something is wrong with syncing the FS between Docker and the host OS. We were using the latest Docker for Mac. If the out-of-the-box experience is this bad, it's unsuitable for local development. I was actually embarrassed.
Apple, for their part, are way behind the curve on all this. Completely MIA. For the amount of developer dev-station share they have it's amazing what macOS users have to deal with.
As of Docker 17.06, 90% of use cases can safely change the settings and noticeably improve performance. See the documentation: https://docs.docker.com/docker-for-mac/osxfs-caching/#perfor...
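Concretely, the setting is a per-mount consistency flag on bind mounts; for source code the container mostly reads you'd add :cached (or :delegated for data the container mostly writes), e.g. with a hypothetical dev image:

    docker run -v "$PWD":/app:cached my-dev-image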
I will give credit as to how easy it was to get the app running -- compose makes everything a snap. It's unfortunate that something is amiss with volumes/networking.
Forking a project into an enterprise (paid-for) version and limiting those features in the original open-source version creates tension in the community, and usually isn't a model that leads to success.
Converting an open source project directly into a paid for software or SaaS model is definitely the best route as it reduces head count and allows you to be a software company instead of a service company.
Perhaps best captured by Github wrapping git with an interface and community and then directly selling a SaaS subscription, and eventually an enterprise hosted version that is still delivered on a subscription basis, just behind the corporate firewall.
Also of note is that Github didn't create git itself; instead it was built out of a direct need that developers saw themselves, which means they asked "what is the product I want?" rather than "we built and maintain git, so let's do that and eventually monetize it."
But with how complicated my stack is, it just didn't make sense to use ultimately. I loved the idea of it, but in the end good old virtual machines and configuration management can basically do most of the same stuff.
I guess if you want to pack your servers to the brim with processes and shave off whatever performance hit you get from KVM or XEN, I get it.
But the idea of the filesystem layers and immutable images just kindof turned to a nightmare for me when I asked myself "how the hell am I going to update/patch this thing"
Maybe I'm crazy, but after a lot of excitement it seemed more like an extra layer of tools to deal with more than anything.
You include patches in the build process that produces a patched image. You then tear down your containers and deploy the new image.
One of the main benefits is once you have a proper pipeline setup you just modify your Dockerfile commit it to git, the build happens automatically and then it's automatically deployed once the new image is checked in to the repo.
Now all your containers are "magically" patched. Imagine having to patch a CVE in each of your VMs. Quite different.
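A minimal sketch of that flow, with made-up image names: most CVE fixes arrive via the base image, so you rebuild on the patched base, push, and roll the deployment over.

    # Dockerfile starts with e.g. FROM debian:stretch-slim - rebuilding picks up the patched base
    docker build -t registry.example.com/myapp:1.4.2 .
    docker push registry.example.com/myapp:1.4.2

    # roll it out (Kubernetes shown here; Swarm and Compose have equivalents)
    kubectl set image deployment/myapp myapp=registry.example.com/myapp:1.4.2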
VMs are pets. Containers are cattle.
That's the funniest yet most accurate down-to-earth analogy I've heard so far that explains the mentality of Docker/containers. While funny, it's actually practical to have an analogy when explaining to newcomers what Docker does to the mentality of server/service management. Thanks.
Then think about deploying some complex legacy apps that have upgrade paths that involve changing database schemas and running shell scripts to migrate data. How are you going to reasonably do that with docker? It just hurt my brain to think about it.
If you have the luxury of designing your app from the ground up and it's not too complicated a stack, then I see how it is cool. But mostly if it saves you money on AWS bills or something...
The real power of containers comes with container orchestration (e.g. Kubernetes, Mesosphere, and OpenShift). By leveraging containers, container orchestration systems can provide high availability, scalability, and zero-downtime rollouts and rollbacks, among many other things. These things were hard before containers & container orchestration. By allowing containers to be moved between nodes in a cluster, one can generally achieve higher hardware utilization than with VMs alone (which is in itself a big improvement on software on bare-metal hardware). All of this also leads to easier/better continuous deployment as well. This, in turn, leads to easier testing, and greatly simplifies provisioning of hardware for new projects.
So, the benefits are:
- Cheaper than VMs (through better hardware utilization)
- More reliable, through HA load balancers
- More scalable, through scalability load balancers
- Better testing, through CI/CD enabled by containers
- Faster application delivery by simplifying provisioning
- Nobody ever used VMs the way containers are
- A HA load balancer is not a container-specific concept
- There is no such thing as a 'scalability load balancer'
- You don't need a container to do CI/CD
- It's not faster, it's slower, and it's not simpler, it's more complex
- I never said an HA load balancer is a container-specific concept. I said that "these things were hard before containers". I stand by that statement. Containers and container orchestration make HA proxies really, really simple.
- A scalability load balancer is a load balancer in front of a service that monitors load and scales up, or down, the number of instances behind that load balancer. Again, container orchestration makes this really easy.
- No you don't need a container to do CI/CD. But containers make it much, much easier.
- It is faster. I can provision an entire cluster of machines in about 5 minutes on AWS or GKE. If I have an existing cluster to publish to, it's even easier - it's one line in my continuous integration config file. Container orchestration has a learning curve (I'm assuming this is why you say "it's more complex"?), but it is tremendously easier and faster to provision hardware for a project with containers and container orchestration when compared to provisioning actual hardware, or even provisioning VMs.
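For what it's worth, here is roughly what "really, really simple" looks like in Kubernetes terms - three standard kubectl commands, with made-up names:

    kubectl run web --image=registry.example.com/web:1.0 --replicas=3   # 3 copies, rescheduled if a node dies
    kubectl expose deployment web --port=80 --type=LoadBalancer         # load balancer in front of them
    kubectl autoscale deployment web --min=3 --max=10 --cpu-percent=80  # scale the replica count on CPU load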
Sounds like you haven't really used containers at all.
The old model was a system would be running, and a lot of software components within that system would depend on each other, creating a web of dependencies. If one of the dependencies had a problem, it could bring down the whole system [in theory].
The new model is "simpler" in that every software component has its own independent operating environment, with [supposedly] no dependency on the others. In this way, if one dependency fails, it can do so independent of the system at large - the failed piece is simply replaced by a different, identical piece. In addition, the component environments don't store state or anything else that would be necessary in order to replace it.
We basically bloat up our RAM and waste CPU and disk space in order to be able to understand and support the system in a more abstract way, in the service of better availability, and also, better scalability.
How is this different from simply running a bunch of chroot'ed services on a system without containers? It really isn't. But you get more control over the software by adding things like namespaces and control groups, and by everyone using the same containers, more uniformity. By dealing with the annoyance of abstractions and bloat, we get more human-friendly interoperability.
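You can in fact approximate a "container" by hand with nothing but chroot plus the namespace tooling in util-linux; this is roughly what the runtimes automate (the rootfs path is hypothetical, and this skips cgroups, capabilities, and the whole image format):

    sudo unshare --fork --pid --mount-proc --net --uts --ipc \
      chroot /srv/myrootfs /bin/sh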
There's no good reason we should have had that many in production. We had three versions of the 2.X series and two versions of the 3.X series because of mixing-and-matching base images we used plus management deciding that we could do partial upgrades of Python version by upgrading a project at a time. (We switched from 2 to 3, which meant we had containers -- with different base images -- where we updated the 2.X version but not the 3.X version and containers where we updated the 3.X version but not the 2.X version. This gave us all kinds of mixes and matches of Python 2/3 versions.)
So I just hoped whoever was maintaining base images was actually maintaining their security patches, kept the versions we were intentionally using up to date during container construction, and it (mostly) just sort of worked out.
We're down to... 3 versions of Python and 3 base images. I'm trying to get down to 2 versions of Python (a 2.X and 3.X).
Note also that binary packages help with repeatable deployment that uses Docker, deployment tools (e.g. Ansible or Salt), configuration management tools (CFEngine, Puppet), even manual deployment - and a mix of all of these. Docker images only help with Docker deployment.
I'd argue that only Linux containers are like VMs.
Nix genuinely tries to solve the problem.
For example, here: https://github.com/mbrock/gf-static/blob/master/Dockerfile
Making static binaries is often extremely painful and confusing. It's easier on a distribution like Alpine, and nicely enough, Alpine comes as a Docker image.
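For example, you can treat the Alpine image as a throwaway build environment for a fully static binary without installing anything on the host (hello.c is just a stand-in for your own source):

    docker run --rm -v "$PWD":/src -w /src alpine:3.6 \
      sh -c 'apk add --no-cache gcc musl-dev && gcc -static -o hello hello.c'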
In general, Docker's ability to basically spin up a whole Linux distribution, with a whole root file system, makes it very different from just using static binaries.
Along with Docker's image repository infrastructure, it makes some things easy that weren't easy before. Like, I don't know if there is a static binary build of the Erlang runtime system, and I don't know what kind of file system tree that system needs, but I just now opened an xterm and typed "docker run -it --rm erlang" and got an Erlang 9.0.2 shell.
Who is the author of aufs or overlayfs? Should these projects work with no recognition while VC-funded companies with marketing funds swoop down and extract market value without giving anything back? How has Docker contributed back to all the projects it is critically dependent on?
This does not seem like a sustainable open-source model. A lot of critical problems around containers exist in places like the layer filesystems and the kernel, and these will not get fixed by Docker but by aufs, overlayfs, and other kernel subsystems - but given that most people don't even know the authors of these projects, how will this work?
There has been a lot of misleading marketing on Linux containers right from 2013 here on HN itself and one wishes there was more informed discussion that would correct some of this misinformation, which didn't happen.
It was incredibly impressive marketing, no matter what else we might say about it. And, since then, of course, they've leveraged that early success into a real platform and they've since built a much bigger pile of things (some are even really good). It's kind of a testament to the wildly over-optimistic approach of Silicon Valley product-building, and maybe also kinda damning of the sort of pillaging that the process often involves.
I had my reservations about Docker in the beginning, for all the reasons I mentioned above, but I'd be unlikely to bet against them, honestly. And, they have contributed quite a bit to OSS. I've been poking around in golang, lately, and have found a lot of libraries in my area of interest created by the Docker folks. They're seemingly doing good things, though not on par with Red Hat, which is entirely an OSS company.
This is where I agree with Peter Thiel's very succinct (although somewhat philosophical) definition of the term technology in "Zero To One": it allows us to do more with less.
Sometimes, merely re-imagining a better way to use existing tech could unlock new kinds of productivity previously unimagined. One of the best examples of this is the iPhone.
Here's an excerpt from a Docker blog post from 2013:
>>Under the hood Docker makes heavy use of AUFS by Junjiro R. Okajima as a copy-on-write storage mechanism. AUFS is an amazing piece of software and at this point it’s safe to say that it has safely copied billions of containers over the last few years, a great many of them in critical production environments. Unfortunately, AUFS is not part of the standard linux kernel and it’s unclear when it will be merged. This has prevented docker from being available on all Linux systems. Docker 0.7 solves this problem by introducing a storage driver API, and shipping with several drivers. Currently 3 drivers are available: AUFS, VFS (which uses simple directories and copy) and DEVICEMAPPER, developed in collaboration with Alex Larsson and the talented team at Red Hat, which uses an advanced variation of LVM snapshots to implement copy-on-write. An experimental BTRFS driver is also being developed, with even more coming soon: ZFS, Gluster, Ceph, etc.
I think the real nature of your complaint is that you want the recognition to be in the form of cash.
In the mean time LXD is developed by a single guy...
It's kernel namespaces that enable containers and allow one to launch a process in its own namespace (there are 5 namespaces containers use).
LXC/LXD containers launch an init in this process so you get a more VM like environment. Docker does not launch an init so you run a single app process or a process manager if you need to run multiple processes. Systemd Nspawn is similar to LXC.
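A quick way to see the difference: in a Docker container the process you asked for is PID 1, while an LXC/LXD container boots an init that then supervises everything else (the container name below is made up):

    docker run --rm alpine ps          # the command you ran is PID 1; there is no init
    lxc exec mycontainer -- ps aux     # PID 1 is the container's init (e.g. systemd)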
All this being said, I do agree that this wasn't the article's mistake. I suspect if you asked the author if he's under the impression that Ben was present when Docker was founded though he'd say yes, so in some sense he's been misled. That's what most people take from the term founder I believe.
I view it as a relatively cheap bargaining chip that can burnish a person's reputation, but has little descriptive value in terms of what someone actually did to move the business forward.
And I'm seeing all sorts of companies hand out titles like "Founding Engineer", "Founding Developer", etc. with the same chump-change 100k/0.1% option packages as before.
Based upon the above experience, I firmly believe that Docker could be rewritten by a small team of programmers (~1-3) within a few month timeframe.
 Docker has grown to add some of this now, but back then had none of it: multiple infrastructure providers (physical bare metal, external cloud providers, own cloud/cluster), normalized CI/CD workflow, pluggable FS layers (eg. use ZFS or LVM2 snapshots instead of AUFS - most development was done on ZFS), inter-service functional dependency, guaranteed-repeatable platform and service package builds (network fetches during package build process are cached)...
Oracle, Sun, FreeBSD, RunC, OCI
None are _as popular_ as docker. But many offer more features, or a flatly superior product.
Docker _isn't good_. Docker is popular. Actually, their constant breaking of compatibility makes me wonder why everyone continues to tolerate it.
Which means a popularity-based valuation is a house of cards.
Many wonderful, technically-superior-than-alternatives projects languish and die in obscurity, while its inferior counterpart wins the popularity/marketing contest.
It is unclear that with docker they really have much income, and enterprise software projects that perhaps make up most of their income are frequently rewritten with new toolsets.
Therefore, it is uncertain there will be commercial traction in 5 years... particularly because docker has (according to this thread) already lost the open source feature set battle (targeting a demographic which, incidentally, tends to shun marketing).
Hence, house of cards.
The reality is that (speculating), they probably issued a new class of stock, at $x/share, and that class of stock has all kinds of rights, provisions, protections, etc. that the others don't, and may or may not have any bearing whatsoever on what the other classes of shares are worth.
Docker Hub is a massive centralized store of software. If Docker becomes The Way to deploy infrastructure and services, Docker Hub could theoretically trojanize the world.
It would be a way of getting the kind of leverage over the Linux world that MS and Apple offer for their respective ecosystems.
So "better" is quite subjective... more secure? Sure, easier to deploy and build not even close.
Also the intentional re-use of shipping verbiage demands disambiguation.
- Deploy to many places having the same environment
- Update these programs easily (you just need to download and run a new container image)
- Clean up easily by discarding the container
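In day-to-day terms that cycle looks something like this (image and container names are placeholders):

    docker pull registry.example.com/myapp:2.0                   # fetch the updated image
    docker stop myapp && docker rm myapp                         # discard the old container
    docker run -d --name myapp registry.example.com/myapp:2.0    # same environment everywhere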
I should probably also add that Docker is also a community for sharing these application images (containers).