Hacker News new | past | comments | ask | show | jobs | submit login
Docker Is Raising Funding at $1.3B Valuation (bloomberg.com)
321 points by moritzplassnig on Aug 18, 2017 | hide | past | web | favorite | 275 comments

I feel like this is one of those valuations which makes sense contextually, but not based on any sort of business reality.

Docker reminds me a lot of the PKZIP utilities. For those who don't remember, back in the late 80s the PKZIP utilities became a kind of de facto standard on non-Unix systems for file compression and decompression. The creator of the utilities was a guy named Phil Katz, who meant to make money off the tools but, as was the fashion at the time, released them as basically feature-complete shareware.

Some people did register, and quite a few companies registered to maintain compliance, so PKWare (the company) did make a bit of money, but most people didn't bother. Eventually the core functionality was simply built into modern operating systems, and various compatible clones were released for everything under the sun.

Amazingly the company is still around (and even selling PKZIP!) https://www.pkware.com/pkzip

Katz turned out to be a tragic figure http://www.bbsdocumentary.com/library/CONTROVERSY/LAWSUITS/S...

But my point is, I know of many many (MANY) people using Docker in development and deployment, and I know of nobody at all who's paying them money. I'm sure such customers exist, and presumably they make revenue from somewhere, but Docker is basically just critical infrastructure at this point, becoming an expected part of the OS, not a company.

A nice selling point for me is the registry. If you have a public image, it's totally free. If it's private, you can pay to use it privately in a team. Of course, you can run your own (it's a docker image, after all), or you can use Google or Amazon to host it, but it is much easier to pay them to host, and you don't need to bother with storage, backups, or authentication. Setting up team permissions is as easy as sending an invitation from the panel.

Just want to chime in and say running your own registry is actually really easy. It's literally a simple compose file and a bunch of env vars to configure.

It has Let's Encrypt built in, along with a few options for file storage, including AWS/GCloud.

Getting up and running with your own registry shouldn't really take longer than half an hour. The config docs for the registry image are really helpful with this too.
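
For illustration, a minimal single-node setup might look something like this (the domain, e-mail and paths are made up; the env vars are the standard environment-variable form of the registry image's config keys):

```shell
# Sketch: official registry image with Let's Encrypt TLS and local
# filesystem storage. Publish 443 so the ACME handshake works.
docker run -d --name registry --restart=always \
  -p 443:5000 \
  -v /srv/registry:/var/lib/registry \
  -e REGISTRY_HTTP_ADDR=0.0.0.0:5000 \
  -e REGISTRY_HTTP_TLS_LETSENCRYPT_CACHEFILE=/var/lib/registry/letsencrypt.json \
  -e REGISTRY_HTTP_TLS_LETSENCRYPT_EMAIL=admin@example.com \
  registry:2
```

Swapping the volume for the S3 or GCS storage driver is a couple more env vars, per the config docs.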

Sure it's easy...

But for a company it's almost always cheaper to pay for the service. That way you don't have to deal with maintenance, setting up redundancy, mirroring, etc.

That said, AWS, Google and Quay are all competitors in this space.

It's pretty easy to just pay, I'll give you that.

For an example of just how good their image is:

I set up my company's registry almost a year ago and have had to deal with it about once.

The docs cover a HA setup (if you really need it - most won't), data backups aren't really needed if you're using a storage backing, and I just use local file system - absolute worst case I have to wait half an hour to rebuild and push everything.

The downside risk of running your own is very low. The upside is a few bucks and you have your own private registry local to your network.

Just as a small note, data backup isn't just for hardware or service failure; it's also for "oops, I just deleted my bucket". Yes, you can rebuild, but in my case that would take quite a few days and money in compute. That having been said, hosted services are actually worse at protecting you from user error...

The (public) registry kind of terrifies me. Images of questionable maintenance based on images of questionable maintenance.

There are a lot of heavily used images out there with a bus factor of 1.

Where I work we've (relatively) recently migrated to using GitLab EE, which comes with its own container registry[1]. I have no idea how much work it is behind the scenes; from my perspective it has been flawless and much more convenient.

[1] https://docs.gitlab.com/ce/user/project/container_registry.h...

We've been using it where I work for a few months and have recently moved to Artifactory, mainly because GitLab's registry makes it very, very painful to remove old images (you have to remove them from the UI one by one), especially as the number of registries and images grows.

We know that this is a real problem for many users. We plan to resolve it as soon as possible. For now you can use our custom tool that removes old revisions, see https://gitlab.com/gitlab-org/docker-distribution-pruner.

We also have a few issues about this, see https://gitlab.com/gitlab-org/gitlab-ce/issues/25322 and https://gitlab.com/gitlab-org/gitlab-ce/issues/20247.

I hope it helps!

It is available in Gitlab CE too - we don't use it yet as we already have one in-house, but we might migrate in future.

Hmm, GKE's Container Registry also takes care of storage, backups, and authentication. Presumably Quay and Amazon's Container Registry do as well. What makes Docker, Inc.'s registry special?

Docker's registry is really slow. I've had many failures where images never finish loading.

VMware has a really nice free registry, which is secure by default and appears to work well.


Nor do they have regional POPs, so anywhere apart from North America the registry is unusable.

Chiming in from Australia. Even considering the fact we have horrific internet anyway, the lack of Australian registry mirrors means that a Docker pull can take ~5 minutes for something like ubuntu:16.04. Docker pushes of ~100MB can take hours. That's just not acceptable.

I also contribute to Docker and other container technologies, and I cannot express in words how horrifically long the Dockerfile-based build and integration testing process takes. It takes about an hour in America, and more than double that in Australia.

I think version control is a good analogy (Subversion and Git). It enables whole classes of products and services to be built on top of it, but that only happens because the basic tool is unambiguously free and can be used everywhere. Trying to monetize the basic thing just damages what makes it valuable.

I feel like docker is core to more businesses than compressing/decompressing files. A lot of business package things in docker containers and are probably more likely to donate based on that. Compressing/decompression feels more like a low-level utility and users are probably unaware of whose code they're running at all.

If the creator of "ls" or "cd" asked for donations, would you even realize? If Docker asked for donations, would you realize?

> I feel like docker is core to more businesses than compressing/decompressing files.

All Docker images are glorified tar layers that are compressed with gzip. By definition, all users of Docker are users of gzip and tar. I understand what your point is (it's not as visible) but I don't agree with saying it's "core to more businesses".

Also, if someone is using Kubernetes they soon might not be aware (or even care) whether they're using Docker thanks to the CRI.
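
You can see the "glorified tar layers" claim for yourself (assumes a local Docker install and a small image such as alpine):

```shell
# Export an image: the export itself is a tar archive...
docker save alpine -o alpine.tar
# ...containing a manifest plus one layer tarball per layer.
tar -tf alpine.tar
# On the registry side those layer blobs travel gzip-compressed;
# the engine unpacks them back to plain tar on pull.
```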

> All Docker images are glorified tar layers that are compressed with gzip. By definition, all users of Docker are users of gzip and tar. I understand what your point is (it's not as visible) but I don't agree with saying it's "core to more businesses".

Definitely. I was a little unclear, I meant core as in visibility, which you pointed out.

Also, about Kubernetes, I can't comment on that because I've never used it (though I've heard of it). I think it's possible that there are a lot of businesses out there (such as the one I work for) that use Docker but not Kubernetes.

The more visible an open-source project is, the more likely people are to donate to it. I could definitely see companies supporting docker in the same way that they support the Linux foundation or various Linux distros.

I feel like water and air are core to many lives. Does that make them valuable?

Things are only valuable if they're scarce and in demand.

You can apply this to open source in general, and it's the main reason why, after the lawsuit was settled, the BSDs never became as successful as GNU/Linux.

I'm old enough to remember PKZIP, ARJ and many other tools.

In all these years, there was only one employer that ever bothered to contribute anything back from what they got for free.

Really sad story about Katz.

I'm so curious to understand how you pitch Docker at a $1.3B valuation. With, I assume, a potential valuation of ~$10B needed to give the investors a decent exit?

Does anyone have an insight into this?

Looks like GitHub's last valuation was $2B. That also seems high, but I can understand it somewhat better: they have revenue, and seem to be much more widely used/accepted than Docker. In addition, I can see how GitHub's social features are valuable, and how they might grow into other markets. I don't see this for Docker...

Simple - VMware is valued at $40B. Someone is going to make the argument that Docker is the biggest threat to VMware's business, and VMware will likely just buy them to keep shareholders happy.

Except that VMware never open sourced VMkernel. That massively contributed to the lack of feature parity between VMware and Microsoft (I'd say VMware was 1-2 years ahead of MSFT). That helps you secure the market in the early years, and what you see as the valuation. Docker Inc is not similar to VMware in that regard.

Docker does not use VMware, so it's a threat - it's that simple.

Everyone has already had containers for decades now. Why didn't they buy Sun then?

If by everyone you mean UNIX-derived OSes, it's arguable that they all had containers in one form or another, with the exception of Linux, which only changed when cgroups [0] were added to the v2.6.24 kernel ~10 years ago.

I think that addition to Linux paved the way for containers to enjoy wider adoption than was previously possible with other less popular container tech in OSes like Solaris (zones) or BSD (jails).

[0] https://en.wikipedia.org/wiki/Cgroups
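
For a rough idea of the primitive that landed in 2.6.24, this is what driving cgroups v1 by hand looks like (assumes root and that the hierarchy isn't already mounted; paths vary by distro):

```shell
# Mount the memory controller, create a group, cap it at 64 MiB,
# then confine the current shell (and its children) to that group.
mkdir -p /sys/fs/cgroup/memory
mount -t cgroup -o memory none /sys/fs/cgroup/memory
mkdir /sys/fs/cgroup/memory/demo
echo $((64*1024*1024)) > /sys/fs/cgroup/memory/demo/memory.limit_in_bytes
echo $$ > /sys/fs/cgroup/memory/demo/tasks
```

Docker automates exactly this kind of bookkeeping (plus namespaces) per container.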

Only three (of many) Unix-like families have containers, and while Linux was last "to the punch" the others only had them for a few years more. FreeBSD Jails were ready by 2000[1], Solaris Zones were released in 2005[2], and cgroups were merged in 2007. But OpenVZ's containers technology had already existed since 2005, and I would argue that most of the modern Linux "container" infrastructure came from their experiences (plus that of Google). To be fair, cgroups were nowhere near good enough and it took another few years for namespaces to become good enough to be considered "containers".

But my general point is that people talk about Linux as being out-of-touch when it comes to the history of containers, but if you look at the timeline that simply isn't true.

[1]: http://phk.freebsd.dk/pubs/sane2000-jail.pdf [2]: http://citeseerx.ist.psu.edu/viewdoc/download?doi=

Sun containers (and AIX, etc) were all very much treated as virtual machines. Docker containers virtualize applications, not OS. That's the key shift.

I'd argue that the key shift (in terms of e.g. configuration management) was from pet to cattle VMs. Application containers are mostly "cattle containers".

Perhaps they eventually would have if the company who currently also owns VirtualBox hadn't done so first.

Unrelated, but at the time virtualbox was owned by Sun :)

OK, it's kind of shocking to me that VMware is valued at $40B. That's a really useful datapoint though. Thanks!

Tons of large enterprises pay VMware a bunch of money for their products. That's why they're valued at $40B.

Shocking? They do $7B in revenue and ~$1.5B in net income...

Are investors seriously that stupid? VMware doesn't offer open source implementations so they aren't even close to the same class of business.

If containers impact VMware, it's because they are free and most of the revenue will not flow to docker.

It's not about revenue flowing to Docker, it's about revenue no longer flowing to VMWare and thus VMWare having to act (by e.g. buying Docker).

Except that buying Docker will accomplish nothing. The genie is out of the bottle. And VMware is arguably in a better position to profit from it because of their ties to customers.

I don’t know, they have a competing product: https://github.com/vmware/vic

Which is docker for vsphere.

In 2007 we used to rack hardware. Like physically open boxes and rack hardware with Ethernet cables. In 2014 I don't think a startup existed trying to use physical machines - it was all AWS, etc. We then would use all this BS like Puppet, Salt, Chef, etc. to manage all the chaos (devops). Now, 2017+, I am starting to see containers become the de facto standard. I see the future as running a docker file on your laptop, running it through a ci system, then pushing it to a container service. Docker is the clear standard.

Except you don't need Docker Inc. for that. Many container runtimes exist now, the user experience is very easy to replicate. Plus, their cloud doesn't seem to be making any money. So I too have trouble understanding where this valuation is coming from.

I think you are underestimating the power of a name brand. Docker is synonymous with containers to a lot of people, and if they want to pay for container support they'll pay Docker. Similar to, as other commenters have said, people pay Red Hat just so they can have someone to complain to when there's a problem.

This. I've been working in Government recently, it's all containers and it's all Docker.

If anything I'm surprised that their valuation isn't higher.

Are they paying Docker (the company) for anything? Or just using containers as a part of a wider software infrastructure?

I've been using Docker since early 2014 and they haven't gotten a dime from me. If you count the storage and bandwidth on the public registry I've cost them money.

Is any employee of Docker, inc. critical to your use of containers? Is any copyrighted, non-OSS Docker, inc. code critical to your operations?

Where does the money go?

It doesn't cost money to use the docker format. Why would they get a high valuation from no revenue?

I just checked - I had no idea that Red Hat had a market cap of 17.8BN right now. I'd assumed it was about 10% of that.

I wonder where Marc Fleury [0] is these days. He sold JBoss to RedHat for a cool $350m back in the day.

[0] https://en.wikipedia.org/wiki/Marc_Fleury

Container format brand? I bet Dell/HP/et al thought they had a datacenter advantage with their brands over whitebox, but we've seen how that has worked out.

What you described is a services/support company, something Red Hat has worked really, really hard not to be over the years (to a debatable degree of success). Being there for support vs. OSS is not how you get great margins.

Anyway, at this level of the stack in prod settings brand won't take you that far.

Companies buy support from RedHat openshift or Pivotal CloudFoundry to run containers. They never pay Docker Inc.

> They never pay Docker Inc.

That's simply not true. Having a support team they can call is, rightly or wrongly, a large part of the reason some companies consider buying Docker Enterprise Edition at all.

Having a support team they can call is a large part of why they pay Red Hat or Pivotal for Docker.

The problem is that Docker is effectively an OS-level facility that platform vendors have already integrated into their stacks to prevent that from happening. If you use Docker and want support for production instances, you still pay Red Hat (or Microsoft, or AWS, or Google Cloud), because RHEL, Windows Server, GKE etc. include Docker containerization. I genuinely don't understand why you would try to load Docker Enterprise on top of an OS and rely on two different vendors working together.

Except you don't need Docker Inc. for that

Exactly. Build your docker image with the free open source tools then push it onto Azure, for example. How does Docker the company see any revenue here? Or if you are running on-prem with DC/OS in a private cloud. I don't know anyone using Docker's own cloud, and why would you? They need to sell either services or an "Enterprise" version that is better than what you can do without them. I think containers are definitely the future, but I also see containers as being just a commodity, no more exciting than Makefiles and RPM are now. The money will be in running them, and Azure, AWS et al will have that stitched up.

Docker has at least one product (Docker Datacenter) that is aimed at enterprises. I know because I was almost going to go work there on that product. It's not just a container company.

The kind of companies that spend large amounts of money on things like support, think more traditional enterprises, are spending a ton of money on docker's paid services, because they do not have the ability to hire enough engineers to implement something in house.

I work at a large company. We're not paying Docker for Docker, we're paying Amazon.

Sure. You can host your own github too. But the future will be containers. They seem pretty good at that.

The question isn't if Docker the company will be successful in the future -- it's will they be as successful as that valuation suggests by selling support contracts.

You can host your own GitLab*, not your own Github.

Also, a lot of the value of Github is social. You can only get that (after a certain point) by paying Github money.

By contrast, you can get 100% of the value of the Docker community without paying a cent to Docker, Inc.

FWIW you can host your own GitHub


$21 a month per user.....to host it on my own hardware....no thanks. If I am hosting on my own hardware then you get a one-time-fee, none of this subscription nonsense.

glad I'm not the only one to find this subscription BS patently insulting.

You're not a 1Password user, are you? They are pushing the subscription model super hard.

and this is important because a good deal of tech companies rely on self-hosted GHE or hosted Github.

So what is a good replacement for docker?

OpenShift for example. It uses docker containers and can be used for both application containers and microservices.

A co-worker who came from AmEx related that they were so unhappy with OpenShift that they dropped the 'f' from the name for nearly every internal discussion.

To be fair, my understanding is that this was OpenShift 1 & 2. OpenShift 3 is more or less Kubernetes with a few additional features.

Haha love it, what were the primary frustrations?

Keep in mind the distinction between running docker containers and what parent said: "You don't need Docker Inc for that"

rkt, containerd, LXC, runC, OpenShift. There are others, too. rkt and OpenShift seem to be the most common ones that aren't Docker.

Openshift runs on top of Docker. It is essentially "Enterprise Kubernetes"... a few more features and support baked in.

Openshift doesn't need docker though; it's an implementation detail. You see Dockerfiles, but you don't see docker itself.

What others? At this point you seem to be just naming things. containerd and runc are both parts of Docker, and OpenShift is a PaaS that runs on Docker.

Most major cloud providers provide support for docker and private docker registries out of the box.

> user experience is very easy to replicate

Wrong. The built-in Docker Swarm currently has the easiest UI for container orchestration (there is still stuff which could be done better, though). This, paired with sensible defaults and batteries included, such as a load balancer, makes Docker the clear winner, and apparently nobody has been able to replicate the UX. I know k8s has a bigger market share, but it is also way more complex.
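
For what it's worth, the whole "batteries included" pitch fits in three standard commands (service name and image are arbitrary):

```shell
docker swarm init                                       # turn this host into a one-node swarm
docker service create --name web --replicas 3 -p 80:80 nginx
docker service scale web=5                              # built-in routing mesh load-balances across replicas
```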

Edit: Why the downvotes? Afraid that your k8s know-how will drop in market value? Please reply with valid counter arguments instead of this maddening silent downvoting. If I was wrong let me know where and why.

> Why the downvotes?

Honest answer: It's rude to open with the one-word sentence "Wrong." You compounded it by implying that anyone downvoting is doing so in bad faith. Neither is a good look. The rest of your comment was fine.

> Please reply with valid counter arguments

Sure! So, Kubernetes is Google's attempt to make real, full-strength Google-ish infrastructure "as simple as possible, but no simpler". This kind of infrastructure is really hard, so "as simple as possible" is still quite complicated. This makes k8s a pain in the ass to understand and use.

Docker swarm comes from the opposite end - it's dead simple to use, and seems to be aiming for the 80% use case. After all, most companies are not Google, and can work with a less complicated solution that offers a "Just Push Go" experience. The downside is that it's less flexible and less robust. (I also get the distinct sense that the engineering was rushed. But that can be fixed if it stays popular for long enough - e.g. I hear MySQL is decent these days.)

The potential problem, as kuschku is pointing out, is that the bigger, more Enterprise-y and more lucrative customers become, the more likely they are to want the power and robustness of Kubernetes. This presents an existential threat to Docker Inc. They could end up fully commoditised, building a vital platform that provides tons of value, but which they can't charge for because all the big support contracts go to Kubernetes Managed Services Providers or whatever.

The downvotes may be because you matter-of-factly retorted "Wrong." to a subjective opinion. But also maybe, don't sweat the downvotes?

The problem there is in the Enterprise market, which Docker is targeting, K8s is far better suited.

Not only is Kubernetes in a better place than Swarm, the core Docker engine is only a cog in that machine. Should the need arise, it would be trivial for them to swap it out for another container engine.

Kubernetes is the magic sauce, not Docker.

The attitude that Docker is pointless and Kubernetes is amazing despite Docker has been consistent from a vocal minority of the Kubernetes community, and it's self-serving and getting a little annoying.

Is Docker perfect? No. Swarm has been an absolute disaster. API stability has been dramatically undervalued. There is room for containers to grow, and their power is nowhere near tapped out.

There is still a lot of room for improvement in the containerization space, and Docker is going to drive that, whether the Kubernetes crowd likes it or not.

Kubernetes is amazing and gives us a real chance to have an operating system for the data center.

Kubernetes itself, despite years of investment, is still ridiculously difficult to install. Its adoption is still at least two orders of magnitude below baseline Docker's. The learning curve is still way higher than it should be.

Frankly, I hope that all involved get their heads out of their asses and build something great rather than continue to muscle in on each other. There is no reason for this pissing match, given that it's the combination of Borg and Docker that makes solutions deployed on top of Kubernetes amazing.

>In 2014 I don't think a startup existed trying to use physical machines - it was all aws,etc.

You're also starting to see those startups realize how much money they're wasting on cloud services once they hit scale. It is EXTREMELY expensive to do cloud if you're even remotely efficient with your infrastructure unless you've got extremely bursty and unpredictable workloads.

"once they hit scale"

Most don't. That is the value.

But in return it's also extremely cheap to bootstrap and fail

It is much cheaper to run clouds than to buy and operate your own hardware.

The cloud is also a lot easier to optimize and save money.

You're getting downvoted, but your answer is partially correct: when you're a small startup, it is almost certainly cheaper (in terms of time and operational complexity) to outsource infrastructure to the cloud.

However, once you hit significant scale, the lessons from the operational experience of all of the major firms have been pretty consistent: it's cheaper to operate your own data centers than it is to outsource them.

Depends on what you mean by scale. If scale means providing services globally to many jurisdictions, you're going to have to start doing the things that the big players do, like putting data centers in specific geographies (e.g., EU, China). That means getting real estate, power, etc. At that point I think the scale tips in the other direction, where it's now more costly for you to "scale". So: cheap at the small end, cheap at the high end, and expensive in the middle (cloud).


From the perspective of a Fortune (checks list) 15 company: AWS saves us a fortune. No facilities costs all over the world with power/real estate/lawyers to handle local laws. No data center engineers across the globe. A solid discount on list. Consistent bills (thanks to judicious RI buys) and servers that are available in minutes - not weeks (have you ever seen enterprise IT ticketing practices?!)

If we had two orders of magnitude fewer employees/servers/locations AWS wouldn't make sense. But at this scale nothing else makes sense.

If you have reserved instance style workloads and aws is saving you money, it sounds like your private data centers were just grossly mismanaged or you have a negligible compute load.

AWS was likely just a way to overhaul inefficiencies in a legacy IT org. Someone will be able to do the same thing in 5 years moving you from AWS back to self-hosted.

People really need to stop. There is no Fortune 15. I get that your e-peen needs all the help it can get, but this is like a grown adult giving their age as 41 "and 3 quarters".

Speaking as the chief architect for solutions that are deployed on the cloud for financial services, this is not true. There is a point at which the infrastructure to manage the infrastructure becomes more expensive than the hardware and the network. Regulatory compliance, especially PCI and HIPAA, becomes very difficult to manage. Downtime presents risks to large strategic customers. At that point the cloud comes into its own because of the ability to scale completely dynamically and the capability for automation.

Sounds like you're either not that big or you have an incompetent team. If you have PCI/HIPAA/whatever under control, it doesn't magically get harder as you get bigger. Outsourcing it just means you pay someone else to get it right, in addition to giving up control.

The epic story of Dropbox's exodus from the Amazon cloud empire


But some companies get so big, it actually makes sense to build their own network with their own custom tech and, yes, abandon the cloud. Amazon and Google and Microsoft can keep cloud prices low, thanks to economies of scale. But they aren't selling their services at cost. "Nobody is running a cloud business as a charity," says Dropbox vice president of engineering and ex-Facebooker Aditya Agarwal. "There is some margin somewhere." If you're big enough, you can save tremendous amounts of money by cutting out the cloud and all the other fat. Dropbox says it's now that big.

Dropbox is now at petabyte scale. Last I had a conversation with someone at Dropbox they mentioned they’re moving storage in-house but still use quite a bit of AWS for service.

I think hybrid totally makes sense where you take the most expensive part bare metal with limited scope of maintenance.

I'd assume Dropbox is at least exabyte scale.

1 PB is $23k per month on S3. It's nothing. That's barely the costs of 1-2 employees in SV.

The migration itself would take a lot more effort than one dude; even if there were a solution for completely free storage out there, the migration could only result in a huge net loss.

Not even close to universally true.

Bandwidth is almost always cheaper at a colo. Most compute instances are cheaper to buy and rack if you have continuous loads. Disk ... is tricky.

It is faster and has lower up front costs to get clouds up and running initially. For most startups who are going to fail, that's Good Enough(tm).

If, however, you continue existing for a while, the other things start to add up at much lower levels that you would expect. I'd say the crossover is around when you are spending about $15,000 per year. Your colo is about $10,000 of that per year and you can rack 5 new machines every year for the remaining $5K. That's not that much for an actual business.

Cloud is good for your initial startup and for bursty situations. Once you have continuous loads, you need to be moving to pulling stuff back to your own hardware.

How does employee cost factor into that, though? I don't do devops, so I don't know at what scale you start needing dedicated people to manage hardware and configuration at a colo. However, employees are really expensive in comparison to just about everything else, so even if you're spending an extra $10K a year on AWS than you would at a colo, if using it saves you just 10% of one person's time you're coming out ahead.

Except that, invariably, I'm not saving that person's time.

It's a real myth that the cloud magically saves me a sysadmin. I have found almost exactly the opposite. Using the cloud effectively takes more time and more expertise. Debugging the cloud effectively takes WAY more expertise. Combine this with the fact that someone has to be able to architect a system to fit within the constraints of being on the cloud, and you're down extra employees.

The difference is in: "Eh, it's been down for 3 days? Sigh. Just reboot it." vs. "Um, that request hung for 93 seconds. Why?"

In the first case, the cloud is fine.

In the second case, someone is going to have to traipse over an enormous amount of systems (which you don't own and can't always instrument) and variables (some of which you aren't even aware of existing) to hunt it down. If, however, you can say "Pull that off the cloud into our own systems and keep an eye on it." you have made your debugging life a lot easier.

Of course, once you have the ability to do that, your team realizes and starts asking: "Given how much time we spend debugging issues with the cloud that aren't actually our fault, why are we on the cloud again?"

I always smile when that realization kicks in. Now I generally have to stop the team from pulling everything off the cloud. However, that's a much easier task.

For the software side of things, you still need admins with a "cloud" provider.

For the hardware side of things every colo I've seen has a "remote hands" service.

Because the employee that has to manage all of the AWS accounts is free?

It's a lot easier to use the AWS console than to deal with Dell and Equinix.

AWS is also about 2749 times faster and more efficient, as measured by me the last time we ordered hardware from AWS and Dell simultaneously.

It seems easier to optimize because you're starting out so far away from the goal, so you have a lot of space to improve. The money you mostly end up "saving" is money that you didn't really need to spend in the first place, but you chose to spend more instead of going through the optimizing effort up-front (which can be a perfectly valid approach).

" I see the future as running a docker file on your laptop, running it through a ci system, then pushing it to a container service. Docker is the clear standard."

Where I work this has been the standard for over a year now.

Would you mind expanding on exactly what the CI pipeline/stack your company made its standard looks like?

The method I've been using successfully:

1. Git commit triggers CircleCI build and test phase

2. CircleCI deploy phase uploads the image to GKE

3. Google Container Engine stages the deployment for release
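
Roughly, the deploy phase runs something like the following (project, cluster and deployment names here are illustrative, not our real ones):

```shell
# Build and push an image tagged with the commit, then roll it out.
docker build -t gcr.io/my-project/app:$GIT_SHA .
docker push gcr.io/my-project/app:$GIT_SHA
gcloud container clusters get-credentials my-cluster --zone us-central1-a
kubectl set image deployment/app app=gcr.io/my-project/app:$GIT_SHA
```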

Can you link to an article on 3) ? We have been releasing by invoking kubectl from Circle, I wasn't aware that GKE can release itself.

Jenkins 2.x for building the images (we just execute docker build and push in a Jenkinsfile), then we use Kontena for orchestration (e.g. the Kontena agent pulls the image from the docker registry). For local development we use docker-compose.

Jenkins and the Jenkins slaves also run on docker, and are managed by Kontena. In fact, the whole platform/pipeline runs completely on Docker. We use Ansible to install Kontena/docker on new servers.
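
A minimal sketch of what the "docker build and push in a Jenkinsfile" part can look like — the registry host and image name here are placeholders:

```groovy
// Hypothetical Jenkinsfile; registry.example.com/myorg/myapp is a placeholder
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh "docker build -t registry.example.com/myorg/myapp:${env.BUILD_NUMBER} ."
            }
        }
        stage('Push') {
            steps {
                sh "docker push registry.example.com/myorg/myapp:${env.BUILD_NUMBER}"
            }
        }
    }
}
```

From there the orchestrator just pulls the new tag from the registry.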

I wonder what advantages Docker offers over plain LXC and LXD for a lot of real world use cases these days, though. Does it have some unique, defensible benefit that justifies this sort of valuation?

I haven't set up enough systems to have a strong opinion on this one myself, but those I know who definitely have seem to come down in favour of plain LXC and possibly LXD in most scenarios. Typically, their argument is that the features you probably want are there anyway, and so the extra weight of the Docker ecosystem now seems to introduce more irritations than it fixes.

Sometimes they seem to distinguish between hosting on infrastructure like AWS and setting up containers with colo or closer-to-home managed hosting. I don't understand the subtleties here; can anyone enlighten me about why they might go with Docker in one case but be quite strongly against it in the other?

I think that essentially, the LXD project focuses on making VM-like containers (with a full OS that you SSH into, etc), whereas Docker and friends focus on immutable application containers - essentially, a zone for running a single application, and which you delete and start from scratch rather than entering and changing stuff.

In fact, the LXD people have a guide on how to run Docker containers inside LXD :)


I understand that the focus/brand of Docker is aimed at a slightly different scenario.

What I'm not seeing is why -- from an objective, technical point of view -- you couldn't do almost any of the same things with plain LXC these days, perhaps with LXD on top if the vanilla UI for setting things up isn't sufficient.

I mean, Docker Hub and some nice UI tools are great and all, but I don't see a USP or a defensible position worth a billion dollar valuation in there. So what is really behind the confidence that investors at this level must surely have?

Yes, you can do this with most other container systems. But people haven't done this, except with Docker. Docker has a significant lead on competitors. People have little reason to adopt a competitor as a result.

OK, but presumably most of those people also aren't paying anything to Docker-the-business for using it right now. If they want to be worth billions, there will have to be real revenues sooner or later, and at that point whoever is paying does have an incentive to look at competitors. So at the risk of repeating myself, I'm still wondering what they have as a USP or other effective barrier to competitors stealing away their market share as fast as they built it up. Surely there must be something if investors are actually going in at this level, and I'm very curious to know what it might be.

Docker breaking shit all the time is probably a reason to adopt a competitor.

Right... I just don't see that being worth 10B USD I guess. I don't see how you come up with that number, and I assume this is where the "upside potential" needs to be.

In the case of AWS, I can see the advantage, you're renting a resource you need, rather than buying it. I guess I just don't see how you extract similar amounts of revenue from Docker...

Redhat is valued at ~1.3B USD. Github, currently at ~2B USD. How do you justify 10B USD for Docker? Obviously you can (they did), I just don't understand how this is done well. Does anyone have a better insight here?

Red Hat's revenues are a little short of 3 billion (though growing fast), and the market cap is about 15-20 billion (going by memory). That is with a full software stack going from IaaS to containers to application servers. Right now containers are the thing, but what if they become a commodity in 5 years? And can the Docker brand compete against the Google and Red Hat brands, which are also selling container management platforms (fully open source in the case of RH, with no enterprise-only features and no lock-in)? It seems like a very generous valuation.

Disclaimer: working at RH

Think about what Red Hat was 10 years ago. They were basically a company that burned a Linux distro onto a CD and shipped it to CompUSA, which sold it for $50. You never know exactly how everything will unfold, but just like we knew Linux was the future, I think it's becoming obvious that containers are the future as well.

No, that was Red Hat during the dotcom boom. 10 years ago, they had already acquired JBoss, and RHEL was at its fourth major release, RHEL 5 (the first was 2.1, in 2002, after which they stopped selling boxed sets). 10 years ago Red Hat was already highly profitable and had been growing for years. But as you point out, that only happened after a complete overhaul of the governance and business model.

In that respect, Docker is much more traditional than SuSE or Red Hat, and indeed it would be much harder for anyone else to replicate RH's "miracle" nowadays. [1] And that's exactly because RH is already there and applying its business knowledge to Docker's field.

[1] https://techcrunch.com/2014/02/13/please-dont-tell-me-you-wa...

> Redhat is valued at ~1.3B USD.

Redhat's market cap is $17 billion, so I'm not sure where you're getting that figure from. https://www.google.com/finance?q=NYSE:RHT

I guess I was looking at their total equity. But I guess you're correct and market cap would be a better number to look at.

Meh, I'm still not impressed by Docker in the sense that you still have to hack around with configuration management, env vars and so on to approximate what JNDI has provided for 20 years.

Because that's what Docker is: JEE for non Java platforms

> Now 2017+, I am starting to see containers become the defacto.

Why would containers become the de facto rather than something like Cloud Foundry which abstracts it away entirely? Docker is just a slightly less messy version of the Puppet, Salt, Chef Devops BS with added complications around networking.

>Does anyone have an insight into this?

I don't have insight but I'm guessing the valuation comes from the prospects of enterprise sales.[1]

Also, Docker (the company) isn't staying still. We have to assume they will evolve to add future upmarket services on top of Docker that companies will pay for. Investors would see the PowerPoint slides for those future products but we as outsiders don't.

Maybe an analogous situation would be the Red Hat multi-billion valuation even though Linux kernel itself is free and open source.

[1] https://www.docker.com/pricing

Their enterprise pricing is per node. So if enough companies sign on to this, every box/VM they put Docker on brings Docker Inc. revenue. I'm sure at large scale this will be more of a flat-rate negotiated deal ($XX,XXX per year), but still, that's a lot of potential cash on the table.

The risk of course is that the nobody will want to pay per-node and the community will just invest in the open source container ecosystem and replicate the Enterprise features with plugins and forks.

Still, the market might be big enough that they can become another Red Hat just based on support and stewardship revenue.

Becoming another Redhat wouldn't be enough to justify investment at this valuation though would it? They need an upside potential 5 to 10x higher than Redhat's current valuation.

Well, valuation is a gamble. You can try to justify the potential growth. The tech community is embracing the adoption and maturity of container-based CI/CD deployment. In the end, some number trick is done, a higher valuation is posted, and investors (new and old) go nuts. Some months later, the big players cash out most of their initial stake and move on to the hot new thing in town.

VMware did pretty well, though!

I think Docker would be quite happy being the 'next VMware', and as an ex-VMW person myself it's hard for me to fault that, too.

There are enough things that I, as a corporate user, find annoying about Docker to make me suspect some company will help fund a community fork. We are not quite there yet, but we will be if Docker keeps making changes that break us.

GitHub is only source control.

Docker is much more than dockerhub.

I'd guess the valuation is based on the orchestration and hosting solutions much more than the container engine.

They're competing with AWS and Google cloud on many fronts. And docker controlling the defacto only strengthens the argument.

Personally I wouldn't bet my money on their hosting solutions but I wouldn't ignore them either.

docker services like "docker cloud" or "docker hub" are a little less than perfect..

But so was docker to start with... Given that we're all using the docker cli, and coding automation against the docker remote API, they have a massive impact on a huge community.

Even if they aren't ahead right now, the game is still wide open.

The best I can offer is that MySQL sold nearly 10 years ago for 1B. Much like a commenter further up the thread, you probably don't know anyone that pays for MySQL (despite knowing many who use it), but they had a surprisingly decent customer base and revenue - and are also a large, profitable BU under Oracle - so not a meaningless valuation.

Valuations in private funding are made-up numbers. The main difference between public equity and private equity is liquidation preferences and interest. I don't know the deal terms, but it is likely that the investors expect to get their money back, plus some interest, with the potential for more upside if there happens to be a big exit.

They are likely not counting on it, as their returns are nearly guaranteed through the liquidation pref plus interest.

Employees of Docker just saw their chances of a big monetary exit cut dramatically with this funding round, since they are in last place.

It is an area that YC continues to be silent on, and it is a travesty of the startup world today.

A couple thoughts:

1 - As has been mentioned before, it's not really 1.3 Billion. If they have liquidation preferences, the value is much less.

2 - If they come in very late, growth investors may be ok with 3X or 5X.

Developer tools have never had a huge market. Honestly, what are the necessary products for a relatively small market of thrifty developers? Maybe an editor, a nice keyboard, hosted code repo, and some code analysis tools? It's not a huge market.

Containers are something else entirely. They'll probably be running on every device on the planet in one form or another in just a few years. Anytime you're on the forefront of something like that, there's money to be made.

Developer tools have never had a huge market, but the enterprise ops automation market is huge. Think of IBM - this is their bread & butter.

Investors must be betting that containers are going to be the next Java, XML, or virtualization in enterprise computing, and that by controlling that technology, Docker positions themselves for extremely lucrative enterprise IT support contracts.

I am normally a huge cynic when it comes to software valuations, but I think GitHub is likely to be very valuable. First, it's a service that I and other developers gladly pay for. Second, the network effects are very powerful. When GitHub becomes the place to find and store repos that you want to share, there is a lot of inertia to that. And it becomes a powerful way for developers to find each other.

At the late stage, investors usually want to see 3-10x vs 10-100x as a realistic range vs earlier stage (A round) investors.

Usually they also look for there to be a MUCH lower chance of going completely to 0 - more like 10-20% vs well over 50%.

And usually that's because the fundamentals of the business are starting to show (margin, cost of acquiring customers, customer churn and upsell, etc.)

Sometimes IP or assets add value/valuation as well, though.

In Docker's case, there is also probably a feeling that the asset (control of "Docker") is worth hundreds of millions.

In CloudFlare's case, it is millions of sites as users, and the ability (mostly untapped, I think) to monetize the data from that.

You pick the Valuation, I set the Terms…


It's pretty hard to. I got off the train. I wish them the best though!

My first reaction was that I was surprised it wasn't higher.

My second reaction was incredulity at how ridiculous my first reaction was.

Redhat has a market cap of around 15 billion, which gives a rough idea of the kind of value Docker could build with a purely service-based model for an open source product.

Redhat has a vast product suite now - they do a heck of a lot more than just RHEL, and a lot of it is "enterprise" stuff with juicy support contracts. Docker is a one-trick pony in comparison.

Wasn't RedHat a one-trick pony at one point? I don't think it's insane to bet that they'll be as successful as RedHat one day, and it seems these investors would like to bet on it.

I think (with the exception of OpenShift), most of RH's products are acquisitions. I can think of plenty of acquisitions Docker could make to take a similar path: Portainer, Kontena, some CI software (Drone, etc.). All that would kind of make Docker Inc a one-stop shop for startups (definitely) or whoever else wants everything from one company.

OpenShift is also an acquisition (OpenStack is not). [1]

What you say makes sense, but then Docker is also coming 10-12 years late to the game, and until it's profitable and has money in the bank it's hard to do many expensive acquisitions.

[1] https://www.redhat.com/en/about/press-releases/makara

OpenShift started before Makara with a simple code base called "OpenShift Express". The code acquired with Makara was never used in the hosted service or what became the open source project, although some ideas crossed over.

OpenShift 3 (current) has no code in common with any of the older space. It was a totally new platform built on top of Kubernetes and docker.

A startup can get the same stack and much more from AWS, Azure or Google Cloud. If it should bet on one company, it should bet on the cloud it's already using.

I wouldn't call it a one trick pony. Their docker cloud is pretty compelling in some ways.

They seem to be in a unique position to monetize with multiple models. Compared to something like Github, Docker and Docker Swarm have much more opportunity for large enterprise support contracts. Most companies feel that Docker is synonymous with containers, yet still don't fully understand the technology. Git already had high adoption years before Github was around, so developers were familiar with it (or another VCS) and didn't need to spend thousands of dollars to support a git repo.

On the other hand, I'm curious how much revenue DockerHub is bringing in and where they plan on taking it. That model seems closer to Github's. Will it become a new way to discover open source or even proprietary images, like how devs use Github?

My bet is that Docker develops a very healthy services business, offering consulting services to companies that wish to use Docker within their organization, on par with RedHat on Linux. With that, a $1.3b valuation is the tip of the iceberg.

Docker still has a long way to go in terms of local development ergonomics. Recently, I finally had my chance to onboard a bunch of new devs and have them create their local environment using Docker Compose (we're working on a pretty standard Rails application).

We were able to get the environments set up and the app running, but the networking is so slow as to be pretty much unusable. Something is wrong with syncing the FS between Docker and the host OS. We were using the latest Docker for Mac. If the out-of-the-box experience is this bad, it's unsuitable for local development. I was actually embarrassed.

Linux is by far the best docker experience. I've experienced the same pains as you and possibly more supporting devs on OSX. The osxfs excuses are all a bit hand wavy and the new cached stuff kludgy IMHO. NFS is supposedly much faster, https://github.com/IFSight/d4m-nfs, but has no notification facilities.

Apple, for their part, are way behind the curve on all this. Completely MIA. For the amount of developer dev-station share they have it's amazing what macOS users have to deal with.

Have you tried https://github.com/EugenMayer/docker-sync ? I've had slow FS issues with docker on mac. Integrating docker-sync doesn't take long and should help you out.

I haven't, but I recommended it to one of the devs and he claimed it 'crashed' docker. Need more investigation, but we shouldn't need tools like this; it should work by default.

Agreed, and to my knowledge this is a known issue (I'm sure I've seen chatter regarding the problem on the docker forums/github). Nonetheless, with docker-sync I've been able to leverage all the benefits of docker-compose whilst mitigating those slow mounted-FS issues, so for now I'm happy with it. At least you can configure it directly into your compose setup, so you don't need to spend time explaining it immediately when onboarding those new developers.

Docker for Mac has a default filesystem configuration that offers maximum consistency at the expense of performance. Unfortunately in some configurations that can result in "throw computer out the window" frustration, depending on the I/O pattern of your project.

As of Docker 17.06, 90% of use cases can safely change the settings and noticeably improve performance. See the documentation: https://docs.docker.com/docker-for-mac/osxfs-caching/#perfor...
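
Concretely, that mostly means adding `:cached` (or `:delegated`) to your bind mounts and keeping heavy directories like dependency folders in named volumes instead of on osxfs. A sketch — the service and path names here are made up for a Rails-style app:

```yaml
# Hypothetical docker-compose.yml fragment for Docker for Mac 17.06+
version: "3"
services:
  web:
    build: .
    volumes:
      - .:/app:cached        # container reads may lag the host slightly, but much faster
      - bundle:/app/vendor   # keep gems in a named volume, off osxfs entirely
volumes:
  bundle:
```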

A few things to note: I wasn't using Docker for Mac, but a custom VirtualBox setup that took over a month to perfect. When I last tested Docker for Mac, the networking was really slow then, but I took a gamble that it had been fixed by now.

I will give credit as to how easy it was to get the app running -- compose makes everything a snap. It's unfortunate that something is amiss with volumes/networking.

You should try dinghy. It replaces the default sync with a local NFS server and forwards fsevents. (I disabled its other features) https://github.com/codekitchen/dinghy

Monetizing open source directly is a bit challenging because you end up stuck in the same service model as everyone else. Which is basically to sell various support contracts to the fortune 100-500.

Forking a project into an enterprise (paid-for) version and limiting those features in the original open source version creates tension in the community, and usually isn't a model that leads to success.

Converting an open source project directly into paid-for software or a SaaS model is definitely the best route, as it reduces head count and allows you to be a software company instead of a service company.

Perhaps best captured by Github wrapping git with an interface and community and then directly selling a SaaS subscription, and eventually an enterprise hosted version that is still delivered on a subscription basis, just behind the corporate firewall.

Also of note is that Github didn't create git itself, and instead was built from a direct need that developers saw themselves, which means they asked "what is the product I want?" rather than "we built and maintain git, so let's do that and eventually monetize it."

I used docker for a while last year and attended Dockercon. I was really excited about it and thought it was going to solve many of my problems.

But with how complicated my stack is, it just didn't make sense to use ultimately. I loved the idea of it, but in the end good old virtual machines and configuration management can basically do most of the same stuff.

I guess if you want to pack your servers to the brim with processes and shave off whatever performance hit you get from KVM or XEN, I get it.

But the idea of the filesystem layers and immutable images kind of turned into a nightmare for me when I asked myself "how the hell am I going to update/patch this thing?"

Maybe I'm crazy, but after a lot of excitement it seemed more like an extra layer of tools to deal with more than anything.

The big thing with Docker containers is that you don't patch or update them.

You include patches in the build process that produces a patched image. You then tear down your containers and deploy the new image.

One of the main benefits is once you have a proper pipeline setup you just modify your Dockerfile commit it to git, the build happens automatically and then it's automatically deployed once the new image is checked in to the repo.

Now all your containers are "magically" patched. Imagine having to patch a CVE in each of your VMs. Quite different.
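
As a sketch (the base image and paths here are illustrative), the "patch" is just a rebuild:

```dockerfile
# Hypothetical Dockerfile: security updates are baked in at build time,
# never applied to running containers
FROM debian:jessie
RUN apt-get update && apt-get -y upgrade \
    && rm -rf /var/lib/apt/lists/*
COPY app/ /opt/app/
CMD ["/opt/app/run"]
```

Commit a change (or just trigger a rebuild), CI produces a new image, and the old containers are torn down and replaced with it.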

VMs are pets. Containers are cattle.

> VMs are pets. Containers are cattle.

That's the funniest yet most accurate down-to-earth analogy I've heard so far for the mentality of Docker/containers. While funny, it's actually practical to have an analogy when explaining to newcomers what Docker does to the mentality of server/service management. Thanks.

This is unfortunately the conclusion I came to. Virtual machines locally and in the cloud just don't seem to be that much worse in any facet, and Docker (containers in general) seem to be far more complicated.

I dunno.

Learning Ansible really well proved far more valuable to my work life. I integrate lots of stuff, stuff like OpenVPN, Asterisk, Freeswitch, stuff that can hardly be "contained" and still even work. Some stuff does not work in containers, and even if it does work, you're dealing with the weird filesystem mounts, "statelessness", locations of logs and other persistent data, the ethos of just using 1 process per container, and so on and so forth.

Then think about deploying some complex legacy apps that have upgrade paths that involve changing database schemas and running shell scripts to migrate data. How are you going to reasonably do that with docker? It just hurt my brain to think about it.

If you have the luxury of designing your app from the ground up and it's not too complicated of a stack, then I see how it is cool. But mostly if it saves you money on AWS bills or something...

See my comment above

I don't understand containers. First you go through great pains to share and reuse libraries. Then you make a copy of all the libraries and the rest of the system for each program!?

Others have discussed how docker uses a layered approach, and how two containers that share a base system will share most of the filesystem and memory.

The real power of containers comes with container orchestration (i.e. Kubernetes, Mesosphere, and OpenShift). By leveraging containers, container orchestration systems can provide high availability, scalability, and zero-downtime rollouts and rollbacks, among many other things. These things were hard before containers & container orchestration. By allowing containers to be moved between nodes in a cluster, one can generally achieve higher hardware utilization than with VMs alone (which is in itself a big improvement upon software on bare-metal hardware). All of this also leads to easier/better continuous deployment, as well. This, in turn, leads to easier testing, and greatly simplifies provisioning of hardware for new projects.

So, the benefits are:

  - Cheaper than VMs (through better hardware utilization)
  - More reliable, through HA load balancers
  - More scalable, through scalability load balancers
  - Better testing, through CI/CD enabled by containers
  - Faster application delivery by simplifying provisioning


  - Nobody ever used VMs the way containers are
  - A HA load balancer is not a container-specific concept
  - There is no such thing as a 'scalability load balancer'
  - You don't need a container to do CI/CD
  - It's not faster, it's slower, and it's not simpler, it's more complex

- Nobody ever used VMs the way containers are? Many people are using VMs to get better utilization of hardware by running multiple distinct apps on the same hardware. Containers do this, too. Container orchestration makes it really easy, allowing for even better hardware utilization than with VMs. You say that "nobody ever used VMs the way containers are", yet I've got multiple clients that deployed to Kubernetes in large part because of this very point.

- I never said an HA load balancer is a container-specific concept. I said that "these things were hard before containers". I stand by that statement. Containers and container orchestration make HA proxies really, really simple.

- A scalability load balancer is a load balancer in front of a service that monitors load and scales up, or down, the number of instances behind that load balancer. Again, container orchestration makes this really easy.
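
In Kubernetes terms, that's a Service in front of a Deployment plus a HorizontalPodAutoscaler. A sketch, where "my-app" is a placeholder deployment name:

```yaml
# Hypothetical autoscaler manifest for a deployment named my-app
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80   # replicas are added/removed around this load
```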

- No you don't need a container to do CI/CD. But containers make it much, much easier.

- It is faster. I can provision an entire cluster of machines in about 5 minutes on AWS or GKE. If I have an existing cluster to publish to, it's even easier--it's one line in my continuous integration config file. Container orchestration has a learning curve (I'm assuming this is why you say "it's more complex"?), but it is tremendously easier and faster to provision hardware for a project with containers and container orchestration when compared to provisioning actual hardware, or even provisioning VMs.

Sounds like a lot of this is confusing containers with an API to control instances in general. Most of this is already achievable with the APIs offered by EC2, azure, VMware, openstack, etc.

How is it slower to use containers than traditional machines? Provisioning for a container is basically just downloading an image and running it.

Sounds like you haven't really used containers at all.

This, and the fact that Docker in AWS creates several layers of NAT.

It's basically leaning on shitty software engineering to produce increased reliability in release engineering/operations.

The old model was a system would be running, and a lot of software components within that system would depend on each other, creating a web of dependencies. If one of the dependencies had a problem, it could bring down the whole system [in theory].

The new model is "simpler" in that every software component has its own independent operating environment, with [supposedly] no dependency on the others. In this way, if one dependency fails, it can do so independent of the system at large - the failed piece is simply replaced by a different, identical piece. In addition, the component environments don't store state or anything else that would be necessary in order to replace it.

We basically bloat up our RAM and waste CPU and disk space in order to be able to understand and support the system in a more abstract way, in the service of better availability, and also, better scalability.

How is this different from simply running a bunch of chroot'ed services on a system without containers? It really isn't. But you get more control over the software by adding things like namespaces and control groups, and by everyone using the same containers, more uniformity. By dealing with the annoyance of abstractions and bloat, we get more human-friendly interoperability.

What you're missing is that if two containers have the same base they will share all of those resources (both on disk and in RAM). Thus while each container is a complete system you can still fit many more containers on a given host than you could VMs (which don't share anything with each other).
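
For example, given two hypothetical services built from the same base:

```dockerfile
# service-a/Dockerfile and service-b/Dockerfile (both hypothetical):
# the ubuntu:16.04 layers and the apt layer below are stored on disk once,
# no matter how many containers run from either image
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y python
COPY service.py /srv/          # only this tiny layer differs per service
CMD ["python", "/srv/service.py"]
```

`docker history <image>` on each image should show that everything up to the COPY is identical.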

Hmm, well sometimes that is true. Because containers don't encourage stable APIs, another scenario is that each container has slightly differing versions of each library because of unnecessary API changes (or in the positive case, because they extensively test the program with a particular library version).

Containers are how we managed to have something like five or six versions of Python in our production environment.

How do you apply security updates to 'five or six versions of Python' without losing your mind?

You don't.

There's no good reason we should have had that many in production. We had three versions of the 2.X series and two versions of the 3.X series because of mixing-and-matching base images we used plus management deciding that we could do partial upgrades of Python version by upgrading a project at a time. (We switched from 2 to 3, which meant we had containers -- with different base images -- where we updated the 2.X version but not the 3.X version and containers where we updated the 3.X version but not the 2.X version. This gave us all kinds of mixes and matches of Python 2/3 versions.)

So I just hoped whoever was maintaining base images was actually maintaining their security patches, kept the versions we were intentionally using up to date during container construction, and it (mostly) just sort of worked out.

We're down to... 3 versions of Python and 3 base images. I'm trying to get down to 2 versions of Python (a 2.X and 3.X).

I can see two different versions being necessary (Python 2.x and 3.x), but is it really necessary to have versions other than the latest of each?

With block/page level deduplication you can achieve even better results for VMs (ZFS, KSM etc). Containers can run on such setup too.

VMs are irrelevant because containers are not VMs.

Seems relevant to me. People have been using VMs for situations where they can now use containers.

Immutability, and consequently determinism, are wonderful traits to embrace when you're managing deployments across environments and regions for dozens or hundreds of services.

And with testing...

I had trouble understanding the benefits of it as well. A container is a snapshot of a specific version of an application/daemon and the OS it runs on, all bundled into a deployable/runnable image. You create a container for every version of an app. Deploying or rolling back is almost as simple as deploying an image on any host that runs Docker.

There is also the fact that things stay inside a container (more or less). You want to remove mysql? No problem, just discard the container. Compare that to removing mysql and all its config files etc. from all over the place.

The underlying problem is things like mysql that scatter things all over the place. Docker only encourages such sloppiness, IMHO.

While it might be true that MYSQL being a bad roommate is the core problem, does that matter? As an admin you don't get to ignore the pager because the outage is Oracle's fault.

It's about creating and maintaining a clean environment. Conceptually they aren't that different from a VM, just much lighter weight.

Except that it's not clean environment. Make it a properly built DEB or RPM package, and then we're starting (but just starting) to talk about clean.

Note also that binary packages help with repeatable deployment that uses Docker, deployment tools (e.g. Ansible or Salt), configuration management tools (CFEngine, Puppet), even manual -- and mix of all these. Docker images only help with Docker deployment.

Well, we need to differentiate between application containers (Docker) and Linux containers (old: OpenVZ, new: LXD/LXC). Then there are application containers which are even more "limited," like Flatpak.

I'd argue that only Linux containers are like VMs.

Docker was an LXC wrapper in the past. Now it handles the cgroup stuff itself. You can run a full system in Docker, but purists regard this as an antipattern.

I'd prefer LXD / LXC anyway. Sadly, LXD porting to Debian is a slow process.

Docker users have essentially given up.

Nix[0] genuinely tries to solve the problem.

[0] https://nixos.org

An interview question I asked recently: what are the advantages of Docker over an application deployed as a statically linked binary?

I use Docker as a tool for creating statically linked binaries...

For example, here: https://github.com/mbrock/gf-static/blob/master/Dockerfile

Making static binaries is often extremely painful and confusing. It's easier on a distribution like Alpine, and nicely enough, Alpine comes as a Docker image.

In general, Docker's ability to basically spin up a whole Linux distribution, with a whole root file system, makes it very different from just using static binaries.

Along with Docker's image repository infrastructure, it makes some things easy that weren't easy before. Like, I don't know if there is a static binary build of the Erlang runtime system, and I don't know what kind of file system tree that system needs, but I just now opened an xterm and typed "docker run -it --rm erlang" and got an Erlang 9.0.2 shell.
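For the curious, the static-build trick looks roughly like this -- a sketch, assuming a multi-stage build (Docker 17.05+), where `hello.c` is a hypothetical stand-in for your program. Alpine ships musl, which makes `-static` linking straightforward:

```dockerfile
# Build stage: compile a fully static binary against musl.
FROM alpine:3.6 AS build
RUN apk add --no-cache gcc musl-dev
COPY hello.c /src/hello.c
# -static produces a self-contained binary with no runtime .so dependencies.
RUN gcc -static -O2 -o /src/hello /src/hello.c

# Final stage: copy just the binary into an empty image.
FROM scratch
COPY --from=build /src/hello /hello
ENTRYPOINT ["/hello"]
```

The `scratch` base at the end is the payoff: the resulting image contains nothing but the one static binary, which is as close to "just ship the binary" as Docker gets.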

The same benefits as deploying a statically linked binary in a VM, without the overhead of the VM.

What kind of answers do you expect/would like to hear?

I intend it to be a starting point for a conversation: someone who really understands it will be able to convince the skeptic that I play; someone who is just typing the commands because it's trendy won't. It also helps if they know what a statically linked binary is.

It is obvious from your comments that nobody will convince you of any merit Docker has. You are not "playing a skeptic".

I like containers as a concept, and I like the Docker commands for building them and running them locally. I am less convinced by Docker Swarm as a prod-grade runtime environment, and skeptical that Docker the company can build a viable business out of something that will rapidly become a commodity; I expect the file format will long outlive the company that created it. But those are nothing to do with the question, really, which can be answered purely technically.

Curious--how do you feel about Mesosphere and Kubernetes vs. Docker Swarm?

Big fan of DC/OS. Ambivalent about k8s, but maybe it's just not for my use cases.

Based on most interviews, he wants you to answer like he would answer or he wouldn't hire you.

Did you just assume my agenda?

Binaries are built for specific operating systems. E.g. a Linux binary can't run on Mac OS X, except with a VM or... a container. :)

So are containers...?

Indeed, Docker on OS X is actually running virtual machines through xhyve.

Immutable infrastructure. And there should be no pain involved if the CI/CD pipeline is automated correctly.

Docker generated value from the LXC project, aufs, overlay, btrfs, and a ton of other open source projects, yet few people know about these projects or their authors -- and, in the case of the extremely poorly marketed LXC project, even what it is, thanks to negative marketing by a Docker ecosystem hellbent on 'owning containers'.

Who is the author of aufs or overlayfs? Should these projects work with no recognition while VC-funded companies with marketing budgets swoop down and extract market value without giving anything back? How has Docker contributed back to all the projects it is critically dependent on?

This does not seem like a sustainable open source model. A lot of critical problems around containers live in places like the layer filesystems and the kernel, and they will be fixed not by Docker but by aufs, overlayfs, and kernel subsystems -- yet given that most people don't even know the authors of these projects, how will this work?

There has been a lot of misleading marketing around Linux containers right from 2013, here on HN itself, and one wishes there had been more informed discussion to correct some of this misinformation, but that didn't happen.

I was always amazed/impressed/annoyed that when Docker exploded in popularity it was a tiny pile of code plus a bunch of mature and mostly-mature pieces from the OSS community, and yet "Docker" was what everyone recognized from that equation... even though 95% of the code was something else. Possibly the most amazing thing was that Docker was so successful while doing less than LXC.

It was incredibly impressive marketing, no matter what else we might say about it. And, since then, of course, they've leveraged that early success into a real platform and they've since built a much bigger pile of things (some are even really good). It's kind of a testament to the wildly over-optimistic approach of Silicon Valley product-building, and maybe also kinda damning of the sort of pillaging that the process often involves.

I had my reservations about Docker in the beginning, for all the reasons I mentioned above, but I'd be unlikely to bet against them, honestly. And, they have contributed quite a bit to OSS. I've been poking around in golang, lately, and have found a lot of libraries in my area of interest created by the Docker folks. They're seemingly doing good things, though not on par with Red Hat, which is entirely an OSS company.

> Possibly the most amazing thing was that Docker was so successful while doing less than LXC.

This is where I agree with Peter Thiel's very succinct (although somewhat philosophical) definition of the term technology in "Zero To One": it allows us to do more with less.

Sometimes, merely re-imagining a better way to use existing tech could unlock new kinds of productivity previously unimagined. One of the best examples of this is the iPhone.

>Who is the author of aufs or overlayfs? Should these projects work with no recognition while VC funded companies with marketing funds swoop down and extract market value without giving anything back. How has Docker contributed back to all the projects it is critically dependent on?

Here's an excerpt from a Docker blog post from 2013[0]:

>>Under the hood Docker makes heavy use of AUFS by Junjiro R. Okajima as a copy-on-write storage mechanism. AUFS is an amazing piece of software and at this point it’s safe to say that it has safely copied billions of containers over the last few years, a great many of them in critical production environments. Unfortunately, AUFS is not part of the standard linux kernel and it’s unclear when it will be merged. This has prevented docker from being available on all Linux systems. Docker 0.7 solves this problem by introducing a storage driver API, and shipping with several drivers. Currently 3 drivers are available: AUFS, VFS (which uses simple directories and copy) and DEVICEMAPPER, developed in collaboration with Alex Larsson and the talented team at Red Hat, which uses an advanced variation of LVM snapshots to implement copy-on-write. An experimental BTRFS driver is also being developed, with even more coming soon: ZFS, Gluster, Ceph, etc.

I think the real nature of your complaint is that you want the recognition to be in the form of cash.

[0] https://blog.docker.com/2013/11/docker-0-7-docker-now-runs-o...

I agree with him, usually supermarket cashiers don't accept blog posts.

Docker is swimming in cash for some reason only people in SF understand.

In the meantime, LXD is developed by a single guy...

Is this really that hard to understand? Docker has marketing, developer evangelists, etc. LXD is making software; Docker is selling a product.

I would add the cgroups concept in the Linux kernel in general. You can run Docker images with a small bash script and systemd-nspawn alone.

It's namespaces. Cgroups can be used to limit any Linux process and do not have anything in particular to do with containers. Userland container managers use them mainly to limit CPU and memory resources for the container's processes.

It's kernel namespaces that enable containers and allow one to launch a process in its own namespaces (there are 5 namespaces containers use).

LXC/LXD containers launch an init in this process, so you get a more VM-like environment. Docker does not launch an init, so you run a single app process, or a process manager if you need to run multiple processes. systemd-nspawn is similar to LXC.
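The distinction is easy to see on any Linux box: the kernel exposes each process's namespace membership under /proc, and a container is just a process whose entries point at different namespaces than the host's. A minimal sketch (needs nothing but a Linux shell):

```shell
# Every process lists its namespace handles under /proc/<pid>/ns.
# Two processes in the same namespace show the same inode in the link target.
ls /proc/self/ns

# Show which mount namespace this shell lives in; inside a container
# this symlink target would differ from the one seen on the host.
readlink /proc/self/ns/mnt
```

On a stock kernel you'll see entries like mnt, pid, net, ipc, and uts (plus user and cgroup on newer kernels) -- those are the namespaces the comment above is referring to.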

There's a couple of things in this article that I don't think are true. I don't think Ben Golub was a co-founder of Docker. Maybe he counts as a co-founder of Docker but not of Dotcloud? That seems a bit weird though. I also am pretty sure Docker's headquarters are in San Francisco, not Palo Alto.

You're correct with both assertions. As someone who lives in Palo Alto and works for Docker, I can only wish that it were headquartered there.

Cofounder is a title, like any other. It doesn't just refer to "I was there at the start of the corporate entity". If they're calling Ben a cofounder, then they certainly mean it. You're right that he wasn't there at the start of Dotcloud, but that doesn't mean that he didn't have (or didn't earn) the title of cofounder of Docker. Ben's LinkedIn has "co-founder" on it, so it certainly wasn't the article making a mistake.

So then Ben might say something like: "I co-founded Docker. I was hired 2 years after it was founded." And that wouldn't give you pause? It seems to me we're divorcing the word "founder" from the concept it derives from, which is the act of founding. I suspect that, should this become common practice, it will quickly devalue the title of co-founder. The only reason it has the allure that it does is because, in most people's minds, it's not a title like any other, since for most people the only way to be a founder is to found.

All this being said, I do agree that this wasn't the article's mistake. I suspect that if you asked the author whether he's under the impression that Ben was present when Docker was founded, he'd say yes, so in some sense he's been misled. That's what most people take from the term "founder," I believe.

Cofounder is sort of nebulous, it doesn't carry any official legal authority and isn't an "officer" of the corporation in most senses, though it has a lot of cachet in SV.

I view it as a relatively cheap bargaining chip that can burnish a person's reputation, but has little descriptive value in terms of what someone actually did to move the business forward.

I disagree. You're not exactly handing the title out willy-nilly. If they're calling Ben a cofounder, I'll bet that has significant meaning to them.

Sure, but it's a lot different from being a CEO, shareholder, member of the board, or high-level corporate officer. "Founder" is something you can bestow retroactively, not so much with "CEO".

And I'm seeing all sorts of companies hand out titles like "Founding Engineer", "Founding Developer", etc. with the same chump-change 100k/0.1% option packages as before.

I worked with LXC since 2009, then personally built a cloud provider agnostic workflow interface superior in scope to Docker in feature set[1] between about 2013-2014 as a side project to assist with my work (managing multi-DC, multi-jurisdiction, high security and availability infrastructure and CI/CD for a major cryptocurrency exchange). (Unfortunately I was not able to release that code because my employer wanted to keep it closed source, but the documentation[2] and early conception[3] has been online since early days.) I was also an early stage contributor to docker, providing security related issues and resolutions based upon my early LXC experience.

Based upon the above experience, I firmly believe that Docker could be rewritten by a small team of programmers (~1-3) within a few months.

[1] Docker has grown to add some of this now, but back then had none of it: multiple infrastructure providers (physical bare metal, external cloud providers, own cloud/cluster), normalized CI/CD workflow, pluggable FS layers (eg. use ZFS or LVM2 snapshots instead of AUFS - most development was done on ZFS), inter-service functional dependency, guaranteed-repeatable platform and service package builds (network fetches during package build process are cached)...

[2] http://stani.sh/walter/cims/

[3] http://stani.sh/walter/pfcts/

It effectively has been.

Oracle, Sun, FreeBSD, RunC, OCI

None are _as popular_ as Docker. But many offer more features, or a flatly superior product.

Docker _isn't good_. Docker is popular. Actually, their constant breaking of compatibility makes me wonder why everyone continues to tolerate it.

> Docker _isn't good_. Docker is popular.

Which means a popularity-based valuation is a house of cards.

Maybe. But there's also something to be said for generating popularity and mind share intentionally. It seems Docker has been very good at that part (and pretty good at the technical side too).

Many wonderful, technically-superior-than-alternatives projects languish and die in obscurity, while its inferior counterpart wins the popularity/marketing contest.

The businesses that win in that way (VHS vs. Betamax) do so because there is money flowing.

It is unclear whether Docker really has much income, and the enterprise software projects that perhaps make up most of it are frequently rewritten with new toolsets.

Therefore, it is uncertain there will be commercial traction in 5 years... particularly because docker has (according to this thread) already lost the open source feature set battle (targeting a demographic which, incidentally, tends to shun marketing).

Hence, house of cards.

I wish people would stop talking about valuation this way, emphasizing the bullshit headline valuation.

The reality is (speculating) that they probably issued a new class of stock at $x/share, and that class of stock has all kinds of rights, provisions, protections, etc. that the others don't, which may or may not have any bearing whatsoever on what the other classes of shares are worth.

The guy who came up with chroot in the first place is kicking himself.

By analogy, there's a lot more to a Model T than an internal combustion engine.

Do they actually have any significant revenue? I love developer tools companies, but there are several tools upstarts that have no proven business model. They look like really bad gambles in terms of VC investment, unless you can get in early enough to unload to other fools.

I don't know their numbers, but it has been reported that their 2016 revenue was north of $10m, and that their 2017 revenue is more than 2x that. This is just a guess, but I'd guess that they are seeing >300% year-over-year revenue growth, and that they're projected to see >$50m in 2017. I would guess 20x current revenue, so a $1.3b valuation would mean roughly $10m per month in July 2017 (starting from $2-3m in Jan 2017). If this is accurate, I'd be glad to invest in Docker at a $1.3b valuation.

That sounds very pie in the sky; I can't imagine them clearing $10m a month.

Check out Chef's https://habitat.sh for one fresher take on all this. It moves the containerization approach closer to something that feels like Arch Linux packaging, with a pinch of Nix-style reproducibility. Looks very promising at this point, even if a bit rough on the edges still.

As someone who witnessed the 2000 tech bubble pop, I feel like Bill Murray in Groundhog Day, except unfortunately this time it's not just tech. It's going to end very badly.

Docker is funded by In-Q-Tel

The shadow government is containerized.

I'm very curious what your concern is in this? I understand the CIA interest and value in PayPal, Facebook, and Google. But what do they gain from Docker, other than as a simple investment? Backdoors or something?

I have no idea if the parent is correct or what, but just for the sake of argument I can see one issue here.

Docker Hub is a massive centralized store of software. If Docker becomes The Way to deploy infrastructure and services, Docker Hub could theoretically trojanize the world.

It would be a way of getting the kind of leverage over the Linux world that MS and Apple offer for their respective ecosystems.

You can run a private registry and build your own images.

Unikernels are a much better solution to the problems that Docker solves.

I'm not sure unikernels are a better solution. The technology has a lot of promise, but the tools are not there yet. For instance, running MySQL as a unikernel is possible, but it's much more difficult than with a container.

So "better" is quite subjective... more secure? Sure. Easier to deploy and build? Not even close.

That's a lot of money for a static compiler.

why are they called Software Maker?

Non-tech people have no idea what Docker is. Still, even a lot of the less up-to-date people know of it, and I suspect vanishingly few people (relative to something like GitHub) really understand it beyond the mechanics of the commands.

Also, the intentional reuse of shipping verbiage demands disambiguation.

I've had to use Docker before, and I still have no idea what it is. It's true that they love to say "virtualization" and "containerize" the same way the media loves "disavow" and "recuse".

In the case of Docker, containers = prepackaged, ready-to-use, out-of-the-box software. Basically, you run some software (e.g. MySQL) inside a Docker container and it comes with everything it needs preconfigured and installed (e.g. libraries, configuration, ...). This makes it easy to:

- Deploy to many places having the same environment

- Update these programs easily (you just need to download and run a new container image)

- Clean up easily by discarding the container

I should probably also add that Docker is also a community for sharing these application images (containers).

I love that comparison!

So they aren't confused with "Pants Maker"?
