
Docker closes $40M Series C led by Sequoia - yla92
https://blog.docker.com/2014/09/docker-closes-40m-series-c-led-by-sequoia/
======
sillysaurus3
This means Sequoia expects Docker to either go public or to be acquired for at
least (10 * $40M / sequoia_ownership) in order to be considered a "win,"
right? (A "win" in the sense of being worth the VC's investment, not in the
sense of being valuable to the world.)

The reason I say this is because a VC who merely breaks even on investments
will eventually go out of business, so it would be a mistake to invest unless
the expectation is that Docker might be a win for them.

Assuming Sequoia owns, say, 35%, then that comes to an expected acquisition
price of about $1.15B for Sequoia to earn 10x their money back.
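
The back-of-the-envelope math here can be sketched in a few lines (the ownership percentages are hypothetical, as in the comment above):

```python
# Back-of-the-envelope VC return math: to return `multiple`x on an
# investment, the exit value must be at least
#     exit = multiple * investment / ownership_fraction
investment = 40_000_000  # the Series C check

def required_exit(ownership, multiple=10):
    """Exit valuation needed for a stake to return `multiple` times the check."""
    return multiple * investment / ownership

# At the hypothetical 35% stake:
print(f"${required_exit(0.35) / 1e9:.2f}B")  # $1.14B
# At a more typical late-stage 10% stake:
print(f"${required_exit(0.10) / 1e9:.2f}B")  # $4.00B
```

(The ~$1.15B figure above rounds the same calculation.)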

What are some hypothetical scenarios which end with Docker going public? What
are some scenarios where a company would acquire Docker for north of $1B?

I'm not trying to imply anything about Docker with these questions.
Personally, I love Docker. It's just fun to theorycraft.

~~~
ig1
Generally the multiple expected is lower for later stage rounds as there's
less risk. Also Sequoia wouldn't have got anywhere near 35%.

That said, the investors would certainly be looking for a >$1bn exit.

~~~
nickstinemates
> Also Sequoia wouldn't have got anywhere near 35%.

Bingo!

I don't think anyone would be interested in taking a round that would dilute
us this much, especially when the need for money is not pressing.

~~~
sillysaurus3
The smaller the percentage, the higher the expected acquisition price. For a
10% ownership stake from a $40M investment, the acquisition would need to be
$4B just to earn 10x their money back.

I was just wondering about some scenarios where Docker could achieve that kind
of price.

~~~
nickstinemates
You haven't taken into consideration an important variable - how much of the
40M is invested by Sequoia vs. pro rata by existing investors.

~~~
benologist
Why does that matter? The other investors are hoping for returns too.

~~~
benmathes
preferred vs common, participating preferred vs common (i.e. liquidation
preference), possible pro-rata rights (though at this stage further
financings are probably unlikely)

disclaimer: I work at greylock, who was an earlier investor, though have
_zero_ inside knowledge of the deal as I work on unrelated stuff.

------
weavie
How are Docker monetizing their product? Is it just hosting and support? By
open sourcing Docker they have opened the door to hundreds of competitors
offering the same thing, often at a much lower cost. Is their only
competitive advantage the fact that they own the project and thus understand
it better and can dictate its course?

I'm sure they would make for a very interesting case study on how to do open
source right.

~~~
nickstinemates
I'll paraphrase the questions and provide answers as best I can.

> How are Docker monetizing their product?

We offer commercial support for Docker and also offer paid features on Docker
Hub.

> Competition?

Docker is Apache 2 licensed. Anyone can fork Docker and start monetizing it
tomorrow in a completely different way than we had anticipated.

This is actually a good thing. There's a separation between the Docker
project and the company, and there's a virtuous cycle: the company aligns
itself to the objectives of the project, and the project benefits from all of
the business-y things you can do.

> Competitive Advantage

We don't own the project, the community does. We fundamentally believe the
value of a platform or ecosystem is proportional to the amount of competition
it brings to everyone. As per above, we have no interest in locking in a
competitive advantage on the Docker project that only we can benefit from.

> Doing Open Source Right

We have a long ways to go! We're certainly trying something new - but we've
gathered a lot of momentum, and our focus is simply to continue working with
the community and our great partners, and to work diligently to deliver great
product and support to our users.

~~~
peterwwillis
That is one way of putting it. Another way is that you're creating a new de
facto standard and making sure everyone needs to use it. The partnerships you
make build your product into other products, making it the default option for
anything someone might need to do with containers. Then you increase the
visibility of the product (and thus the company) by getting lots of PR and
making sure VCs and potential customers read it.

But competition has nothing to do with open source. Source code is not a
competitive advantage, even if you get minor quality improvements like
increased code visibility and test coverage. No open source company has ever
forked code from an existing product, started a competing business, and stolen
business away from the originator. Companies that provide services on top of
other people's code, however, often fall victim to a better sales pitch,
custom-tailored services, or a shift in direction.

And honestly, the idea that 'the community' owns the Docker project is a joke.
Is the community getting 40 million dollars? Is the community making the
design decisions for the product? Is the community pushing the integration of
your tool with other companies and services? As far as I can see, you have a
company based on a product, and you give that product away because it costs
you nothing to do so. Open Source is a marketing tool, and a great one at
that.

~~~
nickstinemates
I'm glad you bring up partners! They're what I personally focus on 24x7, so
have a lot to say on the topic.

> The partnerships you make build your product into other products, making it
> the default option for anything someone might need to do with containers.

I think you can view some partnerships through that lens, but as a whole I do
not believe this statement holds at all.

My #1 partnering goal is to make sure that the interest that exists in Docker
can be realized on the services and products that people are using today.
You'll never see us form a partnership that implies, whether directly or
indirectly, that the only proper way to use Docker is in combination with
partner technology X.

I think you could also view projects like libcontainer, which is written by
some of the maintainers of Docker, and understand that it's being used by
other projects not related to Docker at all. In some cases, even by
_perceived_ competition (like Pivotal.)

> Then you increase the visibility of the product (and thus the company) by
> getting lots of PR and making sure VCs and potential customers read it.

It is important to highlight the reasons we make these partnerships - I can
assure you, it's not to get VC attention. That's completely short-sighted and
unsustainable.

As best we can, we deflect visibility from the project onto others, big or
small, doing great things with Docker.

> And honestly, the idea that 'the community' owns the Docker project is a
> joke.

I'm not laughing. Maybe you're not familiar with how the Apache 2 license
works. I'd get familiar with that. Link:
[http://en.wikipedia.org/wiki/Apache_License](http://en.wikipedia.org/wiki/Apache_License)

> Is the community making the design decisions for the product?

Yes. The project's design is open. There is no privileged discussion about the
Docker project - it happens all in the open on GitHub and IRC. If there's a
specific area of conversation that requires in-depth collaboration, we sponsor
people to meet in person. This happens regularly.

> Is the community pushing the integration of your tool with other companies
> and services?

Yes. Red Hat is a perfect example. Pre-0.7, Red Hat customers could not use
Docker because 1) AUFS was not available on the platform, and 2) Docker was
not supported on Red Hat. So anyone using Docker on Red Hat at that time was
breaking their agreement. That's a problem.

> As far as I can see, you have a company based on a product, and you give
> that product away because it costs you nothing to do so. Open Source is a
> marketing tool, and a great one at that.

We can argue the relative advantages and disadvantages of free vs.
commercially licensed software all day. You can write Docker off as a sheer
marketing ploy, but I'd say that's a pretty disingenuous statement to make at
an individual level of the company.

I'll also say, the trade-off does not come without cost. It's not even close
to free.

------
davidw
How do people actually use Docker?

In my world of bootstrapped, smaller apps looking for market traction, even if
things go well, a few Linodes should be enough to handle most of the traffic
I'll ever need to deal with, so this kind of thing is kind of foreign to me.
I'm curious how people utilize it in practice.

~~~
csirac2
I (or rather, Jenkins) build all my software in it; I don't actually use it
for containerizing final applications.

In a nutshell, for me the value is in trivial repeatability. I can reproduce
the entire build toolchain, test environment and produce artifacts all from a
few KiB git repo which centres around the Dockerfile and submodules to
dependencies.

Some of my ARM stuff takes hours to cross-compile and normally involves
enormous amounts of fiddly babysitting. Dockerfiles have RUN statements
(think lines of a shell script) which are cached. My adjustments toward the
end of a Dockerfile take only seconds to test and produce the exact same
result as if each statement had really been run from the start. That doesn't
sound like much, but it turns out (for me) to be pretty liberating compared
to constantly fighting other automation, where you have to dance around
short-circuiting stuff to reuse bits of a past build to save time, and get
only a handful of "pristine" iterations in a day (which might differ from the
iterations you rolled by hand).
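
As a sketch of that caching behaviour, a hypothetical cross-compile Dockerfile might look like the following (the package names, tarball, and script are illustrative, not the commenter's actual setup):

```dockerfile
# Illustrative only: each RUN line becomes a cached layer.
FROM debian:wheezy

# Toolchain layers: these take minutes the first time, but are replayed
# from cache on every rebuild as long as nothing above them changes.
RUN apt-get update && apt-get install -y build-essential git ccache
RUN apt-get install -y gcc-arm-linux-gnueabihf

# Project sources.
ADD project.tar.gz /build/

# Tweaking only this final step re-runs just this layer; the result is
# identical to a from-scratch build, but iteration takes seconds.
RUN cd /build/project && ./cross-compile.sh
```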

~~~
voltagex_
I'd be super interested if you could share any of your ARM cross-compilation
Dockerfiles.

~~~
csirac2
They've got a lot of idiosyncrasies at the moment, some of them working
around the fact that ADD some/directory/ could never be cached (so my build
scripts maintain some.directory.tar.gz and those are ADDed instead), but I
really should. I guess I'd put it under my GitHub profile (I'm also csirac2
there).

There are two types of ARM builds: those which can cross-compile and those
which can't. The ones which can't cross-compile are done with qemu-binfmts:
we chroot into an ARM filesystem and run the build there.

Perhaps the only useful contribution would be the fact that I persist the
ccache up to the docker host with a shared ccache volume, and that helps
enormously especially for the qemu-binfmts builds which can be quite slow.

~~~
voltagex_
Where does your original (non-qemu) toolchain come from? Is that built inside
or outside of a container?

~~~
csirac2
(em)debian provides nearly enough of what I need most of the time. The
trigger for going to an ARM chroot is when I can't get build dependencies
installed properly on an amd64 host. Either that or the thing I'm compiling just isn't
developed to be cross-compiler friendly and it's too much work to hotwire it
to be so.

For example, say I need libfoo, I have an amd64 host (being the docker
container). Sometimes I just can't get the libfoo:armhf or libfoo-dev:armhf
package installed because it would break/conflict with the amd64 host's
version of it in some way. xapt often helps, but sometimes screws up by
re-packaging something that has an "all" (non-arch-specific) arch as
something armhf-specific (e.g. foo-data). This ultimately either conflicts with the host
or fails to be named properly in such a way that it meets the build-deps of
the project.

I know it would sometimes be easier to avoid the Debian packaging ecosystem,
but for my workflow and distribution requirements it brings a lot of
benefits.

Edit: see here
[https://wiki.debian.org/EmdebianToolchain](https://wiki.debian.org/EmdebianToolchain)

------
valarauca1
Went to read this blog and found out blog.docker.com doesn't support TLS 1.2,
and only has one available cipher suite:

    TLS_RSA_WITH_RC4_128_SHA

Which is cool, because RC4 is broken.

docker.com itself does support TLS 1.2, but their blog subdomain doesn't :\
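
For anyone wanting to sanity-check this kind of thing from code, Python's ssl module (3.6+) can at least confirm that a modern client context no longer offers RC4 at all (checking a specific server still needs something like openssl s_client):

```python
import ssl

# Build a default client context and list the cipher suites it offers.
ctx = ssl.create_default_context()
offered = [c["name"] for c in ctx.get_ciphers()]

# RC4 is broken, so no RC4 suite should appear on a modern OpenSSL build.
rc4_suites = [name for name in offered if "RC4" in name]
print(rc4_suites)
```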

~~~
ewindisch
Thank you. The blog is on different infrastructure than our website and the
DockerHub. We'll look at this pronto! If you discover any other security
issues or concerns, please send them to security@docker.com.

~~~
valarauca1
Yes and thank you for responding quickly to my email.

------
brianbreslin
Can someone explain Docker in layman's terms and juxtapose it against
something I already understand (AWS, perhaps)?

~~~
taylorbuley
In the aughts, virtualization developed as a software layer that abstracts
physical hardware and provides so-called "virtual machines" which, instead of
working exclusively with dedicated hardware, function as a group and share
resources as a pool.

Docker provides one more layer of abstraction and grouping where a machine
(or, commonly, a virtual machine) abstracts its resources in order to provide
them to thread-like "containers." These containers share their resources with
other containers running on a given host.

A docker container, written like a spec into a `Dockerfile`, is a way to
package your application as if there was a `run.sh` that would install your
OS, any dependencies and your application itself -- and, importantly, run that
application after everything is installed. The host can choose to surface to
the world any ports on the running container, or keep them private to itself.
The container draws from the host's pool of resources so long as your
application continues to run inside the "thread" managed by docker.
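
That "run.sh" analogy maps almost line-for-line onto a minimal Dockerfile; the base image, app file, and port below are hypothetical:

```dockerfile
# The "OS" layer to start from.
FROM ubuntu:14.04

# Install dependencies.
RUN apt-get update && apt-get install -y python

# Add the application itself.
ADD app.py /srv/app.py

# Declare a port the host *may* surface; the host still decides
# whether to publish it (e.g. docker run -p 8000:8000).
EXPOSE 8000

# Run the application once everything is installed.
CMD ["python", "/srv/app.py"]
```

Building and running it would then be `docker build -t myapp .` followed by `docker run -p 8000:8000 myapp`.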

------
golubbe
Docker's blog post on this [https://blog.docker.com/2014/09/docker-
closes-40m-series-c-l...](https://blog.docker.com/2014/09/docker-
closes-40m-series-c-led-by-sequoia/)

------
a3049073
I tried Docker for a little while, just to see what it is. It seems like the
authors have never used UNIX before: nonstandard argument format, some
strange formatting in the manual page. And the idea of downloading random
software from strangers on the internet and running it on your machine creeps
me out as well.

~~~
wereHamster
Of course nobody is running random images from the internet in production. You
build your own. Building custom images is not rocket science.

But to get started with Docker it's incredibly easy to download an image and
have something running within minutes.

~~~
ewindisch
I run "random" images. However, I do so by only running automated builds with
source code I can (and do) first audit. Some of those images, by the way, are
official and provided by the application developer / org.

------
steeve
Congrats to all the team!

~~~
nickstinemates
Thank you for being a big part of the community, specifically around
boot2docker. You rock!

------
sz4kerto
It's very interesting to see that Docker is getting so much recognition,
money, and success -- while the real core of this thing, LXC, is rarely
mentioned, and its authors are not part of this huge success.

~~~
ad_hominem
Docker doesn't use lxc by default anymore (although you can still run the
docker daemon with it if you want). It uses libcontainer, which they wrote:
[https://github.com/docker/libcontainer](https://github.com/docker/libcontainer)

~~~
sz4kerto
Good to know, thanks.

------
indielol
Docker is one of those FOSS projects that I always want to actively
contribute to, but I don't, since they seem to be doing great without any
help.

~~~
nickstinemates
Someone made a contribution earlier today as small as adding a carriage return
in an RST file. That is extremely appreciated by everyone.

I'd encourage you to jump in. The IRC channel is fairly active and there's a
ton of places to get started at all experience levels. Let me know if you need
help.

~~~
jakehow
The stewardship of community contributions seems particularly poor in Docker.
It's one of the more frustrating projects I have tried to help with, and it's
really the only red flag. The promise of Docker is awesome.

See this thread (and previous discussions around the issue) begging for docker
core to participate and getting nowhere for ~1 year:
[https://github.com/docker/docker/issues/7284](https://github.com/docker/docker/issues/7284)

Is there an outline somewhere on your plan for governance and stewardship for
community contributions, how proposals move through the pipeline, and whether
anyone outside of Docker, Inc has the commit bit?

~~~
nickstinemates
I'm sorry to hear you've had a frustrating experience. The governance model
is outlined in the CONTRIBUTING.md file in the repository, which should
address your concerns.

There are project maintainers that are not on the Docker, Inc. payroll, and
getting anything committed requires the approval of at least 2. We actually
consider this a litmus test for our involvement with the ecosystem, and it's
fundamentally a great thing.

As for the issue at hand, I personally understand the desire on both sides.
As a simple example, I have been frustrated multiple times by not being able
to have multiple Dockerfiles per repo. On the other hand, providing strict
guarantees about context ensures true portability of Dockerfiles.

What I will say is that this is a topic we talk about a lot, whether on the
issues themselves or in IRC. It's tough to get the balance right.

~~~
jakehow
Docker, Inc employees have popped in to make an offhand comment at various
points, similar to the one you made ("providing strict guarantees about
context ensures true portability of Dockerfiles"), but actual participation
is nonexistent.

This has been the case on other issues I have seen as well, either things
languish, or they get magically swept into the project, with the decision
happening elsewhere.

Stewardship would mean actually explaining the position above, and discussing
with the community how the issue affects them in order to gain an
understanding of what we are talking about.

Yelling "repeatability" with no context and then disappearing is pretty
frustrating.

~~~
shykes
Jake, you're being disingenuous. Back in April I gave a detailed explanation
[1] as to why I hesitated to make the proposed change. When comments kept
rolling in, I followed up in June with a possible solution and a conclusion
that "if somebody contributed this, we would love to merge it" [2]. To my
knowledge nobody has.

I know for a fact that you are aware of this since my comment was in direct
response to you.

And since comments are _still_ rolling in (even though there is already an
open call for contribution, with a pre-approved design), I am focusing on it
again this week [3].

There are definitely lots of growing pains in how we run the project, and
having any participation at all from you is super appreciated. But the picture
you paint here is unfair and inaccurate.

[1]
[https://github.com/docker/docker/issues/2112#issuecomment-39...](https://github.com/docker/docker/issues/2112#issuecomment-39763037)

[2]
[https://github.com/docker/docker/issues/2112#issuecomment-47...](https://github.com/docker/docker/issues/2112#issuecomment-47448194)

[3]
[https://github.com/docker/docker/issues/7284#issuecomment-55...](https://github.com/docker/docker/issues/7284#issuecomment-55849748)

------
notacoward
I wonder how much of this was just a way to rearrange who owns how much, ahead
of the inevitable acquisition.

------
droob
"the money helps show the market that the company has stability"

Free money from some dudes unrelated to the company's business really
shouldn't indicate "stability", should it?

------
markokrajnc
This will help a lot with big customers deciding whether to use Docker,
because Docker now has longer-term stability and support.

------
dschiptsov
'Hot' here should be interpreted as a new, fresh, popular meme and buzzword -
'cool stuff for the cloud - orchestration, you know'.

Well, in this way it is hot indeed.

~~~
jacques_chester
Docker doesn't do orchestration and doesn't provide a PaaS.

I imagine they'll try to grow in that direction because their customers will
hanker for it, but (and I'm biased here because I work for a PaaS developer)
they'll find that building automagical distributed platforms is hard. _Very_
hard.

Edit: from the blog post -- it looks like moving up into PaaS is their
intention.

~~~
teabee89
Why would Docker sell their dotCloud platform if they wanted to stay in the
PaaS business?

~~~
jacques_chester
God I hate it when people make excellent points directly underneath my
remarks.

------
jister
yep they are hot as in overhyped

------
mrwizrd
Here's a copy of the blog post for anyone having trouble reading.

Today is a great day for the Docker team and the whole Docker ecosystem.

We are pleased to announce that Docker has closed a $40M Series C funding
round led by Sequoia Capital. In addition to giving us significant financial
resources, Docker now has the insights and support of a board that includes
Benchmark, Greylock, Sequoia, Trinity, and Jerry Yang.

This puts us in a great position to invest aggressively in the future of
distributed applications. We’ll be able to significantly expand and build the
Docker platform and our ecosystem of developers, contributors, and partners,
while developing a broader set of solutions for enterprise users. We are also
very fortunate that we’ll be gaining the counsel of Bill Coughran, who was the
SVP of Engineering at Google for eight years prior to joining Sequoia, and who
helped spearhead the extensive adoption of container-based technologies in
Google’s infrastructure.

While the size, composition, and valuation of the round are great, they are
really a lagging indicator of the amazing work done by the Docker team and
community. They demonstrate the amazing impact our open source project is
having. Our user community has grown exponentially into the millions and we
have a constantly expanding network of contributors, partners, and adopters.
Search on GitHub, and you’ll now find over 13,000 projects with “Docker” in
the title.

Docker’s 600 open source contributors can be proud that the Docker platform’s
imprint has been so profound, so quickly. Before Docker, containers were
viewed as an infrastructure-centric technology that was difficult to implement
and remained largely in the purview of web-scale companies. Today, the Docker
community has built that low-level technology into the basis of a whole new
way to build, ship, and run applications.

Looking forward over the next 18 months, we’ll see another Docker-led
transformation, this one aimed at the heart of application architecture. This
transformation will be a shift from slow-to-evolve, monolithic applications to
dynamic, distributed ones.

SHIFT IN APPLICATIONS

As we see it, apps will increasingly be composed of multiple Dockerized
components, capable of being deployed as a logical, Docker unit across any
combination of servers, clusters, or data-centers.

DISTRIBUTED, DOCKERIZED APPS

We’ve already seen large-scale web companies (such as GILT, eBay, Spotify,
Yandex, and Baidu) weaving this new flexibility into the fabric of their
application teams. At Gilt, for example, Docker functions as a tool of
organizational empowerment, allowing small teams to own discrete services
which they use to create innovations they can build into production over 100
times a day. Similar initiatives are also underway in more traditional
enterprise environments, including many of the largest financial institutions
and government agencies.

This movement towards distributed applications is evident when we look at the
activity within Docker Hub Registry, where developers can actively share and
collaborate on Dockerized components. In the three months since its launch,
the registry has grown beyond 35,000 Dockerized applications, forming the
basis for rapid and flexible composition of distributed applications
leveraging a large library of stable, pre-built base images.

Future of Distributed Apps: 5 Easy Steps

The past 18 months have been largely about creating an interoperable,
consistent format around containers, and building an ecosystem of users,
tools, platforms, and applications to support that format. Over the next year,
you’ll see that effort continue, as we put the proceeds of this round to use
in driving advances in multiple areas to fully support multi-Docker container
applications. (Look for significant advances in orchestration, clustering,
scheduling, storage, and networking.) You’ll also see continued advances in
the overall Docker platform–both Docker Hub and Docker Engine.

The work and feedback we’ve gotten from our customers as they evolve through
these Docker-led transformations has profoundly influenced how Docker itself
has evolved. We are deeply grateful for those contributions.

The journey we’ve undertaken with our community over the past 18 months has
been humbling and thrilling. We are excited and energized for what’s coming
next.

------
borplk
It's nice to see the folks building the building blocks getting some -
financial - attention.

Now, to yield a nice return on that, they just have to turn Docker into an
ephemeral social photo-sharing app for blind vegan Bulldogs and say they want
to change the world ;)

~~~
nickstinemates
That's hilarious. :)

I think containers as a concept have the chance to really fundamentally change
the way applications are developed, delivered, and managed in data centers
going forward - whether it's my own little rack sitting in a corner office or
a large-scale, multi-DC deployment.

We're betting on that being Docker, but, the worst thing that could happen is
to become complacent and not recognize there's a tremendous amount of work
left to do.

~~~
nomadlogic
Trying not to be a curmudgeon - but I really don't see what the big fuss is
about, or how docker is fundamentally changing anything.

Not to take anything away from Docker being a decent tool in some
circumstances - but really, this methodology has been around in one
implementation or another for ages on Unix platforms.

~~~
pbreit
That's sorta where I've been coming up, too. Great, hard working group but
docker strikes me as a very small component of the deployment stack, the
entirety of which doesn't even have much enterprise value.

What would be a good comparison for a component company like this getting to
100s of millions or $1b?

~~~
ilaksh
LOL. The massive and obvious enterprise value is in having a standard way to
deploy and interface with isolated Linux applications along with their
dependencies, plus a convenient hub for distributing and exchanging them.

No, it's not the first time technologies with _some_ of these capabilities
have been available, but it is the first time this powerful combination of
capabilities has come together in a way that has so much momentum.

