Hi everyone, Docker maintainer here. Here's my list of docker hosting services. Please correct me if I forgot one! I expect this list to get much, much longer in the next couple months.
So I just checked the docker website (http://www.docker.io/learn_more/) and there is still a flag up stating it's not yet ready for production use. Does this mean you guys/gals are gaining enough confidence in it now?
We spent 3 months going door to door, making demos to people I knew were working on similar projects or looking for one. We had a good reputation in the ops and systems engineering community because of our work on dotCloud over the last 6 years. Then we bootstrapped the open-source community with that initial group of 30-50 people willing to federate efforts. By the time the project was leaked on hacker news, the github repo and mailing list were already very active.
Early members of that "seed" community included engineers from Twilio, Heroku, Soundcloud, Koding, Google, Meteor, RethinkDB, Mailgun, as well as the current members of the Flynn project.
I am flattered, but we are already well funded and are not currently looking for new investors. However, if you're feeling generous I can point you to a few people who have been making awesome contributions to Docker in their free time. I'm sure they would appreciate donations, or perhaps contract work :)
There are also several startups currently raising money for a business based on docker. This is bigger than any one company!
When you work in ops and spend time looking for the perfect way to make "redeploying easier than fixing", Docker becomes the answer.
I got to meet the docker team (a lot of French dudes on the team!). Very passionate, technically super sharp, and really fun! They were interested in my point of view and opinion. Plus, their lead dev knows how to party, from what I saw at a meetup!
As a former Sun guy, I can say it's because extracting value wasn't something we were very good at or gave much weight to. From Grid to Java to Solaris 10 Zones and ZFS, Jini, and RFID, we mostly just made cool stuff and then... went and made other cool stuff.
To be honest, I think it's a timing thing: virtualization wasn't popular initially, but VMware did a great marketing job. Then any hypervisor became acceptable. Now VPS-style containers are becoming acceptable, i.e. Docker.
Being too early can kill you. If you think your idea is awesome but too early, my advice is to keep trying for as long as it takes. Docker was not my first attempt at solving this particular problem :) [1] [2] [3]
Sounds like you took the best parts of digital ocean and are trying to push it as a platform with docker baked in. I like. It seems like you're also trying to simplify using docker. I like even more.
I love the fact that you keep trying to define your own vocabulary ('Deck', etc.) but always have to explain it. Best to stick with the more easily understood term rather than invent your own, I think.
Unless you're going to try and trademark them all.
While I agree with your comment, I hope it isn't used as a measure or justification for doing so. I've had the same cognitive problem with Heroku as the parent describes.
God, this drives me absolutely insane. Elvish marketing speak is such a stupid waste of time. Why can't we stick to commonly accepted terms instead of trying to bake up new "Cloud"-esque replacement terms?
I think that even if [deck, drop, instance] is clearer than [dockerfile, image, container], it would be better to use [dockerfile, image, container]: it's the standard set forth by Docker, and sticking to the standard makes interoperating easier for everyone.
I'd have to agree that the standard Docker terminology would be much preferred. Your business covers what is a pretty cutting edge, advanced concept right now. Your customers are likely to be at least somewhat understanding of the standard terminology. Your custom terminology tripped me up as well, despite having a reasonable grasp of the higher level Docker terminology.
Other than that, this looks great! I'm excited for you guys.
I'm in a similar problem space to you. After a year of defining my own 'simpler' terminology, I decided to abandon it in favour of being consistent with the more popular, albeit complex, terminology.
I like the idea. Really cool. I've been researching Docker a lot lately, and did most of my recent development on CoreOS. I do have a question that wasn't immediately obvious: Docker maintains that one should make a container per application, so that instead of installing apache + mysql + memcached in one Ubuntu environment, you'd create three docker containers (apache, mysql, memcached), run them together, and define the share settings, etc. Now here's my question: it seems as if on Stackdock, every container would be a separate (at least) $5 instance? So if I want to run apache + mysql + memcached, I'd need to cram them all into one docker container in order to have them on one machine? Or is it possible to use a $5 Stackdock instance and run multiple containers on it, like on CoreOS?
There is a new feature of Docker called Links which allows you to organize your stack in multiple containers and "link" them together so they can discover and connect to each other.
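Roughly, it looks like this (image and alias names here are just placeholders, and newer Docker versions spell the flags --name and --link instead of -name and -link):

    # start the database container and give it a name
    docker run -d -name db my-postgres-image
    # start the app container linked to it; Docker injects environment
    # variables such as DB_PORT_5432_TCP_ADDR that the app can read
    docker run -d -link db:db -p 8000:8000 my-django-image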
I tried to deploy a Django application with Docker a few weeks ago (using a single image with supervisord), only to discover that, during "docker build", I needed the database already running (so Django could create its database), which was pretty much impossible using a single Docker image and a Dockerfile.
With the new Links functionality, this is much easier, but are you planning to ever have the ability to use a single Dockerfile to deploy an application which may contain multiple images (with links between them)? I want to be able to do "docker build ." and have my application up and running when it finishes.
> are you planning to ever have the ability to use a single Dockerfile to deploy an application which may contain multiple images (with links between them)?
It is common to include dependencies like MySQL and Apache in the container of your application. Usually people use supervisord with a configuration file to start all the different daemons needed.
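For example, a minimal sketch of that setup (file paths and program names are illustrative, not something the parent specified):

    # supervisord.conf baked into the image; supervisord stays in the
    # foreground and becomes the container's single process
    cat > supervisord.conf <<'EOF'
    [supervisord]
    nodaemon=true

    [program:mysqld]
    command=/usr/sbin/mysqld

    [program:apache2]
    command=/usr/sbin/apache2ctl -D FOREGROUND
    EOF
    # in the Dockerfile:
    #   ADD supervisord.conf /etc/supervisor/conf.d/supervisord.conf
    #   CMD ["/usr/bin/supervisord"]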
"Docker-as-a-Service", simple, easy-to-understand pricing. Love it.
This is my favourite Docker offer so far. I've been looking for something to replace dotCloud's deprecated sandbox tier for just playing around, and it looks like this fits the bill.
I configured and launched a machine with redis and node in less than 5 minutes. Very cool.
How will you isolate instances from each other? My instance appears to have 24 GB of RAM and 12 cores, and it looks like I can use all of it in my instance.
You can limit Docker containers with CPU weight shares and also a memory limit. File storage limits are due in Docker 0.7; for now you can cap them with ulimit.
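For reference, a sketch of what those limits look like on the docker CLI (values are arbitrary; depending on the version, -m takes bytes or a suffixed value like 256m):

    # relative CPU weight of 512 (the default is 1024) plus a memory cap
    docker run -d -c 512 -m 256m my-image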
One thing that confuses me with Docker is how you configure your containers to communicate with each other.
So say I have a fancy Django image, and a fancy Postgres image.
How do I then have the Django one learn the Postgres one's IP, authenticate (somehow), and communicate with it separately?
Also, the recommended advice for "production" is to mount host directories for the PostgreSQL data directory. Doesn't this rather defeat the point of a container (in that it's self contained), and how does that even work with a DaaS like this? I'm pretty confused. Is there an idiomatic way in which to do this?
Do service registration/discovery things for Docker already exist?
> Also, the recommended advice for "production" is to mount host directories for the PostgreSQL data directory. Doesn't this rather defeat the point of a container (in that it's self contained)
The recommended advice for production is to create a persistent volume with 'docker run -v', and to re-use volumes across containers with 'docker run -volumes-from'.
Mounting directories from the host is supported, but it is a workaround for people who already have production data outside of docker and want to use it as-is. It is not recommended if you can avoid it.
Either way, you're right, it is an exception to the self-contained property of containers. But it is limited to certain directories, and docker guarantees that outside of those directories the changes are isolated. This is similar to the "deny by default" pattern in security. It's more reliable to maintain a whitelist than a blacklist.
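A sketch of that pattern, with placeholder names (older Docker versions spell the flags -name and -volumes-from with a single dash):

    # a container whose only job is to own the data volume
    docker run -v /var/lib/postgresql/data -name pgdata busybox true
    # run the database re-using that volume
    docker run -d -volumes-from pgdata -name pg my-postgres-image
    # a later replacement container picks up the same volume
    docker run -d -volumes-from pgdata my-postgres-image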
We give you a standard Docker instance in the cloud - all the tools work exactly the same as they do locally. You can even instantly open a remote bash shell, like the now-famous Docker demo!
The big point of Docker for me is that I can build the container on my machine, run automated tests on it, play with it and then ship it to the production machines when I'm confident that it is working.
If you build the container on a service like this, testing it is hard or in some cases even impossible, for example acceptance tests with Selenium.
Gemfile.lock and similar version-pinning tools help, but prebuilt containers take deployment stability to a whole new level, and that is why I'm excited about Docker and containers in general.
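Concretely, the workflow I mean is roughly this (image name, ports, and registry user are placeholders):

    docker build -t myuser/myapp .           # build the image locally
    docker run -d -p 8080:80 myuser/myapp    # run it and point the test suite (Selenium etc.) at localhost:8080
    docker push myuser/myapp                 # ship the exact image that passed the tests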
What would be even better is to decouple the idea of a drop from the containers running on it. What I like about container approaches is having "machines" I can run them on. So let's say I make a "www" drop, or several. I should then be able to fire up my containers on particular types of drops and have them started there without having to think about the specifics. The benefit of this is that I only care about my container running and having some basic resource requirements, not the specific machine instance it is running on. I could even co-mingle different containers on types of "machines". Also, separating disk resources from CPU and RAM would be good. Maybe you do this already, but it wasn't clear to me.
But if you host your site on your infrastructure, and it goes down, you can't post status updates to tell people what's going on/ when you will be back online.
It's quite reasonable not to host your own homepage or your mechanism for updating your customers, IMO.
I disagree. Your website should run on your own infrastructure, and a separate status page, under a different (sub)domain, should be operated from another AS (autonomous system), e.g. statuspage.io or whatever you like/prefer.
Great initiative! One thing to be aware of is that Docker uses LXC for containers, and LXC relies on kernel isolation and cgroup limits. The concern is about vulnerabilities.
It is comforting that Heroku also uses LXC for dynos. It would be interesting to know how many in-house adjustments to the kernel and LXC have been made to ensure hardening.
I work at ActiveState on Stackato, which is a private Platform-as-a-Service: similar to Heroku, only for private hosting (e.g. you host it on your own hardware or hypervisor). We use Docker as of our v3 beta release today (http://beta.stackato.com/). Our use of Docker in 3.0+ means that we bring its tuned security along with us (it integrates with AppArmor really well; in fact it requires it to start up a container). Here's a really good overview of LXC (and Docker) security in general: http://blog.docker.io/2013/08/containers-docker-how-secure-a...
Just curious, how are people building Docker images these days? Doesn't it only run on 64-bit Linux? I have a 32 bit Linux desktop and a Mac and haven't gotten around to installing Docker. At work I have a 64 bit Linux desktop and it seemed to be extremely picky about the kernel version so I gave up.
Are people running Linux VMs on their Macs to build containers?
I like the idea of this service. But both the client side and the server side have to be easy. Unless I'm missing something it seems like they made the server side really easy, but the client side is still annoying.
Yes. The emerging best practice seems to be to use Vagrant to create a great development environment, then use docker containers inside that for isolation. The two work together quite well. There's a comment from the Vagrant creator about that here:
https://news.ycombinator.com/item?id=6291549
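A minimal sketch of that workflow, assuming the stock Ubuntu 12.04 box and that you install Docker inside the VM per the official instructions for your distro:

    vagrant init precise64 http://files.vagrantup.com/precise64.box
    vagrant up
    vagrant ssh
    # inside the VM, after installing docker:
    sudo docker run -i -t ubuntu /bin/bash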
So I already use Linux almost exclusively for development, and VMs are not in my workflow at all. It seems bizarre to build a VM to build a container! Like too many levels of yak shaving.
> It seems bizarre to build a VM to build a container! Like too many levels of yak shaving.
Perhaps, but you just said:
> I have a 32 bit Linux desktop and a Mac and haven't gotten around to installing Docker. At work I have a 64 bit Linux desktop and it seemed to be extremely picky about the kernel version so I gave up.
Precisely. Hence VMs, because Vagrant makes it trivial to spin up an instance configured however you like.
You're basically saying "I have a problem installing Docker, but I don't need a VM because I don't have any problems a VM would solve", but this is nonsense because this is the precise problem development VMs are meant to solve.
I can see where you're coming from, but my issue is that Docker itself is LESS portable than the applications it's containerizing! It's creating the very problem it's trying to solve. The task I care about isn't to build and deploy a Docker container. It's to build and deploy my app.
I have a beef with build/deploy systems that have bootstrapping problems. For example I'm hearing from people using Chef that they have to freeze the version of Chef, its dependencies, and the Ruby interpreter (or maybe it was Puppet, I don't use either). To me that is just crazy. My code isn't that picky about the versions it needs, and to introduce a deployment tool like that makes things less stable, not more.
Take Python, for example: in my experience it's almost entirely portable between Linux and Mac. And I imagine the same is true of node.js, Ruby, PHP, etc. Almost all the C libraries you need are portable too. So in my ideal world you would only use a VM when you actually need it for the OS/CPU architecture. I suspect for a lot of people that would be 50-90% of the time without a VM, depending on how you like to develop.
I'm working on a chroot-based build system, which in theory will work on Mac and Linux (but not Windows). It does need to solve a versioning problem, because stuff isn't as portable between Python 2.6 and Python 2.7 on the same OS as it is between Python 2.7 on two different architectures/OSes.
I think it might depend on what sort of problem you're trying to solve.
If you have, let's say, a django app, and you want to be able to run it all sorts of places, Docker is very much the wrong tool; it doesn't run at all most places, and it's finicky to get working. You're better off just getting that one app to run when and where you want. And if you run into any issues, virtualenv will solve it, no big deal.
If you have a bunch of apps you want to get running (or perhaps a bunch of interlocking pieces of a single stack, or the different elements of a SOA), then Docker suddenly starts to look very attractive. And then you might go to the trouble to get a single gold server image with docker installed and working (or an Ansible playbook, or a Chef cookbook, or a Digitalocean snapshot, or an EC2 AMI, or whatever), and you know you can just spin up a server and deploy any app you want to it. And once you start thinking about testing, CI, orchestration, automatic scaling, etc., it all becomes that much more attractive; you've got these generic docker servers, on the one hand, and these generic docker containers on the other, and you can mix and match them however you like. When you start having more than 1 server and 1 app, that's amazing. Very much worth the cost of entry of having to install docker everywhere...if you need that kind of thing.
You're focusing on portability between operating systems, but that's not the point of docker; as you say docker isn't portable at all (which should be a strong hint that isn't the problem it solves). But docker containers are portable between servers with docker on it, and with some architectures (or at a certain scale), you will suddenly realise just how useful that is.
If it helps, consider Heroku (and the other PaaS outfits like dotCloud, etc.). A lot of startups outsource big chunks of their infrastructure to Heroku, and Heroku uses a very docker-like architecture. If you were to shift that back in house, in many cases that same architecture makes sense (largely depending on just what you were outsourcing to Heroku...). ...and sometimes it doesn't. But if it does, docker is probably a core part of any attempt at implementing your own in-house PaaS. And if you need that kind of thing, you aren't going to stop because "well, it doesn't run on OSX"; nobody (well, nearly) is using OSX in production. :)
The dominant workflow in docker-land is to ditch the Vagrantfile, use a Dockerfile instead, and sometimes use Vagrant when it helps you get a VM up and running with docker on it (but that Vagrantfile is typically the same across all projects requiring docker).
I don't get the need for Vagrant? Are you suggesting to use Vagrant solely for those not developing on a host capable of running Docker? If my host _can_ run Docker, what value do I get from running it inside Vagrant instead?
Vagrant is a useful way to very quickly get a Docker capable host. You wouldn't use it for production, no.
For development, if you're running on OS X or Windows (in which case, my condolences), you basically have to use a VM. If developing on Linux, it's a tossup; the complexity and overhead of Vagrant versus the pain and annoyance of fooling around with kernels and dependencies.
I use a Mac for day-to-day development, so a simple Vagrant VM is a no-brainer. :)
I'm building docker images on 64-bit linux (ubuntu) and maintaining a repo of Dockerfiles, instead of uploading to the docker repository.
You need a recent Linux kernel that supports Linux Containers (LXC). It's best if you can run a recent Ubuntu 13.x release somewhere.
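For what it's worth, a sketch of what keeping a repo of Dockerfiles looks like in practice (the package and image names are just examples):

    # redis/Dockerfile in the repo
    cat > Dockerfile <<'EOF'
    FROM ubuntu
    RUN apt-get update && apt-get install -y redis-server
    EXPOSE 6379
    CMD ["redis-server"]
    EOF
    # build locally instead of pulling from the public index
    docker build -t myorg/redis .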
> Are people running Linux VMs on their Macs to build containers?
FreeBSD supports jails, which are similar to Linux containers in a way, but OS X does not. So unfortunately you're going to have to run a VM; check out Vagrant and Docker together, though. [1]
I love this idea, and want to try it but I have no experience with Docker (on the todo list).
I wanted to spin up an instance of Sphinx Search but no idea how to go about doing it.
Maybe creating a set of tutorials would help with this. I can think of two advantages. First, customers like myself will love it. Second, similar to Linode and their tutorials, it will drive a lot of traffic and establish your reputation as Docker experts. It will probably build a lot of back-links too, as people link to your tutorials.
Absolutely. Along similar lines, DigitalOcean has done a great job of encouraging the community to write tutorials and articles, and as a result there are tons of resources to get you started with all kinds of ways to use a VPS. Doing the same would be tremendously beneficial for Stackdock.
How is private networking handled between Docker containers?
UPDATE: I'd also be interested to hear about Digital Ocean-style "shared" (but non-private) networking—basically, any network adaptor with a non-Internet routable IP address. ;)
Not being familiar with the subject basically it seems that:
Docker is a simple description of an internet server including the various services required (mysql, httpd, sshd, etc.), the bundle being called a deck.
It seems then that you can create a server elsewhere (e.g. on your localhost), generate the docker description of that, and use that description to fire up a server (either a VM or dedicated) using the service in the OP.
Am I close?
Could I use this to do general web hosting?
Edit: and looking at digitalocean.com it appears I can activate and deactivate the "server" at will, so I can have it online for an hour for testing and pay < 1¢?
This looks awesome! I currently have an AWS box for the same purpose, running a few of my docker containers. Will this support the ADD directive, or the ability to add custom files (config files) into containers?
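For context, ADD in a plain Dockerfile copies files from the build context into the image, e.g. (paths here are illustrative); whether Stackdock exposes that is the real question:

    # Dockerfile snippet: copy a config file and the app source into the image
    ADD config/nginx.conf /etc/nginx/nginx.conf
    ADD . /app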
Wonder if they have an idle/spin up time. Only their one instance plan is $5, but I know I have to buy more than one on heroku to get no idle/spin up time - that or use hacks like constant pingers, etc.. This is important for when I'm doing experiments/UI tests/alpha tests/submitting apps for reviews before they have any consistent traffic, but I don't want them to occasionally get stuck on 15 second spin up times on requests.
Probably the difference is that your Docker instances run on a dedicated server instead, and all the setup, preparation, and maintenance is done for you.
Looks cool. Here's what I'd love to see: built-in git deployment (ie. take a Dockerfile, build an image from it, and then after a push add the latest source code to /app and start new instances), and some kind of orchestration so you could run a number of app containers behind a load balancer container.
Hmm StackDock.com is hosted on a server at Hetzner in Germany.
I don't 100% know if the containers themselves are hosted by Hetzner or not, but Hetzner is more of a budget provider than something you host production sites on.
I've heard many mixed reviews about their network, and mostly about their support, which isn't up to scratch. We'll see what happens, but from what I see, if someone decides to abuse the service, Hetzner might just take down the whole server without warning, just like OVH does.
http://www.hetzner.de/en/hosting/produkte_rootserver/px120ss... (I'm guessing they are using something similar to this.) It's a pretty powerful and cheap server, but if you search hard enough you can find something equivalent in the States for around the same price.
Hetzner has surely gone downhill over the years (quality- and price-wise), and support was better 8 years ago, but to say you would not host a production site there is a pretty bold statement.
If you need real HA you should perhaps use more than one provider anyway. Or what are your recommendations?
Of course, with the prices these guys are charging, they are certainly going with a budget host.
Since Docker is still in beta, it's not production ready yet anyways. Docker could still go through a lot of changes between now and 1.0.
ETA: Whoops, I got the pricing wrong. It's $5 per instance. I was thinking you would get 1GB of RAM and 20GB of space to run as many containers as you like. That makes it not as cheap as I was originally thinking.
I love the idea! Really. I just don't like all of the UX yet. Some things feel... off. It might be something personal, I'm not sure, but I guess it's interesting to discuss. "Drops are distilled Decks": the words feel semantically mismatched for some reason. If I think "Deck" I don't think "Config". If I think "Drop" I don't think "deployable stuff", and I don't see how a "distilled Deck" is a "Drop". Also it feels odd that I can create a "New deck" in the "Instances" section.
Though adding "cards" to a "deck" sounds intuitive.
I'm trying to come up with better terminology, something with ships and containers...
When I created a Deck (default Sinatra Hello World) and converted it to a Drop, it did just that: it removed the Deck and created a Drop.
I guess I thought it would keep the Deck so that I could see the configuration that I had chosen to create it. Is this a Docker thing where, once you've created it, you don't see the config any longer? I don't think it is but I've not honestly played with Docker yet. $5 a month is a low ask for me to try it out.
Also, when it comes time to pay for a Deck/Drop and you don't have credit card info saved, it forwards you to that page... but, after entering the info, you're not put back into the process. You're dumped back into the Deck page. That seemed odd to me... wasn't sure if it had been converted or not.
I wish the word 'manifest' weren't used in so many contexts, because if you're going to stick with the container-shipping analogy, it would have made more sense to have Manifests, Containers and Ships. That's just me though... who knows. ;)
All in all, cool service. Look forward to playing around with it this weekend.
EDIT: I see that you can create a copy of the Deck that created a Drop... still seems odd that the default behavior is to blow it away upon creation of a Drop.
IMO labels/tooltips should be added to the icons for the cards. Some of them, including the leaf (nodejs?) and the tree (nfi what that is), aren't especially obvious.
Hacker News traffic spike! You can sign up and create a Dockerfile; we've just paused instance deployment for a couple of hours while we add more servers. Sorry for the inconvenience.
You should do some A/B tests to confirm, but I bet the pricing table at the bottom was a little confusing because the price was not highlighted in any way, and the call to action was round when it is typically a rectangle.
The issue with linux containers is (or at least it used to be) that it is possible for a malicious user to 'break out' of the container. Has this problem been solved?
I agree this looks very cool. As far as http://deis.io/ is concerned, we're focused more on the "operate your own PaaS" capability, whereas this seems to be a pure hosted service -- which is great for lots of use cases.
* http://baremetal.io
* http://digitalocean.com (not docker-specific but they have a great docker image)
* http://orchardup.com
* http://rackspace.com (not docker-specific but they have a great docker image)
* http://stackdock.com
EDIT: sorted alphabetically to keep everyone happy :)