Let's pretend I'm an intermediate developer with no knowledge of sysadmin work or even deployments. How would you break this down and ELI5?
How does Packer fit into all of this (assuming I'd use DigitalOcean)? What's Vagrant for? What does the virtual machine do here? Do I need these three tools on the target VPS, or only on my local development machine? To add my DO keys, do I need to SSH into the Vagrant box once it's up and running?
You provide the octovagrant box; is it secure? Is Vagrant production-ready, or is it not part of the production mix? You list 6 cookbooks in the cookbook repo. Do I need all of them? How do I use/install each of them?
What does Docker do? Once I've done all of this setup work, how do I push my code up to the desired VPS? Do any of the defaults have security provisions: ufw rules that only allow port 80, disallowed root login, SSH-only access, and all of that goodness? If I use this instead of manually provisioning and securing servers, do I get sane and secure defaults?
That's a lot of questions, but I may not be the only one asking them, so if I may so bravely ask, ELI5?
 Explain like I'm 5 - like http://www.reddit.com/r/explainlikeimfive
 Just suspend disbelief about my zero-knowledge about setting up servers
Packer is a tool that helps to build the image that octohost runs on. It installs all of the software needed and prepares the VM it runs on.
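For reference, invoking Packer is a single command once you have a template; the template filename here is a placeholder, not necessarily what the octohost repo uses:

```shell
# Packer reads a JSON template describing the base image, the builder
# (e.g. the DigitalOcean builder), and provisioning steps, then outputs
# a finished machine image you can launch.
packer build octohost.json
```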
Vagrant is a tool that allows you to run different virtual machines on your local development box. It has nothing to do with production at the moment - it's just for running it locally.
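The day-to-day Vagrant loop looks roughly like this, assuming you've cloned a repo that ships a Vagrantfile (as octovagrant does):

```shell
vagrant up      # build/boot the local VM (VirtualBox by default)
vagrant ssh     # open a shell inside the running VM
vagrant destroy # throw the VM away when you're done
```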
The octovagrant box is pretty open - but that's because it's for running things locally. When it's installed on AWS/DO/Rackspace/Linode/etc. it's firewalled from remote people - but still is pretty open internally. I wouldn't let untrusted people push to it at the moment.
Yes - you need all of the cookbooks, but Packer will take care of that for you - you don't really have to worry about it.
If you're just using the AWS AMI that we've already built - then you can really ignore Packer and Chef - just launch the AMI and you're done.
Docker allows you to run processes inside of a container. So you can launch a set of processes built from your source code and have them run on their own.
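In plain Docker terms, that looks like this (the image and container names are placeholders, not octohost specifics):

```shell
# Build an image from the Dockerfile in the current directory,
# then run it as a detached, self-contained container.
docker build -t myapp .
docker run -d --name myapp-1 myapp
```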
Once it's set up, you simply add a remote git target and push your code to the server; it builds and launches the code you pushed. It works like Heroku for simple websites.
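A hedged sketch of that flow; the server hostname and repo name are placeholders you'd replace with your own:

```shell
# From inside your app's source repo: register the octohost server
# as a git remote, then push to trigger the server-side build.
git remote add octohost git@your-octohost-server:myapp.git
git push octohost master
```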
That help at all?
I'd honestly love a system that deploys the image for you to the cloud of your choice - with setup for ssh keys and things to make it simple.
Just not in the cards timewise at the moment.
The theory of operation page hints that if no ports are exposed, then the container isn't launched. It would be great if the following would be possible:
1) No ports exposed. Just run some software and that's it.
2) Expose one or multiple http ports with different domains for each.
3) Expose one or multiple tcp/udp ports, which get mapped directly to a host port.
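For context, here is how plain Docker expresses each of those modes; whether octohost supports them all is the open question above:

```shell
docker run -d -p 8080:80 myapp         # host TCP 8080 -> container TCP 80
docker run -d -p 5353:5353/udp myapp   # host UDP 5353 -> container UDP 5353
docker run -d myapp                    # no ports published at all
```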
I also can't tell whether there's any support for volumes; if there isn't, that also seems fairly important.
For what it's worth, here is how I handled it, but the project is very sloppy and I don't recommend its use to anyone, since I'm looking to switch ;)
 https://github.com/r04r/dockah/blob/master/dockah.sh#L35 (reads a config file like https://gist.github.com/r04r/d5d0ea6506824e2cf6d9)
I added some "magic" Dockerfile comments - this is the one to add in order to not expose anything via HTTP:
I am not a huge fan of volumes where data is stored on the box - but you can do it the same way I've described on the page.
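For completeness, this is Docker's own volume flag; the paths and image name are placeholders:

```shell
# Mount a host directory into the container, so data written to /data
# survives container rebuilds (but still lives on the box).
docker run -d -v /srv/myapp/data:/data myapp
```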
I think of octohost boxes as app servers - if you have data on it, you should likely have it stored elsewhere. I've used Heroku to store additional data sometimes:
We've also used remote MySQL servers:
There's lots of ways to do it.
I'm getting a bit overwhelmed by the number of meta tools around docker deployment.
Also, and most importantly, how are you handling logging? Is it persisted on a host volume, or streamed via rsyslog?
Fig didn't exist when I started this and I wanted to mirror the git push that Heroku used.
Logging is handled by Docker - and can be sent to services like Papertrail if you use something like logspout:
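A sketch of running logspout, following its README; the Papertrail host and port are placeholders you'd substitute with your own account's log destination:

```shell
# logspout attaches to the Docker socket, collects stdout/stderr from
# every running container, and routes it to a remote syslog endpoint.
docker run -d --name logspout \
  -v /var/run/docker.sock:/var/run/docker.sock \
  gliderlabs/logspout \
  syslog://logs.papertrailapp.com:12345
```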
One major difference (like already mentioned) is that I wanted to use Dockerfiles rather than Procfiles. I wanted to have full control over how it built and ran - and I didn't want to go through slugbuilder / buildstep at the time.
It may actually be easier on end users to use Heroku buildpacks and abstract some of the magic - but for our uses Dockerfiles were the best fit.
Now the Tutum team has come out with something that simplifies it even more, and it might be worth looking at:
I have had that on my backlog for several months and just haven't had time.
It's basically a web UI that builds images from tarballs, Dockerfiles, and GitHub repos. We use it internally all the time and thought we'd contribute back.
You can find our post about it here:
Great job on octohost.io!
Then I have detailed what happens more thoroughly here:
I should link to it from the homepage - that will be added shortly.
Deis is a much more mature choice and is based on CoreOS which is awesome.
I'd likely change that today if I were starting now - but that was in November 2013, when Docker was much more beta.
Ansible also (currently) seems a bit more suited for the Docker workflow, so I'd be interested to hear about the developer's experiences with this.
I started this project with Ansible because I wanted to learn how to use it.
As I went along, I wanted to learn more about TDDing Chef cookbooks with Test Kitchen so I switched back to Chef.