

Show HN: Hosted Docker-as-a-Service on SSDs for $5 - edbyrne
http://blog.copper.io/stackdock-blazing-fast-docker-as-a-service-with-ssds-for-5/#more-1187

======
shykes
Hi everyone, Docker maintainer here. Here's my list of docker hosting
services. Please correct me if I forgot one! I expect this list to get much,
much longer in the next couple months.

* [http://baremetal.io](http://baremetal.io)

* [http://digitalocean.com](http://digitalocean.com) (not docker-specific but they have a great docker image)

* [http://orchardup.com](http://orchardup.com)

* [http://rackspace.com](http://rackspace.com) (not docker-specific but they have a great docker image)

* [http://stackdock.com](http://stackdock.com)

EDIT: sorted alphabetically to keep everyone happy :)

~~~
sillysaurus2
The rise of Docker is fascinating. How did you get people to care about it
initially? Did everyone immediately see it as a good idea? Congrats.

~~~
atomaka
The real question is why Sun didn't succeed in leveraging this technology with
their implementation of zones.

~~~
edbyrne
To be honest - I think it's a timing thing - virtualization wasn't popular
initially, but VMWare did a great marketing job. Then any hypervisor became
acceptable. Now VPS-style containers are becoming acceptable, i.e. Docker.

~~~
shykes
Timing is definitely part of it.

Being too early can kill you. If you think your idea is awesome but too early,
my advice is to keep trying for as long as it takes. Docker was not my first
attempt at solving this particular problem :) [1] [2] [3]

[1] [https://bitbucket.org/Foi3GraS/dotcloud-fork/commits/1](https://bitbucket.org/Foi3GraS/dotcloud-fork/commits/1)

[2]
[https://github.com/dotcloud/cloudlets/commit/0af885a5266fba7...](https://github.com/dotcloud/cloudlets/commit/0af885a5266fba79ac99a79d214f14be05946ffb)

[3]
[https://bitbucket.org/dotcloud/vm2vm/commits/2a34438989fbff0...](https://bitbucket.org/dotcloud/vm2vm/commits/2a34438989fbff034d0034b4bea255c61a6238e8)

------
_lex
Sounds like you took the best parts of digital ocean and are trying to push it
as a platform with docker baked in. I like. It seems like you're also trying
to simplify using docker. I like even more.

~~~
edbyrne
Hey - thanks a million - that's the plan!

~~~
thepicard
Speaking of digitalocean, are you guys affiliated at all? Because I get a
digitalocean vibe from your pricing/terminology/etc. for some reason.

~~~
apathetic
are they?

------
Angostura
I love the fact that you keep trying to define your own vocabulary ('Deck',
etc.) but always have to explain it. Best to stick with the more easily
understood term rather than invent your own, I think.

Unless you're going to try and trademark them all.

~~~
edbyrne
Thanks - we discussed that a lot - we were trying to make a simple 3-step
process. If we get a lot of feedback that it's confusing we'll ditch it.

~~~
htilford
I think that even if [deck drop instance] is clearer than [dockerfile image
container], it would still be better to use [dockerfile image container]. It's
the standard set forth by Docker, and sticking to the standard makes
interoperating easier for everyone.

~~~
rattray
I agree with this, though I'm biased because I personally find [dockerfile
image container] clearer than [deck drop instance]. Explicit > flashy.

------
terhechte
I like the idea. Really cool. I've been researching docker a lot lately, and
did most of my recent development on Core OS. I do have a question that wasn't
immediately obvious: Docker maintains that one should make a container out of
every application, so that instead of having to install apache + mysql +
memcached in one Ubuntu environment, you'd create three docker containers
(apache, mysql, memcached), run them together, and define the share settings,
etc. Now here's my question: it seems as if on Stackdock, every container
would be (at least) a separate $5 instance? So if I want to run apache + mysql
+ memcached, I'd need to cram them all into one docker container in order to
have them on one machine? Or is it possible to use a $5 Stackdock system and
run multiple containers on it, like on Core OS?

Thanks!

~~~
shykes
There is a new feature of Docker called _Links_ which allows you to organize
your stack in multiple containers and "link" them together so they can
discover and connect to each other.

There's a great explanation here:
[http://blog.docker.io/2013/10/docker-0-6-5-links-container-n...](http://blog.docker.io/2013/10/docker-0-6-5-links-container-naming-advanced-port-redirects-host-integration/)
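For the curious, a rough sketch of what linking looks like on the CLI (the image and container names here are made up, flags per the 0.6.5-era single-dash syntax):

```shell
# Start a named database container
docker run -d -name db crosbymichael/redis

# Start an app container linked to it. Docker injects DB_PORT_*-style
# environment variables into the app container describing how to reach "db".
docker run -d -link db:db -name web example/webapp
```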

~~~
StavrosK
I tried to deploy a Django application with Docker a few weeks ago (using a
single image with supervisord), only to discover that, during "docker build",
I needed the database already running (so Django could create its database),
which was pretty much impossible using a single Docker image and a Dockerfile.

With the new Links functionality, this is much easier, but are you planning to
ever have the ability to use a single Dockerfile to deploy an application
which may contain multiple images (with links between them)? I want to be able
to do "docker build ." and have my application up and running when it
finishes.

~~~
shykes
> _are you planning to ever have the ability to use a single Dockerfile to
> deploy an application which may contain multiple images (with links between
> them)?_

Yes, definitely :)

------
panarky
This is truly awesome, nice work!

I configured and launched a machine with redis and node in less than 5
minutes. Very cool.

How will you isolate instances from each other? My instance appears to have 24
GB of RAM and 12 cores, and it looks like I can use all of it in my instance.

~~~
aroch
Docker uses LXC, which supports memory and CPU limits.
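As a concrete sketch (flags as in the docker 0.6.x CLI; the image and command are arbitrary examples):

```shell
# Cap the container's memory at 512 MB and give it a relative CPU share
docker run -d -m 512m -c 512 ubuntu /bin/sleep 3600
```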

------
kmfrk
"Docker-as-a-Service", simple, easy-to-understand pricing. Love it.

This is my favourite Docker offer so far. I've been looking for something to
replace dotCloud's deprecated sandbox tier for just playing around, and it
looks like this fits the bill.

------
antihero
One thing that confuses me with Docker is how you configure your containers
to communicate with each other.

So say I have a fancy Django image, and a fancy Postgres image.

How do I then have the Django one learn the Postgres one's IP, auth (somehow),
and then communicate separately?

Also, the recommended advice for "production" is to mount host directories for
the PostgreSQL data directory. Doesn't this rather defeat the point of a
container (in that it's self contained), and how does that even work with a
DaaS like this? I'm pretty confused. Is there an idiomatic way in which to do
this?

Do service registration/discovery things for Docker already exist?

~~~
shykes
> _One thing that confuses me with Docker is how you configure your
> containers to communicate with each other._

Docker now supports linking containers together:

[http://blog.docker.io/2013/10/docker-0-6-5-links-container-n...](http://blog.docker.io/2013/10/docker-0-6-5-links-container-naming-advanced-port-redirects-host-integration/)

> _Also, the recommended advice for "production" is to mount host directories
> for the PostgreSQL data directory. Doesn't this rather defeat the point of a
> container (in that it's self contained)_

The recommended advice for production is to create a persistent volume with
'docker run -v', and to re-use volumes across containers with 'docker run
-volumes-from'.
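A rough sketch of that pattern, with hypothetical image names:

```shell
# Create a container whose data directory lives in a docker-managed volume
docker run -d -v /var/lib/postgresql/data -name pgdata example/postgres

# A later container can reuse the same volume instead of mounting from the host
docker run -d -volumes-from pgdata example/postgres-backup
```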

Mounting directories from the host is supported, but it is a workaround for
people who already have production data outside of docker and want to use it
as-is. It is not recommended if you can avoid it.

Either way, you're right, it is an exception to the self-contained property of
containers. But it is limited to certain directories, and docker guarantees
that outside of those directories the changes are isolated. This is similar to
the "deny by default" pattern in security. It's more reliable to maintain a
whitelist than a blacklist.

------
bfirsh
We're doing a similar thing called Orchard:

[https://orchardup.com](https://orchardup.com)

We give you a standard Docker instance in the cloud - all the tools work
exactly the same as they do locally. You can even instantly open a remote bash
shell, like the now-famous Docker demo!

------
esamatti
The big point of Docker for me is that I can build the container on my
machine, run automated tests on it, play with it and then ship it to the
production machines when I'm confident that it is working.

If you build the container on a service like this, testing it is hard or in
some cases even impossible - for example, acceptance tests with Selenium.

Gemfile.lock and similar version-pinning tools help, but prebuilt containers
bring deployment stability to a whole new level, and that is why I'm excited
about Docker and containers in general.

Do they support prebuilt containers?

~~~
lhc-
"You can create a Docker file with some easy steps we’ve created, or you can
upload your own Docker file and create an instance from that."

Sounds like a yes.

~~~
lotyrin
Well, no. A Dockerfile is the build instructions, not the build artifact.

~~~
sams99
you can commit at any point and ship that via private registry
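Roughly, that flow looks like this (the registry host and image names are hypothetical):

```shell
# Snapshot a running container as an image
docker commit <container-id> myapp

# Tag it against a private registry and push
docker tag myapp registry.example.com:5000/myapp
docker push registry.example.com:5000/myapp
```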

------
rmoriz
> We’re using dedicated because running virtual containers on virtual
> instances seems nuts to us.

but a traceroute points to AWS…

~~~
ojbyrne
Perhaps [http://aws.amazon.com/dedicated-instances/](http://aws.amazon.com/dedicated-instances/)

~~~
rmoriz
building a virtualization infrastructure on top of another, black box
virtualization infrastructure…

What could possibly go wrong?

~~~
wmf
Hey, it worked for Heroku.

~~~
toomuchtodo
It kind of works for Heroku. Every few months I see a Hacker News post titled
"Why/how we moved away from Heroku."

------
shtylman
What would be even better is to decouple the idea of a drop from the
containers running it. What I like about container approaches is having
"machines" I can run them on. So let's say I make a "www" drop or several. I
should then be able to fire up my containers into particular types of drops
and have them started on those without having to think about the specifics.
The benefit of this is that I only care about my container running and having
some basic resource requirements, not so much the specific machine instance
it is running on. I could even co-mingle different containers on types of
"machines". Also, separating out disk resources from CPU and RAM would be good.
Maybe you do this already, but it wasn't clear to me.

------
AhtiK
Great initiative! One thing to be aware of is that Docker uses LXC for
containers, and LXC relies on kernel isolation and cgroup limits. The concern
is about kernel vulnerabilities that would let a container escape that
isolation.

It is comforting that Heroku is also using LXC for dynos. It would be
interesting to know how many in-house adjustments to the kernel and LXC have
been made to ensure hardening.

~~~
bacongobbler
I work at ActiveState on Stackato, which is a private Platform as a service.
Similar to Heroku, only for private hosting (e.g. you host it on your own
hardware or hypervisor). We use Docker as of our v3 beta release today
([http://beta.stackato.com/](http://beta.stackato.com/)). Our use of docker in
3.0+ means that we bring their tuned security along with us (they integrate
with apparmor really well, in fact they require it to start up a container).
Here's a really good overview of LXC (and docker) security in general:
[http://blog.docker.io/2013/08/containers-docker-how-secure-a...](http://blog.docker.io/2013/08/containers-docker-how-secure-are-they/)

------
chubot
Just curious, how are people building Docker images these days? Doesn't it
only run on 64-bit Linux? I have a 32 bit Linux desktop and a Mac and haven't
gotten around to installing Docker. At work I have a 64 bit Linux desktop and
it seemed to be extremely picky about the kernel version so I gave up.

Are people running Linux VMs on their Macs to build containers?

I like the idea of this service. But both the client side and the server side
have to be easy. Unless I'm missing something it seems like they made the
server side really easy, but the client side is still annoying.

~~~
Lazare
Yes. The emerging best practice seems to be to use Vagrant to create a great
development environment, then use docker containers inside that for isolation.
The two work together quite well. There's a comment from the Vagrant creator
here about that:
[https://news.ycombinator.com/item?id=6291549](https://news.ycombinator.com/item?id=6291549)

In short, yes, just run a VM.
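A minimal version of that workflow might look like the following; the box name and the Docker install command are assumptions, not taken from the linked comment:

```shell
# Bring up a 64-bit Ubuntu VM with Vagrant...
vagrant init precise64 http://files.vagrantup.com/precise64.box
vagrant up

# ...then install docker inside it and build containers there
vagrant ssh -c "curl -sSL https://get.docker.io/ubuntu/ | sudo sh"
```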

~~~
chubot
So I already use Linux almost exclusively for development, and VMs are not in
my workflow at all. It seems bizarre to build a VM to build a container! Like
too many levels of yak shaving.

~~~
Lazare
> It seems bizarre to build a VM to build a container! Like too many levels of
> yak shaving.

Perhaps, but you just said:

> I have a 32 bit Linux desktop and a Mac and haven't gotten around to
> installing Docker. At work I have a 64 bit Linux desktop and it seemed to be
> extremely picky about the kernel version so I gave up.

Precisely. Hence VMs, because Vagrant makes it trivial to spin up an instance
configured however you like.

You're basically saying "I have a problem installing Docker, but I don't need
a VM because I don't have any problems a VM would solve", but this is nonsense
because this is the precise problem development VMs are meant to solve.

~~~
chubot
I can see where you're coming from, but my issue is that Docker itself is LESS
portable than the applications it's containerizing! It's creating the very
problem it's trying to solve. The task I care about isn't to build and deploy
a Docker container. It's to build and deploy my app.

I have a beef with build/deploy systems that have bootstrapping problems. For
example I'm hearing from people using Chef that they have to freeze the
version of Chef, its dependencies, and the Ruby interpreter (or maybe it was
Puppet, I don't use either). To me that is just crazy. My code isn't that
picky about the versions it needs, and to introduce a deployment tool like
that makes things less stable, not more.

Take Python, for example -- in my experience it's almost entirely portable
between Linux and Mac. And I imagine the same is true of node.js, Ruby, PHP,
etc. Almost all the C libraries you need are portable too. So in my ideal
world you would only use a VM when you actually need it for the OS/CPU
architecture. I suspect a lot of people could go without a VM 50-90% of the
time, depending on how you like to develop.

I'm working on a chroot-based build system, which in theory will work on Mac
and Linux (but not Windows). It does need to solve a versioning problem,
because stuff isn't as portable between Python 2.6 and Python 2.7 on the same
OS as it is between Python 2.7 on two different architectures/OSes.

~~~
Lazare
I think it might depend on what sort of problem you're trying to solve.

If you have, let's say, a django app, and you want to be able to run it all
sorts of places, Docker is very much the wrong tool; it doesn't run at all
most places, and it's finicky to get working. You're better off just getting
that one app to run when and where you want. And if you run into any issues,
virtualenv will solve it, no big deal.

If you have a bunch of apps you want to get running (or perhaps a bunch of
interlocking pieces of a single stack, or the different elements of a SOA),
then Docker suddenly starts to look very attractive. And then you might go to
the trouble to get a single gold server image with docker installed and
working (or an Ansible playbook, or a Chef cookbook, or a Digitalocean
snapshot, or an EC2 AMI, or whatever), and you know you can just spin up a
server and deploy any app you want to it. And once you start thinking about
testing, CI, orchestration, automatic scaling, etc., it all becomes that much
more attractive; you've got these generic docker servers, on the one hand, and
these generic docker containers on the other, and you can mix and match them
however you like. When you start having more than 1 server and 1 app, that's
amazing. Very much worth the cost of entry of having to install docker
everywhere... _if_ you need that kind of thing.

You're focusing on portability between operating systems, but that's not the
point of docker; as you say, docker itself isn't portable at _all_ (which
should be a strong hint that that isn't the problem it solves). But docker
containers are portable between servers with docker on them, and with some
architectures (or at a certain scale), you will suddenly realise just how
useful that is.

If it helps, consider Heroku (and the other PaaS outfits like dotCloud, etc.).
A lot of startups outsource big chunks of their infrastructure to Heroku, and
Heroku uses a very docker-like architecture. If you were to shift that back in
house, in many cases that same architecture makes sense (largely depending on
just what you were outsourcing to Heroku...). ...and sometimes it doesn't. But
if it does, docker is probably a core part of any attempt at implementing your
own in-house PaaS. And if you need that kind of thing, you aren't going to
stop because "well, it doesn't run on OSX"; nobody (well, nearly nobody) is
using OSX in production. :)

------
boyter
I love this idea, and want to try it but I have no experience with Docker (on
the todo list).

I wanted to spin up an instance of Sphinx Search but no idea how to go about
doing it.

Maybe creating a set of tutorials would help with this. I can think of two
advantages. The first is that customers like myself will love it. Second,
similar to Linode and their tutorials, it will drive a lot of traffic and
establish your reputation as docker experts. It will probably build a lot of
back-links too, as people link to your tutorials.

~~~
frakkingcylons
Absolutely. Along similar lines, DigitalOcean has done a great job of
encouraging the community to write tutorials and articles, and as a result
there are tons of resources to get you started with all kinds of ways to use a
VPS. Doing the same would be tremendously beneficial for Stackdock.

------
jaegerpicker
This is pretty awesome. An api to automate deployments/management/monitoring
would completely rock too.

------
erichocean
How is private networking handled between Docker containers?

UPDATE: I'd also be interested to hear about Digital Ocean-style "shared" (but
non-private) networking—basically, any network adaptor with a non-Internet
routable IP address. ;)

------
pbhjpbhj
Not being familiar with the subject basically it seems that:

Docker is a simple description of an internet server, including the various
services required (mysql, httpd, sshd, etc. - the bundle being called a
_deck_ ).

It seems then you can create a server elsewhere (eg on your localhost),
generate the docker description of that and use that description to fire up a
server (either a VM or dedicated) using the service in the OP.

Am I close?

Could I use this to do general web hosting?

Edit: and looking at digitalocean.com it appears I can activate and deactivate
the "server" at will, so I can have it online for an hour for testing and pay
< 1¢?

------
conradev
This looks awesome! I currently have an AWS box for the same purpose, running
a few of my docker containers. Will this support the ADD directive, or the
ability to add custom files (config files) into containers?

------
lnanek2
Wonder if they have an idle/spin-up time. Only their one-instance plan is $5,
but I know I have to buy more than one dyno on heroku to avoid idle/spin-up
time - that, or use hacks like constant pingers, etc. This is important when
I'm doing experiments/UI tests/alpha tests/submitting apps for review before
they have any consistent traffic, but I don't want them to occasionally get
stuck on 15-second spin-up times on requests.

~~~
habosa
There are some websites that will ping your heroku instance every few minutes
for free. Works great for me.

------
guido4000
I'm not sure about the pricing yet, as I can run 5 or 10 docker instances in
one DigitalOcean VM costing 5 dollars per month.

~~~
sntran
Probably the differences are that your Docker instances run on a dedicated
server instead, and that all the setup, preparation, and maintenance are done
for you.

------
Matsta
Hmm StackDock.com is hosted on a server at Hetzner in Germany.

I don't know for sure whether the containers themselves are hosted by Hetzner,
but Hetzner is more of a budget provider than something you host production
sites on.

I've heard many mixed reviews about their network, and especially about their
support, which isn't up to scratch. We'll see what happens, but from what I
see, if someone decides to abuse the service, Hetzner might just take down the
whole server without warning, just like OVH does.

[http://www.hetzner.de/en/hosting/produkte_rootserver/px120ss...](http://www.hetzner.de/en/hosting/produkte_rootserver/px120ssd)
(I'm guessing they are using something similar to this.) It's a pretty
powerful and cheap server, but if you search hard enough you can find
something equivalent in the States for around the same price.

~~~
thomaslutz
Hetzner has surely gone downhill over the years (quality- and price-wise), and
support was better 8 years ago, but to say you would not host a production
site there is a pretty bold statement.

If you need real HA you should perhaps use more than one provider anyway. Or
what are your recommendations?

------
nfm
Looks cool. Here's what I'd love to see: built-in git deployment (i.e. take a
Dockerfile, build an image from it, and then after a push add the latest
source code to /app and start new instances), and some kind of orchestration
so you could run a number of app containers behind a load balancer container.

------
arianvanp
I love the idea! Really. I just don't like all of the UX yet. Some things feel
... off. It might be something personal, I'm not sure, but I guess it's
interesting to discuss. "Drops are distilled Decks" - the words feel
semantically mismatched for some reason. If I think "Deck" I don't think
"Config". If I think "Drop" I don't think "Deployable stuff", and I don't see
how a "distilled Deck" is a "Drop". Also it feels odd that I can create a "New
deck" in the "Instances" section.

though adding "cards" to a "deck" sounds intuitive.

I'm trying to come up with better terminology. something with ships and
containers...

~~~
cobrabyte
One thing that surprised me...

When I created a Deck (default Sinatra Hello World) and converted it to a
Drop, it did just that: it removed the Deck and created a Drop.

I guess I thought it would keep the Deck so that I could see the configuration
that I had chosen to create it. Is this a Docker thing where, once you've
created it, you no longer see the config? I don't think it is, but I honestly
haven't played with Docker yet. $5 a month is a low ask for me to try it out.

Also, when it comes time to pay for a Deck/Drop and you don't have credit card
info saved, it forwards you to that page... but, after entering the info,
you're not put back into the process. You're dumped back into the Deck page.
That seemed odd to me... wasn't sure if it had been converted or not.

I wish the word 'manifest' weren't used in so many contexts because, if you're
going to stick with the container shipping analogy, it would have made more
sense to have Manifests, Containers and Ships. That's just me though... who
knows. ;)

All in all, cool service. Look forward to playing around with it this weekend.

EDIT: I see that you can create a copy of the Deck that created a Drop...
still seems odd that the default behavior is to blow it away upon creation of
a Drop.

~~~
edbyrne
Appreciate the feedback - thanks - point taken and we'll fix this.

------
Touche
Is the pricing for 1 dockerfile or unlimited dockerfiles?

~~~
edbyrne
It's per instance - so you can have unlimited Dockerfiles; you only pay when
you create an instance from one.

------
kbar13
IMO, labels/tooltips should be added to the icons for the cards. Some of them,
including the leaf (nodejs?) and the tree (no idea what that is), aren't
especially obvious.

Otherwise, cool!

~~~
jonny_eh
And when I click on one, the checkmark doesn't disappear until I unhover the
mouse.

------
dylanz
Very cool, and I was waiting for something like this to be built out. Are you
planning on having a command line tool to control your deployments?

~~~
zmitri
I use a similar service called Orchard that has a "heroku-like" command line
wrapper around the docker client. It's quite nice
[https://github.com/orchardup/orchard-client](https://github.com/orchardup/orchard-client)

------
theunixbeard
I started the default instance with sinatra running, but where do you see the
IP address to visit it via a web browser?

------
j-b
Just signed up but the site now appears to be down, receiving "We're sorry,
but something went wrong."

~~~
edbyrne
Hackernews traffic spike! You can signup and create a Dockerfile - we've just
paused instance deployment for a couple of hours as we add more servers. Sorry
for the inconvenience.

------
tehwebguy
Looks awesome! Anyone know if there are bandwidth / throughput / transfer
charges?

Also, forgive my ignorance, but what would it take to be able to "add
containers" in the same way that you can add dynos on Heroku?

------
bionsuba
You should do some A/B tests to confirm, but I bet the pricing table at the
bottom is a little confusing, because the price is not highlighted in any way
and the call to action is round when it is typically a rectangle.

------
MilesTeg
The issue with linux containers is (or at least it used to be) that it is
possible for a malicious user to 'break out' of the container. Has this
problem been solved?

------
knotty66
Nice. Looking forward to seeing how this and all the other Docker based PAAS
ecosystems like Flynn, Deis, Tsuru, Shipbuilder, CoreOS etc pan out.

~~~
gabrtv
I agree this looks very cool. As far as [http://deis.io/](http://deis.io/) is
concerned, we're focused more on the "operate your own PaaS" capability,
whereas this seems to be a pure hosted service -- which is great for lots of
use cases.

Best of luck guys!

------
shtylman
Can I use a docker image I have already created?

------
Geee
Is this production-ready and trusted? Who are these guys? I don't want my apps
to be hosted on a quick hack.

------
kohanz
This looks like an awesome service. And the image on the site reminds me of
Season 2 of The Wire - even better!

------
susi22
Q: Do people have root on the containers?

~~~
joevandyk
Yes.

------
samtp
Cool service but your branding makes it look like you are affiliated with
Canonical/Ubuntu.

------
secure
Excellent! Will play around with it soon. Thanks for offering this, and best
wishes.

------
cvburgess
This is fantastic!

Does anyone know where DO servers are located?

~~~
sandhillcount
> DigitalOcean currently has data centers in San Francisco, New York City, and
> Amsterdam.

From here:
[http://www.enterprisenetworkingplanet.com/datacenter/digital...](http://www.enterprisenetworkingplanet.com/datacenter/digitalocean-rolling-in-with-new-features-big-plans-for-asia.html)

~~~
cvburgess
Thanks! This is just what I was looking for.

------
aurels
I get a 500 error when logging in - am I the only one?

------
gregf
Like the idea, but would like to see hourly billing.

~~~
edbyrne
Thanks for the suggestion - we are looking at more usage-based billing -
including per CPU cycle / RAM usage - to be a 'true' utility.

------
madisp
Clicking on alpha/deploy leads to 404 :(

------
jongleberry
where are the servers hosted? AWS? US or EU?

------
matiasb
Sounds great!

