Announcing Docker 1.9: Production-Ready Swarm and Multi-Host Networking (docker.com)
311 points by ah3rz on Nov 3, 2015 | 79 comments



On the topic of docker and multi-container, multi-machine orchestration... Is there a comprehensive "docker deployment for dummies" guide out there? For example, let's say I have a couple of web applications with their dockerfiles ready, plus a database and a redis instance on the software side, and then a couple of server instances for it all to run on. Where do I go from there? What's the best process to package everything up and get it running on those servers? To deliver updates to those applications, preferably in a zero-downtime manner? I have a vague notion that my CI should be building the images and pushing them to something called a docker registry. But how are those secured? Is that a paid service? And what happens then? How do servers know to fetch and run the new version?


I've implemented a zero downtime Continuous Deployment pipeline with Jenkins and Docker, see project here: https://github.com/francescou/docker-continuous-deployment


I wouldn't call it comprehensive but I did this:

https://zwischenzugs.wordpress.com/2015/08/26/a-high-availab...


This space is still (fairly) new, so the general answer seems to be that there are multiple solutions to each problem, some that work well with others and some that do not.

For orchestration, offhand the most active projects seem to be Kubernetes [1], Swarm [2], Deis [3] and Mesos [4]. Kubernetes is built primarily by Google, Swarm by Docker and Deis by EngineYard, with each team having experience in different areas (orchestration, containers and full-tier solutions, respectively).

[1] http://kubernetes.io/ [2] https://docs.docker.com/swarm/ [3] https://github.com/deis/deis [4] http://mesos.apache.org/

Kubernetes, Swarm and Mesos handle the orchestration portions only, while Deis is a more feature-complete solution that handles the CI and registry portions as well.

Delivering updates to these solutions, and doing so with zero downtime, is still very early as well. Kubernetes has a rolling update mechanism, but it can still (occasionally) result in downtime if not set up correctly. Deis handles updates via git-push and will ensure that new containers are in place before the old ones are taken out of service. As for Swarm, my personal knowledge of rolling updates is limited, so I'll leave that for someone else to fill in.

For building and delivering images, there are likewise multiple solutions. The common approach is to use a Docker-compatible registry such as Quay [5] (Disclaimer: I'm a lead engineer on the Quay team) or the DockerHub [6]. In addition to supporting simple image pushes, both registries also support building images in response to GitHub or BitBucket pushes, so they can be used as an integrated CI, of sorts. Both services are paid for private repositories. Docker also has an open source registry [7] which can be run on your own hardware or a cloud provider.

Registries are secured by running under HTTPS at all times (unless explicitly overridden in Docker via an env flag) and by requiring user credentials for pushing and (if necessary) pulling images. Registries typically offer organization and team support as well, to allow for finer-grained permissions. Finally, some registries (such as Quay) offer robot credentials or named tokens for pulls that occur on production machines, as an alternative to using a password.

[5] https://quay.io [6] https://hub.docker.com/ [7] https://github.com/docker/distribution/blob/master/docs/depl...
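To make the credential flow concrete, here's a minimal sketch of pushing and pulling against a private registry (the image name and robot account are hypothetical; Quay's robot accounts use the `org+name` form):

    $ docker login quay.io                            # prompts for username/password
    $ docker push quay.io/myorg/myapp:v1.0

    # on a production machine, using a robot token instead of a real password:
    $ docker login -u myorg+pullbot -p <robot-token> quay.io
    $ docker pull quay.io/myorg/myapp:v1.0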

In terms of how servers know when updates are available, it all depends on which orchestration system is being used. For Kubernetes, we at CoreOS have been experimenting with a small service called krud [8] which reacts to a Quay (or DockerHub) image-push webhook and automatically calls Kubernetes to perform a rolling update. Other orchestration systems have their own means and methods for learning, by push or by poll, that the image to deploy has changed.

[8] https://github.com/coreos/krud
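For reference, the Kubernetes side of what such a webhook handler triggers boils down to a single CLI call (2015-era syntax; the controller and image names are made up):

    $ kubectl rolling-update myapp-rc --image=quay.io/myorg/myapp:v2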

Hope this information helps! (and if I forgot anything, I apologize)


The docker ecosystem is hard to follow. As you've just mentioned, there are multiple solutions to each problem. Docker-based solutions for orchestration (Swarm), storage (v1.9) and networking (v1.9) overlap with the offerings from Kubernetes, Mesos, Flocker and a whole bunch of others.

It's hard to know whether to wait for Docker to provide a solution or to use something that already has momentum. Take networking, for example. Solutions have been bandied about for the last year or so, and only now do we have something that's production ready. Do I rip out what I already have for something that is docker native, or do I continue with the community-based solution?

Storage (data locality) follows a similar path. Kubernetes provides a way of making network-based storage devices available to your containers. But now, with the announcement of Docker v1.9, do I go with their native solution or with something that has been around for ~6 months longer?

I've been working with these technologies for the past year, and it has not been easy building something stable with a reasonable amount of future-proofing baked in.


My advice would be to think hard about your requirements and pick something which meets them. Don't fret about the "best" solution - you and your team have more important problems to solve. If something works for you then you have made the right choice. All the solutions you would pick today will still be around tomorrow.


Try writing a book on it! Maddening.


great writeup, most helpful. thanks a lot!


If you're comfortable deploying to AWS, I'm building an open source and free platform, Convox, that addresses your setup and deployment questions.

Here are a couple guides that walk you through your first Docker cloud deployment:

http://convox.github.io/docs/getting-started/ http://convox.github.io/docs/getting-started-with-docker/

This gives you private build and registry services that are secured in your own VPC and accessible only through authenticated API calls.

The software that sets this all up is open source and free, but you do pay for your AWS usage (EC2, ELB and S3).

Servers know to fetch the new version when you issue a single `release` command, which triggers a zero-downtime rollout on the EC2 Container Service (ECS).
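The flow looks roughly like this (see the linked guides for the exact commands):

    $ convox apps create myapp        # register the app
    $ convox deploy                   # build the image, push it, create a release
    $ convox releases promote <id>    # zero-downtime rollout on ECS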


How is the multi-host networking implemented? Is there a dependency on a service discovery system? What service discovery system? Or are they using portable IP addresses? How are those implemented? Overlay networks? BGP? Or is it doing some crazy IPTables rules with NAT?

Will it work with existing service discovery systems? What happens if a container I'm linked to goes down and comes up on another host? Do I get transparently reconnected?

There's so much involved in the abstraction they're making that I suspect it's probably an unwieldy beast of an implementation that leaks abstractions everywhere. I'd love to be proven wrong, but their lack of detail should make any developer nervous about betting their product on docker.


https://blog.docker.com/2015/11/docker-multi-host-networking... http://www.container42.com/2015/10/30/docker-networking-rebo...

vxlan (overlay) and bridge, depending:

"Docker ships with 2 drivers:

bridge -- This driver provides the same sort of networking via veth bridge devices that prior versions of docker use, it is the default. overlay -- Not to be confused with the "overlay" storage driver (thanks overlayfs), this driver provides native multi-host networking for docker clusters. When using swarm, this is the default driver."
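In practice, with the 1.9 CLI that looks like this (network and image names invented; the overlay driver also needs a key-value store such as Consul or etcd configured on each daemon):

    $ docker network create -d overlay my-app-net
    $ docker run -d --net=my-app-net --name=db redis
    $ docker run -d --net=my-app-net --name=web mywebapp    # can land on another host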


This link is basically what Socketplane was working on when they got acquired: https://github.com/docker/docker/issues/8951

Basically integrating OVS APIs into Docker so it could use more mature networking code as well as VXLAN forwarding. VXLAN is basically IP encapsulation (with a 24-bit network ID) that the networking industry has standardized on. It more or less allows for L2 over L3 links. I like to think of it as the next Spanning Tree.

So the unwieldy part is the weight OVS brings as well as the VXLAN encapsulation in software - both of which have momentum towards being more lightweight.


Could you explain this to someone not as familiar with networking?


According to this [1] blog post, networking is pluggable, and the current "overlay driver" uses an overlay network via VXLAN tunneling. The driver sets up the tunnels automatically for you.

[1] https://blog.docker.com/2015/06/networking-receives-an-upgra...


I don't quite understand the swarm & compose workflow for production. I'd rather use a declarative language to specify what the systems look like, potentially with auto-scaling, health checks to replace containers if they go down, etc. I don't want to run one-off commands to launch containers based on local instead of centrally stored configuration, run one-off commands to launch the underlying hosts and to scale to more instances (which then isn't persisted anywhere), etc.

I feel like I'm just not understanding the "docker approved" approach. Which is surprising because docker itself is so great.

The networking stuff seems interesting, though. I'm very curious whether the rest of the ecosystem will evolve to take advantage of it or not.


This is very much Kubernetes's opinion, based on the production experience of the engineers who built it.

Brian Grant has explained it recently here: https://github.com/kubernetes/kubernetes/blob/master/docs/wh...

"The technical definition of "orchestration" is execution of a defined workflow: do A, then B, then C. In contrast, Kubernetes is comprised of a set of control processes that continuously drive current state towards the provided desired state. It shouldn't matter how you get from A to C: make it so. This results in a system that is easier to use and more powerful, robust, and resilient."

Look into PaaSes that are built on top of K8s, like Red Hat's OpenShift v3 or Deis.


Swarm & compose are pretty low level and not sufficient for production deployments in my experience. In production, you usually need things like logging/monitoring, versioning/rollbacks, scaling, configuration/user management, load balancing, etc. which you either have to setup yourself or get through a PaaS. I personally recommend Deis because it's Docker/CoreOS based (lightweight) and its developers are very active and responsive. I have to admit though I haven't evaluated all alternatives. https://github.com/deis/deis


If you don't mind using AWS, check out Empire: an open source, 12-factor-compatible PaaS built on top of Amazon's robust ECS container scheduler. Rollbacks/versioning/load balancing/scaling included.

https://github.com/remind101/empire


You're absolutely right about still needing logging, scaling, configuration management and load balancing.

My team and I are working on another open-source project, Convox, that offers this on AWS.

One differentiator is that we use "pure" AWS for everything.

Load balancing comes from configuring ECS and an ELB the right way. Logging is based on Kinesis and Lambda. We're seeing great reliability and manageability with this approach.

http://convox.com/


+1 for Deis. It's pretty nice and is actually "production ready".


Compose isn't great for deploying services, but it really shines for command-line utilities. At Pachyderm our entire build and test system is based on compose. It's nice because it means that all we need for dev is a working docker daemon, and our tests can run in very production-like conditions.
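For anyone who hasn't tried it, a setup like that is just a small docker-compose.yml (pre-2.0 format; the services here are illustrative, not our actual file):

    web:
      build: .              # built from the Dockerfile in the repo
      links:
        - redis
      ports:
        - "8080:8080"
    redis:
      image: redis

Then `docker-compose up` brings up the whole stack, and something like `docker-compose run web ./test.sh` runs the tests against it.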


Docker is moving more and more in a direction I don't like.

EDIT, the missing content: I mean it's currently mostly for big users. There isn't much for "small" users. The big things like Kubernetes etc. are really hard to configure and maintain. It's easier to maintain ansible / puppet / chef / etc. scripts than to maintain a real "docker" environment. Even looking at Deis, Flynn and OpenShift, it's not just "run this, upgrade with this".

After you set up the whole thing, you need to create huge buildpack scripts or Dockerfiles or Kubernetes configs or whatever. You just needed process isolation; now you're building an infrastructure on top of an infrastructure.


I think docker looks cool at the moment, but it doesn't provide much value yet. Some people are using it just for packaging their apps, but that's something that was already solved, or supposed to be solved, by the language being used.

The direction it is going is to create an "internet operating system": you upload your application and don't care which server it runs on.

That problem has not been solved yet; what we have is just a bunch of tools that you have to put together yourself, DIY-style.

The real power of this will come, once cloud providers will allow you to simply upload the images without having to build that infrastructure yourself.

I think you're right to fall back to ansible / puppet / chef, because this technology is simply not ready yet.

This is especially true if you're using a public cloud (and while you can make it work on AWS, it will cost you more, both due to the amount of effort and due to the overhead it imposes. Remember, AWS still charges you by the VM).

There might be some benefit to using it right now if you have your own datacenter with physical machines. It could provide cost savings compared to running your apps on dedicated physical servers or even VMs.


Certainly. There's an excellent opportunity there if someone is willing to execute. Would you (the community) pay for a product? Or support an open source project through consultancy? Or are we going to sit and wait for a large engineering organisation to build it internally and open source it?


I would pay, yes. However, it would need to be an excellent product, and I don't think any product in the near term could replace an existing ansible/puppet/etc. workflow that consists of a bunch of lines (less than 1000 for multiple projects).

As said, the only gain from docker would be process isolation, so it would need to be really, really simple and useful on low-end hardware (as the other solutions already are). And getting process isolation with cgroups isn't too hard on newer kernels:

    # systemctl set-property httpd.service CPUShares=500 MemoryLimit=500M

so what the product needs to have:

  - process isolation
  - easy configuration
  - configure the os/software and update it easily
  - nothing more than a bunch of lines per project (no Dockerfile fiddling)
  - binary / git rollouts


Installing kubernetes isn't actually as difficult as you've made it out to be. You'd be able to draft a workflow with less effort than in comparable circumstances using puppet, and you'll get things such as health checking and failover of your apps mostly for free.

If you're well invested in puppet, using puppet is going to be easier because you know it. You can happily use docker with puppet. Stop puppet from installing $APP and instead use it to docker pull && docker run $APP.

This means the logic for building your application has obviously moved to the Dockerfile. You cannot currently get rid of this logic, only hide it behind abstractions. I prefer it living in the app's repo, as it's a nice separation of concerns, but you obviously would prefer it to be magic, which you can have, at the price of versatility.
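And that Dockerfile is typically short; something like (entirely illustrative):

    FROM ubuntu:14.04
    RUN apt-get update && apt-get install -y python
    COPY . /app
    WORKDIR /app
    CMD ["python", "/app/app.py"]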

If you are able to write code to build and deploy apps, then moving over to docker should be pretty trivial for you. However, I actually see docker as a replaceable part, while kubernetes might actually be here to stay.


* you need network isolation as well. no point in doing process isolation without it. and thus forwarded ports.

* shared folders to persist necessary files. and thus volumes. and a few years later distributed volumes.

* not just isolate a process, but all its dependencies as well. no point in having a shared .so file which everybody can change, while just a single process is isolated. and thus a whole sandboxed container.

* and then deal with the size of a full sandbox, until you need some way to share unchanged files. and thus images and layers.

* and so on and so on.

big things always start small. at least in docker's case they did start small, and they're still small and lean individual projects. feel free not to use docker compose or anything else.

edit: formatting


I totally agree with you and that's why I started https://github.com/slicebuild

The project is only a month and a half old, so if you want, I can talk you through it privately.


You're describing a PaaS. There are already several.

My favourite is Cloud Foundry, because I've worked on it and I trust the way it's built.

Here's how I deploy an app:

    cf push
Done.


docker-compose and tmux and runit work.


So how is docker helping you? I mean tmux and runit are really good without docker, and docker just gives you a few things, like process isolation (which you could also get with cgroups and other container technologies, which probably work better with runit).


Ability to reproduce a build environment - and have others do it. I can spend time working out what dependencies are needed on various distros, or I can ship a Dockerfile in my Git repo.
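i.e. for anyone with a docker daemon, reproducing the environment collapses to (repo name hypothetical):

    $ git clone https://github.com/example/myproject && cd myproject
    $ docker build -t myproject .
    $ docker run --rm myproject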


I've been watching these 5 RCs for a while now because of great additions:

- Build time variables: https://docs.docker.com/engine/reference/builder/#arg

- docker volume subcommand: finally we have a sane way to clean up leftover volumes, which are kept on disk by default (quick sketch of both additions below).
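A quick sketch of both (flags per the 1.9 RC docs; double-check against your build):

    # with ARG FOO declared in the Dockerfile:
    $ docker build --build-arg FOO=bar -t myapp .

    # list and remove dangling volumes
    $ docker volume ls -f dangling=true
    $ docker volume rm <volume-name>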


That volume subcommand is critical to any description of Docker as "production ready". Docker defaults to leaving volumes "dangling" on the filesystem and there was previously no supported way to remove them. Really glad to see this make it in.


Volumes have been an underdeveloped part of the Docker world. It took a while, but this looks very promising.


Congrats to the Docker team on 1.9 and Swarm 1.0! From the ClusterHQ team.

Thanks for the shoutout to flocker in the docker volume example.

    $ docker volume create -d flocker --name=myvolume
    $ docker run -v myvolume:/data busybox sh -c "echo hello > /data/file.txt"
    $ docker run -v myvolume:/data busybox sh -c "cat /data/file.txt"

On a related note, we've been having a great time seeing volumes get mounted and remounted with swarm 1.0/flocker.


"We’ve been scale testing [swarm] to 1,000 nodes and 30,000 containers and it keeps on scheduling containers in less than half a second."

That's really, really exciting. Been looking forward to the GA release of Swarm for quite some time, glad to see it here.


Freakin' awesome. Congrats to the docker team on this new release! Looking forward to testing out the multi-host networking.

On a side note, is there any authentication / security for docker multi-host networking over public IPs?


I've put together this asciinema demo to show some of 1.9's new features:

https://github.com/ianmiell/shutit-docker-1_9/blob/master/RE...

feedback welcome @ianmiell


Great to see this. I've been missing Docker Compose on Windows. Having played around with swarm for some time now, I was hoping for it to become production ready. Does anyone know if swarm can now reschedule failed containers onto another host? I couldn't find that detail in the blog post.


I am really trying to figure out the ecosystem. I did some stuff with a single server, but now that we need to move to multiple servers we have Rancher, Weave, and so many others (kubernetes?). And now docker has integrated multi-host networking, so I am really not sure how to proceed.


I'm biased: use a full PaaS. I like Cloud Foundry because I've worked on it. OpenShift is another alternative.


Pretty sure openshift v3 is built on top of kubernetes


And Cloud Foundry can run Docker containers.

The point is that both are platforms. Application developers shouldn't really need to care about the internals of a PaaS, for the same reason that I don't really care about the internals of the Linux kernel.


Yes: Docker, Kubernetes, and etcd


I've been using Deis (http://deis.io/) which is built on top of Docker and inspired by Heroku. I think it's a bit more "lightweight" than Cloud Foundry but really pleased so far. I had a few issues but the Deis team was always quick to help and fix bugs.


If you want a lightweight Cloud Foundry, try Lattice[0]. It's specifically intended to allow developers to experiment with the core Cloud Foundry components (routing, scheduling/placement and log draining) with very low overhead.

[0] http://lattice.cf/


The ecosystem is changing so rapidly right now.

One answer is to stick with whatever AWS offers. The EC2 Container Service (ECS) offers a practical solution to running your app as containers on a cluster of multiple servers without adding any other software for orchestration.

It has a lot of deficiencies, but these are solved with other AWS services. For example you still use an ELB for load balancing across your servers and containers.

The rest of the ecosystem is pushing for a more radical container future that doesn't rely on AWS.

Joyent has a true "container-native" infrastructure with Triton. You can run your app as multiple containers without considering anything about servers.

Tutum (recently acquired by Docker) operates the same way. I'm excited to see what this platform offers if it evolves in lockstep with Docker core for networking, logging and data.

Kubernetes and Swarm are projects that solve the low level challenges of orchestrating containers in a cluster. But you almost certainly need to build a lot more around these systems to get logs, load balancing, etc.


What is the difference between Compose, Swarm, Tutum and Kubernetes? To me it looks like you can use each of them to compose a set of containers to run an app.


The Docker command is used for working with one image.

When working with multiple images, coordinating ports, volumes, environment variables, links, and other things gets very troublesome very quickly as you get into using a mish-mash of deployment and management scripts.

Compose aims to solve this problem by declaratively defining ports, volumes, links, and other things.

Compose does allow you to scale up and down easily but it doesn't do auto-scaling, load balancing, or automated crash recovery -- this is where Swarm comes in.

Kubernetes does what both compose and swarm do, but in a single product.

Both Swarm and Kubernetes are designed to accommodate provisioning of resources across multiple hosts and automatically deciding which containers go where.

Compose, Swarm, and Kubernetes are all things you can install yourself.

Tutum is far bigger, and the scope of its usage falls well outside what Kubernetes and the others do, but suffice to say that it's more of a PaaS than anything else.

Someone please correct me if I'm wrong, I'm not very familiar with Swarm, Kubernetes, or Tutum.


Thanks, that was helpful.


Swarm is a service that sits between your docker cli and your docker engines. It makes it as if you are talking to one docker engine from the cli when in fact you are talking to many. This makes it easier to manage docker engines across multiple hosts.

Compose is a tool that issues commands to docker engines and will e.g. spin up containers and link them together in the right order. It makes rote docker commands a little less painful. It can talk to a single engine or, apparently, many engines via Swarm.

When it comes to providing a production "service" based on containers, you need to be able to add and remove docker engines, to, for example, deploy new code via rolling update. Google Container Engine (GKE) and Amazon ECS marry docker concepts with front-to-back implementations of hosted infrastructure like server instances and network load balancers. Over-simplified, each has an agent that runs on a docker engine and does work similar to Compose and Swarm against AWS and GCE. Google's system is called Kubernetes.


I won't comment on Kubernetes because I'm not qualified to do so.

Compose: Multi-container orchestration. You define an application stack via a compose file, including links between containers (e.g. a web front end linked to a database). When you run `docker-compose up` on that file, compose stands up the containers in the right order to deal with dependencies.

Swarm: Represents a cluster of Docker Hosts as a single entity. Will orchestrate the placement of containers on the hosts based on different criteria and constraints. You interact with the swarm the same way you would any single docker host. You can plug in different discovery services (Consul, Etcd, Zookeeper) as well as different schedulers (Mesos, K8s, etc).
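Mechanically, that single entity looks like the following with token-based discovery (addresses and token are placeholders):

    $ docker run --rm swarm create                                             # returns <cluster-token>
    $ docker run -d swarm join --addr=<node-ip>:2375 token://<cluster-token>   # on every node
    $ docker run -d -p 2376:2375 swarm manage token://<cluster-token>          # the manager
    $ docker -H tcp://<manager-ip>:2376 ps                                     # one view of the whole cluster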

Tutum: A SaaS-based graphical front end for Docker Machine, Swarm, and Compose (although with Tutum it's Stackfiles, not Compose files). The stuff described above is handled via a web GUI.

You didn't ask but Machine is automated provisioning of Docker hosts across different platforms (AWS, GCE, Digital Ocean, vSphere, Fusion, etc).


If tutum is primarily a gui on top of a bunch of open source products, it doesn't sound like much of a business plan.


Ansible Tower is a gui on top of Ansible.

Docker Hub is a gui on top of the Docker registry.

GitHub is a gui on top of git.

And so on.


It worked well enough to be acquired by Docker :)


There was an article posted on HN not too long ago about this.

https://news.ycombinator.com/item?id=10438273


Swarm and Kubernetes are definitely competitors.

Swarm is a container manager that automatically starts and stops containers in a cluster using a scheduling algorithm. It implements the Docker API, so it actually acts as a facade that aggregates all the hosts in the pool. So you talk to it just like you would with a single-host Docker install, but when you tell Swarm to start a given container, it will schedule it somewhere in the cluster. Asking Swarm to list the running instances, for example, would list everything running on all the machines.

Kubernetes is also a container manager. The biggest difference is perhaps that it abstracts containers into a few high-level concepts — it's not tightly coupled with Docker, and apparently Google plans to support other backends — that map more directly to how applications are deployed in practice. For example, it comes with first-class support for exposing containers as "services" which it can then route traffic to. Kubernetes has a good design, but for various reasons the design feels overly complicated, which is not helped by some of the terminology they've invented (like replication controllers, which aren't programs, but a kind of declaration), nor by its somewhat enterprisy documentation.

Kubernetes is also complicated by the fact that every pod must be allocated a public (or at least routable) IP. If you're in a private data center that already has a DHCP server set up, that's a non-issue, but in this day and age, most people probably will need an overlay network. While there are tons of such solutions — Open vSwitch (aka OVS), GRE tunnels, IPsec meshes, OpenVPN, Tinc, Flannel (formerly Rudder), VXLAN, L2TP, etc. — none of them can be called simple. Of course, plain Docker doesn't solve this in any satisfactory way, either, but at least you can be productive with Docker without jumping into the deep end like Kubernetes forces you to do.

Docker Networking is a stab at solving the issue by creating an overlay network through VXLAN, which gives you a Layer 2 overlay network. VXLAN has historically been problematic because it required multicast UDP, something few cloud providers implement, and I didn't realize VXLAN was a mature contender; but apparently the kernel has supported unicast (which cloud providers do support) since at least 2013. If so, that's probably the simplest overlay solution of all the aforementioned.

As for Compose, it's a small tool that can start a bunch of Docker containers listed in a YAML file. It's unrelated to Swarm, but can work with it. It was designed for development and testing, to make it easy to get a multi-container app running; there's no "master" daemon that does any provisioning or anything like that. You just use the "compose" tool with that one config file, and it will start all the containers mentioned in the file. While its usefulness is limited right now (for example, you can't ensure that two containers run on the same host, unlike Kubernetes with its pods), the Docker guys are working on making it more mature for production use.


> If so, that's probably the simplest overlay solution of all the aforementioned

(I work on Weave)

Weave Net also lets you create a Docker overlay network using VXLAN, without insisting that you configure a distributed KV store (etcd, consul, etc.). So I would argue Weave Net is the simplest :-)

More detail here: http://blog.weave.works/2015/11/03/docker-networking-1-9-wea...


FWIW, it's possible to specify Swarm filters using Compose which Swarm can use to know it should colocate containers on the same node ("affinity"): https://github.com/docker/swarm/tree/master/scheduler/filter...
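Concretely, those affinities ride along as (slightly odd) environment variables in the compose file, e.g. (services invented):

    web:
      image: myorg/web
      environment:
        - "affinity:container==db"    # schedule next to the db container
    db:
      image: redis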


I think it's worth mentioning that setting up networking in Kubernetes is largely a problem for deployers, not so much for k8s users.

There are a variety of hosted, virtualized and bare metal solutions available.

The CoreOS folks have some nice all-in-one VMs if you want to experiment.

Google's hosted Container Engine is about as simple as it gets - and very inexpensive (I have been playing with it for a few weeks and have spent about $20).


Can it create unprivileged containers yet?


The experimental build has support for user namespaces.
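If I recall the experimental docs correctly, you opt in when starting the daemon, along the lines of:

    $ docker daemon --userns-remap=default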


I am curious (and afraid to look): How much breakage and backwards-incompatible API changes are there this time 'round?


Here's a changelog for the API:

https://docs.docker.com/engine/reference/api/docker_remote_a...

The API is backwards compatible, so if you don't want any of the new features, there is no breakage! E.g. if you're using Docker 1.8 (API version 1.20), prefix your API calls with /v1.20/ and you're safe to upgrade to Docker 1.9.
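E.g. against a daemon listening on TCP (host and port here are assumptions):

    # pinned to the 1.8-era API even after the daemon is upgraded to 1.9
    $ curl http://localhost:2375/v1.20/containers/json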


What's the relation between Docker Swarm and Hipache? Is Hipache discontinued? Is Swarm built on it? Is it compatible?

Also, is it possible to make Swarm answer the same client IP with the same backend server during a 'session'? This is very important for database applications, where the sync after each write may take some time and you don't want the UI to show different states from different servers until the system has settled. AFAIK Hipache doesn't offer this; IMO that's its biggest downside.


Hipache is a reverse proxy, swarm is for managing/scheduling a cluster of docker engines.


Well, when I think of a cluster I think of load balancing and failover - for which you can use a reverse proxy such as Hipache. So I don't quite understand how this is not related. The way I understand Swarm, it could be just a higher level abstraction built on top of that?


Load balancing and failover are generally a requirement in clustered environments, but they are really different jobs. LBs "schedule" connection requests; cluster schedulers schedule the things those connections are going to (and probably the LB itself), make sure they are up and running, etc.


Congratulations on the new release! Sadly, still without FreeBSD Jails support, which was sort-of promised around 1.0...


I'm not sure how complete the support is, as I don't run BSD, but I know it's being worked on, and it at least compiles on BSD. There's actually quite a bit of BSD-only code in there.


Does anyone know if Docker Toolbox supports NFS (or something faster than VBox Shared Folders) yet?


No direct support, but you can use scripts like https://github.com/adlogix/docker-machine-nfs to do the mount sharing yourself.


Couldn't you use ssh and build something with sshfs?


May I ask why this was downvoted? Isn't the idea of vbox's shared folders to provide convenient data exchange with the host OS? If one puts ssh into the docker container, the host can mount a container's directory into its tree.


[on-topic, but somewhat identifying comment removed] I didn't realize I was actually properly unhellbanned. I prefer the echo. :/


This completes Docker.


Docker is a solution looking for a problem.


Can you provide details about why you think that?



