Docker for Mac and Windows Is Now Generally Available and Ready for Production (docker.com)
599 points by samber on July 29, 2016 | 163 comments

I've been tracking the beta for a while. I'm confused about this announcement. These issues still seem unresolved?

(1) docker can peg the CPU until it's restarted https://forums.docker.com/t/com-docker-xhyve-and-com-docker-...

(2) pinata was removed, so it can't be configured from CLI scripts https://forums.docker.com/t/pinata-missing-in-latest-mac-bet...

(3) it's not possible to establish an ip-level route from the host to a container, which many dev environments depend on https://forums.docker.com/t/ip-routing-to-container/8424/14

(4) filesystem can be slow https://forums.docker.com/t/file-access-in-mounted-volumes-e...

Are these fixed in stable? I'm personally stuck transitioning from docker-machine and (from the comments) it seems like other folks are as well...

Sadly, given the state of things, be it in the Docker ecosystem or elsewhere, "ready for production" means something much different than it did years ago.

By my definition of ready for production, Debian is a good example of the opposite end of the spectrum from Docker.

I think by 'production', they mean 'ready for general use on developer laptops'. No one in their right mind is deploying actual production software on Docker, on OS X/Windows.

I've been using it on my laptop daily for a month or two now, and it's been great. Certainly much better than the old Virtualbox setup.

>No one in their right mind is deploying actual production software on Docker, on OS X/Windows.

Since the whole point of Docker would be to deploy these in production and not just for development, I don't see how the term 'ready for production' can be used. Isn't this just a beta?

I doubt the problems mentioned happen on Linux or CoreOS, which is likely what a production environment will run on.

> Linux or CoreOS

Well, now I'm confused

Sorry, CoreOS is Linux as well, but in my mind it's enough of a hyper-specialised immutable auto-updatable container-specific version of Linux that it warrants a separate category when talking about Docker.

Docker for Windows is to isolate Windows software.

It's not a tool to test Linux containers on Windows.

The deployment target for Docker containers for Windows will be a Windows OS.

Sadly, no, they're using the name "Docker for Windows" to refer to the Docker-on-Linux-in-a-VM-on-Windows version.

Real native Windows containers and a Docker shim to manage them are coming: [1] but not released yet.

[1] https://msdn.microsoft.com/en-us/virtualization/windowsconta...

I don't think so. That's what Jeffrey Snover is working on in Server 2016 with Windows Nano Server.

Unless something has changed since the last time I checked, the WindowsServerCore docker image was not generally available yet and required Server 2016 (I think it was TP6 the last time I checked).

Docker, to my knowledge, is still exclusively Linux flavors. (Though I'm happy to be corrected if someone knows more than me)

Docker images still aren't generally available, but you can now run Windows Container Images based on the NanoServer docker image (and WindowsServerCore image if you replace nanoserver with windowsservercore in their image URL in the docs below) on Windows 10 (insiders build)[0].

[0]: https://msdn.microsoft.com/en-us/virtualization/windowsconta...

I went wide-eyed about three or four times while reading those instructions!

Super exciting! Thanks for the comment.

I am almost positive that is completely incorrect. Can you give any example of Docker being used to isolate Windows software?

You're right. I was wrong about this

You would use Kubernetes, DC/OS, swarm mode for AWS, etc. for that. Containers are portable... nobody is launching a Windows VM and doing a "docker run" for their production env.

The fact that I can have Bash up and running in any distro I feel like within minutes blows my friggin mind. Docker is the stuff of the future. We were considering moving our development environment to Docker for some Fun, but we're still holding off until it is more stable and speedy.

I'm still using VirtualBox. Could you elaborate why Docker is better?

Leaving containers vs VMs aside, docker for Mac leverages a custom hypervisor rather than VirtualBox. My overall experience with it is that it is more performant (generally), plays better with the system clock and power management, and is otherwise less cumbersome than VirtualBox. They are just getting started, but getting rid of VirtualBox is the big winner for me.

It's based on the OS X sandbox and xhyve, which in turn is based on bhyve: https://blog.docker.com/2016/03/docker-for-mac-windows-beta/


When I used VirtualBox for Docker (using Docker machine/toolbox), I would run out of VM space, have to start and stop the VM, and it was just clunky all around.

Docker.app has a very nice tray menu, I don't know or care anything about the VM it's running on, and generally is just better integrated to OS X. For instance, when I run a container, the port mapping will be on localhost rather than on some internal IP that I would always forget.

I don't think he was comparing Docker to VirtualBox.

In Docker 1.11 they used VirtualBox to host a Linux Docker Image to run containers. In 1.12 they switched to Microsoft's Hyper-V.

On the other hand I find my old setup with VMware much more reliable and performant. And I can continue to use the great tools to manage the VM instead of being limited to what docker provides. Some advanced network configuration is simply impossible in docker's VM.

I'm pretty sure they don't mean that, or they would have said that it was still in Beta.

This isn't a product that's "ready for production"; it's a product company declaring that it is.

This means what it's always meant: that the company believes the sum they'll make by convincing people it's "production ready" is greater than the sum they'll lose from people realizing it isn't.

Keep in mind the optimal state of affairs for Docker Inc. is one where everyone is using Docker and everyone requires an enterprise contract to have it work.

So misinformed. Docker for Mac and Docker for Windows are not targeting production. They are designed for local dev envs.

So why call it "production ready"?

I agree that it is confusing. Production ready in the sense that it is stable for the targeted use case: local development environments. Not for "production". Damn, now I'm confused...

GA would probably be a more appropriate description.

Well, it was beta before.

Exactly. "Ready for production" and "industrial" are constantly abused. All these tools are awesome and we use them, but PROPERLY deploying and supporting them in production is far from painless (or easy).

I think many view "ready for production" as a sign that what they do have in place is stable enough, and that support options are available, so it ticks all the CTO/CEO boxes in business plans.

Which basically comes down to: when your CTO/CEO or some manager comes in preaching Docker ("we should be doing that, why aren't we?"), you have one less argument to dismiss it now than before.

Yes, many aspects need improving, but what is there is deemed to have gained enough run-time in real environments to be called stable: we can support this in production for off-the-shelf usage, without you needing lots of grey-bearded wizards to glue it all in place and keep it that way.

I'm not completely disagreeing with you, but Debian in recent years has taken massive steps backwards as far as production stability goes. Jessie, for example, did not ship with SELinux enabled, which was a key deliverable for Jessie to be classed as stable / ready for production. What's worse, it doesn't ship with the required SELinux policies (again, another requirement before it was to be marked as stable), it's filled with out-of-date packages (you know they're old when they're behind RHEL/CentOS!), and they settled on probably the worst 3.x kernel they could have.

You've given one example; SELinux. Did wheezy ship with SELinux enabled? No. So how is that a step backwards? It would have been a step backwards if they shipped with it enabled and it was half-assed. SELinux is notoriously hard to get right across the board. See how many Fedora solutions start with "turn off SELinux." Shipping jessie without SELinux enabled was the right thing to do, if the alternative was: not shipping jessie; or shipping borked jessie with borked SELinux support on by default. Those who know what they are doing can turn it on with all that entails.

You gripe about kernel 3.16 LTS but provide no support for your statement. With a cursory search I can't find any. If it was such a big deal I have to assume I would. For my part I use Jessie on the desktop and server and have not encountered these mysterious kernel problems of which you complain. Again, you may have wished for some reason that they shipped with 3.18 or 4.x, but they shipped. They have 10 official ports and 20K+ packages to deal with, I'm sorry they didn't release with your pet kernel version. Again, those who know what they are doing can upgrade jessie's kernel themselves if they are wedded to the new features.

So, massive steps backwards?

Unfortunately, nobody has stepped up for SELinux maintenance. If this is important to you, you should help maintain those policies.

All your remaining points are vague at best.

Oh believe me, we did try to contribute to Debian, in recent years the community has aged poorly and become toxic and hostile, where the Redhat / CentOS community has grown, is more helpful and we have found them to be more accepting of people offering their time than ever.

Most people I have spoken to about this say exactly the opposite. In 2014, the project even ratified a Code of Conduct [0].

The only major contentious issue I can recall was the systemd-as-default-init discussion, but that was expected.

[0] https://www.debian.org/code_of_conduct

I genuinely don't know about what toxicity and hostility you are speaking of. Any pointer?

It's amazing to me that a tool I use to prove that our stuff is ready for production is having such a hard time achieving the same thing.

Do you run your containers in production with "docker run" ??

Only for a tiny pet project.

The sales pitch I usually give people is that any ops person can read a Dockerfile, but most devs can't figure out or help with vagrant or chef scripts.

But it's a hell of a lot easier to get and keep repeatable builds and integration tests working if the devs and the build system are using docker images.
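One hedged sketch of that idea: a hypothetical make target that both developers and the build system invoke, so the build always runs inside the same pinned image (the `node:6` image and `npm test` command are illustrative assumptions, not anything from the thread):

```make
# Hypothetical target: devs and CI both run `make test` and get the
# identical toolchain, because everything executes inside one pinned image.
test:
	docker run --rm -v "$(CURDIR)":/src -w /src node:6 npm test
```

The point is less the specific image and more that the Dockerfile/image pins the toolchain, so "works on my machine" and "works in CI" converge.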

You are doing it wrong then. People run containers in production using orchestration platforms, like ECS, kubernetes, mesos etc. The docker for mac/windows are not designed to serve containers in production environments.

They help you build and run containers locally, but when it comes time to deploy you send the container image to those other platforms.

Using docker like that is like running a production rails app with "rails s"

And how do you solve all of the security problems and over-large layer issues that the Docker team has been punting on for the last 2 years?

Which security problems are you referring to? Our containers run web applications, we aren't giving users shell access and asking them to try and break out.

Over large layers: Don't run bloated images with all your build tools. Run lightweight base images like alpine with only your deployment artifact. You also shouldn't be writing to the filesystem, they are designed to be stateless.
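As a minimal sketch of that advice, a Dockerfile along these lines (the `myapp` binary name is hypothetical, and this assumes a statically linked artifact built outside the image):

```dockerfile
# Small runtime image: no compilers or build tools, just the artifact.
# Assumes ./myapp is a statically linked binary built beforehand.
FROM alpine:3.4
COPY myapp /usr/local/bin/myapp
USER nobody
ENTRYPOINT ["/usr/local/bin/myapp"]
```

The resulting image is a few megabytes of base plus your artifact, instead of a full distribution with a build chain baked in.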

Credentials captured in layers. Environment variable oversharing between peer containers (depending on the tool).

And the fact that nobody involved in Docker is old enough to remember that half of the exploits against CGI involved exposing environment variables, not modifying them.

With kubernetes, putting credentials in env vars is an anti pattern.

You create a secret and then that secret can be mounted as a volume when the container runs, it never gets captured in a layer.
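A minimal sketch of that pattern (all names here are made up for illustration; the manifest shape follows the Kubernetes secret-volume mechanism):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: webapp
spec:
  containers:
  - name: webapp
    image: example/webapp:1.0      # hypothetical image
    volumeMounts:
    - name: db-creds
      mountPath: /etc/secrets      # credentials appear as files, not env vars
      readOnly: true
  volumes:
  - name: db-creds
    secret:
      secretName: db-credentials   # created separately, e.g. via `kubectl create secret`
```

Because the secret is injected at run time as a mounted file, it never appears in an image layer or in the environment table.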

Also CGI exploits exposing env vars would work just as well on a normal non-container instance would they not?

Two separate issues.

Yes, you can capture runtime secrets in your layers, but it's pretty obvious to everyone when you're doing that and usually people clue in pretty quickly that this isn't going to work.

Build time secrets are a whole other kettle of fish and a big unsolved problem that the Docker team doesn't seem to want to own. If you have a proxy or a module repository (eg, Artifactory) with authentication you're basically screwed.

If you only had to deal with production issues there are a few obvious ways to fix this, like changing the order of your image builds to do more work prior to building your image (eg, in your project's build scripts), but then you have a situation where your build-compile-deploy-test cycle is terrible.

Which would also be pretty easy to fix if Docker weren't so opinionated about symbolic links and volumes. So at the end of the day you have security-minded folks closing tickets to fix these problems one way, and you have a different set that won't provide security concessions in the name of repeatability (which might be understandable if one of their own hadn't so famously asserted the opposite http://nathanleclaire.com/blog/2014/09/29/the-dockerfile-is-... )

I like Docker, but I now understand why the CoreOS guys split off and started building their own tools, like rkt. It's too bad their stuff is such an ergonomics disaster. Feature bingo isn't why Docker is popular. It's because it's stupid simple to start using it.

Regarding secrets in builds, I think a long term goal would be to grow the number of ways of building Docker images (beyond just Docker build), and to make image builds more composable and more flexible.

One example is the work we've experimented with in OpenShift to implement Dockerfile build outside of the Docker daemon with https://github.com/openshift/imagebuilder. That uses a single container and Docker API invocations to execute an entire Dockerfile in a container, and also implements a secret-mount function. Eventually, we'd like to support runC execution directly, or other systems like rkt or chroot.

I think many solutions like this are percolating out there, but it has taken time for people to have a direct enough need to invest.

>> Debian is a good example of the opposite end of Docker.

It is not fair to compare Docker with Debian. Docker Inc (the company behind Docker) is a for-profit corporation backed by investors. It is understandable why they need to push their products into production as soon as possible.

I use Docker a lot. I also use things like Docker volume plugins and have had to modify code due to API changes/breakages.

"Production ready" in the "container space" for me are Solaris Zones, FreeBSD Jails, and to an extent lxc (it's stable, but I've used it less). I like what Docker/Mesos/etc. bring to the table, but when working with the ecosystem, it takes work to stay on top of what is going on.

It is even harder to consult with a customer or company interested in containers and give the most accurate near/long term option. It becomes a discussion in understanding their application, what approach works now, and guidance for what they should consider down the road.

Networking and Storage are two areas with a lot of churn currently.

What does it matter how fair it is? It's not fair to compare a monkey to a fish in terms of being able to climb trees either, but that doesn't change that one of the two is most likely already sitting on a branch. And ultimately, if you need something that can climb trees, a fish simply won't do, no matter how fairly you try to treat it.

I can't get it to work on OSX without the CPU staying at 100%. Still not fixed:

> There are several other threads on this topic already. Setups that docker build an image and rely on in-Docker storage work well; setups that rely heavily on bind-mounting host directories do not. A complex npm install in a bind-mounted directory breaks Docker entirely, according to at least one thread here.


This is another issue that's been preventing my adoption of Docker for Mac: https://forums.docker.com/t/docker-pull-not-using-correct-dn.... The fact that DNS resolution over a VPN still doesn't work correctly makes me wonder how production-worthy this release is. It's a pretty common thing people want to do in my experience.

If you have the time, could you make a report on the issue tracker https://github.com/docker/for-mac/issues and include the contents of /etc/resolv.conf and "scutil --dns" when you connect and disconnect to your VPN? Ideally also include an example resolution of a name by the host with something like "dig @server internalname". I suspect the problem is caused by a DNS server in the "scutil" list being missing from /etc/resolv.conf. We're planning on watching the "scutil --dns" list for changes, but it's not implemented completely yet.

Okay, will do. Resolution of internal hostnames by their FQDN works fine if I set my VPN client (Tunnelblick) to rewrite /etc/resolv.conf. That said, the search domain is not carried into the VM, so name resolution by hostname does not work. Also, Tunnelblick has a name resolution mode that does split DNS (i.e. preserves DHCP-set DNS servers and only forwards DNS requests for the internal domain to the VPN DNS servers). This mode doesn't work at all. Would it be possible to allow forwarding of DNS requests to the host machine like with Virtualbox (VBoxManage modifyvm "VM name" --natdnshostresolver1 on)? I feel like that would simplify things greatly.

Sigh... I need to disconnect from the VPN to use it. I think you can reconnect after creation.

I always thought of production ready to be stable, of all things. Feature complete is not a part of it.

Basically, if you can live with the shortcomings a release has (bugs, performance, lack of features) you can use it in production as long as it's stable (and secure).

I wouldn't consider pegging a CPU until restart to be 'stable'.

This bug's been driving us mad because we can't reliably repro it on our machines at Docker, and it only happens to a small subset of users, but is very annoying when it does trigger. It seems to be related to the OS X version involved, but there aren't enough bug reports to reliably home in on it.

The other aspect that it may be is a long-running Docker.app -- since as developers we are frequently killing and restarting the application, it could happen after a period of time. I've now got two laptops that I work on, and one of them has no Homebrew or developer tools installed outside of containers, and runs the stable version of Docker.app that's just been released. If this can trigger the bug, we will hunt it down and fix it :-) In the meanwhile, if anyone can trigger it and get a backtrace of the com.docker process, that would be most helpful. Bug reports can go on https://github.com/docker/for-mac/issues

But this release aside, I was more commenting on the whole concept of production ready.

True. So that gives us one issue then?

"Aside from that Mrs Lincoln, how was the play?"

The filesystem is still not as fast as I would like, but it's incredibly improved over the last couple months.

One thing I found, was to be a little more cautious about what host volumes you mount into a container: for a Symfony project, mounting `src` instead of the whole folder sped up the project considerably, as Symfony's caching thrashes the file-system by default.
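For instance, with docker-compose you can bind-mount just the source tree and leave Symfony's cache inside the container (the image name and paths below are illustrative assumptions):

```yaml
version: '2'
services:
  app:
    image: my-symfony-app          # hypothetical image name
    volumes:
      # only the code you actually edit crosses the slow shared mount;
      # var/cache and var/logs stay on the container's own filesystem,
      # so Symfony's cache thrash never touches osxfs
      - ./src:/var/www/app/src
```

Mounting narrowly like this keeps hot, churn-heavy paths off the host filesystem bridge entirely.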

I have also yet to see a reasonable solution for connecting out of a container back to the host with Docker.app.

On linux and OSX with docker-machine this is easy with:

    docker run --add-host host:ip.for.docker.interface foo
But there is no equivalent to the docker0 interface or the vboxnet interface for Docker.app.

EDIT: I don't use this for any production environments, but it is very useful for debugging and testing.

What about getting the gateway address from inside the container:

    HOST_IP=$(/sbin/ip route | awk '/default/ { print $3 }')
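That extraction can be sanity-checked against a canned `ip route` output; the addresses below are assumptions for illustration (the `/^default/` anchor is a slightly stricter variant of the same match):

```shell
# Typical `/sbin/ip route` output inside a bridged container (sample data)
route_output='default via 172.17.0.1 dev eth0
172.17.0.0/16 dev eth0 proto kernel scope link src 172.17.0.2'

# The gateway is the third field of the default route
HOST_IP=$(printf '%s\n' "$route_output" | awk '/^default/ { print $3 }')
echo "$HOST_IP"   # -> 172.17.0.1
```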

That works for some use cases, but for others (Elasticsearch, Zookeeper, Kafka, etc.) the service inside the container needs to bind to an interface associated with an IP that's also addressable by the host. Even in host networking mode, eth0 inside a DFM-powered container will be bound to something like 192.168.x.y, but that 192.168.x.0 subnet is completely inaccessible from the host.

The best solution is to add a new stable, unconflicting IP address to the loopback interface on the Mac and connect to that.

Still not as friendly, as it requires system changes on the host, but not totally unreasonable.
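The recipe looks something like this (10.254.254.254 is an arbitrary example address, and `dockerhost`/`myimage` are hypothetical names; pick any stable address that doesn't collide with your networks):

```shell
# On the Mac: give the loopback interface an extra, stable alias address.
# This lasts until reboot; a launchd job can make it permanent.
sudo ifconfig lo0 alias 10.254.254.254

# In the container, reach the host via that alias instead of localhost:
docker run --add-host dockerhost:10.254.254.254 myimage
```

Services inside the container can then connect to `dockerhost`, and services on the Mac can bind to the alias address, from either side of the VM boundary.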

I'll give it a try if I evaluate Docker.app again.

why not just bind a port with -p

I was an early user of the mac beta and the 100% cpu would happen 2-3 times daily. Now it maybe happens once every 2 weeks.

Not sure about the others, but the CPU isn't much of an issue anymore. Maybe it's just me being used to how bad it was.

I've been heavily using it since what must have been early closed beta, and cannot recall ever having this issue. Might be something that isn't quite so widespread.

It's about weekly for me on mac.

It only happens with a few users, and not at all to the majority. It seems to happen more on older OSX versions, but beyond that there has not been anything identifiable in common about the systems it happens on unfortunately.

Not to mention the lack of host:container socket sharing and the fact that the Moby VM time drifts due to system sleep. I love Docker for Mac, I use it every day, and it's definitely still beta quality.

How much does your time drift? We changed the mechanism so that it should sync from the OS X NTP server now, which seems to be giving good results. If you are having problems, please create an issue and we can look into it.

Host container socket sharing will come, but it is complex, as sockets only exist within a single operating system, so we have to bridge them across two. We are using this for the docker socket, and debugging the issues across Mac and Windows, so it is on the roadmap.

Actually GA may have fixed this. I was able to reproduce it, but may have checked too quickly. I opened https://github.com/docker/for-mac/issues/17 against it, and may end up closing it.

Fixing the file system is going to be a very hard/impossible task.

They should just go with NFS mounts; it is at least 10 times faster than what they have now.

For #3 that is an issue for remote debugging with things like XDebug for PHP. I have been using this command:

sudo ifconfig lo0 alias

And setting the remote host to instead of localhost inside the container to work around that issue. It's been working pretty well.

I've been using the Beta version of Docker for Mac for many months and haven't had many issues with it at all. The biggest issue I've seen was the QCow file not releasing space and growing to 60+GB, but deleting it and restarting Docker did the trick (although I had to rebuild or repull any containers).

I had a similar experience trying to switch to docker-machine as it sounds like you've had with the new apps, and ended up giving up.

It's super simple through Vagrant though, just vagrant up and set DOCKER_HOST to the static IP. Plus there are vagrant plugins that let you sync a directory to the vm in a way that gives you inotify events so live build/update tools can run in your containers (which btw is huge, I can't believe the official apps haven't even attempted to address that, as far as I've seen).

The company claimed back in March [0] that Docker for Mac addresses the filesystem events. I observed that it works.

While Docker for Mac has improved somewhat over the beta, unfortunately it's still quite rough. For example, it was only last week that they pushed a fix for the DNS timeout issue [1] (I think maybe it was fixed? I can't check because Docker for Mac is not open source).

[0] https://blog.docker.com/2016/03/docker-for-mac-windows-beta/

[1] https://forums.docker.com/t/intermittent-dns-resolving-issue...

The DNS resolving code in Docker for Mac is in the VPNkit project which is open-source: https://github.com/docker/vpnkit. A DNS timeout is a fairly general symptom and it's hard to diagnose fully without a packet capture, but there's one issue that I'm aware of: if the primary server is down then the UDP DNS queries and responses will use the host's second server. However if a response is large and requires TCP then unfortunately we will still forward it to the primary server, which obviously won't work :( I've filed this issue about it: https://github.com/docker/vpnkit/issues/96. We hope to improve DNS, VPN and general proxy support for 1.13 -- please do file issues for any other bugs you find!

> Plus there are vagrant plugins that let you sync a directory to the vm in a way that gives you inotify events so live build/update tools can run in your containers (which btw is huge, I can't believe the official apps haven't even attempted to address that, as far as I've seen).

If you don't mind, what are these plugins? This is one thing that's sorely missed when I do development with Vagrant. I did a small amount of searching and trial and error, but couldn't find a solution that worked for me.

There used to be a separate vagrant plugin for rsync but it's now built-in. There is also built-in support for NFS and virtualbox/vmware synced folders. These all work reasonably well until you start having fairly large numbers of files/directories.
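A Vagrantfile fragment using the built-in rsync type looks roughly like this (the folder paths and exclude list are illustrative):

```ruby
Vagrant.configure("2") do |config|
  # One-way rsync from host to guest; pair with `vagrant rsync-auto`
  # so edits on the host are re-synced as they happen.
  config.vm.synced_folder "./src", "/home/vagrant/src",
    type: "rsync",
    rsync__exclude: [".git/", "node_modules/"]
end
```

Excluding heavy directories like `node_modules/` matters: rsync scan time grows with file count, which is exactly where these sync strategies start to hurt.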

Also if you use a native Linux host with LXC or Docker there is no overhead for sharing directories with the container, it's just a bind mount.

I don't believe NFS supports inotify events? At least, that's what I'm using, and I'm forced to use polling for any file change detection. And rsync is one-way IIRC. But yes, LXC on Linux works great when it's feasible; I've just been looking for something that supports file change detection on other platforms.
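Polling in its crudest form is just comparing timestamps; here is a sketch of what it boils down to when inotify events don't cross the shared-folder boundary (temp paths are generated on the fly, and nothing here is specific to any sync tool):

```shell
# Crude change detection by mtime comparison -- the essence of "polling".
dir=$(mktemp -d)
touch "$dir/stamp"            # remember when we last looked
sleep 1                       # ensure a distinguishable mtime
echo 'edit' > "$dir/app.js"   # simulate a host-side file change
changed=$(find "$dir" -type f -newer "$dir/stamp")
echo "changed: $changed"
rm -rf "$dir"
```

Real tools do the same comparison on a timer, which is why polling both lags behind edits and burns CPU on large trees.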

The official apps do do that. It's one reason their shared fs performance is abysmal so far.

The last one, in my experience, is basically a deal breaker. Simple commands (eg rake routes, npm install, etc) take 100x longer.

I don't have a firm opinion on what is or isn't 'production ready', but if there are major bugs, then there should be some way of disseminating that information instead of everyone rediscovering the same issues.

Number (3) is especially painful. The fact that their documentation makes it very explicit that the host is bridged by default and containers are "pingable" aggravates it a little further, since that seems like a very basic prerequisite for the tool to be usable.

For 4) you can use http://docker-sync.io - it's compatible with Docker for Mac and others, supports rsync/unison/unison+unox, and will have NFS support in the near future.

With unison+unox you get full transparent sync with native performance (no performance loss at all). This is far better than osxfs or NFS.

I wonder, is Microsoft helping to solve those issues? If they are, it shouldn't take too long.

I wish they would just adopt what dinghy did: xhyve with NFS mounts and a DNS server.

I just installed D4W a few days ago for the first time. It's been great. It's a seamless experience on W10 Pro with Hyper-V. I've used VirtualBox a lot and I like it but I always have to fuss with the network bridge and such. With this, it's hard to tell I'm even using a VM. Their network port mapping is seamless.

FYI, I was using rc4 and I didn't see any information on how to upgrade (should I uninstall first?). I ran the release setup and it did an in-place upgrade, deleting earlier components and such.

My experience has been the auto-upgrade works without requiring reinstall (and indeed just did so to GA last night). I don't think there's any specific fresh install requirement.

I don't use Docker much now, but in my experience the reliance on Virtualbox (on Mac) was a little clunky and annoying, and I really wished for native support without Virtualbox. I'm super happy to see that's here!

I think it makes sense to depend on the "native" virtualization solutions for each operating system (Hyper-V and xhyve).

We have been using Vagrant and VirtualBox heavily and the new Docker for Windows/Mac is making us reconsider that since you can't easily use more than one hypervisor on the same dev machine without some hassle. We might be building our Vagrant boxes for these other hypervisors soon. VirtualBox still seems easier to work with but there isn't anything much exciting happening with it lately.

Let's see...

It's sad that Vagrant is so married to VirtualBox in practice. VirtualBox on Linux is especially bad (crashy kernel drivers).

(Yes, I know there theoretically exist different Vagrant backends, but Vagrantfiles and public images are married to a specific backend, so all the reasons to use Vagrant tie it to VB.)

Have you tried using vagrant-libvirt on Linux? You can convert public images to run on it using vagrant-mutate.



I believe xhyve works fine with recent versions of VirtualBox, since xhyve is a pure userland app (aka no kernel extensions). Check out the issues section in the xhyve readme...


xhyve needs to interact with all kinds of low-level things so there has to be kernel code involved. xhyve does not install kernel extensions of its own like VirtualBox or VMWare Fusion. xhyve uses the kernel extensions provided by Apple (com.apple.driver.AppleHV and possibly others).

Ah, that makes sense. It doesn't install any extensions of its own, only using the kernel bits provided by Hypervisor.framework.

In my experience, Windows is the only OS that makes it difficult to run multiple different hypervisors. I'm running multiple on my Linux and OS X machines without problems.

Virtualbox and Vmware Fusion on Mac also don't like each other, in my experience.

I switched to VirtualBox from Parallels, and I agree that it's clunky in comparison (got tired of having to put in a purchase request every time they upgraded). I have to run IIS and MSSQL, unfortunately, and until I can use those with Docker, I think I'm stuck.

I think when Windows Nano Server is available I'll give it a try, my Win7 image is almost 70GB...

Absolutely love Docker.app, it's made life so much simpler at work for all of us, and performance has been increasing steadily (though it's still not 1:1 with boot2docker-xhyve).

On Windows, Hyper-V doesn't really play nicely with laptops. If you've got it enabled and bridged to your wifi adapter, Windows 10 may think that your connection is Ethernet and turn off all bandwidth saving features. I only found out after Windows Update had exhausted my monthly LTE quota.

Speaking about Windows, it is also disabled on Windows 10 Home and only available on Pro edition. Hope they'll maintain VirtualBox support as a first-class citizen (well, given that it was the most mature option during the beta period, suppose they will).

Basically echoing senex's comment, but this announcement seems bizarre in light of https://forums.docker.com/t/file-access-in-mounted-volumes-e.... In particular, a Docker employee responds with a status update in https://forums.docker.com/t/file-access-in-mounted-volumes-e..., saying this isn't resolved for stable Docker for Mac. It's totally unusable for Rails development right now.

The 'convox start' dev environment enables Rails dev on Docker for Mac with a custom file sync strategy.

This is another case of simple solutions win... You can effectively rsync code changes without all the low level file system madness.


Thanks. There are some workarounds posted in the thread I linked, too. Frustrating that Docker for Mac doesn't just work for the main use case (local development), though.

Can I run my usual VirtualBox VMs I have running for everything else (non-docker related) alongside Docker for Windows yet? When I tried one of the betas it enabled Hyper-V which prevented me from using any of my other VMs.

Wondering about this myself, going to write up a tutorial soon for people I know who use Windows. Last time I tried Docker for Windows it broke VirtualBox completely (just had to disable Hyper-V). Might be an easy fix for that though, didn't spend any time investigating

You can have just one hypervisor working at any given time. So, no, you can't use your VirtualBox VMs while a Docker container is running in this way (because it'd be using Hyper-V/xhyve).

Can I expect a Dockerfile that 100% works on Linux to work 100% on Mac and Windows?

Short answer, yes. Medium answer, images your Dockerfile is based on are still run in a Linux environment, even if it's virtualised differently.

Wait now, hold on a minute. I'm very confused and curious: how does this work?

Can I run any linux-based container on Windows? Can I run (are there any?) windows-based containers? If so, does it work the other way around: windows container on linux host? Does it somehow use the recently published Linux Subsystem for Windows, or is it completely different compatibility layer? If it is different, doesn't it seem like a waste of effort?

Native Windows containers are in beta and will be released later this year. You cannot run them on Linux hosts. They do not use the Windows subsystem for Linux, they are really native for Windows.

> Can I run any linux-based container on Windows?

No, on windows you still have to run a Linux vm which the containers will run inside. Meaning all containers actually run on a Linux host. The new Docker for Windows app only abstracts away some stuff so it feels easier to work with.

> does it work the other way around


> No, on windows you still have to run a Linux vm which the containers will run inside.

I don't think that's correct. To me that's the whole point of having a native Windows / Mac version of Docker. From their feature list:

> Faster and more reliable – native development environment using hypervisors built into each operating system. (No more VirtualBox!)

No, GP got it right.

The quoted part means that instead of VirtualBox one can use Hyper-V. In either case, it's handled by docker-machine, which runs a GNU/Linux VM with the Docker (host) tools installed, and containers are run on that VM.

I would be surprised if there aren't plans to support WSL (to run Linux-targeting binaries on Windows "natively", thus have "native" Docker containers) but I don't think that's available yet.

Dockerfiles describe how to build an image. You can build the image on any docker host and it will be identical. The differences occur only when you run the image and it becomes a container. The way networking and file systems behave can vary depending on the host and run command options.
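
A concrete illustration of that split (the image name and options below are just examples):

```shell
# Build step: produces an identical image on any Docker host
docker build -t myapp .

# Run step: this is where host differences show up, e.g. in how
# published ports are reached or how bind mounts behave
docker run -d -p 8080:80 myapp
docker run -d -v "$PWD/data":/data myapp
```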

So if I have an application that utilizes networking or the file system, will it have to use separate Dockerfiles depending on the platform?

Usually you have a Dockerfile to build an image then you would deploy it to servers with different run parameters.

You could also have dockerfiles that take a base image and then add some environment specific configuration

IMHO it's better to keep the images identical across environments, and pass runtime configuration when deploying
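
For example, the usual pattern is one image plus per-environment settings passed at run time (the image name, variable, and hostnames below are made up):

```shell
# Same image in every environment; only the runtime flags change
docker run -e DATABASE_URL=postgres://db.staging.internal/app myapp
docker run -e DATABASE_URL=postgres://db.prod.internal/app    myapp
```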

I still can't bind mount a file in a container if that file already exists in that container. Is this production ready?

This seems to work fine for me:

docker run -it -v /private/etc/passwd:/etc/passwd alpine sh (not recommended for any actual use obviously)

Is there a particular case in which this failed for you? We'd appreciate a bug report on https://github.com/docker/for-mac/issues (or from the Docker for Mac GUI, just click on "Diagnose and Feedback") so we can chase down whatever issue you're having.

Yes, this use case, it happens on Windows and on Mac as well.

C:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error response from daemon: oci runtime error: rootfs_linux.go:53: mounting "/var/lib/docker/aufs/mnt/90d24356afdeb7b9ddad4b3b6903be92063151c33bf34f3d63ede464437060c6/cryptoservice/broker-config.yml" to rootfs "/var/lib/docker/aufs/mnt/90d24356afdeb7b9ddad4b3b6903be92063151c33bf34f3d63ede464437060c6" caused "not a directory".

(I'm mounting broker-config.yml and that file is already present in the container. Most recent Docker for Win beta in this case, but getting the same on non-beta Docker for Mac.)

The error message specifically says "not a directory" and afaik you can't mount single files, only directories. I at least have never even thought of trying to mount individual files since the bind mounting functionality in Docker seems to always and everywhere have been described in a way that suggests that it's for mounting directories, not individual files.

An interesting fact I think is worth mentioning is that Docker for Mac uses a forked and currently closed version of xhyve, and not the same xhyve that we can find on Github. The last commit to open source xhyve was May 27.

With that said, Docker has plans to open source it. I wonder if that will happen soon, as they declare Docker for Mac ready for production. That would imply that the xhyve port also should be ready to be contributed back or spun out into a new project (the quote below said they were not sure if they wanted to contribute back or make a new project).

Personally I think the "right thing to do" would be to contribute back to xhyve, at the same time I have a feeling it's more valuable for Docker Inc to "own" and control their own fork/project so I would guess they will go down that path instead (it would still be open source, just under a different project name).

Source: https://news.ycombinator.com/item?id=11356293

EDIT: I stand corrected, see talex5 comment below, I had missed the hyperkit announcement.

Oh, my bad, thanks for the correction! :)

I need nested virtualization as well. I don't know if this is possible with the hypervisor being used, but it's hugely important for me.

I'm not sure about xhyve, but I vaguely remember Hyper-V supports nested virtualization but only on Windows guests. Something Microsoft seems to need that for the upcoming Windows containers.

Your best option for nested virtualization seems to be qemu, but you don't need virtualization for Docker on Linux so it's pointless.

VirtualBox doesn't support nested virtualization either.

Absolutely, I too would really like this, especially if you could easily do PCIe / Thunderbolt passthrough for GPUs.

Curious, can you share what you need it for?

Basically for hacking on cloud software that runs hypervisors such as kvm. docker-machine with a VMWare Fusion VM and nested virtualization enabled is the current approach I use - works fine for now.

Looks like you need nested virtualization for running Windows XP apps within newer Windows systems.

I'm also curious about other uses for it. Why is it a must-have when you could technically just use a virtual Windows XP system? I guess running a legacy application in a more secure and newer OS?

"This version of Docker requires Windows 10 Pro, Enterprise or Education edition with a minimum build number of 10586. Please use Docker Toolbox."


Meaning it doesn't run on LTSB as well (10240). Too bad, it's the only sane version of Windows 10 yet.

That'll be because Windows 10 is the first desktop Windows OS with support for Hyper-V.

If you didn't want to use Windows 10, perhaps you might have some more luck with a Windows Server OS. Does anyone know if the latest version of Docker will work on Windows Server 2012?

Client Hyper-V actually showed up in Windows 8 [1], but only recent versions of Windows 10 have Hyper-V with all the features needed by Docker for Windows.

[1]: http://www.howtogeek.com/196158/how-to-create-and-run-virtua...

My biggest issue with it is that there seems to be no easy way to give more disk space to the VM Docker runs inside. While it seems trivial, it can be useful if you happen to have really large images (yes, for valid reasons). If you run too many containers you just run out of space.

I ran into this too when fooling around with SyntaxNet. I think the default size of the VM is 16GB. I kept running out of swap space even after bumping the RAM on the VM up to 16GB.
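
For what it's worth, one workaround floating around is to grow the qcow2 disk with qemu-img. This assumes the default Docker for Mac disk path below; back the file up first, and note the filesystem inside the VM may also need growing to match:

```shell
# Hypothetical workaround: enlarge Docker for Mac's virtual disk.
# Quit Docker first; the path is the default install location.
DISK="$HOME/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2"
cp "$DISK" "$DISK.bak"           # keep a backup in case resizing corrupts it
qemu-img resize "$DISK" +32G     # grow the virtual disk by 32 GB
```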

> While it seems trivial it can be useful if you happen to have really large images (yes, for valid reasons).

I'm curious what some of these valid reasons are - why would you have a large image instead of a minimal image with all the data linked out?

Does this thing work on Windows 10 Home edition? That doesn't have Hyper-V, I think.

No it won't work, you're right it seems:


I've been trying to switch from VirtualBox to Hyper-V twice to use Docker for Windows, but always hit the same wall when using a desktop Linux guest: no 3D acceleration, no resolution scaling, no shared clipboard.

Yes, Hyper-V's Desktop UX for VMs is still in a really sad state compared to the competition, even for Windows guests, let alone Linux ones. I have it enabled to use Docker for Windows by default, but very often still need to reboot to disable it and use VMware Workstation for any serious work inside VMs, for all the reasons you listed, plus the awesome multi-monitor support in Workstation.

Microsoft really needs to get its act together.

Good to see they are moving forward, but I have a working rig at the moment with virtual box.

"If it ain't broke, then don't fix it" is a motto I live by.

Better it's your blood on the bleeding edge rather than mine :-)

Docker for Windows is unfortunately a bit gimped right now:

- docker-compose isn't OS agnostic and as versions go forward Windows is lagging behind

- this uses Hyper-V which blocks both Virtualbox and Vmware from running

I love Docker for Mac but I have had a problem of containers just disappearing after running for a while.

It has pretty good logging, have you taken a look to see if it crashes or something?

If you have issues with Docker for Mac on Sierra, turn off ntp:

  sudo launchctl unload /System/Library/LaunchDaemons/org.ntp.ntpd.plist

If I see that "We are whaly happy to have you" welcome screen one more time though...

Can you actually live without NTP though? Isn't that a pretty critical service?

One of the issues is that if your laptop goes to sleep, your Linux container's clock goes out of sync. To fix this you have to restart Docker.
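
A lighter workaround than restarting Docker, assuming privileged containers are allowed to set the VM's clock:

```shell
# Re-sync the VM's clock from its hardware clock after the host wakes up.
# --privileged is needed because setting the system time requires CAP_SYS_TIME.
docker run --rm --privileged alpine hwclock -s
```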

Last time I checked it didn't support 32-bit systems but could be tested on Windows 7. Now I see it requires `Microsoft Windows 10 Professional or Enterprise 64-bit. For previous versions get Docker Toolbox`.

Yeah, that's part of why this release baffles me. From 1.11 to 1.12 they dumped VirtualBox in favor of the latest version of Hyper-V, which is why it only supports Windows 10.

That's a pretty significant change in my mind but it didn't seem to extend their testing/validation timeline at all.

Anyone know if the volume mount performance has been improved to at least the level of the NFS method? This is what keeps me away from using Docker for Mac and keeps me sticking to dinghy until this is fixed.

Would like to know this as well. Docker for Mac performance is still horrible.

ARGH! I can't use Hyper-V on this hardware (no SLAT support).

Is anyone having trouble with docker-compose (on Mac)? It seems that ports are not forwarded/opened.

I'm finding it really funny how even Docker users can't really explain easily what Docker is.

Docker is a process launcher that makes it fast and simple to start processes with a unique network/filesystem/process/user space (via cgroups and namespaces).
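
A quick way to see that isolation in action, if you have a daemon running:

```shell
# Inside its own PID namespace the container sees only its own processes,
# and it gets its own hostname (UTS namespace) and filesystem root.
docker run --rm alpine ps aux      # typically just PID 1 (ps itself)
docker run --rm alpine hostname    # the container's name, not your host's
```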

It won't work on Windows 7, 8. It needs Windows 10 pro or enterprise to work.

Great! It was hard to install before.
