Simplifying Docker on OS X (andyet.com)
259 points by joeyespo on Jan 25, 2016 | 85 comments



As someone who has wasted many hours trying to get B2D (boot2docker) working, I ended up using https://github.com/codekitchen/dinghy

It's easier to use, more reliable, and has some nice features like the OP's .docker DNS and NFS shares. The maintainer is very responsive, and I recommend it to anyone who is struggling with Docker on OSX.


After seeing this article, I just migrated from Dinghy to DLite. I saved about 5GB of disk space, got much faster VM startup times, and had basically no hassle. It was a cinch to work my docker-compose startup into a LaunchAgent, and now I finally have all my services running on boot. I highly recommend giving it a shot; my eyes nearly popped when I saw how quickly it downloaded and started the VM (must have been all of a 10MB download).
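For anyone curious, here's a minimal sketch of such a LaunchAgent (the label and paths are made up; adjust for your setup):

```
cat > ~/Library/LaunchAgents/com.example.docker-compose.plist <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key><string>com.example.docker-compose</string>
  <key>ProgramArguments</key>
  <array>
    <string>/usr/local/bin/docker-compose</string>
    <string>-f</string><string>/Users/me/project/docker-compose.yml</string>
    <string>up</string>
  </array>
  <key>RunAtLoad</key><true/>
</dict>
</plist>
EOF
launchctl load ~/Library/LaunchAgents/com.example.docker-compose.plist
```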


As a fellow Dinghy user, thank you! I'll give DLite a shot right now :)


In my experience the OSX tools for Docker work absolutely fine. If you've gotten into a bad state, completely uninstall all existing Docker-related stuff, then `brew update` and `brew install docker-machine`.

The article sounds like a fun personal project, but I definitely don't see inconvenience on OSX being a justification.

If shell commands are too much to type then create aliases.

The VM's IP doesn't seem to change; add an entry in /etc/hosts mapping it to something like "docker-machine".
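For example (a sketch; assumes a machine named "default" and requires sudo):

```
echo "$(docker-machine ip default)  docker-machine" | sudo tee -a /etc/hosts
```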

I don't understand the people in this thread complaining that OSX support is, e.g., "lackluster". The docker containers don't run on OSX so it's very nice that there's an OSX client that works perfectly as far as I can tell.


My experience is that the toolchain works fine but performance is lacking, especially when sharing many files between the OS X host and the Docker container.


Out of curiosity, did you try docker-machine (the Docker Toolbox)? I have found that to be fairly straightforward, though I had not used B2D previously.


To be clear on adamsea's point, docker-machine is now the replacement for B2D and is also officially supported.


I used to use dinghy, but setting it up required a lengthy process. Because I'm trying to migrate my department to Docker, I wanted to streamline the onboarding process, and ended up blowing away everything and switching back to Docker Toolbox.

Haven't had any problems since, and it's been easier to get people started.


I don't have much use for Docker on my personal OSX computer, but whenever I've tried to get things running smoothly on there for pet projects, B2D was such a pain and either wouldn't work at all, or would be pretty awful to get set up properly.


I feel like this isn't much simpler than the official docker-machine workflow, but I may be too far in the weeds at this point to objectively tell how complicated it is. In my case I need to test tools across 4 versions of Docker so I'm locked into using docker-machine and dvm.


The lack of /var/run/docker.sock is annoying, but it's really not that big a deal in my opinion either. docker-machine covers a lot of other use cases, such as multi-host Docker swarms and Docker in the cloud. If you have any need for those more advanced use cases, adding this tool to the toolbox is extra mental overhead for a small benefit.


I'm also not sure what major advantages this brings. And the article was slightly unclear in that you can add the Docker environment variables to your `.bash_profile` -- you're still dealing with environment vars, but you don't have to run the `eval` command all the time. And I believe the docker-machine default VM settings mount your user directory (/Users/myname) in its virtual machine.
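Something like this in `.bash_profile` (the values here are illustrative; copy the real ones from the output of `docker-machine env`):

```
# Pin the variables directly instead of eval'ing each session
export DOCKER_TLS_VERIFY=1
export DOCKER_HOST=tcp://192.168.99.100:2376
export DOCKER_CERT_PATH=$HOME/.docker/machine/machines/default
export DOCKER_MACHINE_NAME=default
```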


Yes, you're correct: docker-machine mounts your current user's folder into the VM as well, so that's not an advantage over docker-machine either.


But that mount is vboxsf and is slow. You need to do some extra work to get NFS going.
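Roughly this, on the Mac side (a sketch of what tools like docker-machine-nfs automate; the machine name "default" is an assumption, and the VM still has to mount the export):

```
# Export /Users over NFS to the docker-machine VM, mapped to your uid/gid
echo "/Users -alldirs -mapall=$(id -u):$(id -g) $(docker-machine ip default)" \
  | sudo tee -a /etc/exports
sudo nfsd restart
```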


There's a lot of neat work going on in this space because Docker support on OS X is pretty lackluster out of the box. My team and I built Dusty[1] to try to solve many of the same problems, though we seem to be operating at a bit of a higher level than this tool.

I'm excited to try out xhyve for virtualization, but even if you solve the virtualization problem there are still some real usability concerns around using Docker for development in even medium-sized teams. I gave a talk at PyGotham[2] about some of the benefits and issues of using Docker for dev at this point.

All told, this looks like it might be an attractive lightweight solution though if virtualization is your main problem now.

[1] http://dusty.gc.com/ [2] https://www.youtube.com/watch?v=pYLOuuR7HI0


I also really hate the official Docker way on OS X. It's even hard to uninstall! If anybody is interested in how to uninstall the official Docker for OSX: https://therealmarv.com/how-to-fully-uninstall-the-offical-d... Running Docker in a Vagrant VM is, or was, the best way. Speaking of Vagrant... does anybody know if there is an xhyve Vagrant project?


Nice work joeyespo, it's definitely a much-needed step forward. NFS, though, is a huge pain point for me on OSX performance-wise. The only reliable solution I've found so far is Unison sync between Mac & Vagrant (Ubuntu). That also comes with some caveats, as Unison has to be compiled with exactly the same version & dependencies on both OSX and Ubuntu.

If anyone's interested, I've created a list of steps involved in making it work on El Capitan: https://gist.github.com/pch/aa1c9c4ec8522a11193b


FS sync, be it for code, assets, data, or logs, is painfully slow on OS X because of vboxsf's poor performance.

If you intend to use Docker containers on OS X as an isolated runtime, with all your "data" synced in realtime between your host and your container, then use Linux.

Fswatch / rsync / unison over NFS, SSH, or CIFS are hacks of a greater order than having to source an env script.

Out of curiosity, why would you need directional sync/unison?


Sorry for any confusion; this wasn't my work. I only shared it on HN. Nathan LaFreniere is the author of this post and project (https://github.com/nlf).


Pretty cool trick to make the OS X docker hacks feel transparent. I've been tempted into getting a Linux laptop just because using Docker on OS X feels so hacky.


I did and never looked back. Lenovo X1 Carbon Gen 3


Been using Linux on my work macbooks since 2009. There are some bumps involved, but it works really well on the awesome Apple hardware.


I dual-booted Mint on my 2011 MBA the other week and it was a nasty experience. I feel like Linux users have low standards, or maybe I've been away too long. Here are four things I found immediately:

1. The fans don't automatically react to temperature without a background daemon (?!).

2. Multi-touch trackpad support is terrible. The default synaptic driver can't handle you resting your thumb on the bottom of the trackpad. The alternative driver doesn't support momentum scrolling (and is still a bit shit). At least palm detection has been solved; well done Linux, it only took you 10 years.

3. It cannot seem to handle using left Alt as a third-level activator (e.g. for # key on EU keyboards) at the same time as actually letting you /use/ your Alt key (e.g. for activating menus). I think I eventually solved this, but how hard should it be?!

4. As others have mentioned; battery life.

I was impressed that sleep, audio, wifi and display brightness all worked out of the box though. Progress, eh?


I always run the latest Ubuntu: 15.10 on a 2014 MacBook Pro at the moment.

1. I currently don't have a problem with that (the fans/temperature management). In my seven years of running Ubuntu on Macbooks, I've run into it in the past, but it's been a while.

2. I've never been a 'full' OSX user, but I understand they're doing a lot of very nifty things with the touchpad, not surprisingly. My experience with the touchpad under Ubuntu is generally not as good as under OSX.

3. Can't comment on this one.

4. As described elsewhere.

For most people, I would not recommend running any Linux flavor on a Macbook. I've been a 'UNIX guy' for decades now. Hell, I even run a minimalistic window manager (http://www.6809.org.uk/evilwm/).

For me, the upsides outweigh the downsides. I'm pretty sure that's not the case for most people though.


How's the battery life?


I use a Macbook Pro (2014) with Kubuntu Linux installed full-time. The battery life is about half of what it is when running OSX.

If you forget your charger, there is serious range anxiety. Luckily, I've been running PC laptops for years and have gotten completely used to bringing my charger any time I take my laptop.


I get ~3/4 of the battery life that I have on OS X if I don't run anything using Flash.

If Flash starts, I get less than half (using Arch Linux on a mid-2012 MBP).


I estimate that I get better than half of the battery life.

Years ago, when I started doing it, the battery life of Linux on Macbook was really terrible.

I only need a couple of hours of battery life for my mac at most, so that's fine for me at least.


Would it not just be easier to dual boot Linux? If you've got a big enough hard drive it works very nicely.


If you're going to go through the trouble of having two computers anyway, why not make the second one a home Linux server? Point the docker client's socket at that machine, instead of at a VM.

One of the benefits of doing this, I find, is that all the power-hungry stuff happens on the server, so I can have a really lightweight development machine (e.g. one of the 2015 Macbooks.)
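A sketch of what that looks like (the hostname, port, and cert path are assumptions; 2376 is the conventional TLS port for a remote Docker daemon):

```
export DOCKER_HOST=tcp://home-server.local:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=$HOME/.docker/home-server   # wherever your certs live
docker ps   # talks to the Linux box; no local VM involved
```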


...or just get one in a colo for $40/month.


Depends on your internet speed. Pushing Docker images over a LAN is about the same speed as writing them to disk. Pushing them to a remote server somewhere can introduce noticeable latency in your workflow.


It's easy, you just do everything on the remote machine(s). I've taken this to such an extreme that I've given up on the Macbook and moved to a far less expensive and more secure Chromebook.


Not bad. Personally, I can't stand the hit to latency that doing work over SSH or screen sharing introduces, unless the ping time is very low... and your average cloud server has way less CPU power and RAM than a good laptop too, though I suppose you can avoid that by selecting a more expensive server to do work on. There are certainly huge advantages to the remote method for those that can pull it off.


Linux works quite well on Macbooks. No need to buy more hardware.


It works great indeed. My main pain point though is that if you want to dual-boot then every update of OS X will destroy your Linux workspace.


Just use Vagrant! Start a Vagrant VM on your Mac and do the Docker stuff inside it.


If I "just used Vagrant," I think I'd end up with a virtualbox VM, just like if I were to have used docker-machine.


Vagrant is more repeatable, though. It's straightforward to provision an environment. In my (limited) experience with direct Virtualbox, you wind up manually building a lot of stuff.


docker-machine automates setting up a VM running Docker. There's nothing manual about it (besides running "docker-machine create dev" or whatever)


Docker-machine drove me up a wall. Getting file sharing to work so I could use my nice Mac editors on my in-docker code was a huge waste of time and effort. Docker-machine was just another one of those crappy half-measures that keep Mac owners struggling to find a way to make Docker graceful.

I was also motivated by bringing on new team members who were on Windows rather than Macs. Vagrant works fine and transparently on both Mac and Windows, with the same commands. It's not necessary for everyone, but I was pleased that my best Mac solution also works just as well on Windows.


It's unfortunate you had such a bad experience with Docker. No doubt there's a steep learning curve to using the more advanced features. If you're still willing to look into it, I'd recommend taking a look at https://docs.docker.com/engine/userguide/dockervolumes/ under "Mount a host directory as a data volume."
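A hedged example of that feature (paths are made up; the image name is borrowed from the Docker docs example):

```
# Mount a host directory into the container as a data volume.
# On docker-machine, only paths under /Users are shared with the VM by default.
docker run -d -v /Users/me/src/webapp:/opt/webapp training/webapp python app.py
```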


`eval $(docker-machine env docker)` is great, I can stick it in scripts and separate my development environment from testing, staging, and production for every project I need to track. It works on Windows, OS X, and Linux so all my developers can use whatever OS they want and I still have one platform-agnostic environment setup script that gets people up and running in less than 15 minutes. The only problem with docker-machine is you can't share environments without copying over the entire machine configuration, which is somewhat baffling in the case of EC2 machines but is by no means a deal-breaker.


Thank you! I'll have to check this out. It baffles me that a company such as Docker has yet to make using its primary product on Mac OS X simple plug and play. Needing to remember to run `eval $(docker-machine env docker)` is not a sustainable solution. Hell, I know about this and would still occasionally forget and become confused by the error messages.

In fact I put this horrible hack into my ~/.bash_profile (I wouldn't recommend doing this but, hey, it works):

```
docker-machine start dev
eval "$(docker-machine env dev)"
```


One thing I do to avoid this is to write out the Docker environment to a file in my home directory. Then I source that file, if it exists, for each new terminal session. Not perfect, but you can see it here: https://github.com/bentruyman/dotfiles/blob/392914f92e5a8d8d...
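In essence (a condensed sketch; the machine name and file path are assumptions):

```
# Run once (or on boot): capture the env for later shells
docker-machine env dev > ~/.docker-env

# In ~/.bash_profile: pick it up if present
[ -f ~/.docker-env ] && source ~/.docker-env
```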

But DLite appears to resolve a lot of this 'hackiness'.


From my understanding, the limitation is from OSX and not from Docker, as OSX does not have native support for kernel namespacing. This is why Docker on OSX runs inside of a Linux virtual machine.


Making OSX Linux is hard, but making Docker smarter about running on OSX is easy (have docker-machine provide a socket like this project does; have the docker client detect the environment and connect to the correct virtual machine's docker socket), and it would be an obvious huge step forward.


Yeah, sure, I agree, there's a lot of room for improvement. But, I still talk to a lot of developers who are new to Docker and use OSX, who are confused as to why they need to run a VM or enter docker commands into a different terminal than they are used to.



docker-machine is almost completely seamless to me. The only thing I have to do is start the VM, the eval is in my .profile.

This whole problem took me <5 mins to solve, and about 30s to solve each boot. I could quite easily start the VM on boot as well, but it's such a non-problem for me that I don't even bother spending the 3 mins it'd take to set up.


Honestly, to me this looks like a waste of time. If you want simplicity, install Docker Toolbox, which will set you up with access to Docker and all the environment variables. If you don't need simplicity, you probably want to handle multiple Docker hosts, including VMs, Linux machines, and cloud instances; this is why you want an option to switch between Docker environments using `eval $(...)`. You can also use something like https://github.com/Tarrasch/zsh-autoenv to automatically call this command in specific folders.

And if you really need docker-lite, you don't need to write it in Go. Just do this.

Run once (with sudo or whatever other crazy stuff it takes):

```
echo "$(docker-machine ip dev) local.docker" >> /etc/hosts
```

Add this to `.bashrc`:

```
eval $(docker-machine env dev)
```


Unfortunately, it seems that xhyve cannot connect to the Internet when traffic is routed through OpenVPN; otherwise I would use it at my job. I tried https://github.com/coreos/coreos-xhyve, https://github.com/ailispaw/docker-root-xhyve and Veertu, and none of them work.

Apparently it's a known issue (https://github.com/mist64/xhyve/issues/84); it might have to do with Hypervisor.framework.


Same goes for Cisco AnyConnect VPN. Updating the routing table seems to temporarily fix it, but it would be nice not to hack around it to get it to work.


> Updating the routing table seems to temporarily fix it,

Can you share a link? Thanks.


It's been a while, but I believe the fix is in this thread:

https://github.com/boot2docker/boot2docker/issues/628


This looks great and is now high on my todo list to test.

I am all in on docker development because I am all in on using images and containers in production via ECS. Getting a laptop and AWS talking the same abstraction is the future.

I too suffer from problems with Docker Machine and VirtualBox on OS X. Frequently I find myself debugging these subsystems. It's been OK because the worst case to get back to a good state is uninstall, reboot, and reinstall. Annoying but passable.

I have also experimented with xhyve and it looks extremely promising but does need some new tools to make it a turnkey docker dev environment.

From my survey of this technique and the tools, we need more time, energy, and cooperation to get there.

Thank you and everyone else for your efforts!


> Getting a laptop and AWS talking the same abstraction is the future.

You can already do this with Vagrant and VirtualBox. Docker is just additional abstraction.


I used to struggle with this. Then I set up a Vagrantfile and put my docker code in a Vagrant instance. It's just far and away better than any other hack I've tried. File mount point transparency problems? Solved!


I just use bash aliases in my .profile:

```
alias dm="docker-machine"
alias dc="docker-compose"
alias denv='function __denv() { eval "$(dm env $@)"; unset -f __denv; }; __denv'
```

Then:

```
dm create --driver virtualbox local
denv local
```

Also works with swarm params:

```
denv local --swarm swarm-master
```


The whole Quay team at CoreOS uses OSX to develop a container registry. As a result I have a bunch of useful aliases as well[0]. A lot of them have to do with just plain cleaning up after yourself, but I also wrap my calls to `docker` with a zsh function that will eval my dev docker-machine environment if it hasn't been already.

[0]: https://github.com/jzelinskie/dotfiles/blob/b2d33f8c601d1b7d...


Thanks for this, I'd been typing way too much for too long. However, dc overlaps with the dc program so I've moved to dkm and dkc (since I actually use dc from time to time.)


Mine are similar:

```
dssh() { docker exec -ti "$1" /bin/bash; }

dexec() { img="$1"; shift; docker exec "$img" /bin/sh -c "$@"; }
```


Hey, this is pretty sweet.

This is a simpler alternative to the solution I wrote about here: https://allysonjulian.com/setting-up-docker-with-xhyve/, which uses docker-machine-driver-xhyve (https://github.com/zchee/docker-machine-driver-xhyve).

I've been trying to figure out how to deal with the env variables used by Docker, and used a workaround in ~/.profile (https://gist.github.com/astrohckr/efceb07887225cbc2ba2). Like the OP, running eval in every terminal session makes me a bit concerned as well.

I'll give dlite a Go.


I created a very similar post not too long ago:

http://shahruk.com/code/snippets/2016/01/18/Definitive-Guide...


Docker is one of the tools that was concocted (by brilliant people!) after I began programming professionally. From my understanding, Docker allows you to containerize your applications so they function properly anywhere you (so to say) "place those containers".

Now, say I'm working with Django or Flask, and things behave differently on an AWS server than they do on my local machine. For example, relative paths for file uploads work on localhost but break on AWS. File permissions are also irritating to work with on AWS, but never an issue on localhost (on a Mac). In such a scenario, is Docker meant for these issues? I imagine I'd use Docker to make a "container", set everything up properly in it, and then just move it to AWS when deploying?


You can also use Docker on AWS. Just use it as a target "runtime" for all your environments.
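That is, build one image and run it everywhere (a sketch; the image and registry names are made up):

```
docker build -t myregistry/myapp .          # build once, on your Mac (inside the VM)
docker run -p 8000:8000 myregistry/myapp    # test locally
docker push myregistry/myapp                # then pull and run the same image on AWS
```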


So that would mean telling Docker to set up a container that models the AWS configuration in all my environments, correct?


Is there a technical reason why docker-machine on OS X doesn't set up /var/run/docker.sock in a similar way? It sounds so simple when put this way in the article.


So I imagine the solution is using xhyve to run a simple Linux VM.

Why is the xhyve virtualization framework through low-level go-bindings better than using VirtualBox for the same task?


> Why is the xhyve virtualization framework through low-level go-bindings better than using VirtualBox for the same task?

Presumably because xhyve uses the native OS virtualization framework (Hypervisor.framework), and as such does not require custom third-party kernel modules. It also apparently works a bit better with the native OS scheduler.


Is it more lightweight?


Yes. VirtualBox emulates a whole PC, including controllers for all the internal components like the CPU, graphics, and HDD, whereas xhyve is more like a thin interface that VMs can use to access the native hardware indirectly.


My main problem with using Docker on OS X is that I have multiple OS X user accounts: one for myself, one for my main client. I ran into a bunch of trouble trying to run boot2docker and docker-machine on both of them; it just wouldn't work. Unfortunately I forget the details, since I haven't had a need to use Docker since then.


For me, using Docker on my own workstation (OSX) makes it easy to test deploys to multiple machines quickly (even if these are only locally hosted VMs). docker-machine facilitates that by allowing you to quickly spin virtual machines up and down, locally or in the cloud.

I can see the advantages of running Docker as a way of streamlining dev environments and keeping them consistent across team members, but by automagically hiding away the docker-machine env commands, you have effectively lost the most powerful facet of container technology.

Perhaps we would all do better if Docker's toolset docs were written in a way that communicates these design decisions (their purpose, and how they conceptually work behind the scenes)...


If I understand correctly, using this doesn't preclude you from using docker-machine; it just gives you a default Docker installation, similar to what you'd get if you installed the Docker daemon on your Linux workstation.

It might be neat if this were integrated into docker-machine though. Looks like there's already a docker-machine plugin for xhyve (https://github.com/zchee/docker-machine-driver-xhyve).


I actually just use aliases; this DLite/dhyve feels like more of a hack to me.


Add this to your bash profile and it will feel simpler:

```
docker_activate() { eval "$(docker-machine env "$1")"; }
```

Then:

```
docker-machine start mymachine
docker_activate mymachine
```


Question: is there a Homebrew package for DLite?



Docker does run natively on OSX.

Just forward a port with ssh/socat/etc. and set the environment variable correctly to point to your build host that can run LXC.
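For example, something like this (the host is an assumption; forwarding to a Unix socket needs OpenSSH 6.7+):

```
# Forward a local TCP port to the remote Docker daemon's Unix socket,
# then point the client at it.
ssh -fNL 2375:/var/run/docker.sock user@linux-host
export DOCKER_HOST=tcp://localhost:2375
docker ps   # client runs on OS X; containers run on the Linux host
```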

There is so much over-engineering going on in this arena. :/


There is no definition of "native" where your statement is true.


Containers don't run on OS X, but docker isn't responsible for the containers anyway; LXC is.

The "docker" program builds and runs natively under OS X just fine, like any other Go binary.


Docker doesn't and hasn't used LXC for a long time. It uses libcontainer, which is their own implementation of a container runtime. Even when Docker did use LXC, the Docker daemon was required for management. The Docker daemon only runs on GNU/Linux and Windows, so there is no sane definition of "native" where the statement "Docker runs natively on OS X" is correct.


Docker hasn't used lxc for a long time. And the Docker daemon doesn't compile on OS X.


I think the meaning is that the docker tools/client run on OSX, so one could just run a remote host or VM with Docker installed and use the default tools to run against and connect to the remote host...

Personally, I wish more of it worked as transparently as what Joyent provides for their server implementation.





