Moby: An open-source project to advance containerization (docker.com)
209 points by craneca0 | 77 comments



I have a real issue with Docker marketing-speak bullshit. "Democratizing containers"? When were they not democratic? What were they before? Tyrannical? Or - gasp - communist? And now they are going to ADVANCE THE SOFTWARE CONTAINERIZATION MOVEMENT. What does that even mean? Hacker News commenters tend to be pretty smart, and at the time I am posting this, most comments are along the lines of "eh? say what now?" When their core consumers don't understand the message, you can bet your hat that those consumers are not the real target of the message. Or the people at Docker are not good at marketing, which I find harder to believe.


Docker essentially hijacked the LXC project, failed to give credit, misled users about LXC, and as soon as it got traction rewrote the project in Go.

So the 'smartness' of HN when it comes to containers is in question, given that it failed to hold the 'marketing message' to account and single-handedly hyped Docker without any context on LXC, which had been in development since 2007, a whole six years before Docker 'discovered' it at dotCloud.

Why Docker took LXC containers and ran them without an init, and what the trade-off in complexity was, has never been explained, compared, or questioned: something basic you would expect in a technical forum. The fact that most users remain confused about LXC versus Docker on threads even today speaks for itself.

Now it promotes itself as the originator of container technology, as if LXC, on which it was based, does not exist.

And not just LXC: the work of other critical projects like aufs and overlayfs is pushed into the background so Docker can 'claim ownership of containers'. Has Docker contributed code or resources to any of these projects it built on? Do Docker users even know the names of the developers?

So no recognition, no support. The message to open source developers is: toil for decades, and any marketing-savvy, VC-funded company can take your work to market, basically erase your contributions, and no one will hold them to account. HN has been OK with this kind of predatory marketing since 2013.


> The message to open source developers is: toil for decades, and any marketing-savvy, VC-funded company can take your work to market, basically erase your contributions, and no one will hold them to account.

I don't want to make a political point here, but this is essentially the risk you take with a more permissive license. Someone absolutely can come along and build a product that subsumes yours. They can create proprietary extensions that you cannot use, essentially gaining the benefit of your work without having to return the favour. Basically, you then have to compete against yourself.

With a copyleft style license, it is much harder (though not completely impossible) for someone to do that. You will most likely be able to use their extensions and will be able to compete on a basis of who is executing better.

There are serious downsides to copyleft style licenses for things like containerisation, though. The barrier to entry is really large and you are going to cut out a lot of players who could help you. So it's really a matter of strategy. But if you are going for a permissive license, you'd better have all your ducks in a row because you should expect something like this to happen.

Free and open source business models are still pretty naive these days and I think it's going to take a few more decades before we have a really good idea of the best way to proceed.


It's exactly what has happened to Docker. Their one product that makes money now faces massive competition from all of the other orchestrators that exist. If people want simple, they use ECS or Docker CE (which is free). If they want complex, they often turn to Kubernetes etc.


I got that reaction too when I read it. There were other, sneakier, NLP-like uses of language. For example, "Being at the forefront of the container wave, one trend we see emerging in 2017 is containers going mainstream ..." It's a way to sneak in the idea that Docker is "at the forefront of the container wave": by accepting the rest of the sentence, it is easy to accept the earlier clause without challenge.

If anything happened in 2016 in the containerization space, it was that Docker was playing a lot of catch-up to Kubernetes. It seems like they rushed things out and then botched the OS X native release of Docker in Q2 2016, and lost a lot of goodwill in the community of individual developers (more so than CoreOS did with rkt? That's how it feels to me. Back then, people thought CoreOS were the assholes for forking; I think that sentiment has shifted). I have not tracked how Docker is doing in the enterprise, but I guess they are doing well?

I kinda doubt Docker (specifically, Docker Swarm, and the orchestration tools Docker forced into docker-engine) will ever catch up to Kubernetes, but hey, things move fast, right?

Having said all of that, I like this description: https://news.ycombinator.com/item?id=14141832 ... which makes _much_ more sense. Maybe now someone will use Moby to create a native OS X Docker kit that actually has decent file sync performance (with unison, rsync, or NFS, rather than trying to layer it as another FS).


I agree, Docker is in a real pickle. Kubernetes has the momentum and it is decoupling from Docker rapidly.

Docker's biggest asset right now is that everyone has written Dockerfiles already, but unfortunately, Dockerfiles are terrible and people can't wait to get off of them.

Long-term, I expect the platform to be k8s + CoreOS + rkt. Docker's future, if anything, will be in hosting the image repository.


I have been using the Google Container Registry to store my private images. It costs just a few cents a month for my use case, if you don't want to pay the minimum $7 per month for more than one private image.

https://cloud.google.com/container-registry/
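Pushing images to it is straightforward; roughly like this (project and image names are placeholders, and this assumes the gcloud SDK is installed and authenticated):

  # tag a local image for GCR and push it through the gcloud wrapper
  docker tag myimage gcr.io/my-project/myimage:latest
  gcloud docker -- push gcr.io/my-project/myimage:latest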


> Long-term, I expect the platform to be k8s + CoreOS + rkt.

I have been dabbling with that same thought for a while too. One thing I have not yet figured out, though, is what CoreOS' vision is for using rkt as a developer on Mac, something a lot of developers are.


rkt on OSX would be neat to see.

We're using Google Container Engine with containervm, which I understand is very similar to CoreOS.


Democratic in this case means bringing it to a wider audience, making it more accessible to regular devs, I guess. See definition #3 here: https://www.merriam-webster.com/dictionary/democratic


I know you're just helping out, but it's kind of ironic that we need a dictionary reference before we can understand Docker marketing lingo.


Hm no, this is a pretty common usage of the word and I didn't have to look up the definition. Democratization of a technology in English always refers to this definition (same in French and Spanish and most Latin languages).


I am sorry to correct you, but in (Spain) Spanish, no: in technology it is a fairly recent buzzword used mainly in marketing speak. I can't speak for English, but I feel it's somewhat similar.


Maybe in Spain, but in Latin America it is commonly used (I used to live there and my wife's from there). Obviously the expression "democratization of things" is not something people use in everyday life, but it is definitely correct.


The project was open sourced yesterday and was previously internal to Docker. Since this is open source and not anything they will sell, I don't think they are using marketing speak per se.


Honestly, this is probably top-down language usage, not a marketing team. Solomon has been talking about containers with this same wording for years, and he's talked about it on podcasts, etc. Granted, that doesn't mean it's not marketing double-speak. It's definitely grandiose in the way that marketing language often is.


I didn't know the founder talked like that. Now the whole brouhaha with CoreOS and rkt makes more sense. I remember a lot of Docker fans were angry about it back then. I think 2016 showed that Docker was increasingly getting out of touch and out of sync.

To me, the leading edge and center of gravity have already moved to Kubernetes (and the various orchestration systems attempting to compete with it). Even though the buzz around Docker is still growing, the ideas coming out of the Kubernetes community are what is influencing and leading containerization technology. Small movements in Kubernetes will result in large movements among people using Docker.


The people at Docker are excellent at marketing. There is very little that's new in Docker. It really just wraps core Linux functionality around namespaces and fine-grained permissions.

The core functionality of Docker can be implemented in ~100 lines of bash script:

https://github.com/p8952/bocker
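The building blocks are stock kernel and userland features. Very roughly, the heart of it is something like this (the rootfs tarball and paths are placeholders; it assumes util-linux and cgroup-tools are installed):

  # Unpack a root filesystem, create a cgroup, and start an isolated shell
  # in fresh mount/UTS/IPC/network/PID namespaces.
  mkdir -p /tmp/rootfs && tar xf alpine-rootfs.tar -C /tmp/rootfs
  sudo cgcreate -g cpu,memory:demo
  sudo cgexec -g cpu,memory:demo \
    unshare --mount --uts --ipc --net --pid --fork \
    chroot /tmp/rootfs /bin/sh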


IMHO, Docker's marketing has always been very confusing. I have only looked at Docker very casually over the years. I remember the first time I browsed docker.com to figure out what exactly Docker is. It would have been so much easier to understand if they had used the phrase "Linux applications" instead of "applications".


FWIW, democracy and communism are not mutually exclusive.

More on topic, though: most of the article is marketing BS, so developers have lots of buzzwords to give to their bosses when requesting approval for stuff related to Moby.


But this can lead to 'Secular Growth' of containers in the industry.


You're right: as an individual user of Docker, you're not the customer. You're the product being sold to the customer, which is enterprises that see adoption by users as a proof point Docker (the company) boasts about.


Red Hat, Google, Amazon and Microsoft will probably make more money from Docker than Docker.

Disclosure: I work on Cloud Foundry on behalf of Pivotal.


At DockerCon it was all enterprises giving talks about successful Docker deployments, and I never heard any mention of adoption by users, only growth in things like downloads etc.


A marketing-speak-filtered version of this announcement: you will be able to assemble your own Docker engine by stripping out components you don't need (which until now have been shipped in a single docker binary) and keeping the ones you do. This is like assembling PCs, but for Docker.

I think this was mainly intended as an answer to the criticism Docker has been receiving (from Kubernetes maintainers and others) ever since they decided to ship Swarm with Docker. I think this move is great, as it goes a step further and allows you to swap out build systems and volume management too. Even though I did not mind Docker shipping with Swarm, others in the community did, and this shows Docker listened, which is great.

EDIT: grammar fix


Aren't they doing it kinda backwards? Shouldn't you start with the pieces and then let people assemble them how they want?

It's fine to offer off-the-shelf supported configurations but why start with everything and the kitchen sink and then try to backtrack?


Because nobody would use Docker if installing it required a complex custom build process. This way Docker can keep Docker CE for the standard use case, but also satisfy power users with specific requirements by allowing customization.


I use docker because I can go from

"Hey cool project, I want to try it!"

to already set up in under a minute, depending on whether I'm at home or not.

Also, the VMs start really fast, so it's good for quickly getting exotic compiler toolchains running on Windows. Everything has a Dockerfile.
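For example, trying a toolchain without installing anything on the host is a one-liner (the image here is just an example):

  docker run --rm -it gcc:6 gcc --version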


This is a transcript of a keynote I just gave at DockerCon, but the keynote had in-depth demos and the blog post doesn't, so it will make more sense with the demos.

I will try to summarize: when we build Docker for Mac, Docker for Windows, Docker for AWS etc., we assemble a lot of individual components into a complete platform. Then we pack that platform into a bootable artifact for the target environment. That's a lot of work, and it gets harder as the number of targets multiplies. We developed a framework to make this more efficient. That framework has become the de-facto upstream of the Docker platform - it sits between the individual upstream projects and the finished product. So we're open-sourcing it as Moby, moving all of our open-source process under it, and inviting the community to come play. Think of it as the "Fedora of Docker".

Here's more technical details from the readme: https://github.com/moby/moby/blob/moby/README.md

TLDR:

- If you're a Docker user, nothing changes: Docker remains the same.

- If you're a Docker open-source contributor, you're now a Moby contributor. Everything is basically the same, except more modular and more open, and you are less tied to Docker.

- If you're building non-Docker container platforms, it's easier to share components and ideas with the Docker community, without being forced into anything you don't like.

The Moby tooling itself is pretty neat: you define all the components in your system (including the OS and hypervisor, if required), then pack them into the artifact of your choice. For example, you can assemble LinuxKit + containerd + Redis into a tiny "RedisOS" and boot it straight from bare metal; or virtualize it with HyperKit and run it on a Mac; or virtualize it with Hyper-V and run it on Windows. Moby does all of this for you automatically (this is one of the keynote demos).
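The workflow, very roughly (the file and image names here are illustrative; the exact assembly format lives in the repo):

  moby build redis-os.yml    # assemble kernel + init + containerd + redis into a bootable image
  moby run redis-os          # boot it locally (e.g. under HyperKit on a Mac)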

We also showed a "Kubernetes in a box" assembly, to show that you don't have to stick to Docker-built components.


Here's a different reason (one jbergstroem also alludes to [1]) why Docker (the company) would rename Docker (the open source project) to a name that cannot be confused with Docker (the product): trademark (enforcement).

How do you (a for-profit company built on open source) enable your community to build and sell (their own) products, while still enforcing your trademark? If anyone (everyone?) can build and distribute and call their build `docker`, it becomes really hard to protect your investment (as a company with a fiduciary responsibility to its stockholders) from dilution of TheBrand(tm).

This is not cynicism and I don't fault Docker (the company) at all; it's just the reality of doing business. It's why Canonical requires a contractual relationship for infrastructure clouds to distribute Ubuntu images, it's why you have things like Firefox and IceWeasel, and so on and so forth.

[1]: https://news.ycombinator.com/item?id=14140543


It's also an easy way to acquire more paying users. "Docker" has the name recognition now, if someone wants to adopt it after hearing about it they're likely to choose the paid product.


Nice move, IMHO! Moby will be to Docker what Fedora is to Red Hat.


Sounds like the folks from MirageOS have been hard at work.


Sorry if I'm being dense, but can you explain?


In early 2016 Docker acquired the startup building MirageOS, a unikernel system. And from how Solomon describes what you can do with Moby, it very much sounds like the unikernel idea at work.


Will I eventually be able to 'moby build' a directory with an 'OCIfile'?


If I am reading you correctly, this is somewhat like Packer?


On further reading, it's clear I was not reading correctly.


I too am confuzzled by this. I'm at a Docker shop where we currently run on DC/OS and Marathon. I think the most confusing thing about Docker right now is all the different scheduling and networking frameworks (DC/OS, which adds a web UI to Mesos and Marathon; Kubernetes; Nomad; CoreOS/Fleet; etc.).

None of these systems scales up from a single-node system to a full distributed cluster. Here was my attempt at getting DC/OS to run in a minimal cluster on DigitalOcean:

http://penguindreams.org/blog/installing-mesosphere-dcos-on-...

We use DC/OS at work, and the web UI frequently transfers over 1 MB of JSON a second, and the failed-containers tab can max out a CPU at 100% O_o

Marathon, Kubernetes, Nomad and Swarm all have their own orchestration JSON files in totally different formats. It gets more confusing when you take into account pluggable network layers (Weave Net, Flannel).

I'd _like_ to think Docker is trying to bring some standardisation to clusters, networking and scheduling ... but I'm not sure if that's the case.


Kubernetes runs fine on a single node, and scales to thousands. Lots of developers use a single node in a VM to develop apps locally.

You wouldn't want to run a single node in production, of course, because that defeats the point (and introduces a single point of failure). But there's nothing technically preventing you from doing it.

As a Kubernetes user, I hope Docker as an application is going away, because it's increasingly in the way. Kubernetes would benefit from managing its containers with a more direct, more lightweight mechanism (like rkt or containerd).


Took me some time to figure it out, but Kubernetes actually does, with kubeadm.

If you're on a Debian/Ubuntu-based OS, you basically `apt-get install kubeadm kubelet kubectl`, then `kubeadm init`, and you have a 1-node cluster.

Then you `kubeadm join --token $token $master_node_ip` and you have a second node. Repeat as necessary, and throw in some proper automation when you have enough (3+) nodes.

There is a disclaimer that it's beta, though.

Still, Docker feels much simpler, and I really believe "simpler" means "better". Maybe that's just my prejudice, but Kubernetes has an awfully enterprisey aftertaste and feels full of magic. If Docker or Swarm goes haywire - and every piece of software has bugs - I can check almost every component, piece by piece, down to what the kernel is told to do; thankfully, they're still not too far from the actual OS primitives. If the number of affected nodes is tolerably small, I can even "downgrade" to Compose and manual scheduling and networking kludges with relative ease. I'm not sure what I would do if Kubernetes insisted on malfunctioning some day.


I've played with Kubernetes, and I wish I had finished my post on it. I got frustrated when viewing logs from a node wasn't supported for nodes joined with kubeadm (that was a few months back .. I really hope they've fixed that by now).

I agree with you: Kubernetes seems insanely complicated. I haven't used it in production though. I'm at a Marathon shop, and it is really slick once you have a working DC/OS cluster .. but setting up that cluster requires a full-time team. It's not trivial and far from simple.


> None of these systems scales up from a single-node system to a full distributed cluster

What do you mean? This is exactly what they do.


No, most scale up from a 3- or 6-node cluster to an N-node cluster.


https://kubernetes.io/docs/getting-started-guides/minikube/

> Running Kubernetes Locally via Minikube

> Minikube is a tool that makes it easy to run Kubernetes locally. Minikube runs a single-node Kubernetes cluster inside a VM on your laptop for users looking to try out Kubernetes or develop with it day-to-day.
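Trying it out is a couple of commands (assuming minikube, kubectl, and a supported VM driver such as VirtualBox are installed):

  minikube start      # boots a single-node cluster inside a local VM
  kubectl get nodes   # the same kubectl you'd point at a real cluster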


Yes; I think the problem for the parent is that it is hard to scale minikube out to a full cluster.


I don't contribute to Kubernetes, but I am curious what the parent (or you) means by that.

We use 1 node for dev work and then an n-node cluster for staging and production. My team's dev workflow spans from 1 node to n nodes just fine.

Granted, I also wrote Matsuri to consolidate managing dev and production systems. But I also know that K8S 1.5 and 1.6 introduced a lot of resources that make it easier to manage things like that (such as ConfigMaps). I also know that Helm has been doing a lot in this area. (I have never used it because it overlaps with what Matsuri does.)


I think they want to start their production environment on one node and then scale as their product grows. I'm not saying it's a particularly good approach, but I've seen situations where people frown when you say that your 10-user product needs at least three machines to function, just because it runs on cluster software designed for building products for millions of users. It would be an easier sell if you could run it on two or even one server, and still grow the cluster to a proper size when circumstances demand it.

Of course, there are very good reasons for it to require three machines, and since machines are so cheap, it should be an absolute no-brainer to spawn a $5 node to get that quorum running; but that, I think, is the argument for it.


I think maybe people are under the mistaken impression that K8S requires forming some sort of quorum. You can run 1 node in production if that is what you really want.

You run 3 nodes for availability and resilience. You can spread the 3 nodes across availability zones (isolating failure domains). If there is sufficient capacity, then one of the three nodes can go down and the K8S master will reschedule the pods from the down node. This also allows for controlled upgrades. If those are not requirements (your prototype doesn't need that kind of uptime), then why not run it on 1 node?

The only subsystem I know of that has a quorum mechanic is the etcd backing the k8s master. The docs recommend running the backing etcd in a 3-node configuration -- again, for availability. However, I run the k8s worker, k8s master, and a 1-node etcd on the same dev box. (The trick is to tell etcd to listen on three ports and tell the single etcd node that those other ports are part of the quorum.) There is no reason the 10-user app that can tolerate some downtime cannot run like that either.
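A bare-bones single-member etcd looks roughly like this (default ports, flags from the etcd docs; the multi-port variation above is a refinement of the same idea):

  etcd --name dev0 \
    --listen-client-urls http://127.0.0.1:2379 \
    --advertise-client-urls http://127.0.0.1:2379 \
    --listen-peer-urls http://127.0.0.1:2380 \
    --initial-advertise-peer-urls http://127.0.0.1:2380 \
    --initial-cluster dev0=http://127.0.0.1:2380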

I do think, though, that K8S has not yet closed the gap for people who want to prototype quickly and get something out. Heroku is a great platform for that. There is a migration path that goes from Heroku to Deis (which is built on top of Kubernetes and works with Heroku buildpacks).


Correct; minikube emulates a cluster via a virtual machine for local development. It's a hack.


It's not a hack, it's a product: "A version of Kubernetes that works anywhere, with zero configuration."

You can run Kubernetes locally without a VM just fine, but a VM encapsulates everything nicely without needing to configure the host's network and so on. It doesn't "emulate" a cluster, it creates a cluster that happens to contain only one node. Plus, it works on non-Linux machines.


I run Kubernetes on a 1-node cluster for development on my laptop (as do the rest of my team), and we scale up just fine to our 6-node staging and 9-node production cluster.


May I ask how you do it?

I've thought about using minikube for local development myself, as a way to reuse Helm charts: instead of having one setup for dev (perhaps docker-compose, or simply the stack's standard way) and one for staging/production (Helm), it's all the same Helm chart with just different values. Another reason is to encapsulate dev environments better. Right now, every stack I use has a different way to do it (Python's virtualenv, for example); if I were using minikube, everything would use the same encapsulation.

I've tried setting it up, and it works fine, using host mapping so files are still reloaded when they change locally and so on. But something I haven't solved yet is that in this setup, stack-specific packages/modules are only installed inside the container, which is a problem because my editor/IDE needs the packages/modules to be available locally in order to do "intellisense".

Have you and your team dealt with the same problem, or other problems? How have you solved them? Any tips and thoughts are much appreciated!


No problem.

I wrote Matsuri as a way to manage this. https://github.com/matsuri-rb/matsuri This was written with K8S 1.1 in mind, and later somewhat updated to 1.4.

It was primarily intended to solve:

(1) Consolidate interfaces for working with dev and production (or other environments such as staging, demo, etc.)

(2) It implements this by being a manifest generator and feeding the manifest into `kubectl`.

This was designed and written before ConfigMaps, so some of what Matsuri solved is now solvable with ConfigMaps.

Matsuri is written in Ruby, so it takes advantage of class inheritance and method overriding to achieve what it does. The class definitions used to generate the manifest are broken down into composable chunks, which can be overridden at any point.

In this way, I can have a base template for, say, a web pod across dev, staging, and production. In dev, I would then method-override what is returned for volume mounts and volumes; that way, I can specify source mounting and SSH agent sock mounting for dev ... but not for staging or production. As another example, I can use method overriding to augment the environment variables passed into the container in order to capture the uid/gid, so that the container can reconfigure itself to use them.

There are helper methods to generate things like the correct format for ports or volumes. None of those are necessary if you already know how to format them.

Likewise, since a manifest is just a class definition, replication controllers, replica sets, and deployments all reference a pod and use it to create the pod template spec.

I then have an extra resource called an "app" that does not map to anything in K8S. Apps define a bundle of K8S resources (or other apps) that can be converged together, which lets someone quickly set up a dev environment with all the moving pieces. They also include hooks for you to define things like the command for shelling in (with the correct uid/gid) or getting a console (which varies; a Redis console is invoked differently than a Rails console). This allows a consolidation ... for example, to log into my dev Rails console, I would do something like:

bin/dev console myapp

And to do that on production:

bin/production console myapp

(Powerful, but I suspect there will be some asks around security here).

I had flirted with the idea of allowing plugins or packages ... and then Helm was announced. I figured more people will use that than Matsuri. But one possibility might be to get Matsuri to work with Helm.

There are a lot of rough spots in Matsuri. The command-line interface could be done a lot better (right now, it assumes you will always be in the root of your platform repo). There is no documentation and there are no examples for Matsuri. It does not implement all of the resources. It also helps a lot if you know Ruby.

Infrastructure is not my full-time role on my team. Being a small team at an early stage, we have had to wear multiple hats, and I have not had the energy or time to document and promote this. If you want to chat about this further, feel free to email me at hosheng.hsiao at gmail.


On the stack-specific modules -- we're using Rails. The gems are installed on a host mount. None of us are using an IDE that needs to reference where they are, but if we did, we could point it to that path.

One of the headaches we run into comes from using Alpine as a base image: everything is compiled against musl instead of glibc. You always have to install gems from inside the container -- but we can do that with host mounting and then executing the install from inside the container. That hadn't been a problem until we tried to install the google-api gem and found out grpc doesn't play well with musl.


Thanks for the write-up!


If I understand this right, it sounds like Moby will be a grab bag of components you can use to create your own container platform. That means the Docker we knew before today is an implementation of Moby plus some other things (GUIs, SDKs, etc.) that they hope to strip out.

Like others, I found the press release incomprehensible, so the pull request linked from the docker/docker README is where I'm drawing this understanding from: https://github.com/moby/moby/pull/32691

As far as why I think Docker might be doing this:

It seems like a continuation of the strategy they used with containers in the first place, back when they were known as dotCloud. Rather than continue to compete with Heroku as a PaaS, they found it better to open source their container technology and start a container movement (for lack of a better word).

Now that the container movement has happened, there are a lot of competing tools at the container runtime level (Docker, rkt, systemd-nspawn, LXC, etc.) and the management level (Swarm, Kubernetes, Mesos, Nomad, various ones that delegate to AWS ECS, etc.). Rather than compete with all of those, they are pushing themselves up a layer of abstraction, to make it easy for companies to create their own container management tools specific to their needs.


As I understand it (though I could be wrong), this repository is intended to be a parent of the old one.

Rather than housing just the Docker engine, it will coordinate docker, swarmkit, infrakit, and linuxkit in a single project.

These will be swappable, so for example you could a) swap swarmkit for Kubernetes, b) swap linuxkit for Debian, or c) swap infrakit for Terraform.

Like "Docker for Mac/Windows/AWS/Azure/GCE", etc. already exist - Moby will likely house all these variations and allow the creation of custom "Docker/Other for X/Y/Z".


wtf? So the Docker project is now "Moby"? And I have no idea what the major changes are? How is this going to affect my usage of Docker? The readme is incredibly confusing and lists no concrete features, especially none that justify reorganizing the entire Docker project so drastically. It took me five minutes just to realize that github.com/docker/docker now redirects to github.com/moby/moby.

Is this really necessary? Seems like it's just going to create tons of confusion.


> Seems like it's just going to create tons of confusion.

When was the whole Docker circus not confusing?


It might be temporary, as they split the UI (the docker CLI tool) from the new moby library.

Seems like a move made in a rush. It would have been better to spin the dependencies of docker out into moby, instead of moving the whole project back and forth.

Agreed that it is very confusing at this stage.


> Seems like a move made in a rush.

You've just summarized everything Docker Inc. does.


creating confusion was always the point


Thinking marketing split? Docker is the product you pay for, and Moby is "may break, use at your own risk".


Except for this blurb from the Audience section of the Moby Project site:

  Moby is NOT recommended for:

  Application developers looking for an easy way to run their applications in containers. We recommend Docker CE instead.

  Enterprise IT and development teams looking for a ready-to-use, commercially supported container platform. We recommend Docker EE instead.

  Anyone curious about containers and looking for an easy way to learn. We recommend the docker.com website instead.


Can someone do an ELI5? I don't understand what the product is.


AFAIK they want to spin the reusable components out of Docker (more than has already been done with libcontainer), including the code responsible for storage volumes, networking, etc.

Basically, make all the building blocks of Docker available as a library, with some framework to use to connect them.

Probably the Docker project would become the blessed customer of this new framework.

I can't really see rkt, snap, or OpenShift adopting this technology, though.

Edit: since many of the things they want this framework to do belong to orchestration (when there are several containers involved -- and there often are), this might also look like a jab at Kubernetes.


This is github.com/docker/docker with a note added at the top of the README.

For now you have to watch the DockerCon keynote to understand; it's intended as a "common assembly line for container systems".



I cannot help but think, reading product announcements like this, that Docker the company is in deep trouble. It's incomprehensible.

That said, what do I know? The folks at Docker are brilliant marketers (with a great product), and the marketing is the main reason Docker is so wildly popular.


So, they're breaking up the monolith? Hold on to your hat!


I was pretty sure something was broken when I was browsing Docker docs and suddenly I ended up in a moby/moby repo.


I don't really understand this, but I think the special feature from Dockercon will help explain.


Didn't Docker already have a project called Moby? Is this the same thing?


It seems so - the `docker/moby` repository[0] now redirects to `linuxkit/linuxkit`, but the `moby` cli from `linuxkit/linuxkit`[1] is being moved to the `moby/moby` repository[2].

[0]: https://github.com/docker/moby

[1]: https://github.com/linuxkit/linuxkit/tree/master/src/cmd/mob...

[2]: https://github.com/moby/moby/pull/32693


Yeah:

- Moby VM is (was?) the name of the Linux VM that Docker for Mac (and Windows too, I think) runs as the host for Docker containers.

- Moby Dock is the name of the whale in Docker's logo.

A Google search for "moby docker" restricted to results before April 2017 turns up plenty of hits.

Edit: sample search result: http://lucjuggery.com/blog/?p=753


I can't stop reading this as Mooby -- the golden calf. http://kevin-smith.wikia.com/wiki/Mooby_the_Golden_Calf




