Microsoft announces Hyper-V Containers (microsoft.com)
344 points by swernli on Apr 8, 2015 | 144 comments



This will be what takes containers into mainstream businesses. Companies may adopt Docker or another option instead of this, but Microsoft creating their own version of it means it's a viable technology. I'm more interested in the new frameworks and technologies that get adopted because of this than in the fact that it's in use. Traditional Java web projects hosted on Tomcat/JBoss don't run well inside containers, but there are technologies like Node.js that lend themselves to containerization. Open source .NET is now a viable option for Linux deployments, and now there are Microsoft's new containers as well. It will be an interesting couple of years as this shakes out.


Microsoft creating their own version means the technology is viable? I think people have been using 'the technology' for years, without any input from Microsoft. It doesn't need to be anointed by Microsoft to be 'viable'.


For tech enthusiasts and visionaries, you are correct; it is a very viable technology. However, the majority of people that deploy software are rather conservative, and unless they see a market leader such as Microsoft with a solution they don't deem the technology safe to use. This is well documented in a lot of literature, like Crossing the Chasm, but can also be observed frequently if you work at a larger non-tech-oriented company. Whether that notion is actually correct is debatable, but that doesn't change the fact that it exists.


This also means that Docker enthusiasts at a lot of Microsoft shops will be able to pitch this to their bosses, who might not have been on board prior to this since it wasn't an enterprise MS product.

This will open a TON of doors for everybody.


I think that when a "market leader" finally adopts technologies that were in use for more than a decade by its competition, it's safe to assume the tech has gone mainstream...


BSD had jails for well over a decade before Linux containerization took off; Microsoft entering the space could very well be good for all participants. A rising tide, etc.


To be fair, there were things like User-Mode Linux (UML) that were (roughly) contemporary with jails, circa late 2001 / early 2002.


Assume a generous reading of the comment, like, "It must be good if Microsoft is jumping on the bandwagon."


> It doesn't need to be anointed by Microsoft to be 'viable'

Well, in a lot of corporations it's viable when the big analyst companies (Gartner and Forrester Research) say it's viable, and they tend to be "nudged" by Microsoft. The question once was "does IBM have an offering in this space?"; then it became "does Microsoft?"

// one more soul-crushing thing about corporate IT


It doesn't mean it wasn't viable before; it's just another piece of evidence, one that many more people will be exposed to, that it is viable.


> Microsoft creating their own version means the technology is viable?

Keep in mind nearly all technologies are used mostly at non-tech companies (e.g. far more software is written by developers not working for a software company) and the software/IT teams at these companies usually prefer solutions from the large/major tech names they know/trust. Microsoft having their own version of something means a lot more businesses will consider using it.


Microsoft creating their own version means it will be perceived as viable by many more people.


Could you please explain what kind of issues you found running Tomcat/JBoss inside containers? We've been running several apps on Docker and so far no problem. Thanks a lot in advance.


Essentially, Java already runs in something like a container, that container being an application server of some sort. Very rarely is Java software really dependent on specific packages being installed, or even on which operating system it runs, aside from needing a particular Java VM. When you add another container layer you generally are just adding more overhead, and further, you run into issues with correctly setting some tunable parameters like heap sizes vs. container memory sizes, etc. It works; it just isn't ideal.
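
To put a number on the heap-sizing point: a JVM of this era derives its default max heap and CPU count from the host, not from the container's limits, so -Xmx generally has to be set by hand below whatever memory the container is capped at. A minimal sketch (class name is just illustrative) that prints what the JVM thinks it has to work with:

    public class JvmLimits {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            // The default max heap is sized from host RAM, not the cgroup limit,
            // so inside a memory-capped container this can be far too large.
            System.out.printf("max heap:   %d MB%n", rt.maxMemory() / (1024 * 1024));
            // Likewise the JVM sees the host's CPU count, not the container's share.
            System.out.printf("processors: %d%n", rt.availableProcessors());
        }
    }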


Thanks for your reply spullara. We might be one of those rare cases where we do need different packages and libraries for each application. It's the pain we suffer for having to maintain several "legacy" apps.

Regarding your view of an app server being a container of some sort, I do agree. We are actually starting to develop apps to be run using embedded app servers (with Spring Boot), as that fits better when running apps under Docker.


Makes sense. I've been recommending Dropwizard for years for similar reasons.


Tomcat/JBoss are resource hogs and tend not to do well in sandboxed environments, as sandboxing takes away a lot of the configurability/tuning that they provide/need for scaling. In addition, people tend to deploy multiple applications per application server to keep resource requirements down: containerization tends not to lend itself to multi-application strategies (in particular for resource-heavy applications like those that run on the JVM). This is far from something that makes Java an impossibility, but it changes the techniques enterprises are accustomed to. Technologies like Jetty provide a good solution to the problem, and will probably see wider adoption as containers become more prevalent.


As someone who has deployed multiple production applications in Tomcat and Jetty, I would never choose Jetty again. The concept is good, but it has severe quality issues and does not offer the same stability as Tomcat. At one point we had to fork Jetty to fix critical problems. Never again.


Interesting point. I've never had an issue with Jetty, though I've never run it at huge scale before. I've used it for internal business systems which at peak require 1,000 req/s, which isn't a whole lot. Do you run a wrapped Tomcat to create fat JARs, or Tomcat as a standalone application server?


Thanks a lot for your answer. We create a separate container for each application, as it seemed like the correct approach. In our company we do have several "levels" of library (even JVM) version requirements. Containers have been very helpful in easing the pain of managing that on the server side.


I've found that building fat JARs with all dependencies bundled solves a lot of the same dependency management issues containers can be used for. And it does it without container overhead.


That's what we do now: fat JARs, with all dependencies included, plus an embedded app server. On top of that, we use Docker containers so we can control the JVM version as well. The overhead is not that high, and it gives us the benefit of knowing that the same container that the developer/Jenkins tested is the one that passed QA and will run in production.
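
For what it's worth, a sketch of what that entry point can look like (embedded Jetty here purely as an illustration; Spring Boot's embedded Tomcat amounts to the same idea): the whole "app server" collapses into a main method that the container launches with java -jar.

    import org.eclipse.jetty.server.Request;
    import org.eclipse.jetty.server.Server;
    import org.eclipse.jetty.server.handler.AbstractHandler;

    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class Main {
        public static void main(String[] args) throws Exception {
            // The HTTP server ships inside the fat JAR; no external Tomcat/JBoss install.
            Server server = new Server(8080);
            server.setHandler(new AbstractHandler() {
                @Override
                public void handle(String target, Request baseRequest,
                                   HttpServletRequest request, HttpServletResponse response)
                        throws java.io.IOException {
                    response.setContentType("text/plain");
                    response.getWriter().println("hello from a fat jar");
                    baseRequest.setHandled(true);
                }
            });
            server.start();
            server.join(); // block until the server is stopped
        }
    }

The image then only needs a JVM plus that one JAR, which keeps it small and keeps the dev/QA/production containers identical.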


This is exactly what should be done.


How do you apply security updates to the dependencies?


To update on this, since I am a Java programmer who is picking up C again after 10 years:

In the modern Java world, people often use Maven or another build tool, where upgrading a library is as simple as changing the version number in a "pom" file, pushing, and waiting for Jenkins to finish the build, unit and integration tests.

Not kidding here, this is one of the things I love about Java development.


This is literally every ecosystem except classic C and C++.


You generate a fresh build with the updated libraries.


Exactly. This also ties in nicely with a test-heavy build process to make sure that said security updates don't cause any regressions.

EDIT: To give you a chance to catch those regressions, at least.


Just worked on deploying OpenAM on Tomcat with Docker. A few things stood out that validate the "hard to deploy in containers" point:

(1) node.js/Ruby/Python scale with processes, not threads. There's no supervisory/control environment over the processes, just the OS. JVM on the other hand expects to do a lot more process/thread control itself so it's kind of another "layer" between the OS and your code.

(2) Port binding doesn't work the same way, either. Most of our dockerized services have one port/process with simple load balancing built into our "routing fabric", which is something ops controls at my company. My understanding of JVM scaleout is that the servlet container is responsible for multiplexing incoming connections onto free capacity, which isn't how most docker shops work.

(3) I'm not sure what the typical deployment patterns are for servlet containers but they seem more multi-tenant w.r.t # of applications running in them, vs. a typical docker setup where containers are very thin and meant to be run in the dozens or more per-machine.

It's not that the JVM is inherently inferior, more that Docker has grown up around Unix/Linux ops-minded folks, that they're bringing a lot of their assumptions about how software should be deployed and operated (e.g. "things should be scriptable") with them, and that their thinking is dominant among the current container-using crowd.
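
On point (1), a rough sketch of the difference: a typical JVM service multiplexes its work over an in-process thread pool sized to the machine, rather than forking one OS process per worker the way node/Ruby/Python deployments usually do, so the single process Docker sees doesn't reflect the concurrency being managed inside it.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class Workers {
        public static void main(String[] args) {
            // One OS process, many workers: the JVM handles the scheduling itself.
            int cpus = Runtime.getRuntime().availableProcessors();
            ExecutorService pool = Executors.newFixedThreadPool(cpus);
            for (int i = 0; i < 100; i++) {
                final int job = i;
                pool.submit(() -> System.out.println(
                        "job " + job + " on " + Thread.currentThread().getName()));
            }
            pool.shutdown(); // already-submitted tasks still run to completion
        }
    }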


> My understanding of JVM scaleout is that the servlet container is responsible for multiplexing incoming connections onto free capacity, which isn't how most docker shops work.

No, not really, typically you would just run N of your JVM processes with either some sort of load balancer (or your "routing fabric") to balance between them or a discovery mechanism.

I think you might be referring to some sort of big-box "enterprise" servlet container like Websphere or something quite different than Tomcat.


Can you elaborate on, or point me to some reading on the issues with containerizing tomcat and/or jboss? This is not something I've encountered before and may become an issue for me soon. Thanks.


In my experience Tomcat/JBoss tend to carry a relatively large overhead, but since they can run multiple "war" files under that same overhead, this is not as much of a problem when a single instance serves multiple applications. But when you containerize them you'd like to run one instance per application, which will multiply the already significant overhead.

Not sure if this is what the GP was referring to, but just my 2 cents.


> ...Microsoft creating their own version of it means its a viable technology.

...Microsoft creating their own version of it means it's _viewed_ as a viable technology by _managers_.


At first I was like :D

Then I was like :(

App containers are SORELY needed in the Microsoft ecosystem.

But what's up with the Hyper-V vendor lock-in? Looks like those of us already invested in VMware or EC2 etc. get the shaft...


Windows Server Containers will absolutely work in other hypervisors. You need to be running Windows as a guest OS obviously.


Hi John

Former-MS guy here working in SV. I could tell you work at MS by some of the terms you use which aren't common outside the company but which MS people say a lot (e.g. "SKU") ;)

You should set up your profile on HN so that people know you're an MS guy and write a bit about yourself. I would email you directly, but your profile is blank.


It's been a viable technology ever since IBM started doing it in the 1960s, with VM.

Oh, wait. This isn't VM. This is something less-featureful than VM which will, possibly, eventually evolve into VM after a lot of hair-pulling. My mistake.


Pardon the skepticism, but do "Hyper-V Containers" with "enhanced isolation powered by Hyper-V virtualization" sound suspiciously like, err, Hyper-V virtual machines? And "Nano Server" has a description rather reminiscent of 2008's "Server Core".

Is this just about management tools? Because that's cool, too, but why the spin?


From the TechNet Announcement:

"we removed the GUI stack, 32 bit support (WOW64), MSI and a number of default Server Core components. There is no local logon or Remote Desktop support. All management is performed remotely via WMI and PowerShell. We are also adding Windows Server Roles and Features using Features on Demand and DISM. We are improving remote manageability via PowerShell with Desired State Configuration as well as remote file transfer, remote script authoring and remote debugging. We are working on a set of new Web-based management tools to replace local inbox management tools."

http://blogs.technet.com/b/windowsserver/archive/2015/04/08/...


Since this is all remote PowerShell, it would be nice if MS/Windows introduced a native SSH server. That would probably help drive some conversion for people used to the POSIX world.


In order to make a native SSH server, Windows needs pseudoconsoles (analogous to pseudoterminals in POSIXland). That is, it must be possible for a random program to create a handle that supports operations like SetConsoleCursorPosition without having to call AllocConsole. Calling AllocConsole is a problem because there's no way to monitor what programs are doing with that console except scraping it. (The accessibility hooks are insufficient because if you try to access the console from inside them, you deadlock, and if you queue an access request for later, you race.)

Since Windows 8, consoles are real kernel objects and talk to conhost over IPC anyway, so this feature is eminently doable. It's been my top feature request for years. Nobody's gotten around to it.

Pseudoconsoles would be a bit more complicated than POSIX pseudoterminals because Windows consoles have more features, but the basic concept would transplant beautifully. It'd also make Cygwin a lot better.

I miss working on operating systems.


This!!!! One of the biggest things I really miss in Windows-land is SSH. Just today I had to create an SSH tunnel for SQL Server. While it's not a big issue with third-party tools, it should just be built in and ready for use, as with every Linux distro, FreeBSD, Solaris and OS X.


While they are at it, they could create a POSIX-compatible layer for Windows. That would really drive some conversion.

edit: after rereading my comment and seeing the downvotes, just to clarify: it was a serious suggestion, not a negative one. :)


There used to be Windows Services for UNIX (a.k.a. Subsystem for UNIX-based Applications):

http://en.wikipedia.org/wiki/Windows_Services_for_UNIX

https://technet.microsoft.com/en-us/library/cc771470.aspx


They ditched it in Windows Server 2012.

Now they recommend Cygwin.

"The Subsystem for UNIX-based Applications (SUA) is deprecated. If you use the SUA POSIX subsystem with this release, use Hyper-V to virtualize the server. If you use the tools provided by SUA, switch to Cygwin's POSIX emulation, or use either mingw-w64 (available from Sourceforge.net) or MinGW (available from MinGW.org) for doing a native port. " https://technet.microsoft.com/en-us/library/hh831568.aspx


MSYS2 is a lot better than MinGW+MSYS and Cygwin, in my opinion. I switched months ago and it's been a lot easier to deal with.


They've done that before and, as far as I know, it was not an extremely popular product.


It was not an extremely functional product. It was largely there to get government contracts where one of the requirements was POSIX conformance, even if they weren't using it.


Imagine they had first-class support for it (a major undertaking, I'd guess, but anyway): how many people would use it? I'd guess it'd be about the same people who use Cygwin now.


Why would you guess that? Because Cygwin is nothing close to first-class support. It has a lot of friction associated with its use. You're better off just running Linux in a VM if you want POSIX on Windows, and trust me, plenty of people do that these days.


Anyone porting applications from Linux would be able to use it to reduce the effort.


Yes, but I think Microsoft's ultimate concern is how many people will want to use such applications. How many will? Think about how few people run, say, PHP applications under Windows. Even though it's possible. (Well, actually, often it won't work right because PHP developers don't bother to test with anything besides Linux)


I think with Azure, you can log into your Azure instance and then send commands. I recently tried to set up automated deployments for our non-Azure infrastructure and investigated bringing things to Azure.

It's not the same as SSH, but then again powershell is not the same as linux shells.


"we removed the GUI stack, 32 bit support (WOW64), MSI and a number of default Server Core components. There is no local logon or Remote Desktop support. All management is performed remotely via WMI and PowerShell."

Is this unprecedented? I think it is, but I've been divorced from the Windows ecosystem for a very, very long time...

Is this, in fact, the first time that there has been a Windows release that had... no windows? Had no GUI? Was administered with a CLI only?


Not really the first time. Windows Server Core has existed since Windows Server 2008 [1]. Sounds like they stripped out some more parts, like MSI and 32-bit support.

[1] https://msdn.microsoft.com/en-us/library/dd184075.aspx


Thanks for the clarification. I at first assumed they'd bring OS-level virtualization; apparently I'm not the only one. But it's basically just minimal images of Windows in regular VMs then...

A step in the right direction but still disappointing imo. Linux and BSD are still miles ahead.


So this is like boot2docker... on Windows? You have a VM and you have containers inside it. This is not the same as containers on bare metal. But that is fine... my confusion is the OS inside the VM. Is that Linux or Windows? Normally, I can run Ubuntu- and CentOS-based containers on my box. Can I run these as Hyper-V containers? What about .NET? Can that be containerized?


No. When you talk about containers, you talk about operating-system-level virtualization [0]. This means you have one kernel, with multiple user spaces. You can run a CentOS container on Ubuntu because both use a Linux kernel. What will actually happen is that CentOS will use your already booted Ubuntu kernel.

So unless Windows switches to a Linux Kernel or vice versa you will never be able to run one as a container on the other.

You can however do that with Virtual Machines. But installing a stripped down version of windows in a virtual machine does not make it a container, it makes it marketing bullshit.

"Containers on baremetal" and containerizing dot net are thus a bit silly concepts since .NET has nothing to do with the operating system and you can't run a container on "bare metal" whatever you might mean by that.

[0] http://en.wikipedia.org/wiki/Operating-system-level_virtuali...
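
A quick way to see this (sketch assumes a Linux host with Docker installed and a centos image pulled): ask for the kernel version on the host and inside a container, and you get the same answer, because the container never booted a kernel of its own.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    public class SharedKernel {
        // Runs a command and returns the first line of its output.
        static String firstLine(String... cmd) throws Exception {
            Process p = new ProcessBuilder(cmd).start();
            try (BufferedReader r = new BufferedReader(
                    new InputStreamReader(p.getInputStream()))) {
                return r.readLine();
            }
        }

        public static void main(String[] args) throws Exception {
            System.out.println("host kernel:      " + firstLine("uname", "-r"));
            // The CentOS userspace reports the host's (e.g. Ubuntu's) kernel version.
            System.out.println("container kernel: " + firstLine(
                    "docker", "run", "--rm", "centos", "uname", "-r"));
        }
    }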


You might be able to run Windows containers under Wine....


That refers to Nano, not Hyper-V virtualization?


Microsoft is doing OS-level containers and they are also going to allow running containers in VMs.


Trying to clarify. We are talking about 3 things:

1) OS virtualization for Windows. We announced this last year: http://azure.microsoft.com/blog/2014/10/15/new-windows-serve...

2) Nano Server -- a small Windows Server SKU. Perfect for containers, but also useful for other scenarios where you need a small, cloud-optimized Windows.

3) Hyper-V Containers -- think of it as optimizing a hypervisor with the assumption that it is only running a container. What enlightenments would you enable? What management interface would you put on it? We'll have more details later, but this is the core concept.


I guess the question is, are these containers a shared kernel, near zero overhead kinda thing? So I could just run, say, DNS or a file share in a container without paying any overhead. Like what containers/jails or OpenVZ can do on Linux.


In other words, if you want to run a Linux based Docker container on Windows you're still going to need Virtual Box.


> In other words, if you want to run a Linux based Docker container on Windows you're still going to need Virtual Box.

Is this a surprise?

Containerisation is not magical pixie dust -- it's a particular approach to implementation that is specific to the OS. You have a single kernel, and it follows that in general that single kernel will only allow corresponding containers to be run.

That there will be a Docker server backend that can speak Hyper-V doesn't magically make a Windows kernel into a Linux kernel, or vice versa.


You have a good understanding of why this is the case. Hyper-V would be doing the job of VirtualBox and boot2docker, which is what most developers have been using to run the Docker daemon on non-Linux hosts. I've tried the Hyper-V driver with Docker Machine and had some issues, so I'll be sticking with VirtualBox until that changes.


Or Hyper-V, or a VMware equivalent (Fusion/Workstation/etc.)


They're doing OS-level virtualization in Linux Azure instances, sure. They don't seem committed to OS-level virtualization in Windows, unless I've missed something.

edit: http://research.microsoft.com/en-us/projects/drawbridge/ ?


That's what you would assume when you look at the image. But

> Hyper-V Containers, a new container deployment option with enhanced isolation powered by Hyper-V virtualization.

Everything written tells a different story. "Hyper-V virtualization" means virtual machines, making it not a container. They just try to make that sound like a feature.

Or do you have more information than I do?


All we have is a press release. The diagram and the constant references to containers would seem to indicate these are, well, containers. You're picking at a few things and assuming they mean that the rest of the release is wrong. Why?


> All management is performed remotely

Wow, I've been out of the Windows world for years, can you really fully manage a Windows box without the GUI?


No. But you can use your machine to run the tools, which will connect to the server. You can do this with desktops, not only Windows Server. Try Computer Management on your machine.


It means their memory isolation is using hardware accelerated extensions. I would imagine it's still shared kernel and thus not "virtual machines".

It makes sense for their container solution to make use of existing Hyper-V components like the virtual switch etc.

But for that to be possible it's likely they needed to make use of VT-x and VT-d (if using stuff like hardware accelerated network device isolation like SRIOV).

If anything this is closer to Bromium [1] than anything else.

Will be interesting to see if this requires Hyper-V to be running in Type-1 mode (or if this will be the default in upcoming Windows versions) or if they are able to make use of the virtualisation extensions without actually running the host as a Hyper-V partition.

So much cool stuff to hear about at BUILD.

[1] http://www.bromium.com/


How is the performance of Bromium?


It looks like they may be putting the container in a Hyper-V VM while allowing it callbacks to the underlying OS.

Done correctly this allows the hardware level protections to apply to the code running in the container, assuming the penalty of your OS calls routing through the VM-bridge doesn't kill your performance.


This is pretty close, but there is not actually a VM in the Hyper-V Container. The key thing is, these containers will take advantage of Hyper-V-enforced isolation and virtualization, but without requiring the full VM. So, while it has this increased isolation, it is still a container, with what you would expect from a container, including better density, faster start-up times, and portability. And it will have Docker platform support to make it more flexible across environments.

Hope that helps?


Sounds like Mirage OS / exokernels / unikernels, where the app is compiled to run directly on the VM talking to paravirtualized APIs.


Is there an architecture diagram that shows the boundary between VMM, OS, container and storage layers?


REALLY looking forward to this. We've needed container style deployments on Windows forever. This is actually going to make my life better...at least this part, anyway.


Imagine what switching to linux could do for you.


http://blogs.msdn.com/b/oldnewthing/archive/2006/03/22/55800...

> In particular, if the solution begins with "First, install..." you've pretty much lost out of the gate. Solving a five-minute problem by taking a half hour to download and install a program is a net loss. In a corporate environment, adding a program to a deployment is extraordinarily expensive. You have to work with your company's legal team to make sure the licensing terms for the new program are acceptable and do not create undue risk from a legal standpoint. What is your plan of action if the new program stops working, and your company starts losing tens of thousands of dollars a day? You have to do interoperability testing to make sure the new program doesn't conflict with the other programs in the deployment. (In the non-corporate case, you still run the risk that the new program will conflict with one of your existing programs.)

> Second, many of these "solutions" require that you abandon your partial solution so far and rewrite it in the new model. If you've invested years in tweaking a batch file and you just need one more thing to get that new feature working, and somebody says, "Oh, what you need to do is throw away your batch file and start over in this new language," you're unlikely to take up that suggestion.


The FUD from Microsoft is interesting. They imply that by using open source, you can't get support when your company is losing money. Additionally, they imply that by using Microsoft, they will actually do something useful in this contrived situation of losing thousands per day.

Here's a hint: whichever solution is more complex is going to bite much harder from a downtime perspective, regardless of the underlying technology. I would much rather depend on a few-line script that uses sendmail than on a 5,000-line mail client half implemented in a batch script.


I actually don't hear FUD from MS about open source any more. I'm doing tests on my workstation of .NET Core and ASP.NET 5... all open source. Mark Russinovich said the other day that they are considering open sourcing Windows one day. They contribute to the Linux kernel. I can spin up a Linux VM in Azure with a PowerShell command. I don't know how much more friendly to open source they could be.

The article that was quoted, and that you are talking about, is an old article by Raymond Chen about the importance and value of backwards compatibility. He's talking about the pain in the ass that large businesses face when trying to update the base image for a fleet of servers. I can tell you from personal experience that it's a painful process.


That's a personal blog, not some official Microsoft thing. I think that, regardless of the source, it's an important point -- in a really big environment it's hard to introduce new software packages for nontechnical reasons (like the licensing stuff) and for technical ones too (gazillions of machines with different configurations you have to worry about breaking). I've been a system administrator at a small place and even then it's not fun to try to roll out something like that.


Wow, in which world do you live? Since Puppet, it's really easy to update fleets of machines, and even more tools have emerged. New software can be upgraded easily, as long as you have a valid license or it is covered by a "free" license.


In 3 years Microsoft will deliver MS Virtual Deployment Technology that uses PowerShell and FTP under the hood, but the integration with Visual Studio will be swooned over by millions. I'm speculating of course, but it feels like familiar territory. It always sounds like Stockholm syndrome...


Not everyone is in a place where they can use Puppet or a similar orchestration system. Lots of places are actually really scared of automation because they did it in the past and someone left without documenting something, which caused some mayhem. I know, things like that can be avoided. And yes, all of the places that are gaining the benefits of scale are using lots of automation... not everywhere is like that.

Things are better than they used to be, yes. But in lots of big businesses you wouldn't believe how slow processes are, for all kinds of very valid-sounding reasons. Don't get me wrong... it's something that I'm personally working on changing everywhere that I can. I think everyone should be able to code; system admins should ALL be able to code in at least one language.


Imagine what having constraints imposed by existing applications and business requirements could do for you.

C'mon, kids. It's not all rails apps from your MacBooks out there.


The software that I work on is Windows based.

I usually get down-voted for comments like yours...but hey. HN...what ya gonna do...


It'd be nice if Visual Studio tooling could let you hit F5 and have your app compile, deploy to an on-desktop container built from a Dockerfile, and start debugging via a "remote" debugger.


Good feedback!! Something for us to look at...


Would certainly reduce the difference between dev and live testing environments!


There's a short video on Microsoft's Channel 9 website showing Nano Server in action [1].

[1] http://channel9.msdn.com/Blogs/Regular-IT-Guy/Quick-Nano-Ser...


Looking forward to hearing more detail about how this works in the near future. I am curious, though, what the plans are to orchestrate and pull together multiple containers into an application, like Kubernetes, Mesos, CoreOS, etc.? Is that coming in the Win 10 timeframe?


Yes, via the Docker-native orchestration tools: Swarm [1] and Compose [2].

[1] https://docs.docker.com/swarm/ [2] https://docs.docker.com/compose/


Right on, Solomon. Here are some of the details on Azure and Windows Server support for Swarm and Compose (and Machine): http://azure.microsoft.com/blog/2015/02/26/sunny-and-swarmy-...


Am I confusing something? That looks like Linux guest support on a Windows Server host, which is rather different to the Windows Container topic of this thread.


I think that the goal was to show that Microsoft already supports the Docker orchestration stack with its current products - and in doing so is laying the groundwork for integrating future Windows containers into that same stack.


Microsoft is working on Kubernetes support for Azure (along with the Kismatic people); and so I'd bet Kubernetes on Windows itself can't be far behind this announcement.


Many things that appear in Windows Server were already running as part of Azure


If you want a full PaaS, Cloud Foundry is currently implementing .NET app support as well.

Disclaimer: I work for Pivotal Labs, which is part of Pivotal, the main contributor to Cloud Foundry.


> Leveraging our deep virtualization experience, Microsoft will now offer containers with a new level of isolation previously reserved only for fully dedicated physical or virtual machines

Uh. I don't understand how that sentence has any meaning. Particularly the "a new level of isolation previously reserved only for fully dedicated physical or virtual machines" bit. I mean, isn't that what a container is, a virtual machine? And if so, why is 'container' even involved here?

I don't know much about the container scene. I thought they were literally just virtual machines, with presumably some standardized way of spinning them up programmatically. Maybe someone can correct me.


A container isn't just a virtual machine: a VM involves providing an abstracted machine environment in which you run a whole OS, including a fresh kernel. A container involves starting an extra, isolated user space with no extra kernel or machine layer.


> I don't know much about the container scene. I thought they were literally just virtual machines, with presumably some standardized way of spinning them up programmatically. Maybe someone can correct me.

Close but containers share the same kernel. It allows them to do many things more efficiently but it's not a straight up virtual machine.


To build on this, containerized apps have less overhead than a full-on virtual machine, since the binaries aren't replicated every time. Like de-dupe for your VMs, to use a weak analogy.

However, because they all share the same kernel, you're limited to a single flavor of containers per host. So a host can provide for all Windows apps, or all Linux apps, but not a mix.

It makes the most sense when you have a need for many separate instances of similar applications. You can fit many more containers in a given host than their full-VM equivalent, but you lose the complete abstraction (and therefore flexibility) that a VM gives you.


> So a host can provide for all Windows apps, or all Linux apps, but not a mix.

While this is true I feel like at some point in the future we're going to be able to mix both. I've seen some rough ideas as to how it could happen but they sounded almost impossible to pull off. Still, if we had a way to mix containers it would be absolutely amazing.


It would be cool, but I can see a point of diminishing returns. If you kept it to say, two OS flavors or so, yeah, not bad. But the moment you go down that path, the abstraction needed to ensure both sets of binaries play correctly with the underlying hardware and still remain isolated and separate starts to eat into the overhead you were trying to save in the first place. It'd be cool to pull off, but I have to imagine that it'd be for niche applications.


I recently gave a talk about the relationships between VMs and containers: http://original.livestream.com/pivotallabs/video?clipId=pla_...


First half of this video will get you fully up to speed https://www.joyent.com/developers/videos/docker-and-the-futu...


I'd be interested to hear more details of what this actually is. At the OpenStack summit a few years ago we were discussing how everything done in containers via cgroups today could also be done via KVM, for greater security. This sounds like it could be a step in that direction (?)


If only WindowsNext Shell/Explorer would contain a sandbox feature like Docker or Sandboxie: http://en.wikipedia.org/wiki/Sandboxie

Many Windows applications could then run in their own sandbox.


THE STORY OF THE CONTAINER GOLDRUSH

As seen by a verbose, presumptuous 22 year old.

OPEN SOURCE MOVEMENT lays foundation for containerization:

- linux kernel gains mainstream adoption, becomes standardized across distributions

- kernel matures to support containerization (i.e., namespacing critical OS operations)

- lxc project takes advantage of kernel support, builds tooling around namespace containerization

DOCKER (THEN DOTCLOUD) is first company to capitalize on power of containerization:

- dotcloud demonstrates clear use case for containers, encouraging developer adoption

- dotcloud releases internal infrastructure code ("moves up the value chain") for PaaS

- dotcloud develops project into docker, builds existing momentum into early adoption of docker.

AT THIS POINT other companies begin to emerge around Docker, e.g. CoreOS. Key facts:

- Docker is an abstraction around LXC, effectively a set of convenient scripts for controlling LXC

- Docker is building a platform via a package management system preloaded with their repos

- Platform is a threat to new entrants, e.g. CoreOS, because they risk becoming tenants

CoreOS realized the risk of the Docker platform, and also that Docker is unnecessary for many of its value-adds. Everything Docker can accomplish, raw Linux containers can also accomplish. The problem is that scripting LXC is less convenient than using Docker, but Docker depends on LXC, so the LXC feature set will always be ahead of Docker's.

In the developer community, there is a growing acceptance of the fact that Docker is an abstraction over LXC. CoreOS is trying to standardize the abstraction as an implementation of the "app container spec" [0]. This spec puts Docker, Rocket, and lxc-tools on a level playing field.

Despite this apparent acceptance, the market continues to build tooling and platforms around Docker, instead of raw LXC containers. This announcement from Microsoft is just the latest example. If a new product wants to support containers, it needs to support Docker.

Docker is benefitting from network effects even though its product is not defensible from a technical standpoint. Docker is signing deals with competing enterprises like Microsoft, Google, and Amazon, because those companies are its customers.

The risk for Docker is that these big companies eventually cut Docker out of the equation. They may eventually choose to replace Docker with their own "app container runtime," with features only supported on their own platform.

Docker was one of the first companies to capitalize on advantages of containers, probably because they have a seriously talented group of engineers writing their code. But the market has now woken up to these advantages, and Docker is being chased by massive companies with massive resources. I hope they can fend them off and keep the upper hand in the relationship, but unfortunately I think it far more likely that Docker will eventually be cut out of the equation or acquired by one of them. This will result in a fragmentation of container technology as each company rushes to develop their own app runtime engine. Ultimately developers will suffer as platforms divide and silo, increasing developer friction and reducing cloud market competition as users consolidate around the single platform with the most momentum. Eventually, I suspect one company will control 80% of cloud computing.

[0] https://github.com/appc/spec/blob/master/SPEC.md


You forgot the part where Solaris did containers first in 2005.

Others "realised" the value of this later and started improving upon the ideas contained within.

Although I'm sure many might argue that this is the natural conclusion of virtualization.


And where FreeBSD did jails in 2000 (or actually a bit earlier).

https://www.freebsd.org/releases/4.0R/notes.html


Yes, everyone refers to jails, but I think most people would agree that jails weren't really containers. They didn't provide true isolation for a set of applications. I guess you could argue they were the original prototype for them though.

Solaris containers are the first "lightweight virtualization" technology that I'm personally aware of that provided true isolation of one or more processes from the host operating system and host processes.


Not sure if "lightweight" counts when talking about a mainframe, but when I first encountered Solaris zones they seemed equivalent to LPARs in the mainframe world.

http://en.wikipedia.org/wiki/VM_%28operating_system%29

There are a lot of things from the mainframe world that are being newly "discovered" that seem quite mundane to the greybeards...

http://en.wikipedia.org/wiki/IBM_System_z#Comparison_to_othe...


Yes, there are LPARs, but we were discussing software-based virtualisation. LPARs are more partitioning than virtualization which is very different from a multi-tenancy perspective.

The equivalent to LPARs in the Solaris world would be LDOMs on SPARC.


"Yes, everyone refers to jails, but I think most people would agree that jails weren't really containers. They didn't provide true isolation for a set of applications. I guess you could argue they were the original prototype for them though."

The first VPS provider (JohnCompanies, 2001) was based entirely on jail and it absolutely provided (even then) true isolation for a set of applications.

Every customer had their own unix root, configured their own system with their own rc.conf, and everyone ran their own sendmail/named/httpd/etc.

It is absolutely correct to refer to jails in this way, and that is why you see everyone doing it.


If you're talking about chroot jails, no; it was possible to "escape" them, and they did not provide true isolation.

If you're talking about some other jail, possibly, but my understanding is they didn't actually provide true isolation. Certainly not a kernel level of abstraction.


Good point. The idea of containerization has existed for a long time. A widespread implementation of it has not. The levels of abstraction are "idea of containerization" -> kernel implementation -> userspace tools. LXC, Solaris Containers, and BSD jails all exist at the kernel level of abstraction. Docker, Rocket, and lxc-tools exist at the userspace level of abstraction.

For any component at a given level of abstraction to gain widespread adoption, it needs to beat its competitors. The Linux kernel needed to beat FreeBSD and Solaris. That's why I started the story with "linux kernel gains mainstream adoption." Consolidation at the kernel abstraction level is complete. Linux won. Now it's time for consolidation at the userspace abstraction level.


Solaris containers are no longer just a kernel level of abstraction though. As of Solaris 11.2 they're also capable of providing a near-system-level of abstraction via "Kernel Zones":

http://docs.oracle.com/cd/E36784_01/html/E37629/

These allow virtualization of multiple, independent instances of the operating system each with their own version of the kernel and processes. It is not the same as running multiple instances of VMWare, etc. since it is specifically designed to handle virtual Solaris instances:

https://blogs.oracle.com/zoneszone/entry/install_a_kernel_zo...


Windows NT has had them since 2000: https://www.microsoft.com/msj/0399/jobkernelobj/jobkernelobj... They are called Job Objects in NT, as opposed to Namespaces or Zones in other kernels.


You forgot the part where FreeBSD 4.0 added jails in March 2000 [1].

You could argue that jails were inspired by chroot(), and that's correct, but that's hardly any isolation.

[1] http://phk.freebsd.dk/pubs/sane2000-jail.pdf


"You forgot the part where Solaris did containers first in 2005."

FreeBSD jail in 2000.


I have a question: if I run a bunch of user-mode processes in a Hyper-V container and they make system calls to interact with the kernel, will the kernel they interact with be running within the container? I.e. does each Hyper-V container run a distinct Windows kernel for each contained workload? Or is there just one single, common kernel on the host, with mechanisms like EPT and other hardware virtualization extensions used to isolate user mode only?


Will this technology allow running a POSIX kernel alongside, like in a VM? Or will these containers be limited to Windows server software?


Containerisation is a new term for OS-level virtualisation.

So in the current meaning of virtualisation, no. It will not let you put a Linux container on a Windows kernel.

You could run one in a Hyper-V VM, and presumably it will respond to the Docker API, but that just means it's a VM.


I figured as much, I just wasn't sure. I figured it's possible that at some point the container host could load another kernel in case a container needs it. I'm thinking this is where VMware and Citrix should be going in the future.


Top it off with SSH access to PowerShell and we're all set? (Something something something dark side)


Yes please. And native rsync interoperability too? I'd love to banish Cygwin from my Windows servers.

Although I'm finding the use case of remote PowerShell execution from a Linux machine is now handled quite well by tools like Salt (via ZeroMQ to the minion) and Ansible (via WinRM). Native SSH would still be good, though, for tunnelling etc.


> And native rsync interoperability too? I'd love to banish Cygwin from my Windows servers.

Hear, hear! You and me both. I've never liked Cygwin and do my best to avoid it when possible.


What filesystem backend does Docker use on Windows? Doesn't it require a copy-on-write filesystem to be efficient?


We're building a new one as part of the Windows Server Container implementation. Also doing copy-on-write registry and tightening up Job Objects: https://msdn.microsoft.com/en-us/library/windows/desktop/ms6... With any OS, containers are actually made up of several low-level components put together behind a management experience (which will include Docker).


Dang, COW registry is pretty neat.


I always thought a hypervisor on top of Windows didn't make much sense.

But this, this makes a lot of sense. Looking forward to it.


Really happy with how this all came together. Congrats Windows team.


This is too perfect.

http://i.imgur.com/WOQGknl.png



Apologies for the negative snark.


Anyone found the link to the Github PR? Or is this just PR? ;-)


Hey Justin, is this what you are looking for: https://github.com/Microsoft/docker.

This is where the Windows team is doing the work to add Windows Server support to the Docker engine. We are working with Docker Inc. to push the PR up once it is ready for primetime.

Note, I am an engineer in the Azure team...


Thanks for the link - I checked out the master branch and it didn't have any real diffs on it. Which branch should I be looking at?


Hi Justin, we are doing most of our work in a branch right now (https://github.com/microsoft/docker/tree/jjh-argon). As we stabilize the Windows Server Container and Hyper-V Container foundation, the work we are doing to develop new drivers in the Docker engine will stabilize and we'll be pushing it upstream.

-Taylor, PM on Windows @taylorb_msft




