
Microsoft announces Hyper-V Containers - swernli
http://azure.microsoft.com/blog/2015/04/08/microsoft-unveils-new-container-technologies-for-the-next-generation-cloud
======
kirinan
This will be what takes containers into mainstream businesses. Companies
may adopt Docker or other solutions instead of this, but Microsoft creating
its own version means it's a viable technology. I'm more interested in the new
frameworks and technologies that get adopted because of this than in the fact
that it's in use. Traditional Java web projects hosted on Tomcat/JBoss
don't run well inside containers, but there are technologies like Node.js that
lend themselves to containerization. Open-source .NET is now a viable option
for Linux deployments, and now Microsoft has its own containers. It will be an
interesting couple of years as this shakes out.

~~~
jsprogrammer
Microsoft creating their own version means the technology is viable? I think
people have been using 'the technology' for years, without any input from
Microsoft. It doesn't need to be anointed by Microsoft to be 'viable'.

~~~
kirinan
For the tech enthusiasts and the visionaries, you are correct; it is a very
viable technology. However, the majority of people who deploy software are
rather conservative, and unless they see a market leader such as Microsoft
offering a solution, they don't deem the technology safe to use. This is well
documented in a lot of literature, like Crossing the Chasm, but can also be
observed frequently if you work at a larger, non-tech-oriented company.
Whether this notion is actually correct is debatable, but that doesn't change
the reality of it.

~~~
at-fates-hands
This also means that Microsoft shops with a lot of Docker enthusiasts will be
able to pitch this to their bosses, who might not have been on board before
since it wasn't an enterprise MS product.

This will open a TON of doors for everybody.

------
bydo
Pardon the skepticism, but do "Hyper-V Containers" with "enhanced isolation
powered by Hyper-V virtualization" sound suspiciously like, err, Hyper-V
virtual machines? And "Nano Server" has a description rather reminiscent of
2008's "Server Core".

Is this just about management tools? Because that's cool, too, but why the
spin?

~~~
kolencherry
From the TechNet Announcement:

"we removed the GUI stack, 32 bit support (WOW64), MSI and a number of default
Server Core components. There is no local logon or Remote Desktop support. All
management is performed remotely via WMI and PowerShell. We are also adding
Windows Server Roles and Features using Features on Demand and DISM. We are
improving remote manageability via PowerShell with Desired State Configuration
as well as remote file transfer, remote script authoring and remote debugging.
We are working on a set of new Web-based management tools to replace local
inbox management tools."

[http://blogs.technet.com/b/windowsserver/archive/2015/04/08/...](http://blogs.technet.com/b/windowsserver/archive/2015/04/08/microsoft-announces-nano-server-for-modern-apps-and-cloud.aspx)

~~~
ochoseis
Since this is all remote PowerShell, it would be nice if MS/Windows introduced
a native SSH server. That would probably help drive some conversion for people
used to the POSIX world.

~~~
dordoka
While they are at it, they could create a POSIX-compatible layer for Windows.
That would really drive some conversion.

edit: after rereading my comment and seeing the downvotes, just to clarify: it
was a serious suggestion, not a negative one. :)

~~~
tkmcc
There used to be Windows Services for UNIX (a.k.a. Subsystem for UNIX-based
Applications):

[http://en.wikipedia.org/wiki/Windows_Services_for_UNIX](http://en.wikipedia.org/wiki/Windows_Services_for_UNIX)

[https://technet.microsoft.com/en-us/library/cc771470.aspx](https://technet.microsoft.com/en-us/library/cc771470.aspx)

~~~
mushiake
They ditched it in Windows Server 2012.

Now they recommend Cygwin.

"The Subsystem for UNIX-based Applications (SUA) is deprecated. If you use the
SUA POSIX subsystem with this release, use Hyper-V to virtualize the server.
If you use the tools provided by SUA, switch to Cygwin's POSIX emulation, or
use either mingw-w64 (available from Sourceforge.net) or MinGW (available from
MinGW.org) for doing a native port." [https://technet.microsoft.com/en-us/library/hh831568.aspx](https://technet.microsoft.com/en-us/library/hh831568.aspx)

~~~
DiabloD3
msys2 is a lot better than mingw+msys and cygwin, in my opinion. I switched
months ago and it's been a lot easier to deal with.

------
sudioStudio64
REALLY looking forward to this. We've needed container-style deployments on
Windows forever. This is actually going to make my life better... at least
this part, anyway.

~~~
alttab
Imagine what switching to linux could do for you.

~~~
emodendroket
[http://blogs.msdn.com/b/oldnewthing/archive/2006/03/22/55800...](http://blogs.msdn.com/b/oldnewthing/archive/2006/03/22/558007.aspx)

> In particular, if the solution begins with "First, install..." you've pretty
> much lost out of the gate. Solving a five-minute problem by taking a half
> hour to download and install a program is a net loss. In a corporate
> environment, adding a program to a deployment is extraordinarily expensive.
> You have to work with your company's legal team to make sure the licensing
> terms for the new program are acceptable and do not create undue risk from a
> legal standpoint. What is your plan of action if the new program stops
> working, and your company starts losing tens of thousands of dollars a day?
> You have to do interoperability testing to make sure the new program doesn't
> conflict with the other programs in the deployment. (In the non-corporate
> case, you still run the risk that the new program will conflict with one of
> your existing programs.)

> Second, many of these "solutions" require that you abandon your partial
> solution so far and rewrite it in the new model. If you've invested years in
> tweaking a batch file and you just need one more thing to get that new
> feature working, and somebody says, "Oh, what you need to do is throw away
> your batch file and start over in this new language," you're unlikely to take
> up that suggestion.

~~~
hueving
The FUD from Microsoft is interesting. They imply that by using open source,
you can't get support when your company is losing money. Additionally, they
imply that by using Microsoft, they will actually do something useful in this
contrived situation where you're losing thousands per day.

Here's a hint: whichever solution is more complex is going to bite much harder
from a downtime perspective, regardless of the underlying technology. I would
much rather depend on a few-line script that uses sendmail than on a
5,000-line mail client half-implemented in a batch script.
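To make that "few-line script" concrete, here is a minimal sketch using
Python's standard library instead of a raw sendmail pipe (the addresses and
subject are made up for the example):

```python
from email.message import EmailMessage

# Build a plain-text message; all addresses here are illustrative.
msg = EmailMessage()
msg["From"] = "alerts@example.com"
msg["To"] = "ops@example.com"
msg["Subject"] = "nightly report"
msg.set_content("All jobs completed.")

# With a local MTA (e.g. sendmail/postfix) listening, delivery is one more line:
#   import smtplib; smtplib.SMTP("localhost").send_message(msg)
print(msg["Subject"])  # nightly report
```

A handful of lines like this is far easier to reason about under failure than
a large half-finished mail client.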

~~~
emodendroket
That's a personal blog, not some official Microsoft thing. I think that,
regardless of the source, it's an important point -- in a really big
environment it's hard to introduce new software packages for nontechnical
reasons (like the licensing stuff) and for technical ones too (gazillions of
machines with different configurations you have to worry about breaking). I've
been a system administrator at a small place, and even there it's not fun to
try to roll out something like that.

~~~
merb
Wow, in which world do you live? Since Puppet, it's really easy to update
fleets of machines, and the tooling has only improved. New software can be
upgraded easily, as long as you have a valid license or it's covered by a free
license.

~~~
alttab
In 3 years Microsoft will deliver MS Virtual Deployment Technology that uses
PowerShell and FTP under the hood, but the integration with Visual Studio will
be swooned over by millions. I'm speculating of course, but it feels like
familiar territory. It always sounds like Stockholm syndrome...

------
Scuds
It'd be nice if Visual Studio tooling let you hit F5 and have your app
compile, deploy to an on-desktop container built from a Dockerfile, and start
debugging via a 'remote' debugger.

~~~
CoreySanders
Good feedback!! Something for us to look at...

------
arthurfm
There's a short video on Microsoft's Channel 9 website showing Nano Server in
action [1].

[1] [http://channel9.msdn.com/Blogs/Regular-IT-Guy/Quick-Nano-Ser...](http://channel9.msdn.com/Blogs/Regular-IT-Guy/Quick-Nano-Server-Scale-Demo)

------
tdicola
Looking forward to hearing more detail about how this works in the near
future. I am curious, though: what are the plans to orchestrate and pull
together multiple containers into an application, like Kubernetes, Mesos,
CoreOS, etc.? Is that coming in the Win 10 timeframe?

~~~
shykes
Yes, via the Docker-native orchestration tools: Swarm [1] and Compose [2].

[1] [https://docs.docker.com/swarm/](https://docs.docker.com/swarm/) [2]
[https://docs.docker.com/compose/](https://docs.docker.com/compose/)
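As an illustration of the Compose side, a minimal 2015-era
`docker-compose.yml` along these lines describes a multi-container app that
`docker-compose up` builds and starts in one shot (the `web`/`redis` services
and port numbers here are invented for the example):

```yaml
# Two cooperating containers described declaratively; Compose wires
# `web` to `redis` by service name via the `links` entry.
web:
  build: .
  ports:
    - "5000:5000"
  links:
    - redis
redis:
  image: redis
```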

~~~
CoreySanders
Right on, Solomon. Here are some of the details on Azure and Windows Server
support for Swarm and Compose (and Machine):
[http://azure.microsoft.com/blog/2015/02/26/sunny-and-swarmy-...](http://azure.microsoft.com/blog/2015/02/26/sunny-and-swarmy-in-azure)

~~~
bboreham
Am I confusing something? That looks like Linux guest support on a Windows
Server host, which is rather different from the Windows Container topic of
this thread.

~~~
shykes
I think that the goal was to show that Microsoft already supports the Docker
orchestration stack with its current products - and in doing so is laying the
groundwork for integrating future Windows containers into that same stack.

------
O____________O
_Leveraging our deep virtualization experience, Microsoft will now offer
containers with a new level of isolation previously reserved only for fully
dedicated physical or virtual machines_

Uh. I don't understand how that sentence has any meaning. Particularly the "a
new level of isolation previously reserved only for fully dedicated physical
or virtual machines" bit. I mean, isn't that what a container _is_, a virtual
machine? And if so, why is 'container' even involved here?

I don't know much about the container scene. I thought they were literally
just virtual machines, with presumably some standardized way of spinning them
up programmatically. Maybe someone can correct me.

~~~
BinaryIdiot
> I don't know much about the container scene. I thought they were literally
> just virtual machines, with presumably some standardized way of spinning
> them up programmatically. Maybe someone can correct me.

Close, but containers share the same kernel. That lets them do many things
more efficiently, but it's not a straight-up virtual machine.

~~~
ckozlowski
To build on this, containerized apps have less overhead than a full-on virtual
machine, since the binaries aren't replicated every time. Like de-dupe for
your VMs, to use a weak analogy.

However, because they all share the same kernel, you're limited to a single
flavor of containers per host. So a host can provide for all Windows apps, or
all Linux apps, but not a mix.

It makes the most sense when you need many separate instances of similar
applications. You can fit many more containers in a given host than their
full-VM equivalent, but you lose the complete abstraction (and therefore
flexibility) that a VM gives you.
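A tiny Python sketch of the shared-kernel point: a plain child process stands
in for a containerized one here (real namespace setup needs privileges), but
the observation is the same, since containers virtualize userspace, not the
kernel, every process on the host reports the same kernel release:

```python
import os
import subprocess
import sys

# The "host" kernel release, e.g. "5.15.0-generic" on Linux.
host_kernel = os.uname().release

# A child process stands in for a containerized process: containers isolate
# the filesystem, PIDs, and network, but all of them run on the host kernel.
child_kernel = subprocess.run(
    [sys.executable, "-c", "import os; print(os.uname().release)"],
    capture_output=True, text=True, check=True,
).stdout.strip()

print(host_kernel == child_kernel)  # True: same kernel everywhere
```

This is exactly why a Linux host can't run Windows containers and vice versa.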

~~~
BinaryIdiot
> So a host can provide for all windows apps, or all linux apps, but not a
> mix.

While this is true I feel like at some point in the future we're going to be
able to mix both. I've seen some rough ideas as to how it could happen but
they sounded almost impossible to pull off. Still, if we had a way to mix
containers it would be absolutely amazing.

~~~
ckozlowski
It would be cool, but I can see a point of diminishing returns. If you kept it
to, say, two OS flavors or so, yeah, not bad. But the moment you go down that
path, the abstraction needed to ensure both sets of binaries play correctly
with the underlying hardware, and still remain isolated and separate, starts
to eat into the overhead savings you were after in the first place. It'd be
cool to pull off, but I have to imagine it'd be for niche applications.

------
justinsb
I'd be interested to hear more details of what this actually is. At the
OpenStack summit a few years ago we were discussing how everything done in
containers via cgroups today could also be done via KVM, for greater security.
This sounds like it could be a step in that direction (?)

------
frik
If only WindowsNext Shell/Explorer would contain a sandbox feature like Docker
or Sandboxie:
[http://en.wikipedia.org/wiki/Sandboxie](http://en.wikipedia.org/wiki/Sandboxie)

Many Windows applications could then run in their own sandboxes.

------
chatmasta
THE STORY OF THE CONTAINER GOLDRUSH

As seen by a verbose, presumptuous 22-year-old.

OPEN SOURCE MOVEMENT lays foundation for containerization:

\- linux kernel gains mainstream adoption, becomes standardized across
distributions

\- kernel matures to support containerization (i.e., namespacing critical OS
operations)

\- lxc project takes advantage of kernel support, builds tooling around
namespace containerization

DOCKER (THEN DOTCLOUD) is first company to capitalize on power of
containerization:

\- dotcloud demonstrates clear use case for containers, encouraging developer
adoption

\- dotcloud releases internal infrastructure code ("moves up the value chain")
for PaaS

\- dotcloud develops project into docker, builds existing momentum into early
adoption of docker.

AT THIS POINT other companies begin to emerge around Docker, e.g. CoreOS. Key
facts:

\- Docker is an abstraction around LXC, effectively a set of convenient
scripts for controlling LXC

\- Docker is building a platform via a package management system preloaded
with their repos

\- Platform is a threat to new entrants, e.g. CoreOS, because they risk
becoming tenants

CoreOS realized the risk of the Docker platform, and also that Docker is
unnecessary for many of its value-adds. Everything Docker can accomplish, raw
Linux containers can also accomplish. The problem is that scripting LXC is
less convenient than using Docker; but since Docker depends on LXC, the LXC
feature set will always be ahead of Docker's.

In the developer community, there is a growing acceptance of the fact that
Docker is an abstraction over LXC. CoreOS is trying to standardize the
abstraction as an implementation of the "app container spec" [0]. This spec
puts Docker, Rocket, and lxc-tools on level playing ground.

Despite this apparent acceptance, the market continues to build tooling and
platforms around Docker, instead of raw LXC containers. This announcement from
Microsoft is just the latest example. If a new product wants to support
containers, it needs to support Docker.

Docker is benefitting from network effects even though its product is not
defensible from a technical standpoint. Docker is signing deals with competing
enterprises like Microsoft, Google, and Amazon, because those companies are
its _customers_.

The risk for Docker is that these big companies eventually cut Docker out of
the equation. They may eventually choose to replace Docker with their own "app
container runtime," with features only supported on their own platform.

Docker was one of the first companies to capitalize on advantages of
containers, probably because they have a seriously talented group of engineers
writing their code. But the market has now woken up to these advantages, and
Docker is being chased by massive companies with massive resources. I hope
they can fend them off and keep the upper hand in the relationship, but
unfortunately I think it far more likely that Docker will eventually be cut
out of the equation or acquired by one of them. This will result in a
fragmentation of container technology as each company rushes to develop their
own app runtime engine. Ultimately developers will suffer as platforms divide
and silo, increasing developer friction and reducing cloud market competition
as users consolidate around the single platform with the most momentum.
Eventually, I suspect one company will control 80% of cloud computing.

[0]
[https://github.com/appc/spec/blob/master/SPEC.md](https://github.com/appc/spec/blob/master/SPEC.md)

~~~
binarycrusader
You forgot the part where Solaris did containers first in 2005.

Others "realised" the value of this later and started improving upon the ideas
contained within.

Although I'm sure many might argue that this is the natural conclusion of
virtualization.

~~~
bydo
And where FreeBSD did jails in 2000 (or actually a bit earlier).

[https://www.freebsd.org/releases/4.0R/notes.html](https://www.freebsd.org/releases/4.0R/notes.html)

~~~
binarycrusader
Yes, everyone refers to jails, but I think most people would agree that jails
weren't really containers. They didn't provide true isolation for a set of
applications. I guess you could argue they were the original prototype for
them, though.

Solaris containers are the first "lightweight virtualization" technology that
I'm personally aware of that provided true isolation of one or more processes
from the host operating system and host processes.

~~~
ianmcgowan
Not sure if "lightweight" counts when talking about a mainframe, but when I
first encountered Solaris zones they seemed equivalent to LPARs in the
mainframe world.

[http://en.wikipedia.org/wiki/VM_%28operating_system%29](http://en.wikipedia.org/wiki/VM_%28operating_system%29)

There are a lot of things from the mainframe world that are being newly
"discovered" that seem quite mundane to the greybeards...

[http://en.wikipedia.org/wiki/IBM_System_z#Comparison_to_othe...](http://en.wikipedia.org/wiki/IBM_System_z#Comparison_to_other_servers)

~~~
binarycrusader
Yes, there are LPARs, but we were discussing software-based virtualisation.
LPARs are more partitioning than virtualization, which is very different from
a multi-tenancy perspective.

The equivalent to LPARs in the Solaris world would be LDOMs on SPARC.

------
kern_dude
I have a question: if I run a bunch of usermode processes on a hyper-v
container and they make system calls to interact with the kernel, will the
kernel they will be interacting with be running within the container? I.e.
does each Hyper-V container run a distinct Windows kernel for each contained
workload? Or is there just one single and common kernel on the host and
mechanisms like EPT and other virtualization hardware extensions are used to
isolate user mode only?

------
m_mueller
Will this technology allow running a POSIX kernel alongside, like in a VM? Or
will these containers be limited to Windows server software?

~~~
jacques_chester
Containerisation is a new term for OS-level virtualisation.

So in the current meaning of virtualisation, no. It will not let you put a
Linux container on a Windows kernel.

You could run a VM on Hyper-V, and presumably it will respond to the Docker
API, but that just means it's a VM.

~~~
m_mueller
I figured as much, I just wasn't sure. I figured it's possible that at some
point the container host could load another kernel in case a container needs
it. I'm thinking this is where VMware and Citrix should be going in the
future.

------
simonjgreen
Top it off with SSH access to PowerShell and we're all set? (Something
something something dark side)

~~~
antod
Yes please. And native rsync interoperability too? I'd love to banish Cygwin
from my Windows servers.

Although I'm finding the remote-PowerShell-execution-from-a-Linux-machine use
case is now handled quite well by tools like Salt (via ZeroMQ to the minion)
and Ansible (via WinRM). Native SSH would still be good, though, for
tunnelling etc.

~~~
robert_nsu
> And native rsync interoperability too? I'd love to banish Cygwin from my
> Windows servers.

Hear, hear! You and me both. I've never liked Cygwin and do my best to avoid
it when possible.

------
sgwealti
What filesystem backend does Docker use on Windows? Doesn't it require a
copy-on-write filesystem to be efficient?

~~~
johngossman
We're building a new one as part of the Windows Server Container
implementation. Also doing copy-on-write registry and tightening up Job
Objects:
[https://msdn.microsoft.com/en-us/library/windows/desktop/ms6...](https://msdn.microsoft.com/en-us/library/windows/desktop/ms684161%28v=vs.85%29.aspx)
With any OS, containers are actually made up of several low-level components
put together behind a management experience (which will include Docker).
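To illustrate why copy-on-write matters for container storage, here is a toy
Python sketch of layered storage (not Microsoft's design, just the general
idea): reads fall through to a shared read-only base layer, while writes land
in a thin per-container layer, so many containers share one base image cheaply.

```python
class LayeredStore:
    """Toy copy-on-write store: a shared read-only base plus a private overlay."""

    def __init__(self, base):
        self.base = base      # shared, read-only layer (the "image")
        self.overlay = {}     # per-container writable layer

    def read(self, path):
        # Reads prefer the overlay, falling through to the base.
        if path in self.overlay:
            return self.overlay[path]
        return self.base[path]

    def write(self, path, data):
        # Writes never touch the base; they "copy up" into the overlay.
        self.overlay[path] = data


base_image = {"/etc/config": "default"}
c1 = LayeredStore(base_image)  # two "containers" sharing one base image
c2 = LayeredStore(base_image)

c1.write("/etc/config", "tuned")
print(c1.read("/etc/config"))  # tuned    (from c1's overlay)
print(c2.read("/etc/config"))  # default  (base is untouched)
```

The copy-on-write registry mentioned above presumably applies the same trick
to registry keys instead of files.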

~~~
trentnelson
Dang, COW registry is pretty neat.

------
ckozlowski
I always thought a hypervisor on top of Windows didn't make much sense.

But this, this makes a lot of sense. Looking forward to it.

------
nickstinemates
Really happy with how this all came together. Congrats Windows team.

------
brianpgordon
This is too perfect.

[http://i.imgur.com/WOQGknl.png](http://i.imgur.com/WOQGknl.png)

~~~
oaktowner
[http://www.wpbeginner.com/wp-tutorials/how-to-fix-the-error-...](http://www.wpbeginner.com/wp-tutorials/how-to-fix-the-error-establishing-a-database-connection-in-wordpress/)

~~~
oaktowner
Apologies for the negative snark.

------
justinsb
Anyone found the link to the Github PR? Or is this just PR? ;-)

~~~
CoreySanders
Hey Justin, is this what you are looking for:
[https://github.com/Microsoft/docker](https://github.com/Microsoft/docker).

This is where the Windows team is doing the work to add Windows Server support
to the Docker engine. We are working with Docker Inc. to put the PR up once it
is ready for primetime.

Note, I am an engineer in the Azure team...

~~~
justinsb
Thanks for the link - I checked out the master branch and it didn't have any
real diffs on it. Which branch should I be looking at?

~~~
taylorbrown
Hi Justin, we are doing most of our work in a branch right now
([https://github.com/microsoft/docker/tree/jjh-argon](https://github.com/microsoft/docker/tree/jjh-argon)).
As we stabilize the Windows Server Container and Hyper-V Container foundation,
the work we are doing to develop new drivers for the Docker engine will
stabilize, and we'll push it upstream.

-Taylor, PM on Windows @taylorb_msft

