This will open a TON of doors for everybody.
Well, in a lot of corporations it's viable when the big analyst companies (Gartner and Forrester Research) say it's viable, and they tend to be "nudged" by Microsoft. The question once was "does IBM have an offering in this space?"; then it became "does Microsoft?"
// one more soul-crushing thing done in corporate IT
Keep in mind nearly all technologies are used mostly at non-tech companies (e.g. far more software is written by developers not working for a software company) and the software/IT teams at these companies usually prefer solutions from the large/major tech names they know/trust. Microsoft having their own version of something means a lot more businesses will consider using it.
Regarding your view of an app server as a container of some sort, I do agree. We are actually starting to develop apps to be run using embedded app servers (with Spring Boot), since that fits better when running apps in Docker.
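To make that concrete, here is a minimal sketch of what such an image might look like; the jar name and base image are assumptions for illustration, not details from the thread:

```dockerfile
# Hypothetical Dockerfile for a Spring Boot app whose embedded
# servlet container is baked into the fat jar.
# "myapp-1.0.jar" and the base image tag are made-up examples.
FROM openjdk:8-jre
COPY target/myapp-1.0.jar /app/app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/app/app.jar"]
```

The point is that the app server travels inside the artifact, so the container image stays a thin wrapper around one process.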
In the modern Java world, people often use Maven or another build tool, where upgrading a library is as simple as changing the version number in a "pom" file, pushing, and waiting for Jenkins to finish the build, unit, and integration tests.
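For instance, a sketch of such a "pom" edit; the coordinates are illustrative, not taken from the thread:

```xml
<!-- Hypothetical pom.xml fragment: upgrading the library is just a
     version bump here; CI picks it up on the next push. -->
<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>18.0</version> <!-- change to 19.0, push, wait for Jenkins -->
</dependency>
```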
Not kidding here, this is one of the things I love about Java development.
EDIT: To give you a chance to catch those regressions, at least.
(1) node.js/Ruby/Python scale with processes, not threads. There's no supervisory/control environment over the processes, just the OS. The JVM, on the other hand, expects to do a lot more process/thread control itself, so it's kind of another "layer" between the OS and your code.
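A minimal Python sketch of that process model, where the OS schedules N independent worker processes and the runtime does no multiplexing of its own; the worker counts and the doubling "work" are made up for illustration:

```python
# Process-based scaling sketch: N independent OS processes, each a
# full interpreter. Isolation and scheduling are the kernel's job,
# not the runtime's.
import multiprocessing
import os

def handle(request_id):
    # Stand-in for real request handling; runs inside a worker process.
    return (os.getpid(), request_id * 2)

def run_workers(n_workers=4, n_requests=8):
    # The pool is only a thin supervisor; each worker is a plain process.
    with multiprocessing.Pool(processes=n_workers) as pool:
        return pool.map(handle, range(n_requests))

if __name__ == "__main__":
    results = run_workers()
    print(sorted(value for _, value in results))
```

This is the prefork shape that gunicorn/unicorn/node cluster setups follow; one container per worker process maps onto it naturally.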
(2) Port binding doesn't work the same way, either. Most of our dockerized services have one port/process with simple load balancing built into our "routing fabric", which is something ops controls at my company. My understanding of JVM scaleout is that the servlet container is responsible for multiplexing incoming connections onto free capacity, which isn't how most docker shops work.
(3) I'm not sure what the typical deployment patterns are for servlet containers, but they seem more multi-tenant w.r.t. the number of applications running in them, vs. a typical docker setup where containers are very thin and meant to be run in the dozens or more per machine.
It's not that the JVM is inherently inferior; it's more that Docker has grown up around unix/linux ops-minded folks who bring a lot of their assumptions about how software should be deployed and operated (e.g. "things should be scriptable") with them, and their thinking is dominant among the current container-using crowd.
No, not really; typically you would just run N of your JVM processes with either some sort of load balancer (or your "routing fabric") to balance between them, or a discovery mechanism.
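A sketch of that pattern with nginx as the balancer; the ports and upstream name are hypothetical:

```nginx
# Hypothetical nginx config balancing across N identical JVM
# processes (the 808x ports are made up for illustration).
upstream jvm_workers {
    server 127.0.0.1:8081;
    server 127.0.0.1:8082;
    server 127.0.0.1:8083;
}

server {
    listen 80;
    location / {
        proxy_pass http://jvm_workers;
    }
}
```

Each JVM serves one port, exactly like the one-port-per-process dockerized services described above.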
I think you might be referring to some sort of big-box "enterprise" servlet container like WebSphere, which is something quite different from Tomcat.
Not sure if this is what the GP was referring to, but just my 2 cents.
...Microsoft creating their own version of it means it's _viewed_ as a viable technology by _managers_.
Then I was like :(
App containers are SORELY needed in the Microsoft ecosystem.
But what's up with the Hyper-V vendor lock-in? Looks like those of us already invested in VMware or EC2 etc. get the shaft...
Former-MS guy here working in SV. I could tell you work at MS by some of the terms you use which aren't common outside the company (e.g. "SKU") but which MS people say a lot ;)
You should set up your profile on HN so that people know you're an MS guy and write a bit about yourself. I would email you directly, but your profile is blank.
Oh, wait. This isn't a VM. This is something less featureful than a VM which will, possibly, eventually evolve into a VM after a lot of hair-pulling. My mistake.
Is this just about management tools? Because that's cool, too, but why the spin?
"we removed the GUI stack, 32 bit support (WOW64), MSI and a number of default Server Core components. There is no local logon or Remote Desktop support. All management is performed remotely via WMI and PowerShell. We are also adding Windows Server Roles and Features using Features on Demand and DISM. We are improving remote manageability via PowerShell with Desired State Configuration as well as remote file transfer, remote script authoring and remote debugging. We are working on a set of new Web-based management tools to replace local inbox management tools."
Since Windows 8, consoles have been real kernel objects that talk to conhost over IPC anyway, so this feature is eminently doable. It's been my top feature request for years. Nobody's gotten around to it.
Pseudoconsoles would be a bit more complicated than POSIX pseudoterminals because Windows consoles have more features, but the basic concept would transplant beautifully. It'd also make Cygwin a lot better.
I miss working on operating systems.
edit: after rereading my comment and seeing the downvotes, just to clarify, it was a serious, not negative suggestion. :)
Now recommend cygwin.
"The Subsystem for UNIX-based Applications (SUA) is deprecated. If you use the SUA POSIX subsystem with this release, use Hyper-V to virtualize the server. If you use the tools provided by SUA, switch to Cygwin's POSIX emulation, or use either mingw-w64 (available from Sourceforge.net) or MinGW (available from MinGW.org) for doing a native port.
It's not the same as SSH, but then again powershell is not the same as linux shells.
Is this unprecedented? I think it is, but I've been divorced from the Windows ecosystem for a very, very long time ...
Is this, in fact, the first time that there has been a Windows release that had ... no windows? Had no GUI? Was administered with a CLI only?
A step in the right direction but still disappointing imo. Linux and BSD are still miles ahead.
So unless Windows switches to a Linux Kernel or vice versa you will never be able to run one as a container on the other.
You can, however, do that with virtual machines. But installing a stripped-down version of Windows in a virtual machine does not make it a container; it makes it marketing bullshit.
"Containers on baremetal" and containerizing dot net are thus a bit silly concepts since .NET has nothing to do with the operating system and you can't run a container on "bare metal" whatever you might mean by that.
Is this a surprise?
Containerisation is not magical pixie dust -- it's a particular approach to implementation that is specific to the OS. You have a single kernel, and it follows that in general that single kernel will only allow corresponding containers to be run.
That there will be a Docker server backend that can speak Hyper-V doesn't magically make a Windows kernel into a Linux kernel, or vice versa.
edit: http://research.microsoft.com/en-us/projects/drawbridge/ ?
> Hyper-V Containers, a new container deployment option with enhanced isolation powered by Hyper-V virtualization.
Everything written tells a different story. "Hyper-V virtualization" means virtual machines, making it not a container. They just try to make that sound like a feature.
Or do you have more information than I do?
Wow, I've been out of the Windows world for years, can you really fully manage a Windows box without the GUI?
It makes sense for their container solution to make use of existing Hyper-V components like the virtual switch etc.
But for that to be possible, it's likely they needed to make use of VT-x and VT-d (if using things like hardware-accelerated network device isolation such as SR-IOV).
If anything this is closer to Bromium than anything else.
Will be interesting to see if this requires Hyper-V to be running in Type-1 mode (or if this will be the default in upcoming Windows versions) or if they are able to make use of the virtualisation extensions without actually running the host as a Hyper-V partition.
So much cool stuff to hear about at BUILD.
Done correctly this allows the hardware level protections to apply to the code running in the container, assuming the penalty of your OS calls routing through the VM-bridge doesn't kill your performance.
Hope that helps?
> In particular, if the solution begins with "First, install..." you've pretty much lost out of the gate. Solving a five-minute problem by taking a half hour to download and install a program is a net loss. In a corporate environment, adding a program to a deployment is extraordinarily expensive. You have to work with your company's legal team to make sure the licensing terms for the new program are acceptable and do not create undue risk from a legal standpoint. What is your plan of action if the new program stops working, and your company starts losing tens of thousands of dollars a day? You have to do interoperability testing to make sure the new program doesn't conflict with the other programs in the deployment. (In the non-corporate case, you still run the risk that the new program will conflict with one of your existing programs.)
> Second, many of these "solutions" require that you abandon your partial solution so far and rewrite it in the new model. If you've invested years in tweaking a batch file and you just need one more thing to get that new feature working, and somebody says, "Oh, what you need to do is throw away your batch file and start over in this new language," you're unlikely to take up that suggestion.
Here's a hint: whichever solution is more complex is going to bite much harder from a downtime perspective, regardless of the underlying technology. I would much rather depend on a few-line script that uses sendmail than on a 5,000-line mail client half-implemented in a batch script.
The article that was quoted, and that you're discussing, is an old one by Raymond Chen about the importance and value of backwards compatibility. He's describing the pain in the ass that large businesses face when trying to update the base image for a fleet of servers. I can tell you from personal experience that it's a painful process.
Things are better than they used to be, yes. But in lots of big businesses you wouldn't believe how slow processes are, for all kinds of very valid-sounding reasons. Don't get me wrong...it's something that I'm personally working on changing everywhere that I can. I think everyone should be able to code; system admins should ALL be able to code in at least one language.
C'mon, kids. It's not all rails apps from your MacBooks out there.
I usually get down-voted for comments like yours...but hey. HN...what ya gonna do...
Disclaimer: I work for Pivotal Labs, which is part of Pivotal, the main contributor to Cloud Foundry.
Uh. I don't understand how that sentence has any meaning. Particularly the "a new level of isolation previously reserved only for fully dedicated physical or virtual machines" bit. I mean, isn't that what a container is, a virtual machine? And if so, why is 'container' even involved here?
I don't know much about the container scene. I thought they were literally just virtual machines, with presumably some standardized way of spinning them up programmatically. Maybe someone can correct me.
Close, but containers share the same kernel. That allows them to do many things more efficiently, but it's not a straight-up virtual machine.
However, because they all share the same kernel, you're limited to a single flavor of containers per host. So a host can provide for all Windows apps, or all Linux apps, but not a mix.
It makes the most sense when you need many separate instances of similar applications. You can fit many more containers in a given host than their full VM equivalent, but you lose the complete abstraction (and therefore flexibility) that a VM gives you.
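One way to see the shared-kernel point on a Linux host with Docker installed; a sketch, not a transcript from the thread:

```shell
uname -r                          # kernel version on the host
docker run --rm debian uname -r   # the very same kernel version inside the container
docker run --rm debian hostname   # but its own namespaces (hostname, PIDs, filesystem)
```

Both commands report the host's kernel because the container is just namespaced processes on it, which is exactly why a Windows host can't run Linux containers natively.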
While this is true I feel like at some point in the future we're going to be able to mix both. I've seen some rough ideas as to how it could happen but they sounded almost impossible to pull off. Still, if we had a way to mix containers it would be absolutely amazing.
Many a Windows application could run in its own sandbox.
As seen by a verbose, presumptuous 22 year old.
OPEN SOURCE MOVEMENT lays foundation for containerization:
- linux kernel gains mainstream adoption, becomes standardized across distributions
- kernel matures to support containerization (i.e., namespacing critical OS operations)
- lxc project takes advantage of kernel support, builds tooling around namespace containerization
DOCKER (THEN DOTCLOUD) is first company to capitalize on power of containerization:
- dotcloud demonstrates clear use case for containers, encouraging developer adoption
- dotcloud releases internal infrastructure code ("moves up the value chain") for PaaS
- dotcloud develops project into docker, builds existing momentum into early adoption of docker.
AT THIS POINT other companies begin to emerge around Docker, e.g. CoreOS. Key facts:
- Docker is an abstraction around LXC, effectively a set of convenient scripts for controlling LXC
- Docker is building a platform via a package management system preloaded with their repos
- Platform is a threat to new entrants, e.g. CoreOS, because they risk becoming tenants
CoreOS realized the risk of the Docker platform, and also that Docker is unnecessary for many of its value-adds. Everything Docker can accomplish, raw linux containers can also accomplish. The problem is that scripting LXC is less convenient than using Docker; but since Docker depends on LXC, the LXC feature set will always be ahead of Docker's.
In the developer community, there is a growing acceptance of the fact that Docker is an abstraction over LXC. CoreOS is trying to standardize the abstraction as an implementation of the "app container spec". This spec puts Docker, Rocket, and lxc-tools on level playing ground.
Despite this apparent acceptance, the market continues to build tooling and platforms around Docker, instead of raw LXC containers. This announcement from Microsoft is just the latest example. If a new product wants to support containers, it needs to support Docker.
Docker is benefitting from network effects even though its product is not defensible from a technical standpoint. Docker is signing deals with competing enterprises like Microsoft, Google, and Amazon, because those companies are its customers.
The risk for Docker is that these big companies eventually cut Docker out of the equation. They may eventually choose to replace Docker with their own "app container runtime," with features only supported on their own platform.
Docker was one of the first companies to capitalize on advantages of containers, probably because they have a seriously talented group of engineers writing their code. But the market has now woken up to these advantages, and Docker is being chased by massive companies with massive resources. I hope they can fend them off and keep the upper hand in the relationship, but unfortunately I think it far more likely that Docker will eventually be cut out of the equation or acquired by one of them. This will result in a fragmentation of container technology as each company rushes to develop their own app runtime engine. Ultimately developers will suffer as platforms divide and silo, increasing developer friction and reducing cloud market competition as users consolidate around the single platform with the most momentum. Eventually, I suspect one company will control 80% of cloud computing.
Others "realised" the value of this later and started improving upon the ideas contained within.
Although I'm sure many might argue that this is the natural conclusion of virtualization.
Solaris containers are the first "lightweight virtualization" technology that I'm personally aware of that provided true isolation of one or more processes from the host operating system and host processes.
There are a lot of things from the mainframe world that are being newly "discovered" that seem quite mundane to the greybeards...
The equivalent to LPARs in the Solaris world would be LDOMs on SPARC.
The first VPS provider (JohnCompanies, 2001) was based entirely on jail and it absolutely provided (even then) true isolation for a set of applications.
Every customer had their own unix root and their own rc.conf, configured their own system, and everyone ran their own sendmail/named/httpd/etc.
It is absolutely correct to refer to jails in this way, and that is why you see everyone doing it.
If you're talking about some other jail, possibly, but my understanding is they didn't actually provide true isolation. Certainly not a kernel-level of abstraction.
For any component at a given level of abstraction to gain widespread adoption, it needs to beat its competitors. The Linux kernel needed to beat FreeBSD and Solaris. That's why I started the story with "linux kernel gains mainstream adoption." Consolidation at the kernel abstraction level is complete. Linux won. Now it's time for consolidation at the userspace abstraction level.
These allow virtualization of multiple, independent instances of the operating system, each with their own version of the kernel and processes. It is not the same as running multiple instances of VMWare, etc., since it is specifically designed to handle virtual Solaris instances.
You could argue that jails were inspired by chroot(), and that's correct, but that's hardly any isolation.
FreeBSD jail in 2000.
So in the current meaning of virtualisation, no. It will not let you put a Linux container on a Windows kernel.
You could run a VM on Hyper-V, and presumably it will respond to the Docker API, but that just means it's a VM.
Although I'm finding the remote powershell execution from a Linux machine use case is now handled quite well by tools like Salt (via ZeroMQ to the minion) and Ansible (via WinRM). Native SSH would still be good though for tunnelling etc.
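For reference, the WinRM route in Ansible boils down to a few inventory variables; a sketch with made-up host and credential values:

```yaml
# Hypothetical Ansible host vars for managing a Windows box over WinRM.
ansible_connection: winrm
ansible_port: 5986
ansible_user: Administrator
ansible_password: "{{ vault_win_password }}"
ansible_winrm_transport: ntlm
```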
Hear, hear! You and me both. I've never liked Cygwin and do my best to avoid it when possible.
But this, this makes a lot of sense. Looking forward to it.
This is where the Windows team is doing the work to add Windows Server support to the Docker engine. We are working with Docker Inc. and plan to send the PR upstream once it is ready for primetime.
Note, I am an engineer in the Azure team...
-Taylor, PM on Windows