Docker and Microsoft partner to drive adoption of distributed applications (docker.com)
445 points by julien421 on Oct 15, 2014 | 263 comments



This is hands down the best news I have heard in the MS development space in a long long time.

We have 3 apps over 32 servers and 5 environments, and operationally it's like pulling teeth. This has the chance to change everything!


Out of curiosity, why are your applications running on Windows?


Current client is a Windows/.NET shop. There are some parts which use Linux, but that is more infrastructure, all the apps run Windows -- and rather well too!


Since when do applications 'run' operating systems?


Since at least 1999 or so when VMware showed up :)


:) VMWare is an application that runs operating systems, yes.

But a) original poster is talking about a web application; not a virtual machine or hypervisor and b) VMWare itself still requires an underlying operating system-- one could argue that this case is actually an operating system running another operating system with a thin adapter layer in between (i.e., the VMWare application).


> VMWare itself still requires an underlying operating system

VMware is a company, not a product - the product you're [probably?] referring to is ESX[1], which does not require an underlying OS - it's a Type 1 hypervisor (it runs on bare metal).

[1] https://en.wikipedia.org/wiki/VMware_ESX


I was actually referring to VMware Workstation. The company apparently has a habit of putting its name inside all of its product names. That doesn't preclude one from referring to each of its product offerings by a short name.


The point that observer1101 is almost certainly trying to make is that "apps run operating systems" in the sense that deploying an app often entails creating a VM for the sole or partial purpose of hosting that app. The app dictates the OS, not vice versa any more. And observer1101 is right - that trend may have had a minor beginning in 1999, but it has grown very strong since. Azure or AWS is basically the business of running operating systems in VMs on demand for apps.


Yes, I fully understand and agree with all of that. However, I think it is still not accurate to say that apps run operating systems. More accurate to say that apps embed operating systems or are themselves bundled with operating systems (such that they can run on bare metal). And, MirageOS (http://www.openmirage.org/) takes this concept to its logical conclusion.


Haha, my bad. They run on Windows


Clearly you've never heard of MirageOS or erlangonxen. Look them up. The app is the operating system in those.


You're wrong on both counts. I mentioned MirageOS in the other thread, in fact. MirageOS merely links in the minimum portion of the operating system required to support a particular application. I still would not consider that to be 'the app running the operating system'; i.e., if you examine the application's call stack (with a kernel debugger and OS symbols) at any point in time, there's still going to be a lot of 'OS' stuff below the main entry point to the application.


What's your current deployment strategy, a series of PowerShell scripts or something Chef/Puppet based? We had a PowerShell-based setup at my previous shop, but the hardest part was maintaining and deploying different databases to make the tests relevant.

How do you think Docker containers help? (I'm not sure it would have helped in my previous situation; there were too many other base issues to deal with first.)


Git -> Teamcity -> Octopus -> Server.

We also use database migrations to keep everything updated. It's a simple process but works well.

Code and deployment are all fine; it's maintaining server configurations across environments (IIS, permissions, ports, IPs, and firewalls, to name a few; a different issue every time and in every env!). Likely this is more to do with having completely separated operations and development; however, being able to control the app's environment will allow us to deliver more, faster.
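On the migrations point, for anyone unfamiliar with the pattern: a migration runner can be sketched in a few lines (a hypothetical illustration in Python/SQLite; a .NET shop would use a library such as FluentMigrator or DbUp). Each migration runs once and is recorded, so every environment converges on the same schema:

```python
import sqlite3

# Hypothetical migrations: ordered (version, SQL) pairs.
MIGRATIONS = [
    (1, "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)"),
    (2, "ALTER TABLE users ADD COLUMN email TEXT"),
]

def migrate(conn):
    # Track which migrations have already been applied.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_version (version INTEGER PRIMARY KEY)"
    )
    applied = {row[0] for row in conn.execute("SELECT version FROM schema_version")}
    for version, sql in MIGRATIONS:
        if version not in applied:
            conn.execute(sql)
            conn.execute("INSERT INTO schema_version VALUES (?)", (version,))
    conn.commit()

conn = sqlite3.connect(":memory:")
migrate(conn)  # a fresh database is brought to the latest version
migrate(conn)  # re-running is a no-op, so it's safe on every deploy
```

The same runner executes during every deployment, which is what keeps the databases across all the servers and environments from drifting apart.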


If I understand correctly, I think Powershell DSC (Desired State Configuration) might be able to do what you need with regards to windows server config.

See: - http://technet.microsoft.com/en-us/library/dn249918.aspx - http://powershell.org/wp/2013/10/02/building-a-desired-state...
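To give a flavor of what that looks like (a sketch only; the node name, feature, and path here are made up), a DSC configuration declares the state you want and the Local Configuration Manager enforces it:

```powershell
Configuration WebServerConfig {
    Node "WEBSRV01" {
        # Ensure IIS is installed
        WindowsFeature IIS {
            Ensure = "Present"
            Name   = "Web-Server"
        }
        # Ensure the site directory exists
        File SiteRoot {
            Ensure          = "Present"
            Type            = "Directory"
            DestinationPath = "C:\inetpub\myapp"
        }
    }
}

# Compiling emits a MOF document; Start-DscConfiguration pushes it out.
WebServerConfig -OutputPath C:\DSC
Start-DscConfiguration -Path C:\DSC -Wait
```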


Looks interesting, but our problem is more the separation of operations and development teams. It's not ideal, but we're a dev house and clients like to keep their operations in house for a number of reasons (PCI compliance is one of them).

From my point of view OS configuration for app level concerns (web servers, services etc), should sit in the development realm, ready for operational teams to distribute and monitor.

Scripting systems does help, but it is not the final solution when you have mixed, varying talent teams. As a side note, every time I have tried scripting Windows systems I get an extra gray hair!


Does anyone know of a reasonable deployment management solution in open source for .Net? Octopus is perfect, but with my limited budget of $0, it's a tad costly.


Ansible recently started to support Windows: http://docs.ansible.com/intro_windows.html

I haven't used it on Windows yet, but it's good enough on Unixes... And it's quite lightweight and easy to set up (it relies on Python 2 on *nix and PowerShell on Windows).
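To give an idea of the shape (a hypothetical play; check the docs above for which `win_*` modules your Ansible version ships), Windows hosts are managed over WinRM but the playbook looks like any other:

```yaml
# The inventory puts the machines in a [windows] group; group_vars/windows.yml
# sets ansible_connection: winrm plus the WinRM credentials.
- hosts: windows
  tasks:
    - name: verify connectivity over WinRM
      win_ping:

    - name: ensure IIS is installed
      win_feature:
        name: Web-Server
        state: present
```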


It is primarily a Linux tool, but I'm one of the co-maintainers of the SaltStack Salt config management tool. It supports Windows pretty well, and we have plenty of users who use it on Windows for their enterprise server deployments.

It can be used to replace deployment tools like capistrano and fabric on Linux. No reason it can't do the same on Windows if you grok it enough to use it.

http://www.saltstack.com/community/


You can use WebDeploy, here's a walkthrough for publishing ASP.NET Web Applications to AWS with WebDeploy which can also be automated with a Grunt task: https://github.com/ServiceStack/ServiceStack/wiki/Simple-Dep...



Hey, this is Madhan from Azure team.

Great to see this conversation about Octopus. Stay tuned for news about Octopus and Azure.


Octopus has a Community Edition which costs $0. It's limited to 5 projects, up to 10 Tentacles, and 5 users.


ah nice, we looked into Octopus at the time, but it was still in its early development days and was too much of a risk; otherwise, pretty much the same stack. I can see what you mean re: server config; it would be a lot easier with Docker than what I was hacking about with Vagrant.

thanks for the info!


We are running a fairly old version (two years old, haven't upgraded yet) and I would wholeheartedly recommend it!


Octopus is excellent, probably my favorite part of when I previously worked in a .NET shop.


What about Windows 10 actually having a usable console, with features such as selecting text and line wrapping?


Win10 includes a terminal that has text selection and line wrapping?

This changes everything!


What's next? Tabs? (I wish...)


It's called the PowerShell ISE, and it's been around for a while.


And being able to resize the console window...


That's possible to do already: right-click the title of the window, go to properties - configuration and change the size there. It's stupid that you have to "configure" your window size instead of just resizing it, but at least it works.


I assumed the OP meant "resize [by dragging the window borders]".


One can select text in the current cmd window.


I'd love to hear more about your use case. Please don't hesitate to reach out.


I have a question for you, does this alliance relate to this work by Microsoft research?

http://research.microsoft.com/en-us/projects/drawbridge/

Or is it something else entirely?


My name is Madhan and I am part of the Microsoft Azure team.

Regarding Drawbridge, as you pointed out it is an internal research project that we have been innovating on, and that has helped us gain valuable experience with containers. Much of what we announced today was born from the experience that we had with Drawbridge and we are excited to bring container technologies to Windows Server and the Docker ecosystem along with Linux. We think the combination of our own hypervisor for container virtualization and Docker containers for creating a unified deployment and management experience is a compelling scenario for our customers.


Something else entirely. New, native capability being added to Windows proper.


I was also expecting MS to revive libOS... it's a happy surprise that they're going with the Docker ecosystem instead.


Sure, it's a large ecommerce app and I work with a development firm (operations is handled by another company).

Our dev process is quite nice, we have fully automated builds and deployments which are feature branch aware, allowing us to run several features side by side.

On the ops side however, most of the work is done either manually or the scripts are unreliable (again, external company nothing much we can do about it). Setting up new environments can take months. Many times we want to introduce new services but the time it would take to get our infrastructure updated usually outweighs any benefits.

Being able to ship configured containers as smoothly as we can ship code would be nothing short of revolutionary!


Best news in MS space, but probably worst news for open source


All of this is being contributed under the governance of the Docker project. If you believe in the Apache 2 license, then this should in no way be bad for Open Source.


At one level, yes.

At a lower level, this is strengthening a proprietary platform, and might lower adoption of FOSS platforms.

At a higher level, co-opting an open source technology to deliver closed source software may indicate a strategic flaw in our approach to FOSS. On Microsoft's part this is clearly a continuation of 'embrace and extend'.

They'll use docker to provide the functionality and build hype until they can build their own version into the OS by default. Docker on Windows becomes obsolete, and everyone runs apps on Azure.


My $.02 as someone involved in the space, it means:

- Docker isn't going away anytime soon
- All the big IaaS and PaaS players will be re-positioning to incorporate it
- The pin has been pulled on a future acquisition

They are becoming too big to be solo... and I'm thrilled for their team


If this is true, then we're about to become locked-in to Docker's "kitchen sink" model of application packaging, where we're back to the bad old days of applications that can only run on a single platform.

So much for the portability of modern language runtimes (Ruby, Java, Python, etc), or even being able to cross-compile to other platforms.

Docker solves the wrong end of application packaging by essentially packaging up the entire damn global (and non-portable) OS environment.


> Docker solves the wrong end of application packaging by essentially packaging up the entire damn global (and non-portable) OS environment.

While that's the convention, I don't believe that will be the case going forward. I think it's a pretty negative and short sighted response.

I saw a demo very recently of someone creating an extremely barebones container: they were able to trace the exact dependency tree of an application, isolate it, and put it into a Docker container. All that existed was the app plus its dependencies, no userland. That's the future, imo.
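To make that concrete, the pattern looks roughly like this (hypothetical file names; `scratch` is Docker's empty base image): statically link the app, and the resulting image is the app plus nothing else.

```dockerfile
# Build a statically linked binary on the host first, e.g. for a Go program:
#   CGO_ENABLED=0 go build -o myapp ./cmd/myapp

# The image then contains the binary and whatever files it needs -- no shell,
# no package manager, no userland at all.
FROM scratch
ADD myapp /myapp
ENTRYPOINT ["/myapp"]
```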

> So much for the portability of modern language runtimes (Ruby, Java, Python, etc), or even being able to cross-compile to other platforms.

Funny you should mention that, because cross compilation happens in Docker all of the time.

Does that address your concern? If not I'd be happy to discuss further.


> All that existed was apps+dependency, no userland. That's the future, imo.

How do I run that on something that isn't Linux?

The Mac OS X portability story is to run a Linux virtual machine (!).

How is this a sane model when compared to building applications as a self-contained entity?

> Funny you should mention that, because cross compilation happens in Docker all of the time.

I can cross-compile a target for (Mac OS X, Linux, Windows, FreeBSD) and ship it as a self-contained application that runs on any of those systems?


> I can cross-compile a target for (Mac OS X, Linux, Windows, FreeBSD) and ship it as a self-contained application that runs on any of those systems?

Java did that, and everyone moved away from it since support became "least of all worlds" for anything non-trivial.


For server-side code, where Docker operates, this isn't an issue.


I for one welcome Docker's model, whereby I may remove as much of the kitchen sink as I feel I can get by without. Writing for one target is easier than writing for two. Obviously there is still the user/UI layer left out, but for middleware it's a solid win, IMO.


I mean.. I guess? Maybe?

Much better to focus on what we can control - making docker awesome for as many people as possible.


I wonder why this is the case. Microsoft is now teaming up with Docker, Google, and others to make the container ecosystem and orchestration tools (like Kubernetes and libswarm) better.


Microsoft has a history of doing bad things. I don't know that the fear is warranted today, but "embrace and extend" was a real problem for a long time. Hell, Microsoft broke innovation on the web for a decade with IE. That wasn't an accident...it's just how Microsoft do.

There are certainly people within Microsoft who want to cooperate with the rest of the world, and their behavior has been better in the past several years, but I wouldn't be surprised if it's built into their corporate culture to destroy or coopt anything that might pose a threat to them. And, it may be that the only reason they aren't still destroying and coopting on a wide scale is because they have so much less power today. The web is not owned by Microsoft; in the end, they lost that war, despite trying very hard.

Anyway...Microsoft is not Google (and Google is not flawless in their relation to the rest of the web and Open Source). I think it's wise to proceed with caution whenever interacting with anything Microsoft has touched.


At some point, you have to make a leap of faith. I take great comfort in the delivery and momentum of the Azure team inside Microsoft. I'm optimistic that the future of Microsoft looks a lot more like that than it has before.

That said, I could be completely wrong, and it's important to trust, but verify.

As a result, I'm pretty comfortable with the terms that ensure we protect our commitment to Open Source and in ensuring there is no special access given by partners, Microsoft or not. The project is the project, governed under the rules set forth by the community. Even members of Docker, Inc. have to advocate and fight for every change they make, as we do not believe in creating different classes of contribution.

Hope that helps, would love to chat more


The misalignment of interests is deeper than you think. That shouldn't affect this particular arrangement though; good luck!


You may be completely right! If there's something concrete you can point me to around things that are happening now, I'd appreciate that.

Time will tell, for now I'll remain optimistic and continue down the path we've started.


So what you are saying is that you are biased against Microsoft and everyone else should be too?


Yes, I am biased against Microsoft. That bias is based on a history of what I believe is unethical behavior that hurt innovation on the Internet and in Open Source software. Others may not consider their past behavior unethical, or may believe in "forgive and forget" now that they seem to be playing fair, which is fine for those folks. I am not as ready to forget, even if I eventually find forgiveness.

Regardless of what others think of Microsoft, I believe my disdain and mistrust for Microsoft is based on a reasonable understanding of historical facts, rather than some subjective vendetta. I've been a nerd for a long time. Microsoft has (until possibly very recently) never been a good citizen in the tech world.


Give over. Internet Explorer didn't hurt innovation at all. It was more innovative than Netscape by a country mile. IE was the first to pioneer CSS! And DHTML! Netscape pioneered, erm, cookies, I guess? There was no other web browser worth using other than IE or Netscape back in those wild west days of the Web.

Yes there was a period of a year or two when Netscape entered the abyss and it took a while for a new competitor to IE to spring up. But that happened (Firefox) and shortly after Microsoft resumed development on IE too.

You seem to have selective memory. IE brought A LOT of good to the Web. A LOT of good.


Courts in Europe and the United States disagree with you. And, while I'm willing to believe juries and judges can make mistakes, I have to say I don't think the courts did enough...and they went after Microsoft for many of the more minor problems with Microsoft's business.

"You seem to have selective memory. IE brought A LOT of good to the Web. A LOT of good."

From where I'm sitting you have a very imaginative memory. IE was better than the competition because it destroyed the competition using the very unethical (and illegal, according to courts in several nations) tactics I've already mentioned.

Honestly, I'm surprised anyone on HN would have so little knowledge of the history of Microsoft and the web that you would interpret their stranglehold on the browser as a positive thing. I simply can't wrap my head around it, it's so absurd to me.


Thanks for the downvote for disagreeing with your opinion.

The courts took issue with bundling it with Windows, not with whether it was innovative. There were even a few articles in recent years suggesting the courts went too far over the "bundling" case, citing comparisons to vendor lock-in all over the industry now not reaching the courts at all. Just one top-hit cite: http://readwrite.com/2013/11/12/apple-maps-takes-off-cue-the...

The whole bundling-IE case was fundamentally stupid and run by lawyers and judges who had no understanding of the technology industry, the web, and the wild west stage the web was going through. Back then, it was all about enablement. Getting people onto the web in the first place was hard. They had to buy a computer, a modem, sign up with an ISP, possibly upgrade their operating system, install a web browser, etc. But in those days, installing a web browser meant you had to go buy a magazine from a shop just to get the CD-ROM. Microsoft viewed this, rightly, as an impediment. So they bundled IE with their OS. A practice that is still commonplace today in every major operating system that exists, including Windows, Linux, OSX, Android, iOS, and WinPhone.

Without IE bringing DHTML, we would not have the foundations that make modern-day JavaScript, SPA, "Ajax" applications possible today. How is that not fostering innovation? IE 1 to 6 were a technology showcase to show everybody else what the web _could_ be or become. That's one reason why, though they'll never admit it, Mozilla abandoned Mozilla Suite and started work on Firefox.

"So little knowledge of the history of Microsoft and the web"? Er, what? I date back to when you had to install a TCP/IP stack manually in order to get onto the web. What you're actually surprised about is that anyone would dare challenge you on what you perhaps believed would be a widely held opinion, when it's far from that clear cut.


According to this: http://news.bbc.co.uk/2/hi/in_depth/business/2000/microsoft/...

MS was making "illegal market-splitting suggestion"

and

"Microsoft began to use its market power to extract exclusionary deals with many of the largest [PC manufacturers and internet service providers]", threatening Netscape customers such as Compaq that if it tried to replace the Internet Explorer icon with the Netscape Navigator icon "

We can assume they've done many more bad things, since their EEE attitude was proven many times (Bill Gates included; see the leaked email in the antitrust case http://antitrust.slated.org/www.iowaconsumercase.org/011607/... )


You want history? I'll give you history. The bottom line is that Microsoft used its monopoly position to destroy Netscape, who -- really, let's be honest -- made "the web" a thing to begin with.

If Microsoft would have made Internet Explorer a boxed piece of software to sit on the shelf beside Netscape Navigator, and if they had priced it similarly, and if people had voted with their wallets in IE's direction (because it was actually a better product for the money), I wouldn't have had a problem with IE taking over the world. As it was, they bundled it, and it was crap in comparison (at least to start with, and many would say up until recently), but Netscape couldn't compete with free.

THAT'S what a lot of us still remember. It was a perfect, easily-visible example of a lot of business moves they have made, and for which they ultimately -- not only weren't punished -- but actually were allowed to prosper because of. THAT'S why people like me are still sore about it.

They won because of BUSINESS SAVVY and LEGAL moves, not TECHNICAL MERIT, and people in the software development world (and everyone else) have paid the price for it for 20 years. (I can't get Sametime in the web version of Lotus Notes to work unless I use Internet Explorer, and in "compatibility mode", to emulate their non-standards-compliant behavior that everyone was forced to code around, as one immediate example.)

All the touchy-feely "openness" they're trying to foist on the world now is going to have to be everything they hope people will interpret it as for the next THIRTY years for me to believe they really want to interoperate with the rest of the world like Linux always has.


IE was better than Netscape by about version 4/4.5, and it was more standards-compliant by IE6. The Netscape code was awful and Microsoft had two IE teams working in parallel, with the second one working to "componentize" IE and leapfrog Netscape. Which it did.

Netscape always had a free version* available, so it wasn't really down to price. (*though it was usually the buggier beta version.) Otherwise, its plan was to make money on server side, and despite buying in several server companies, it failed.

While Netscape was technically inferior, it is true that Netscape's own marketing and managerial mistakes contributed to its downfall. For example, you could only download it from Netscape, you couldn't customize or rebadge it, and at one point it decided to withdraw it, so you could only get it as part of a Netscape suite.

All of this was suicidal when Microsoft was shipping a free IEDK and allowing computer mags and ISPs to distribute IE.

Finally, in the anti-trust lawsuit, Microsoft WON the browser case 2-1 on appeal. So bundling wasn't actually illegal, as alleged.

I watched all this closely at the time. It is also well documented in several books, including How The Web Was Won, Competing on Internet Time, and the great High St@kes, No Prisoners.


If IE was ever more standards-compliant than its competition, it was for a very short-lived period. It made cross-platform (i.e., normal, ordinary, and real) web development problematic for over a decade.

You can say Netscape imploded, but they experienced a fight-or-flight response forced on them by Microsoft's scummy actions. I never said that bundling was illegal, but that's a purely legalistic distinction. I certainly implied it was unethical.

Before the trial, Microsoft gave about $10K a year in political donations. By the time it ended, they were giving over a million to EACH side. You weren't the only one watching, but not everyone watched the same things.


Microsoft made IE more standards-compliant than Netscape because this was a competitive advantage for Microsoft. It's a common tactic for companies when the dominant supplier has a 90% market share.

Remember "Best viewed with Netscape Navigator"? Netscape wanted to control the web by defining and unilaterally introducing web standards, so Microsoft naturally allied itself with the W3C.

It was short lived because Microsoft effectively abandoned browser development following anti-trust action, because it decided it could/would only ship browsers with operating systems. This turned out to be a bad choice because of the long delay between XP (which included IE6) and Vista. The slow take-up of Vista made it even worse. Microsoft didn't want people to use IE6 or XP, but the market decided otherwise.

All this goes back to the consent decree that Microsoft signed with Janet Reno. This specified that Microsoft was allowed to improve the operating system by adding new features. This more or less required it to build IE into the OS.

Microsoft was already under investigation during this period, and for the decade after the case, Microsoft operated under the close supervision of a US Judge. Whether or not you think its actions were scummy, it's unlikely they were illegal. Indeed, during this time, Microsoft was certainly easier for third parties to deal with than Netscape -- see High Stakes for examples.

As for political donations, this was also inevitable. Netscape couldn't compete in either technology or marketing so it played politics instead. Microsoft had operated on the basis that it wouldn't get involved with politics, and then it got screwed for its neglect. It simply decided not to make that mistake again. As anyone would.

Either way, Netscape was doomed in the long run. It was deluded in thinking that people would pay for a client access program. (You're welcome to provide examples of companies that have made pots of money out of this, but usually the client is free and people pay for the service.)

Netscape was even further deluded when it thought that it could charge PC manufacturers to ship its browser. The reality is that companies have to pay PC OEMs for distribution.

The final delusion was that users would pay a price for Netscape Communicator http://en.wikipedia.org/wiki/Netscape_Communicator in order to get their hands on Netscape Navigator.

You really don't need any conspiracy theories to understand why Netscape lost. It had a combination of arrogance and incompetence the like of which I have not seen before or since.


You weren't on the web back then were you? It seems pretty obvious from this that you weren't. IE was not a bad product. It beat Netscape on both technical and commercial merit.

It's almost like you would rather still be paying for web browser software today. Would you?


I've been on "the web" since Usenet on real Unix machines in college. I was rummaging around gopher servers on Linux. I had a SLIP dial-in account using Trumpet Winsock on WfWG. I had a DSL line hosting a video sharing site out of my house in '96. Heck, I ran IE on Solaris x86 for awhile. And, yet, somehow, it's "obvious" to you that I wasn't "on the web back then?" Uh. Wow. No, I was never on ARPANet, but, sheesh. I don't usually expect that sort of thing from HN, but I guess times are changing.


HN does not allow one to downvote replies to your own comments.

I do disagree with you, in every regard, but I don't downvote over disagreement (and could not possibly downvote you in this instance). Someone else must also disagree with you.


You don't make much sense. You disagree with me "in every regard" and yet you agree with Ken who made an almost identical point to my own.

Microsoft defined the business model of the web browser market as it exists today. I.e. free, widely and easily available, and bundled with the OS. This has not changed. The court cannot prevent free market economics, but they tried to.

Yes Microsoft made Netscape go bust. But Netscape were weak and short sighted. They were like squashing an ant.


Ken made the point that Google and Apple behave the same way as Microsoft in the markets where they have monopoly-like dominance, and I agreed with him. In that instance, I am not talking about browsers. I am talking about the markets in which Google and Apple have monopoly-like power, and the ways they abuse that power. Browsers are a small part of that picture (though, it is telling that all three build browsers, despite their being numerous good browser options on the web...the browser clearly holds a power position on devices, and OS vendors are guarding that position jealously).

Anyway, I think what the justice department went after Microsoft for was a minor part of the wrongs Microsoft committed. Bundling the browser, and effectively prohibiting computer vendors from bundling Netscape, was a nasty trick and it killed Netscape (just as Microsoft killed Lotus, WordPerfect, and numerous others, often through backroom deals to separate those companies from their customer acquisition channels). But, my concerns are less about what happened to Netscape than about what happens to consumers and the market, when consumer choice isn't what decides the outcome.

IE was a proprietary overlay on the web. It was not an HTML browser, it was a Microsoft delivery platform. Having a stranglehold on the browser locked every competing OS out of the web for a decade. I switched to Linux on my desktop in 1995. But, I had to keep a Windows installation around on every desktop I ever owned in order to be able to access bank and government web sites, so I could use IE to access it. That's why IE was destructive, and that's abuse of monopoly power.

And, that's why Microsoft created IE: To embrace and extend the web, to use their existing monopoly in several enterprise markets to subtly take over the new market of the web and destroy the openness that allowed competitors to thrive. And, it succeeded for a long time. For years, we were trapped in this horrible Microsoft-owned world, where the dominant browser was incompatible with every other browser and with the standards and in ways that were intimately tied to the Windows operating system.

So, the courts didn't tackle the most damaging issues, unfortunately, and it took several years for the web to recover from the damage Microsoft caused.


The courts questioned the fact that the browser was delivered as a part of the operating system, not that it was blocking innovation.

Speaking of innovation, it is true that IE was terrible at adopting standards (this was a pain for long, long years and cost tons of money), but on the other hand, they invented XMLHttpRequest, which opened the door to modern web application development.


This is complete rubbish. IE was on the cutting edge of web standards, to such an extent it was having to make up some proprietary technologies itself such as XMLHttpRequest (just one example).

The problem is that a lot of the Ruby/JS/dynamically-typed-language hipsters who shout so loudly these days about IE having poor standards support weren't actually around at the time. Everyone loved IE back in those days, as it was modern and every release had exciting new features in it (both for users and developers).

The fact that IE got some things "wrong" with CSS and had bugs like transparent PNGs not rendering, er, transparently... were just necessary growing pains. This didn't matter in the wild west days of the web as, well, PNGs weren't even that popular back then and CSS was still something web developers were getting to grips with, slowly.

Compare IE4-6 not to later-generation browsers, but to its direct rivals of the day which was Netscape and Opera and not really much else. Compared to these, IE was king.


It's simply not true that IE was terrible in adopting standards: it was very good at it up to and including IE6. That included the rapid adoption of Netscape "standards".

The real problem was that Microsoft stopped browser development for half a dozen years, and that put it miles behind. The rapid releases from IE8 to IE11 show Microsoft trying to catch up, ie being pretty good at adopting standards.

There are plenty of things to blame, including the US Justice Department, the unexpected delay from the Longhorn disaster, and the backlash against Vista. This left XP dominating the PC market, and XP shipped with IE6.

If that's been holding up the web, it's not because Microsoft willed it. Just the reverse: it's been trying for years to kill it off.


In fairness, what MS did in the past is what Apple and Google are doing now, just with larger market share.


Ken, this is an amazing day. I don't believe I have ever agreed with you in a discussion about Microsoft (or any discussion for that matter). But, I can say with complete conviction that I agree with you. Apple and Google (much more so Apple) are behaving badly on a number of fronts, and abusing their near-monopoly position in certain markets.


xhr.


Yeah, who uses SSL/TLS


Microsoft has more than earned bias against them. You're free to form your own opinion, though.


Nick Stinemates @ Docker here. I run BD/Tech Alliances. A lot of blood and sweat went in to this one! I'm here to answer any questions you may have about the announcement, the details, or anything about Docker.


Sorry if this is obvious somewhere, but I have to ask: Will this allow me on windows to pull a linux-based image (ubuntu for example) and run it on the windows platform, or pull a windows-based image and run it on the linux docker platform host?


> Will this allow me on windows to pull a linux-based image (ubuntu for example) and run it on the windows platform

You can do this today with boot2docker (running a linux VM behind the scenes)

> pull a windows based image and run it on the linux docker platform host?

I suppose the opposite of what I said could be true using something like KVM, but not a use case we are seeing a lot of right now.


If this is all a wrapper around boot2docker, everyone will be very disappointed... I highly doubt this is what the original question was about.


It is not (a wrapper around b2d); I tried to answer the specific question asked instead of guessing what the question is.

The goal is:

Package your Windows app in a docker container, use same tooling you would otherwise use to deploy to a docker engine running on a Windows host

Package your Linux app in a docker container, use same tooling you would otherwise use to deploy to a docker engine running on a Linux host.
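
In other words, the CLI verbs stay identical; only the base image and the host platform change. A rough sketch of what that would look like (the image names here are illustrative placeholders, not real repositories):

```shell
# Illustrative only -- these image names are hypothetical.

# A Windows app, built and run against a Windows Docker host:
docker build -t myco/iis-app .        # Dockerfile starts FROM a Windows base image
docker run -d -p 80:80 myco/iis-app

# A Linux app -- same verbs -- against a Linux Docker host:
docker build -t myco/web-app .        # Dockerfile starts FROM e.g. ubuntu
docker run -d -p 80:80 myco/web-app
```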


Right... the problem was that you started seeing people get confused about running Linux containers on Windows and vice-versa. Hence all the questions around that... I guess that wasn't as clear from the announcement as it could have been. One of the main reasons is probably that the people that have been using Docker are Linux folks and aren't necessarily as familiar with deploying Windows-based applications.


This is great feedback, thank you.


I'm still unclear as to whether the linux apps can be deployed to a docker engine running on a windows host, and, vice-versa, whether docker windows containers can be run on a linux host. The announcement seemed to be clear that both of these are the goal. But your GP post https://news.ycombinator.com/item?id=8458603 causes me to doubt.

Is there a missing comma in each sentence of the goal? It seems like there should be one after "use" or after "engine":

1) [after "use"] Package [on a Windows client] your Windows app in a docker container, use same tooling you would otherwise use [on a linux client], to deploy to a docker engine running on a Windows host Package [on a Windows client] your Linux app in a docker container, use same tooling you would otherwise use [on a linux client], to deploy to a docker engine running on a Linux host.

2) [after "use"] Package your Windows app in a docker container, use same tooling you would otherwise use to deploy to a docker engine [running on Windows or Linux], running [the packaging step] on a Windows host Package your Linux app in a docker container, use same tooling you would otherwise use to deploy to a docker engine [running on Windows or Linux], running [the packaging step] on a Linux host.

I added the implications I understood to highlight the difference the comma placement makes. If there is no comma it's pretty ambiguous / confusing to me on which platform the docker engine is running. I believe from this post of yours that I have misunderstood the announcement, and that windows apps will be able to be made into docker containers that can only run on windows docker engines.


No, you will not be able to deploy a Linux app onto a Windows container, or vice-versa.

Containers share a kernel, so this would be impossible. However, things like boot2docker (a 25MB Linux distro) make it really light-weight/easy to deploy into a VM and run that way.
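
Concretely, the boot2docker path today looks roughly like this (the VM address below is an example; the real one is printed by `boot2docker up`):

```shell
boot2docker init    # create the tiny Linux VM
boot2docker up      # boot it (VirtualBox behind the scenes)
export DOCKER_HOST=tcp://192.168.59.103:2375   # example address
docker run -it ubuntu /bin/bash   # runs on the VM's Linux kernel, not the host's
```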


> I believe from this post of yours that I have misunderstood the announcement,

I'm sorry to hear that, we will work to do better.

> and that windows apps will be able to be made into docker containers that can only run on windows docker engines.

This is correct. There's also a thread on how, to the user of docker who just wants to `docker run` something, the distinction doesn't really matter in the end.


Is the mix of that true as well?

Package your Windows app in a docker container, use same tooling you would otherwise use to deploy to a docker engine running on a Linux host and vice versa? If not initially is that an ultimate goal?


You can do that with VMs + Docker today.

There are no current goals to make Linux executables run natively in windows and vice versa.


I'm not sure whoever wrote the MS press release really understood this. It seems somewhat vague/misleading on the issue:

"Docker will be able to use either Windows Server or Linux with the same growing Docker ecosystem of users, applications and tools." (emphasis mine)

"bringing together Windows Server and Linux"

"making available some of the best images for Windows Server and Linux."

http://news.microsoft.com/2014/10/15/DockerPR/

Of course, it's hard to talk definitively about something that doesn't exist yet.

UPDATE: It makes more sense after reading this comment: https://news.ycombinator.com/item?id=8460164


We understand it, but it's really hard to explain briefly. So many people assume Docker == Linux Containers. Throw in virtual machines and start talking about running a Linux app in a Docker container in HyperV on Windows and it only really works with pictures. And that's before you talk about multiple containers running across a public cloud provider. Demos will help.


I apologize for the confusion, thanks for pointing out exactly what caused that. We'll get better over time!


Ah okay thanks. The partnership sounds great and I know it's really freaking challenging but I hope in the future to see:

1. Native Mac OS X support (so Mac apps can also be in containers)

2. Being able to mix container operating systems with other host operating systems.

If that ever happens then operating systems and their versions would pretty much no longer matter; you could run anything anywhere without compatibility issues. I feel like this is something that has to eventually happen no matter what I'm just always curious what form(s) it will take.


> 1. Native Mac OS X support (so Mac Apps can also be in containers)

Are there Mac servers out there you want to run apps on? Or just consumer apps you want to run on your mac? Curious about the use case!

> Being able to mix container operating systems with other host operating systems.

VMs are that abstraction today. I think we can do better going forward, but still a lot to do with what we currently have planned.


The use case for our startup is: we have lots of compute- and graphics-intensive services being created and tested on Macs by developers, targeted for Linux Docker instances in the cloud. It would be nice to be able to run these same containers natively on OS X rather than requiring developers to bring up a local hypervisor (e.g. VirtualBox) and a local virtual Linux instance. This would help with both dev and test. Additionally, sometimes our engineers would like to be able to run compute jobs locally and merge the results back into the cloud-based system.


Get your developers to run Linux natively?


That sounds like a difficult proposition without your target being able to run Mac binaries.

Having some sort of translation that's not a VM as a part of the distribution mechanism doesn't make much sense, and breaking the portability of Docker comes at a significant cost.

I struggle to see how to make that work cleanly.


I think you're interpreting his words too literally. Nothing that gets deployed to a QA/staging/production system should be built on a developer's workstation in the first place, those specific containers need to come off a build server. The binaries on the developer's workstation would be built on/for OS X, and the binaries in QA/staging/production on the (Linux) build server for (Linux) targets.


> Are there Mac servers out there you want to run apps on? Or just consumer apps you want to run on your mac? Curious about the use case!

Maybe they haven't, but I feel like Apple has abandoned Mac OS X Server, so no server apps at least for me in that case. I do think consumer apps would be a pretty cool use case, though.

> VMs are that abstraction today. I think we can do better going forward, but still a lot to do with what we currently have planned.

Roger; thanks!


I second the opinion regarding OSX servers; not only is Xserve long dead but the "server" component of OSX is an application that bundles mostly open source applications with a GUI, if I'm not mistaken?

This appears to be aimed at SOHO users and not racks and racks of servers in a data centre.


Is there a public roadmap for currently planned features? I know there was discussion to make Docker more community driven, but I'm not sure if that went anywhere?

I saw this... https://www.docker.com/community/governance/ but couldn't find any more information...


Docker is already pretty community driven. But given the level of impact, we think we need to make it even more robust and have some proposals.

The first DGAB meeting is on 10/28, where we will propose a different way of working based on all of the feedback.

Stay tuned. We'll have a lot on this in the upcoming weeks.


I'm also a bit confused. Let me give you a specific scenario: I have a Windows .NET application that runs on Windows 8 (no GUI, some server-side stuff). Can I package this up as a container and run it on a Linux machine that has Docker?


That is not currently a goal. It's the goal that you can tell docker - run this application, and it will find (or create!) a suitable host to run it on.


Of course not.


It's definitely not a wrapper around boot2docker. It's the kernel work etc. to enable native Windows containers running on Windows Server. No VMs need apply. That said, we're also working with Docker on the future of the boot2docker concept so that it's easy to work with Linux-based Docker images from a Windows laptop, but that's just tuning the experience.


Excuse the ignorance, since I'm fairly new to Docker. I'm excited about what Docker allows for during development. Will this partnership eventually lead to being able to run containers on my Windows workstation without needing Ubuntu running in a VM?


Yes - windows images on windows hosts, powered by Docker and all tools using the Docker API.


Very cool! THANKS!

ETA? :)


ETA: when the next version of Windows Server is released.


Aren't you concerned that technical cooperation with a huge company like MS will slow down your progress?


No - if technical cooperation with a huge company slowed us down, we'd be dead in the water. We've partnered with IBM, Google, Red Hat, VMWare.. lots of big names, lots of great ambition.


Does this mean that we can now easily containerize Windows applications allowing them to be easily moved from machine to machine?

This could be huge for DR.


That's the promise. It's not done yet, we are just getting started.


What's DR?


Disaster Recovery


Will the MS bits be open-source? Will it be in Go?


> Will the MS bits be open-source?

The MS-bits they're contributing to the Docker project will be contributed in the same way everyone does - under the Apache 2 License, in the open, etc.

> Will it be in Go?

It will be contributed in the main docker repo - github.com/docker/docker

As a result, it will be a community/maintainer decision what language it's written in, but obviously we're heavily biased toward Go.


Nick is right. Our default answer will be to use Go to be consistent with the rest of the Docker project. But we'll use whatever language is right for that part of the project.


Which is a .Net language, just saying.


Of course. It will be in C#. Far too much work to be done on Go's Windows support to use it, and no advantage to doing so.


Exactly. I'm a firm believer that if you're doing orchestration/automation on Windows and you're not using .NET-based technologies, you'll wish you were eventually (coming from a HEAVY Chef-on-Windows user). Plus, there are PowerShell modules to consider, and PowerShell is .NET-based. And just the general ecosystem to consider, especially Azure itself. And etc, and etc. Though some F# wouldn't be a bad choice, I suspect the prevalence of C# will win out.

Just makes this "partnership" a bit more complicated? Or at least, not what it seems on the surface. If the containers aren't compatible between Linux and Windows, and the tooling will end up .NET in the long run (which I firmly believe), then the only common ground will be some semblance of API compatibility?


Can you comment at this point on what containerization on Windows will look like? A tree of processes, a lighter, file-based HyperV, or something else? What effect will this have on the filesystem layout? Will native ACLs be supported or mapping to Unix permissions? Has a timeline for initial code in github been announced?


John Gossman from Microsoft here. Windows Server containers use the approach sometimes called Operating System Virtualization, just like Linux containers, Jails, etc.. The Wikipedia article is a pretty good summary: http://en.wikipedia.org/wiki/Operating_system%E2%80%93level_...

They are process based. They do not depend on HyperV and can run on Windows Server on bare metal or inside any hypervisor or cloud.


Can you compare this to AppV?


I'll take a stab at it (no relation to Microsoft). As described in the following link, App-V does streaming of client app code from a server, and appears very client-oriented. The technology behind it might not be, though: http://blogs.technet.com/b/gladiatormsft/archive/2013/11/05/... (describes the registry key with exceptions to the underlying namespacing of handles, I think?)

This appears to be called Named Kernel Object Virtualization (a.k.a. the VObjects Subsystem) from the title of the post. But again, client-side oriented. I expect Docker integration to be server-side oriented code for this.

Oh, and if anyone's saying Microsoft's late to the party: there have been similar attempts on Windows platforms since at least 2006 using such techniques: https://www.usenix.org/legacy/events/vee06/full_papers/p24-y...


Disclaimer: I am not speaking for Microsoft here, but my understanding of the facts that have been released.

> Can you comment at this point about what containerization on Windows will look like.

I cannot directly.

> A tree of processes, a lighter, file-based HyperV, or something else?

Hyper-V integration will exist (and is currently being worked on in the open for Docker), but the container technology is fundamental to Windows, just as Hyper-V is. As far as I am aware, it is not a Hyper-V descendant.

> What effect will this have on the filesystem layout?

Each container will have its own.

> Will native ACLs be supported or mapping to Unix permissions?

Can't comment on these details, sorry!

> Has a timeline for initial code in github been announced?

It has not.


I guess Windows Server already has some kind of "containerization" [1].

Will Docker be shipped as a part of a Windows Component & a paid service from MSFT?

http://www.microsoft.com/en-us/windows/enterprise/products-a...


That's part of the announcement: Windows will be updated to include container primitives which Docker will use.


All versions of Windows? I wouldn't expect their entry-level offering to, but having it on the Pro versions would be nice.


Are you working with Apple to bring native Docker to that platform?


I don't know that Apple would ever feel that pressure. They really just don't cater to developers, if you look at the issues with the Mac App Store for an example.

That said, it would allow for a more consistent developer experience for application/server development towards how things are deployed with containers on Linux. The biggest advantage OSX really has is that you get a clean UI for when you are simply a user, and a unix environment with the same tooling you use to build applications that will be deployed on Linux.

My true hope for this was actually to see something beyond boot2docker. And, oh, I really wish that HP didn't disable the virtualization support on their lower-end desktops (what I'm currently stuck with at work). VMware Workstation is really the only option for me, running Ubuntu under that for *nix dev/testing.


I really hope the Microsoft partnership does put some pressure on Apple to provide this support as well. But it just feels like something that isn't really their style. While having software components appear like "just another container" is great for developers, it's very anti-Apple. They don't view the iPhone as "just another smartphone". Everything Apple is standout/special in some way. I hope I am wrong though because I love Docker and would welcome this with open arms.


Outside of the frame of talking about partnerships but talking as an apple/Docker fan, Docker running on ios would be awesome


you mean on osx, right?


Nope, I meant ios :)


I suppose "open arms" is now a pun then :)


:)


I can't comment on unannounced partnership or details, sorry!


will Windows base images be made available on the Docker Hub? How will that work?


> will Windows base images be made available on the Docker Hub?

Yes!

> How will that work?

The same way it works now. Find an image you want to run, and docker run it.
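
For example (the Windows image name below is a placeholder; nothing has been published yet, and you'd need a Windows Docker host to run it):

```shell
docker search windows                           # once Windows base images land on the Hub
docker run -it someorg/windowsservercore cmd    # hypothetical image name
```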


so Windows licences will be free?


I suspect that you'd still need a Windows Server license to start off with. It may or may not need a VM under the hood, but that's a MS thing to figure out. They certainly couldn't license a container the same way as a full Windows VM or bare metal install.


I'd expect that, since the idea is having your Windows container running on a Windows host, each version of Windows Server will have a "this edition allows X number of containers" limit. This would line up with how they treat VMs. Let's hope the number is 5x the current VM limits.


> Let's hope the number is 5x the current VM limits.

Why, let's hope they are unlimited. :)


Great question.

I'll be honest and say I don't have an answer. However, we're working on deriving one and you'll know as soon as we do.


I thought the whole point of containerization was that you ran apps in a container, but the container doesn't contain an operating system. (You have several containers on one instance of the OS, not several VMs, each with its own OS. That's how it works in Linux, anyway.)


Also announced by Microsoft:

http://news.microsoft.com/2014/10/15/DockerPR/

"Microsoft Corp. and Docker Inc., the company behind the fast-growing Docker open platform for distributed applications, on Wednesday announced a strategic partnership to provide Docker with support for new container technologies that will be delivered in a future release of Windows Server."

I strongly suspected that Windows Server vNext would have some sort of 'container' support after the wild success of Docker.


Frankly I don't see this working unless Windows Server has MUCH better support for IO and Network prioritisation than it does today.

You can set a process's CPU scheduling priority, and you can also set it to a "background" priority which reduces both its CPU and IO to the lowest, but you don't really have fine-grained control over how much IO/network a process gets (irrespective of CPU).

This means on Windows in general it is very easy for a single process to run away with all of the machine's IO and there isn't a lot you can do about it except killing the process (I've seen backup clients, anti-virus scanners, etc do this).

It wouldn't be as big of a problem on Linux because first off the IO scheduler seems to be better, but even if it wasn't you can go in and manually configure a process's maximum throughput in a bunch of different ways.
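
For instance, with the (v1) blkio cgroup you can cap a process's read bandwidth from a given device. A rough sketch; the device numbers, paths, and 10 MB/s figure are just examples, and this needs root:

```shell
# Create a cgroup and cap reads from /dev/sda (major:minor 8:0) at 10 MB/s
mkdir /sys/fs/cgroup/blkio/slowapp
echo "8:0 10485760" > /sys/fs/cgroup/blkio/slowapp/blkio.throttle.read_bps_device
echo $PID > /sys/fs/cgroup/blkio/slowapp/tasks   # move the process into the group
```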

If Windows didn't support this then what is stopping a single Windows-Docker container from stealing all of the system's IO or network capacity?


The recommended way of doing this is to use Hyper-V and multiple VMs. Windows VMs start very fast on Hyper-V.

That gives you virtual LAN bandwidth control, "Storage QoS" (IOPS limits), partitioning, affinity and CPU limits. Not only that you have virtual SAN and network fabric. It's pretty awesome!

If you utilise it by deploying your app to a .wim file and then use wim2vhd and fart it at a Hyper-V host, docker already exists on Windows, so I'm not sure what all the fuss is about.

Yes we do this.


Because running a Hyper-V virtual machine is significantly more resource intensive than running a true Docker container.

Hyper-V and similar are great. They aren't as good as containers however. You gain more security with Hyper-V but even assuming full hardware support it is still an expensive thing to be doing.

On a normal desktop machine you can run maybe 2-4 virtual machines (depending on a lot of factors). On that same machine you could run 4-10 containers, each doing one "thing" and one thing only.


> Because running a Hyper-V virtual machine is significantly more resource intensive than running a true Docker container.

I suspect that no one really cares. CPUs are pretty cheap. The problem that docker solves is "0 dependency single-step installs, and reliable rollbacks" not "VMs are too slow".

This is true on Linux too, btw- linux containers are still pretty feeble from a security perspective, without draconian seccomp sandboxing.


No: the problem is that VMs are too slow, and therefore you need to spread them out over more machines, and therefore VMs are too expensive. Otherwise, why not just make every container a full EC2 instance?

All the analogous tech to Docker has existed in the AWS+OpenStack ecosystem for a decade; it's just that nobody is willing to pay $50/mo per container. An entire huge company (Heroku) lives off the profit margin that is created by paying for VMs, then selling containers at VM prices.


I agree with both of you, for different reasons.

The infrastructure benefits are completely there in terms of driving up utilization/density of the hardware you're buying. I feel (and know!) there will be a lot of optimization on making this happen.

On the other hand, there are multi-tenancy security concerns for which VMs still offer better isolation. That's ok, because Docker is not about virtualization, it's about a platform for distributed applications.

Thus, finding the right/secure/optimal/? place to run your container should be straightforward and intuitive. That's the direction we're looking to take the industry.


> the problem is that VMs are too slow, and therefore you need to spread them out over more machines, and therefore VMs are too expensive

I think what you really mean is that you can't control the overcommit policy on AWS. Otherwise, if VMs were too slow, why would you be running on AWS at all?

> Otherwise, why not just make every container a full EC2 instance?

A lot of people have full EC2 instances running a single container. Or full machines. If you want to upgrade your database software, atomic, full system rollback looks pretty enticing, especially if your database machines are huge, hugely expensive, and you don't have that many spares.


There's no reason to have full EC2 instances running a single container—EC2 instances are containers. (And Docker images are AMIs, and fig.yml files are CloudFormation templates, and...)

You can treat EC2 instances exactly the same as you treat docker containers—attaching EBS volumes to them in the way you'd attach data volume containers, attaching ENIs like you'd publish ports, etc. You can get exactly your "atomic, full-system rollback" just by having a CF template with a template parameter for the DB AMI to start in an Autoscaling Group, and then pushing a change to that variable. (Effectively, this gives you the same semantics as using Heroku's "config:set" CLI subcommand.)
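
As a sketch of that flow (the stack name, parameter name, and AMI ID here are made up), rolling forward or back is just a parameter change:

```shell
# Roll the database tier to a new AMI -- or back to the old one -- atomically:
aws cloudformation update-stack \
    --stack-name prod-db \
    --use-previous-template \
    --parameters ParameterKey=DbAmi,ParameterValue=ami-0123abcd
```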

But people don't do things this way. Why? Because the different pricing models create a different system of incentives around the two ecosystems. EC2 instances are thought of, fundamentally, as machines, rather than as application containers. There are adapters meant to make their usability for such more clear (e.g. Elastic Beanstalk), but in doing so they expose the relative absurdity of the VM pricing model.


I disagree. With generation 2 VMs and dynamic memory turned on, I run 24 VMs quite happily on my 32GB machine. My 8GB X201 runs 6, plus Visual Studio and SQL Server, well. I haven't tried any more than that yet.

As for apps, IIS makes a pretty good container system if you want to use it for that sort of thing.


Unfortunately you need a Windows license per VM.

I guess that at many places you'd just buy datacenter edition, but that's a bit too much for small businesses.


If you're using Hyper-V, you get an unlimited number of guests if you've purchased Windows Server Enterprise Edition. Last I checked, it's not even that expensive of an upgrade.


Or just use Windows Datacenter Edition on your hypervisor. Most cloud providers have that as an option. It's like $200 a month and you can run as many VMs as your hardware will support.


Yes we have datacentre edition and VL stuff. Small business: Use azure.


One of the most encouraging things I've heard so far. Litmus test for success? An application made up of containers running on both Windows and Linux.

That's when I knew we had a great potential to partner.


Does this mean Docker support won't come before the next drop of Windows Server? i.e. current Windows platforms won't be Docker-compatible?


Docker support already exists via boot2docker.

That said, yes, the current plan is that this kernel/service enablement comes with a future version of Windows to be able to run Windows Containers.


Reading the comments, this is the official announcement of the bifurcation of the docker ecosystem. This completely shatters docker's original promise of a universal application container. Now we will have standard gauge cars that won't run in the South. Maersk containers that can't be shipped on MSC.

Congratulations to MS though, I think this is a good initiative. Not sure why you even need a partnership with docker TBH as they didn't create the underlying OS technologies that make containerization possible on Linux.. But, with the acquisition of Mojang it is apparent MS is placing a lot of emphasis on acquiring mind share.


They didn't create the underlying technologies for containerization on Windows, either. Docker is just providing compatibility with the new container API in Windows Server so that you can use the same tools regardless of the OS. It's no different from what they already do, which is wrap the LXC API.


Theoretically I'm guessing this could enable multi-platform gaming? Ex: "Download this docker image, and play our game: Windows or Linux, or boot2docker Mac!"

Secondly, a native Darwin Docker server would be killer.

The mobile implications could be huge if this made it out of Windows Server.


Very very doubtful. You will not be able to run ELF (linux format) executables on windows any more than you'll ever be able to run PE (windows format) executables on Linux.

You'll have 2 hubs, the "windows hub" and the "linux hub".


> You'll have 2 hubs, the "windows hub" and the "linux hub".

This is incorrect.

That said, it's true that Windows won't all of a sudden gain the ability to run ELF.


Well I meant that in the sense that you can't run a windows app on Linux and you can't run a Linux app on windows. Even if it is the same hub, you'd want to filter "by windows" or "by linux" containers is what I meant.

Running boot2docker on windows is sort of cheating in that it is a virtual machine running docker apps and this effort appears to be a native dockerd running on Windows. Thoughts?


I think the UX is slightly more nuanced.

From any machine, you should be able to `docker run` any application, and docker will be smart enough to find (or create!) the best place to run it.

The base pieces of the lego are currently in flight in Docker proper to enable this, with great tools built around the Docker API for more advanced topics (like clustering and scheduling)

> Running boot2docker on windows is sort of cheating in that it is a virtual machine running docker apps and this effort appears to be a native dockerd running on Windows.

I wouldn't say it's cheating. If your target is all linux machines, you need a cheap, local, efficient way of developing and testing those applications. Boot2Docker is an option (but not the only one - you could use a cloud service, internal vm infra, etc. etc.)

Love the dialogue - keep it coming.


Ok so I read that as you saying:

If you're running docker on windows and you try to run a linux container, it will magically start boot2docker and start your linux container inside that vm if you are running on a windows box.

Now if that is true, my question would be: How do you boot2windows for windows docker images when you can't freely distribute a windows vm?

As I think both being able to support both would be amazing, I don't see how it fits into the current scheme of things.


> If you're running docker on windows and you try to run a linux container, it will magically start boot2docker and start your linux container inside that vm if you are running on a windows box.

Maybe! Or you've used something like the new `docker hosts` feature, which could create a new instance for you on any infrastructure provider. Or you could be pointing to a boot2docker node, etc.

> Now if that is true, my question would be: How do you boot2windows for windows docker images when you can't freely distribute a windows vm?

I don't have a good answer for this, as we do not have a boot2windows product or announcement (I assume this is the opposite case, if you have a linux machine and want to run a windows image)

My guess instead is you'd be pointing to a set of infrastructure that can find (or create) Windows Docker Hosts and it would run there.


I'm excited to see what the future brings! Can't wait to see some of the service discovery stuff like you can build with etcd and consul make it into libswarm.


Not libswarm, docker proper.

See https://github.com/bfirsh/docker/tree/host-management

There's also a relevant thread on the docker-dev mailing list


So Windows will be able to run Docker containers built using a Linux image, and Linux will be able to run Docker containers built using a Windows image?


Using a VM, yes. But nothing fundamentally changes about their ability to execute cross platform natively.


from reading the other replies in this thread (and the above) it seems that the next windows server release will have the ability to run both linux and windows containers.

that sounds promising...


wrong, read it all again!


You can do the latter with Wine. Unfortunately, there's no "Line" equivalent on Windows.


I'd love to partner with Steam (or:Origin) on something like this.

I actually got unofficial approval to push my League of Legends linux image to the hub, which is an absolute PITA to get installed right. I may do that; I don't play enough to maintain it.


Could you try reaching out to GoG.com as well? They have been pushing Linux gaming lately, but I have run into uneven support for their titles. It seems that if the dev gives them a tarball, that's all they use, while other titles they package themselves. I have been having a tough time getting Shadowrun Returns up and running and it would be awesome to just grab the image.

What do you mean by get approval? Isn't LoL F2P? Did you have to get approval from the creator or Docker?


> Could you try reaching out to GoG.com as well?

Great idea. I will definitely reach out.

> What do you mean by get approval? Isn't LoL F2P?

Just because it's F2P doesn't mean I have a license to distribute it.

> Did you have to get approval from the creator or Docker?

Docker is an open source project. So, in effect, all contributors (including me!) are creators of Docker :)


email me, johnv at valvesoftware.com, and we can discuss it


Will do!


there are already steam/csgo docker images.. what would a partnership change?


I didn't know that. Pretty cool.


We're only looking at Windows Server for now


Would be awesome to have Microsoft Office as Docker container.


Wouldn't it? Also imagine, every time you do a build inside of something like Visual Studio, being able to test and distribute that in its own container.

The tangential benefits/integrations start to get really interesting.


yes this would be interesting. We have application servers with multiple windows apps running on them with different versions of the .NET framework. I had to pull teeth with our SysAdmin to get .NET upgraded on the server. It would be nice to have a container for each so I (as a dev) could run each app with its own needed dependencies instead of having to install them on the server.


This is already happening in .NET land independently of this announcement. ASP.NET vNext with Katana (1) completes the move away from a prescriptive, monolithic framework toward individual NuGet packages. I.e., when you deploy your app you deploy your assemblies, and into that folder goes everything needed to run the app independently, up to and including the .NET core assemblies or BCLs.

Your app is a folder and that folder contains everything needed to run the application even up to the point of self-hosting with OWIN - IIS is not needed anymore!

(1) http://www.asp.net/aspnet/overview/owin-and-katana/an-overvi...


So the point is Windows Server is going to be a non-CLI Docker manager here, to make it easier to deploy Linux server apps on Windows? Or am I missing something here?


Microsoft is adding the raw ability (think: cgroups and namespaces ala Linux) in to the Windows Kernel to enable strict process isolation/sandboxing/etc.

Docker will sit on top of that and provide the same client API you're accustomed to, but run Windows based images natively.


I would love to see first class support for GUI and 3D accelerated applications with docker. Right now it's not easy to get it setup, and it's not entirely clear to me what the various performance overhead is of different solutions.


This work is completely Windows Server focused.


I am sure MS want $ for this. How are they taking care of licensing issue?

Office 365 client only?


What would a base Windows image look like, just a PowerShell prompt? How would you use the build file to install your dependencies without a package manager?


From what I've read (I'm no expert in this space) there is Chocolatey [1] available now for package management and Microsoft's OneGet on the horizon [2].

[1] https://chocolatey.org/ [2] https://oneget.codeplex.com/
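
Purely speculative, but combining these answers: once a Windows base image exists, a build file using Chocolatey might look something like this. The base image name and install commands are all guesses, since nothing Windows-side has shipped yet:

```dockerfile
# Hypothetical: no Windows base image exists yet, so the FROM target
# and the exact install incantations are assumptions.
FROM windowsservercore
RUN powershell -NoProfile -Command \
    "iex ((New-Object Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))"
RUN choco install -y git
CMD ["powershell"]
```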


Ok, so Windows Server 10 Core with OneGet would make a lot of sense for a base image. Thanks for the answer.


Likely exactly how Windows 2008 R2/2012 Core installation looks at initial installation time. Just a PS prompt and minimal additional software.

Windows already has a package manager called, creatively, PKGMGR. It is already used to install packages from the command line in core edition (or regular edition).


> What would a base Windows image look like, just a PowerShell prompt?

I can't speak for Microsoft, but there's been a ton of thought put into this; details are subject to change as they start to contribute to the project.

That said, I imagine each container would need those base services (svchost.exe, as an example), its own registry, etc.

> How would you use the build file to install your dependencies without a package manager?

Lots of examples (in child comments), but more will be announced here soon.


Fetch installer and run it?
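
Right; many Windows installers support silent-install flags, so a build file could fetch and run them directly even without a package manager. A hypothetical sketch (base image name, URL, and installer flags are all assumptions):

```dockerfile
# Hypothetical sketch of the "fetch installer and run it" approach.
FROM windowsservercore
RUN powershell -Command \
    "Invoke-WebRequest https://example.com/dep-setup.exe -OutFile C:\\dep-setup.exe"
RUN C:\dep-setup.exe /quiet /norestart
```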


John Gossman from Microsoft here. I'm an architect on the Azure team and have been working with Nick and others from Docker. Happy to answer questions.


What does the future of Docker deployment look like on Azure? Will it be more Heroku-like? Can I just provide a build file or a link to a Docker repo image and have a server running? When I push an update to an image, will my servers update? Can I control how many instances and resources each running Docker image will use? How do I orchestrate several instances that need to work together?


Hey, this is Madhan from Azure team. Thanks for your comment.

All great ideas, and we are currently thinking through exactly the same ideas as well. Which of these would you like to see happen first? What would be most important to you?


Great move for MS.

Not sure this is a good idea for Docker though. Doesn't it mean they lose their focus on Linux? Seems to complicate things a lot, and potentially introduce conflicts when prioritising what to do next. Simple is good. Serving a closed source operating system looks unwise, given the nature of their business.


I completely understand the feeling about focus and slowing down.

I referred to this above, but big partners working on meaty objectives is actually a positive success criterion that the governance model is built for.

That said, we try very hard to ensure that the flow of the project remains unencumbered.

Docker is a community project; one of the beautiful benefits of that is that Docker can change as long as you can reach critical consensus on any design proposal.

Without the partnership, Docker could be brought to Windows at some undetermined point in the future as soon as those native capabilities existed.

With the partnership, we have a set of dedicated Engineers from Microsoft and the community who are eager to build consensus around getting Docker integrated to the new API/Services that they're creating as soon as possible.

There's no free lunch - if the contributions don't stand the scrutiny of the maintainers, they will continue to work with them until it does.

I hope that helps - happy to discuss further!


This has been coming for a while, because it screams common sense. However, given Microsoft's previous form, I'm still a little surprised.

I wonder if Steven Sinofsky would have been game for this?

A lot of Windows developers use commercial Windows for development (i.e. Win7); with these features I anticipate more developers using (the more expensive) Windows Server.

If your build output could be a container (VS build process?) that you can ship, or as in a lot of "enterprise" organisations pass on to QA / UAT then this is a big deal and a massive huge step.


That process you describe excites me on so many levels. I'm literally giddy at the possibilities of developer and ops productivity. All we have to do now is not mess it up :)


How does the licensing work for Docker (Apache2 license, i know... It's more about the pricing behind it) together with Windows? I can't seem to find any info about this. I don't assume Windows is suddenly free of charge. The Windows virtualization licensing isn't obvious anyway: http://www.microsoft.com/licensing/about-licensing/virtualiz...


Docker is an open source project governed by the Apache 2 license. Referenced here: https://raw.githubusercontent.com/docker/docker/master/LICEN...

Any contribution by Microsoft to the Docker project (currently slated, the Docker Daemon for Windows) will be governed over said license.

What Microsoft chooses to do with Windows/kernel features/how that is licensed is left to their discretion.


I'm actually wondering more about the cost of spinning up Docker with Windows Server 2012, for example. I suppose Docker should enable me to spin up an MVC website with Windows Server 2012.

How much would this cost and how is the pricing evaluated?


Docker is free to use.

I don't know the price of the next version of Windows Server, but, Microsoft has committed to make Docker available on that.

So, to use Docker on the next version of Windows Server, it will be the cost of the Windows Server license.


Things like this make me wonder if the Windows (Server) ecosystem will be able to compete with Linux in the long term.

I do realize the corpo-world is filled with Windows Servers now, but it seemed Linux/Docker could change this with containers as a 'standardized server app format' with a super easy provisioning process.

Now since Windows will get more or less the same - Linux/Docker and Windows/Docker will compete on tools and raw perf.


> Now since Windows will get more or less the same - Linux/Docker and Windows/Docker will compete on tools and raw perf.

Pretty cool evolution, isn't it?


Does this change one of the fundamental tenets of Docker, namely that Docker images are portable (thanks to the Linux kernel providing a very high degree of backwards compatibility)?

In other words, will there now be Windows images and Linux images, with Windows images running on Windows hosts and Linux images on Linux hosts?


Docker images already weren't portable between Linux/x86-64, Linux/ARM, and Linux/Power, so Windows/x86-64 is one more platform. I wouldn't be surprised to see Illumos/x86-64 at some point.
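
You can already see which platform an image targets; at least in recent Docker releases, `docker inspect` exposes `Os` and `Architecture` fields on images (field names as currently documented, so treat this as a sketch):

```shell
# Show the OS/architecture an image was built for,
# e.g. linux/amd64 for a stock ubuntu image.
docker inspect --format '{{.Os}}/{{.Architecture}}' ubuntu
```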


OK, it's architecture vs OS, but I get the point. Thanks for the clarification.


There is a budding Docker/ARM community I cannot wait to support.

More on that soon.


It mentions extending Docker to support large numbers of distributed Docker containers. I am building a PaaS around Docker, so can I get a hint how that will work? I would rather add value than duplicate an existing effort.

Is it something like an integration of Mesos, Fig and Docker?


I'm building one as well. Want to discuss?


Sure.. I think yours is called Matrix AI right? "orchestrate massive distributed infrastructure so that they become self-healing, self-organising, and self-adaptable"

It sounds like Docker is going to eventually put something out that touches on some of this stuff (although not necessarily self-healing etc.), just wondering how we can take advantage of that or avoid duplicating functionality.


There are a lot of parts to orchestration that aren't just container implementations, and there are already tons of tools that do the different parts of orchestration, so there's already some duplicated effort right now; in fact, everybody reinvents the wheel slightly when building new distributed systems. The problem is that putting it all together in a scalable model is very difficult. Send me an email at roger.qiu ([{at}]) matrix.ai


I'd definitely recommend the both of you discuss your plans on #docker-dev / freenode. I'm excited to participate in the conversation, it sounds really cool.


Is this the "embrace" step of "embrace, extend, extinguish"?


I think it's right to question that, but I haven't seen any indication of that. I look to the Azure team, and how they're operating.

Oracle DB is available. That's got to say something.


very smart move from Microsoft's part.


I think there may be a growing consensus in their organization that they have to keep making decisions like this if they want to remain relevant.


I think it's more that they have the cash to do everything, so that's what they are doing. When you do everything you are unavoidable.

Not a bad strategy.

They haven't lost a bit of relevance with the majority of the paying user base. However the tech press like to spin it that way. When I see a Mac or a Linux box in a 2000+ seat corporate network, then perhaps I'll believe it. The only markets they aren't winning are the freshly created volatile ones.


I should have added that I'm a .NET developer who's recently returned working with in an open source shop. The problem I see is not that they don't have good tools or relevant solutions for real world problems. The problem I see is I'm in my late thirties and I find myself to be one of the younger developers when I go out to MS related user groups and events.

Spending time in the open source world with younger developers, it's clear the only value they get from Microsoft is for their gaming machines at home. While a number of them respect the development tools, it doesn't matter because Microsoft has done a great job tying it to the Windows titanic.

I recall interviewing with a number of companies in the mid 90's and talking with the programmers who worked with DEC based tools. The ones I talked to had nothing but good things to say about their tools and systems and how they had real implementations of various systems that PC's were trying to implement at the time. I can't help but look back and find myself in the same position on a course towards becoming irrelevant in 10-20 years.

I guess what I was trying to get at is Microsoft is going to have to make a considerable effort to focus on luring the generation they lost if they want to remain relevant. That generation of developers isn't simply going to pony up for slightly better tools. They demand having access to software when it's available so they can download it immediately with one of twenty different package managers.

I'm not predicting Microsoft's doom, but I am saying that Microsoft is facing an incredible number of challenges that will make "do everything" a bad strategy. On too many fronts they're faced with competition that ranges anywhere from inferior to superior, but free. In development tools, they're up against a huge community of open source tools, some of them funded by huge companies whose revenues don't depend on the sale of software. MS Office's share is eroding, not just to direct competition, but to indirect alternatives that posit that complicated word processors and spreadsheets are the wrong answer.

I also don't think Microsoft can do everything, because software is going through a Cambrian type of explosion where you're seeing all sorts of manifestations of species and hybrids. Sure, a lot of these species will die out, but as we witness new species of databases and operating systems it will be difficult for a large company like Microsoft to predict which ideas it needs to pay attention to and which to ignore. Adapting to these complicated markets is an incredible challenge, but there are companies that have done a remarkable job staying on top.


The formatting on your comment is messed up, hopefully this helps:

I should have added that I'm a .NET developer who's recently returned working with in an open source shop. The problem I see is not that they don't have good tools or relevant solutions for real world problems. The problem I see is I'm in my late thirties and I find myself to be one of the younger developers when I go out to MS related user groups and events.

Spending time in the open source world with younger developers, it's clear the only value they get from Microsoft is for their gaming machines at home. While a number of them respect the development tools, it doesn't matter because Microsoft has done a great job tying it to the Windows titanic.

I recall interviewing with a number of companies in the mid 90's and talking with the programmers who worked with DEC based tools. The ones I talked to had nothing but good things to say about their tools and systems and how they had real implementations of various systems that PC's were trying to implement at the time. I can't help but look back and find myself in the same position on a course towards becoming irrelevant in 10-20 years.

I guess what I was trying to get at is Microsoft is going to have to make a considerable effort to focus on luring the generation they lost if they want to remain relevant. That generation of developers isn't simply going to pony up for slightly better tools. They demand having access to software when it's available so they can download it immediately with one of twenty different package managers.

I'm not predicting Microsoft's doom, but I am saying that Microsoft is facing an incredible number of challenges that will make "do everything" a bad strategy. On too many fronts they're faced with competition that ranges anywhere from inferior to superior, but free. In development tools, they're up against a huge community of open source tools, some of them funded by huge companies whose revenues don't depend on the sale of software. MS Office's share is eroding, not just to direct competition, but to indirect alternatives that posit that complicated word processors and spreadsheets are the wrong answer.

I also don't think Microsoft can do everything, because software is going through a Cambrian type of explosion where you're seeing all sorts of manifestations of species and hybrids. Sure, a lot of these species will die out, but as we witness new species of databases and operating systems it will be difficult for a large company like Microsoft to predict which ideas it needs to pay attention to and which to ignore. Adapting to these complicated markets is an incredible challenge, but there are companies that have done a remarkable job staying on top.


Thanks. I guess Hacker News doesn't like leading spaces with paragraphs.


> They haven't lost a bit of relevance with the majority of the paying user base. However the tech press like to spin it that way. When I see a Mac or a Linux box in a 2000+ seat corporate network, then perhaps I'll believe it.

I had the same response to the predicted demise of Palm and Blackberry. I'm not saying that I know the outcome but that current success is not a predictor of future performance, especially in an industry where disruption is such a focus.


I happen to work at one of those companies where all developers get Linux desktops. In fact, I'm typing this comment from said Linux desktop. You just need to get out more.


There's a big difference between a software company equipping developers with Linux based machines and a midsize to large non-tech company equipping their users with Linux or even Macs.

Granted, I think Microsoft is losing their grip and non-tech companies (I work for one) are seriously beginning to consider PC alternatives. On the manager, executive, and power user level, IT departments seem to be accommodating Macs more. For non-power users, web based machines like Chromebooks are becoming more attractive each day.


It has been pretty amazing to see, if I'm honest. I'm really encouraged - especially with the way they pitch the Azure product line.



[deleted]


> To me, this makes it sound like Microsoft is slowly starting to join the rest of planet earth, and adding features to its OS kernel to make it more Unix-like

Could you be more specific, what "features" precisely? Windows NT already takes a lot of concepts from traditional UNIX kernels and builds on them (unlike, for example, Windows 9x).

> From what I remember, Windows Server is already a step in that direction, but Microsoft hasn't advertised much of that functionality so far, maybe in order to maintain customer lock in.

Could you be more specific, I know a lot about modern Linux and Windows Server, and that comment is mysterious to me. Are you talking about the deprecated UNIX Services for Windows which has existed for well over fifteen years?


> Could you be more specific, what "features" precisely? Windows NT already takes a lot of concepts from traditional UNIX kernels and builds on them (unlike, for example, Windows 9x).

cgroups and namespaces.


What are you referring to? Windows Server has been industrial strength for years now.

>>join the rest of planet earth, and adding features to its OS kernel to make it more Unix-like


I really hope this doesn't distract from Linux development - there are a number of bugs that have been hanging around for some time without updates now.


Partnering with Microsoft [almost] never ends well.

Good luck to Docker.


I wonder how this will impact spoon.net, since they recently launched their windows container tech.


I wish them the best of luck.


The site apparently only accepts RC4 as the cipher; I get: Error code: ssl_error_no_cypher_overlap


Thank you.

Our website (docker.com) and the Docker Hub (hub.docker.com), our hosted service, are managed directly by Docker employees and offer a better maintained and security-focused cipher suite than our blog.

Unfortunately, we are aware that the blog's SSL/TLS configuration is not ideal. It is on externally hosted infrastructure and we have been working with our service provider to rectify these issues. We have high expectations of the services we consume and will do what is necessary for the long-term security and confidence of our users.


What kind of Windows server/kernel features does Docker need/want/like from MS?


The equivalent of cgroups and namespaces, along with services that need to be container-aware (like svchost.exe) and a separate registry per container.
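
On Linux those primitives already exist, and Docker's resource flags map directly onto them; a quick illustration (flags as in current Docker releases, requires a running daemon):

```shell
# Each flag below is enforced by a cgroup controller on Linux;
# Windows would need equivalent kernel machinery to honor them.
docker run --rm -m 256m --cpu-shares 512 busybox echo "constrained hello"
```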


Is Docker for Windows mainly CLI only?

Or will GUI exes also be isolated in each container session?


A future version of Windows will have Container capabilities, and Microsoft has committed to contributing Docker Daemon code.


DockerHub-based Windows Package Manager - coming soon to an OS near you


Don't know why you were down voted - this is exactly what is in the press release.


shrugs. In all seriousness though, thank the people around your office for me! There are some substantial hurdles to make this integration a practical reality, but if Docker and Msoft can actually pull it off we will see a new world of crazy possibilities. I can't wait!


Microsoft has been cornered for a long time and has lost many opportunities; glad it's now reacting.


Worst news ever.

Docker people will spend more time fixing and creating bugs because of Microsoft, the documentation will become a mess (it can't possibly be the same documentation for two such different systems), and the Linux side of Docker will be more mediocre compared to what it could have been if all effort went to it.

Remember Internet Explorer 6, Visual Basic, the horror that is Excel and the whole Office suite, ASP.NET, Windows Millennium, and the attempt to kill Linux by Microsoft through SCO.


The announcement clearly states that Microsoft are becoming "Docker people" -> they're contributing code to the Docker ecosystem, which means precisely that our problems become their problems.

This isn't any different than Red Hat contributing the Device Mapper backend to Docker so it can run unmodified on RHEL. Nothing about the UX changed/fragmented from there.


They are a bit lame now - trying to adopt legacy stuff like HTML instead of fixing it with something nice like XAML.


Microsoft is going about this the wrong way. They should adopt GNU/Linux (with its support for containers, file systems, etc.) as their kernel, and then add the proprietary Windows layer on top, e.g. WINE.

Otherwise existing Dockerfiles will not build on Windows - right?


This is about getting container/docker tooling running on windows, for windows applications. You are correct, this will not run Linux containers directly on Windows, or vice-versa.

It's still a compelling option, but I'm not quite certain of the value that it will bring to the table here.



