Mirantis acquires Docker Enterprise and Docker raises $35M (techcrunch.com)
376 points by chuhnk on Nov 13, 2019 | 232 comments


Docker is the single best thing to happen to software deployment in 20 years, not just because of what it did for eliminating "works on my machine" build problems, but because of what it enabled. A huge ecosystem has sprung up around containerization, with immense value created for other businesses. Their "Docker for Windows/Mac" apps are one of the first things I install on any new dev machine.

It's unfortunate they didn't figure out a way to make money off the best thing to happen to building and deploying software in 20 years.

A lot of other people did, from the startups now selling value-adds to Kubernetes like Kong and Tigera and TwistLock (since acquired) and others, to the public clouds which all offer Docker-based build services, image registries, and PaaS deployment tooling, to Kubernetes itself which for most users today still relies on Docker.


Happy to read this here; somehow any topic related to Docker on here ends up with someone saying Docker brings nothing new to the table except a proprietary API and that you could just as well use LXC or whatever. I can personally see from inside a big corporation how Docker (or containerization) has a huge impact on how "easy" it becomes for developers to build and run their own code.

Without Docker everything new becomes a project on its own where you need to call in "ops" for the smallest of things. Not anymore. Of course this has some other downsides, but in general it's a massive step forward.


> Without Docker everything new becomes a project on its own where you need to call in "ops" for the smallest of things.

I hope that one day we will stop echoing that or framing it as a bad thing.

It's always good to involve people. Writing, building, and running software is a shared responsibility.

I truly believe that if you're nice to your ops and explain what you want to achieve, they will help you. It is a two-way street.

It is not a tool that will "fix" how humans interact.

Also, put yourself in their shoes, it could be that:

"With Docker everything new becomes a snowflake on its own where you need to call in "dev" for the smallest of things." -- or uncompress the image and look around how/what a specific image does things when it start, place app/config/data files, etc...


Depends on the incentives placed on ops.

All too often, they're responsible for handling things when they break, but not responsible for getting features out the door.

Hence, they become a huge gate to get anything out; they have no incentive to help you.

Meanwhile, where dev owns some of the operational burden, it works pretty well. Until you have compliance needs that force you to separate them.


I remember working at a startup with separate ops and no Docker. I just remember it causing a lot of friction and slowing us down. I see no reason why ops people wouldn't enjoy Docker just as much as devs. I found it cleaner on average than configuration-management tools like Ansible or Puppet, and it could reduce the complexity of your configuration management.


If a dev can do it on his own, without assistance and with minimal issues, then having someone to "maintain" the install is just dumb.


> someone saying Docker brings nothing new to the table except a proprietary API and you could just as well use LXC or whatever

Probably the same people who said Dropbox brings nothing new over rsync. Making something easier and bringing it to more people (two halves of the same thing) is a BIG value add. Often the majority of the value.


:-)

Not quite!

I wrote my own rsync-like-toolset before rsync itself was released (far too late, when you think of it!), and I immediately loved DropBox.

Maybe because I am dev, AND I am ops, or maybe because (unlike DropBox) I am not sharing folders with my mother, but more likely, because I'm an idiot (everyone tells me "Catcher in the Rye" is great - and after having read it every 3rd year for nigh on 30 years, I still don't get it), but I don't really understand the appeal of docker.

Sure, I can use it, because you kind of have to these days, and I don't really find it objectionable, but it seems like there is a whole lot of "something there" that I'm missing and, at this point, will probably never understand. It won't be the last tech ritual I'll have to endure before I die...


A bit OT for the thread, but you're not alone with Catcher in the Rye. I first read it in my 20s and just did not like it at all. I've got friends who say it helps to have read it as a teenager, because it's easier to identify with Holden, but man, reading it now I just get so tired of his whining.

I’m going to keep trying, probably, because so far it’s one of the only big classics I haven’t liked. Anyway, maybe we’re both idiots!


Ha!

I started in my mid-teens. I was told: "It changed my life!".

I've read it every three years or so (even for the decade-plus that I've lived out of the US, once in Japanese!), and ... I just don't get it.

I probably don't get Moby Dick either, but I find it remarkably funny, and its narrator isn't a whiny teenager, so I'm going to give it a pass.

Thanks for the comment. I have a bit of a hang-up about how much I don't get "The Great American Novel" (as a born-and-bred rural American, I should get those, you know? :-) )


Being on a technology-focused site, there is nothing wrong with trying to give some credit to the underlying technology and those who wrote it instead of focusing only on those who commercialized it.


This is not wrong but it leaves out a huge amount of hard work which wasn’t in the cool kernel hacking space. LXC is a significant accomplishment but Docker solves a number of other non-trivial problems which made mainstream adoption harder – it’s like the people who say Apple just copied someone else who had a touchscreen device without comparing the whole product.


But, frequently, commercialization feeds the "development" half of R&D. Companies like Docker Inc. often end up paying the lion's share of the salaries of the people doing the FOSS work to upstream and maintain the technologies they build upon.


Can you explain how you run things using Docker? Not on your machine, but on production, how is that supposed to work using the official tool chain?


It's a shame they are so hostile to developers. I wonder if they'll ever fix this: https://github.com/docker/docker.github.io/issues/6910


The real problem was the downright hostility towards everyone outside of Docker... with shykes (Solomon Hykes) leading the belligerent charge on the community at large. The countless arguments shykes got into with potential users right here on HN, and all the new fake accounts that would suddenly crop up to defend his positions. It was transparently an attempt to wield HN as a weapon against anyone that questioned what Docker was doing.

From the outright lies about what Docker could or could not do ("Docker does X, Y, and Z" - but Y and Z were actually still in development), to the outright refusal to play nice with the open source community (remember the Docker vs. CoreOS fights over standardization of container formats?). If Docker didn't invent it, or think of it first, it was a terrible idea and had to die.

They even went so far as to start a convention before they had a production product.

Then Kubernetes, Rocket and all the other container platforms and tooling came along and ate their lunch. Why did people adopt them so quickly? Because they played nice with each other and made it easy to build your system as you wanted it... not just as Docker wanted it.

The entire thing was a money grab. And when things went sideways, they made some terrible decisions (the entire Moby fiasco). It's not surprising Docker finally fell...


Most underrated comment of the thread - this is all exactly right. Hubris and poor leadership were Docker's fail whales, not the technology.


> Then Kubernetes, Rocket and all the other container platforms and tooling came along and ate their lunch. Why did people adopt them so quickly?

Because Google of course.


Yes - the unchecked bad attitudes of some of their maintainers are awful. I can't believe there isn't a major fork of docker-compose because of it.


> I can't believe there isn't a major fork of docker-compose because of it.

We don't need one. Red Hat is working on podman.

https://podman.io/


Podman is great! It plays much nicer with the rest of my system and doesn't require root for things that shouldn't require root! Additionally (afaik, I don't really use Kubernetes) it maps better onto Kubernetes' model of things, with pods and kube.yml etc.
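
For anyone who hasn't tried it, a rough sketch of what that looks like in practice - podman deliberately mirrors the docker CLI, and `podman generate kube` emits a Kubernetes-style manifest (the image, name, and ports here are just placeholders):

  podman run -d --rm -p 8080:80 --name web nginx   # runs rootless, no daemon
  podman generate kube web > web-kube.yml          # Kubernetes-style pod manifest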


Rootless support has been in BuildKit since 2018. I feel like all of the tools Red Hat rebuilt - which didn't really offer anything new and in a lot of cases offered a whole lot less - appeal mostly to very diehard Red Hat-oriented folks. If anything we should hope that all of the Docker bits stay where they are, because I surely trust Red Hat far less under IBM's leadership.

And, no. Podman doesn't offer any advantages in the Kubernetes ecosystem by mapping better into k8s "models".


It's a lot like Sun Microsystems and Java. That was a big improvement in the lives of developers, but the company behind it never really figured out how to make money on it.


I don't think it's a coincidence. A lot of the value comes from things being open and free, which enables rapid proliferation and ubiquity of the technology, but also drastically increases the complexity of making money out of it.

It's one reason I'm really not a fan of the Silicon Valley model .... pick out the top 10 most important technology advances in the last 20 years and about half of them were uncommercialisable in the Silicon Valley sense.


Sun made big money with Java through licensing. Any Java-compatible implementation vendor paid big money to Sun (including IBM, Azul, etc.).


Was that really "big money"? How many vendors were there, and how many licenses did each end up paying for?


If you wanted to write your own JVM or port the JVM to a different OS you had to pay something like $1M. I wouldn't call it big money because there aren't that many OSes. J2ME phones also paid royalties (which led to the Android lawsuit).


Plenty through the last 20 something years, some of them still going on.

https://en.wikipedia.org/wiki/List_of_Java_virtual_machines

Google is the only one that worked around the licensing, and now we have our phones stuck on a subset of Java 6-8.


> Docker is the single best thing to happen to software deployment in 20 years

Something like Docker may eventually hold this title, but Docker itself? Certainly not. It was simply too poorly executed, both strategically but especially technically.

Running production workloads in Docker containers was, is, and will be remembered as nothing short of professional negligence. Often expedient, sometimes worth the risk, but always a liability.


Yeah, totally agree. Most uses of Docker are of the "Junk Drawer" variety.

A bunch of disorganized unpinned dependencies get thrown into a container that will work in a reproducible manner only until one of the upstream deps changes.

This approach also allows people to get away with deploying an app without truly understanding it. That's always been a good recipe for success.

Another poster claimed that it's mainstream practice; so what? So was running your web server on 90s-era IIS. Not that that was a good idea either.


The biggest issue I've had with Docker is that Dockerfiles aren't designed to be reproducible, so regressions are practically guaranteed. If there were tooling around pinning versions of Linux distro packages, and maybe a few select PL package managers like Go, Node, Python, Ruby, it would make a huge difference. I tried to do this by hand and finally gave up.


You can stack layers (where you install exactly what you want) and create an image that is deterministic.

Even if you install from repos in the Dockerfile, the packages should have pinned version numbers.
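
A minimal sketch of what that pinning can look like, assuming a Debian-based image (the digest and the package version below are placeholders, not values to copy):

  # Pin the base image by digest instead of a floating tag
  FROM debian@sha256:<digest-of-the-image-you-actually-tested>
  # Pin distro packages to exact versions
  RUN apt-get update && \
      apt-get install -y --no-install-recommends curl=7.64.0-4 && \
      rm -rf /var/lib/apt/lists/*

It's tedious to maintain by hand, which is the parent's point about wanting tooling for it.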


How is this different compared to creating a new VM for every single application?


IDK, I don't use docker or a separate VM per application.

I guess I'm old school but I install the stuff on my linux instance or server and work out the dependencies so they don't clash.

I haven't found it that difficult.

Just because the devs want to throw some ill-conceived container over the fence onto my systems doesn't mean I let them. (I mean, I guess they can, but only if they get the support calls instead of me.)


This can work for simple projects where the dependencies are all similar but it doesn’t scale well and, if you’ll forgive the unsolicited career advice, I would be worried that the breaking point is “We switched to the cloud and told your boss’s boss’s boss that you had been holding us back”. I much prefer finding some balance which doesn’t position you as the Department of No.


“We switched to the cloud” isn’t a decision made by development orgs in any non-software company. In most businesses, supportability by IT and boring bureaucratic processes are king.

“Adopt whatever buzzword shit developers are into” is about the worst career advice you can actually give sysadmins that work in non-cutting edge companies.


> “We switched to the cloud” isn’t a decision made by development orgs in any non-software company. In most businesses, supportability by IT and boring bureaucratic processes are king.

This might have been true in your experience but it hasn't been in mine (.com, .edu, .gov). There's a common term (shadow IT) for what inevitably happens: someone hires their own staff, gets their own servers, arranges for their own contractors to host it, architects their application requirements so it can't be run by central IT, etc. The reason it happens is that the IT department is not listening to its users and thus giving them a strong incentive to find alternative ways to accomplish their jobs - and they'll almost always win unless the CIO is politically untouchable and has an iron hand on the budget.

This is especially true in these cases: using the cloud or containers is what senior management are hearing from all of their consultants, and likely many peers. If you’re trying to hold that back without a really good reason, it is unlikely to end well for you unless you’re related to the CEO, especially since it’s easy to find people with equivalent or greater experience who can point to good results from having adopted what are now relatively mature technologies with plenty of case studies.

> “Adopt whatever buzzword shit developers are into” is about the worst career advice you can actually give sysadmins that work in non-cutting edge companies.

If you think that’s true, I would suggest you reread my comment and ask whether that’s a fair reading of my suggestion about finding a way to balance concerns. There is a huge amount of space between “no” and “you can run whatever you want”.


I deploy big complex projects this way.

I've built sophisticated automation to achieve this, so it's all reproducible.

I just don't use Docker, that's all. It doesn't bring anything to the table that I can't achieve more simply some other way. And it has its own problems.

I've been a sysadmin for 25+ years and I've administered production DNS servers, mail servers, web servers, app servers, routers, switches...

I think stuff like Docker and k8s appeals to people today because it promises that you don't have to understand any of that old stuff.

But to me it just makes my life harder because I don't need the underlying tech abstracted away. If anything it makes my life harder to add an unfamiliar layer on top.

I'm not worried; there will always be a market for guys like me. When the fancy new stuff stops working, someone has to fix it.


I've also been doing that for about 25 years, with a similar list, and full automation since ~2001 or so. I haven't found that approach to scale very well because it requires a much higher level of sysadmin skill & application-level knowledge, maintaining a somewhat extensive list of backports or custom builds, and the ability to customize each application to work with a filesystem layout which avoids the conflicts which inevitably arise. Automation is important to making that work at all but it still means that any time things change hands there's a non-trivial amount of code to review before you are confident about making changes and it tends to require elevated privileges or greater levels of coordination between the developers and operators.

The reason why containers are popular is that you get a simple way to manage all of that and it's completely standard on every project. App A doesn't need to be modified to coexist with App B because they're running in an isolated environment, the sysadmin doesn't need to learn how the container was built just to manage services or deploy new instances, builds are much faster because of the layering model, you can deploy a single tested image with no chance of differences creeping into your automated deployments in testing & production, etc. All of that is something which you can achieve with other tools but there's a huge win to not having to reinvent each wheel when you encounter it and being able to reuse other people's work easily.

Note also that I said “containers” instead of Docker — some people might prefer, say, podman / buildah but the key concepts of having a clear support boundary with an isolated-by-default filesystem model are really more important than the implementation.


> the sysadmin doesn't need to learn how the container was built just to manage services or deploy new instances

I disagree on that point.

Just taking a docker image from a developer and deploying it to production does not work too well in my experience, and guess who gets the support calls when things break?

If a sysadmin gets a say in how the container images are configured, it usually works out much better.

Also partitioning the filesystem to avoid dependency clashes used to be more important when I used rack mounted servers. Now it's easy enough to deploy separate cloud instances to partition things. And it's a real VM with real tools on it so I can diagnose problems if they arise.


> Just taking a docker image from a developer and deploying it to production does not work too well in my experience, and guess who gets the support calls when things break?

In my experience, that's been about the same for any mode of deployment by a developer without ops experience. The main thing Docker added was standardizing the mechanics and avoiding certain classes of errors like not having reproducible builds or deploying local artifacts inconsistently.


You know why docker is being used instead of this approach?

Because your automation is made by you and for you; no one else can take it over if you leave. And even if someone can take it over, they will command a way higher salary compared to someone running the container-based system.


To be fair, pretty much every single place I've seen docker in use there have been custom automation systems in place. Docker creates artifacts, not automation, so it really doesn't help solve that problem. Kubernetes on the other hand, does help, though you'll never get away from some custom parts in a delivery system, as all systems are different. That's not a bad thing.


This sounds like an immature attitude to take towards the people that work with you and just makes you sound like the stereotypical basement sysadmin that needs to control everything and knows better than everyone else.


Containers are terrible for bureaucratic institutions. It moves security patch management to the development org that builds the containers and development orgs typically don’t have the tooling/process to do this.

Source: consulted for a company that got bitten by allowing containers without a patch management process for vulnerabilities.


Perhaps but they are happy with the results.

They tried it their way for a couple of years, and there were performance problems and outages.


Good luck doing something like that at scale with thousands of instances.


I think you just pinpointed what Docker should have been focused on from the beginning to monetize their creation. There are so many obvious ways to make Docker work for enterprises.


Sure, Docker is not perfect, but isn't running Docker containers mainstream practice now? I think your response is a bit of an exaggeration.


Haha... for production workloads?

Definitely not.

Docker is a lovely developer tool, but it’s not, in my experience, ever been (even with swarm when that was a thing) suitable for production workloads.

Containers: yes.

Docker: that’s a container runtime, and definitely not mainstream for production workloads.


you're kidding right?

Docker runs as the container runtime under most Kubernetes clusters (in fact every Kubernetes cluster I've seen). The only other CRIs that are commonly used are CRI-O and containerd (which Docker itself uses), and last I saw CRI-O was < 10% of the container runtime market.
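
If you're curious what a given cluster reports, it's a one-liner (output columns vary a bit by kubectl version):

  kubectl get nodes -o wide
  # the CONTAINER-RUNTIME column shows e.g. docker://18.9.7 or containerd://1.2.6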


Just to make sure I understand what you're saying: running containers created with the Docker tool is OK, and running them with Kubernetes is also OK, but actually deploying the Docker tool on a server to run the containers is not. Am I correct?

Care to explain what makes the Docker tool/runtime unsuitable for production?


I think, when people are talking about "using Docker", they're talking about using Docker tooling to build and push/update/pull Docker images (actually OCI images) around. They're not really talking about running OCI images "on the Docker daemon container runtime." But to most people, OCI images are "Docker" and if your cloud scheduler/hypervisor can run them then it's "a Docker runtime."


Docker daemon is the default runtime for k8s and k8s is most definitely being used in lots of production environments.


I like to believe most people are sufficiently with-the-plot to appreciate that there is a difference between a "container", "container runtime", "container orchestration" and "docker".

No one is seriously confused about "using docker" meaning, oh hey, I run my containers on kubernetes. I have literally never met someone who was confused about this. I suppose some people just stop at "I use docker" and don't know what that means when it hits production... but I'm skeptical they actually think a copy of docker is running out there in prod.

...but to be fair, the parent comment did say that running "docker containers" was mainstream; and fair enough.

My point was running containers using docker most certainly is not.


I don't get your point. Isn't docker the default runtime everywhere? k8s uses docker internally, by default.

So containers are most certainly run in prod using docker (to be specific docker daemon), are they not?


For many people, Docker = container, created using a Dockerfile and deployed to kubernetes or elsewhere, regardless of the actual runtime underneath.


What is the problem with using a container based approach for production workloads?


I’ve wondered about this myself. What is the solution? On premise cloud like openstack?


Most of the industry runs Docker containers in production my dude.


Pretty sure Google App Engine runs Docker under the hood. It has been a while since I logged into a GAE machine.


Around here it is still plain old VMs, thank you very much.


Both have their usecases. It just doesn't make sense to carry around a VM image (or build a new VM image on each push) for a fairly simple web app (which is what many work on).


Well, that is what application servers are for.


There are a non-zero number of deployments that run Docker at considerable scale with pretty good success. I think their software has made considerable strides.


We are a small agency. Before Docker you can't imagine the pain of managing Python, Ruby, or whatever environments and dependencies, especially on designers' laptops, for running dev environments. Now on a fresh laptop it's just installing the Mac desktop client and docker-compose up. Seriously, it was a blessing for us.
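
For anyone who hasn't seen that workflow, a minimal sketch of the kind of docker-compose.yml involved (service names and images here are made up for illustration):

  version: "3"
  services:
    app:
      build: .
      ports:
        - "8000:8000"
      depends_on:
        - db
    db:
      image: postgres:11
      environment:
        POSTGRES_PASSWORD: dev-only-password

With that checked in, a designer's setup really is just installing Docker Desktop and running docker-compose up.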


If working at your org requires me to spin up Docker containers to write, test, and commit code, I'm changing jobs ASAP. Docker is a tool for CI environments, not for developer laptops (or production, for that matter).


I should have been more clear that it's mostly for non-developers (designers) or front-end developers running the back-end, without the hassle of installing and maintaining the related development environment. The same as if it were a third-party program.

If you still install a full ecosystem every time you want to try a piece of code in any language in Github instead of doing docker run I think you are missing something.

With that said I too use it in my dev machine and don’t really see the downsides for the kind of projects I work on, so I’m curious if you can explain your opinion.


Most programmers bring value to an organization by creating and maintaining code.

Code should therefore be organized so that programmers have to interact with the fewest possible artifacts (repos, compilation units, deployment artifacts, processes) to deliver that value, ideally 1. (This is symmetric with Conway's Law.)

Delivering value on that single artifact should be possible in isolation, meaning (after cloning) without internet access, or any other service running on the development environment. Concretely, all services should have a "development mode" that allows them to exercise their business logic against mock or stub dependent services, in-process/in-mem databases, etc.

So,

> If you still install a full ecosystem every time you want to try a piece of code in any language in Github instead of doing docker run I think you are missing something.

If I'm working at a company, I expect that delivering business value means checking out, compiling, running, and testing a single process with no external dependencies. If this isn't the case, I will make it the case, before starting any value-delivering work.

If delivering business value requires me to spin up the entire universe of services on my laptop, creating a miniature staging or production environment, that's a problem that needs to be fixed. This means that docker-compose, in fact Docker at all (unless the business value involves e.g. Linux-specific features), Kubernetes, etc. in my build/test/run cycle are all red flags.

Exceptions exist, but they are and should be rare.


I think we are talking about different contexts: I do not disagree if this is a big org and you are engineer N (> 100). With that said:

The previous company I worked at, and others I've heard of, were dysfunctional because they were organized like what you describe relative to their size. Backend engineers working on their API in isolation, without good contact with end users, their business, and the UI, is the huge red flag in my case.

> Concretely, all services should have a "development mode" that allows them to exercise their business logic against mock or stub dependent services, in-process/in-mem databases, etc.

This brings only complexity and additional code in our case without any clear advantage (except code that's easy to test, but that's circular reasoning). If your code is in the chain between a user action and the database, then the sooner you test the whole chain for side effects the better, without complex integration testing processes (with stubs that will never be as accurate as the real thing). Bonus for being in the user's shoes.

> If I'm working at a company, I expect that delivering business value means checking out, compiling, running, and testing a single process with no external dependencies.

Given the complexity and the number of dependencies of an OS running a browser and your editor/IDE, I don’t really see the point here.

> If this isn't the case, I will make it the case, before starting any value-delivering work.

That’s exactly the kind of issues I experienced at previous companies: engineers endlessly reworking the architecture until it’s perfect in their view with no rationale regarding real value (« it’s best practices » is not).

> spin up the entire universe of services on my laptop, creating a miniature staging or production environment, that's a problem that needs to be fixed.

This nicely summarizes our different points of view: that's a killer feature for me.

Again, it really depends on the context and I'm not disagreeing with your experience; I disagree with how you state this like it's a universal rule of good software development without taking the context into consideration.


Without Docker being so convenient for developers, it would be hard to containerize production.

One should not miss the importance of running Docker under Windows or macOS natively, with a VM running Linux tucked behind the scenes. It makes developers' experience so much nicer, and compatible across platforms. This helps adoption a lot.

Time and again, path to mass adoption is not technical excellence, but making an important thing stupidly easy, when one has no excuses not to try.


What do you base this on?


I don't understand that argument. Docker really doesn't help with application packaging. A Dockerfile is rudimentary at best, and ends up mostly a bash script for most people. It also hasn't changed in years so it's obviously not a focal point for development.

What Docker did provide that other packaging tools didn't was free image hosting for anyone, no vetting, no strings attached. Of course that was a major hit with developers. Free hosting often is, especially when well integrated with development tooling.

The thing is that it sets you up for a nice acquisition. But for some reason Docker, Inc. felt they wanted to go the VMware Way instead and sell enterprise tools to enterprise customers. Those are two different strategies that will never be reconciled. It makes sense for them to shed that part and double down on Docker Hub and the Mac/Windows tooling.


> Docker really doesn't help with application packaging

It does. You can do crazy shit like this with Docker containers:

    thinking about the time we created docker at work in 2009: 
    nobody could remake our centos4 build machine for our C++ code
    so we shut it down, imaged the file system into a .tar.gz, and 
    made our makefiles transparently extract it & build via chroot. 
    so absolutely fucking cursed -- @stdlib 
https://twitter.com/stdlib/status/1192461409549987841

It's essentially still exactly what docker allows you to do:

    tar-up your whole f'ing disk, and run it in a chroot somewhere else.
Try doing something like this with yum. Good luck.

Of course it's horrible. But now, you can get away with it.
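
Spelled out, the pre-Docker version of that trick is roughly this (a sketch; paths are illustrative):

  tar --one-file-system -czpf buildbox.tar.gz -C / .   # snapshot the build box's filesystem
  mkdir /opt/buildbox
  tar -xzpf buildbox.tar.gz -C /opt/buildbox           # unpack it somewhere else
  chroot /opt/buildbox /bin/sh -c 'cd /src && make'    # build inside the frozen environment

Docker just gives you a sanctioned, layered, shippable version of the same move.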


How is this different from running the same tarball under lxc or kvm or on whatever cloud provider's vm or just in a chroot?

What has Docker to do with this?


Dockerfiles and docker-compose are IMO decent ways to abstract build and runtime environments. For our current development life cycle I combined it with a Makefile and some common scripts that streamline all life cycle commands across all services. With that, it's straightforward to set up systems where everything is commitable to VCS except docker setup and secrets.
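
As a sketch of the kind of wrapper I mean (target and service names are invented; recipe lines are tab-indented):

  .PHONY: build up test logs
  build:
  	docker-compose build
  up:
  	docker-compose up -d
  test:
  	docker-compose run --rm app ./run-tests.sh
  logs:
  	docker-compose logs -f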


It doesn't seem that decent a way if one has to write a bunch of scripts and a Makefile of all things.

Well, I wrote a bunch of Perl scripts to "streamline all life cycle commands across all services" back in 2002, when all we had were FreeBSD jails.

So, again, what's so special about Docker?


Excuse me but this is "crazy shit":

".. in 2009: nobody could remake our centos4 build machine for our C++ code ... so absolutely fucking cursed"

What led a computer running a supported operating system[1] into a state like this? Distill that and you have Docker :)

1. https://en.wikipedia.org/wiki/CentOS#End-of-support_schedule


Most likely, custom installed libraries and tools (./configure; make; make install), that were never packaged. And a package manager, that can't even tell you if any of the installed files were changed, after they were installed.

Just a guess...


If you wanted a RPM that installs on multiple versions of RHEL, you would take the oldest RHEL you could find, build a rpm with all your dependencies except glibc included, and ship that. The resulting rpm would work on all later versions of RHEL

Problem is, you still need to update your dependencies, so sooner or later you’re bootstrapping modern GCC and Stdc++, openssl etc all from a Centos 5 base just to get a forwards compatible rpm

At that point you start tarring up your chroots so you know how to reconstruct this Frankenstein build environment.

Thank Docker for giving an easily portable environment that's almost as easily accepted by customers as an RPM.


First you should not mix CentOS 5 packages into a CentOS 4 system. That is how you create a problem in the first place.

Building software on top of Frankenstein build environments is how you end up with broken software, and should not be encouraged :)

Let alone development environment.


What the parent is describing by "Frankenstein build environment" is a bog-standard cross-compilation toolchain/SDK. You don't infect the host with the toolchain's packages; you install the toolchain's packages in the toolchain chroot. (If you've ever developed for an embedded arch, this will be painfully familiar.)


> You don't infect the host with the toolchain's packages; you install the toolchain's packages in the toolchain chroot.

Infect is too strong a word, because normally the cross compiler will coexist well with your system. But you are right about the chroot: throw it away and start again, repeat. That is great!

I normally install gcc-arm-linux-gnueabihf + qemu-user-static and enable binfmt. It works well for building armhf, but I can imagine it's a different story for things like ESP32 where you don't have the toolchain sources, etc.


> Most likely, custom installed libraries and tools (./configure; make; make install), that were never packaged. And a package manager, that can't even tell you if any of the installed files were changed, after they were installed.

Software has to be reviewed, or at least installed in a safe place (before installing on a live server), so one can be sure it does nothing silly.

Packaging is not hard, but there are some rules, including placing binaries in /usr/bin, configuration files in /etc and /etc/default (deb) or /etc/sysconfig (rpm), and general package files in /usr/share. If you place files in the standard locations, the package manager will probably detect changes in config files.


You mean LXC and kernel namespaces are the best thing to happen to software in the last 20 years?


It turns out developer experience matters.

Docker (the software, not the company) and Stripe are two of the best recent examples of that.


Definitely not, IMO linux "containers" succeeded in spite of their shortcomings (as compared to jails/zones).

What Docker did is make a great application packaging tool, and the fact that it uses cgroups/namespaces underneath is irrelevant for 99% of users, because they're deploying containers in VMs.


Docker got popular because a lot of devs gave up on packaging (and declared dependencies and shared libraries) and instead want jails (and isolation and static linking). Now it's self-perpetuating, because it's very easy to put packages into containers but it's hard to package containerized software for use in an ordinary unified namespace (no container).


They gave up on packaging because it's a nightmare that never ends. Conflicts, missing dependencies, old versions or no version available for your distro, because of course there isn't just one package format.

Docker is still an over-engineered solution to this problem, but its popularity is evidence that there is in fact a problem.


Yep, the question was "package it as what?", and the answer was rarely ever "a statically linked linux executable."

This created the need for piles of configuration management or machine build scripts to deploy the app.


Packaging and jails are simply the means to an end: distribute and run applications.

Docker became popular because it provides an abstraction that simply (and elegantly) eliminates the problem of distributing and running server applications.


No I don't think he meant that. I know countless numbers of users that are familiar with the Docker cli and exactly 0 of them that know how to manipulate kernel namespaces, cgroups, lxc, etc. manually. Not a thing in the developer community. Docker is what makes that stuff useful and usable. As low level primitives they are fine but they'd never be used that widely without something like Docker. Clearly they depend on each other but they are not the same thing. Also there's more to Docker than just that. E.g. the layered filesystem is hugely important.


Let me know when LXC and kernel namespaces are portable across Windows and macOS and I can use the same interface between all three with my coworkers.

You're being purposefully obtuse for snark points.


Right. This is why they use a VM on other platforms.

In general, I always try to give the contrarian opinion as it causes an interesting discussion between the community members.

Also, there are some young members here that might not realize that what we see as "innovation" might have happened before (multiple times).

The truth is actually in the middle.

I couldn't care less about the points.


That's why when someone says "the Tesla model S is a game changing innovation!" I say "you mean, electric motors and batteries are a great innovation!"


Innovation doesn’t have to be 100% hard tech. Tesla built a product using a mix of existing technologies and improved ones, great product design and great marketing, and together all of that changed the industry. The innovation is the whole package.

Docker had the right mix of existing tech, good packaging, great user interface, and good marketing. And together that made a product that made developers do new things in practice, and it changed the industry.

I think it is important to remind people that in order to disrupt an industry with a new thing, you need the whole package. Not just a good tech.


My comment was sarcasm.


This is a tired argument compared to what Docker introduced. Should we not commend Apple for the iPhone because the rotary phone was invented years before?


Don't forget about chroot. That's been around since the 70's...


That's not entirely fair. Yes, LXC/OpenVZ is cool and does some awesome container stuff, but part of what makes Docker so neat is more than just containerization. Everything is "stateless"; you can reproduce environments exactly across machines with Dockerfiles, and as such it lends itself well to local development, but also to distributed deployment (such as Docker Swarm).


A Dockerfile provides about as reproducible an environment as a bash shell script.

This is why I have beef with Docker: it was marketed as solving multiple things, but ultimately it becomes an over-glorified zip file. It absolutely fails at reproducibility, and I constantly see scenarios where a Dockerfile that worked one day broke the next, or worked on one machine and broke on another, and so on.

Yes, you can put in extra effort and make your Dockerfile reproducible, but similarly you can make a reproducible bash script.


Yes. But I guess in 5 years or less there will be another runtime (especially now that the company is basically dissolved), while namespaces, etc. will stay.


I don't know that that's true; Google and Microsoft seem to have really started betting on Docker and, for better or for worse, they are big trend-setting companies, so I suspect it's more likely that one of them would purchase it and continue working on it before it just goes away.


Google and Microsoft have really started betting on containerization. Neither has really started betting on Docker.


Considering Google, Azure, AWS, DO, IBM and others all have cloud offerings centered around Kubernetes with Docker compatible containers, even if some of the tooling changes, I don't expect that "Docker" will really go anywhere. I would be surprised if the next step doesn't have it swallowed by one of the above companies (probably not DO though).

Even for AWS Lambda, I use docker to build linux lambda packages on non-linux environments (docker for windows/mac). The easiest way to get a service up in Elastic Beanstalk is with a Dockerfile.
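
Concretely, that Lambda trick is something along these lines (the image comes from the community lambci project; exact tags and paths may differ):

  # build Linux-native Python dependencies from a Mac/Windows host
  docker run --rm -v "$PWD":/var/task lambci/lambda:build-python3.7 \
      pip install -r requirements.txt -t .
  zip -r function.zip .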

Even if Docker (organization) does die, Docker (open source tooling) will live on for a very long time. And although I do like where podman and others are coming from, I don't think that it will displace docker in the long run.


Boot2Kube Linux distros will be the norm

One of the daemonless tools out there now or some interface specifically for Kube will exist as a dependency


Docker is more than just a runtime.


I agree! I feel so empowered by docker and docker-compose and even really enjoyed docker swarm although everything is k8s now


>> eliminating "works on my machine" build problems

This was eliminated way before Docker. Amazon had this problem solved around 2009, when it was already not a new thing in the company. LXC also predates Docker. I think you are talking about immutable images that can run on any OS; that is what Docker introduced. It is not as groundbreaking as people try to make it, which is why commercial success eluded them in the first place.


If you mean Apollo, that requires mutating global system state in extremely fragile, error-prone, and unrecoverable ways. It's practically the opposite of "immutable infrastructure".


Not sure which part of Apollo you are talking about. The configuration management?


Any other single example where a company created a product, enabled massive usage, and still struggled financially?


Also known as "the Tivo Problem".


Linux doesn't make money on its own. Not sure why it is a surprise to people that Docker can't monetize as a For-business entity


> Kubernetes itself which for most users today still relies on Docker

Not anymore; over the past couple of years Kubernetes has slowly stripped Docker of most of its power, to the point that Docker can be totally removed from k8s clusters. And this will widely be the norm in a couple of years or so.


I wrote:

> which for most users today still relies on Docker

Was that inaccurate? Are most users manually running Kubernetes clusters with an alternative runtime these days?


Depends but most K8S distributions and managed offerings are no longer using Docker as the runtime.


I'm not sure that's true. GKE is still using Docker at least. It will move to containerd in the next few versions I expect.


No, it just made something that existed 18 years ago become more popular.

It's just a rootfs compressed tarball. Running on top of Linux technology.

Docker was always a brittle tool with lots of limitations and huge compromises.

Want a default like ${FOO_PORT:-1234}? Forget about it. Running random stuff as root? Why not?

It also made very easy to consume and produce junk/snowflakes.

Every docker image places its files/configuration/data in a special place. FHS? Who needs that?

People have been running, packaging, and distributing software more efficiently and more securely since way before docker even existed.

Another tool that made existing technology more popular is Kubernetes.

But it did a better job in this "container" space.


This is an example of "unpopular" post. As expected, it received no counter arguments.


There is an easy counter argument, which is that the post you were replying to talked about the impact Docker has had; it made no arguments about Docker's technical merit.

You replied by talking about your perceptions of Docker having a bad technical design.

These two points are entirely orthogonal, which is why I'd guess your comment got downvoted.

It is frequently the case that the most popular technology is not the one with the "best" technical design.

For example, the popularity of JavaScript or PHP. Neither is generally regarded as having good technical design, yet both have had a huge impact.


I know. This was a social experiment.

Thanks for your comment, I appreciate that! Both tools you mentioned are great and kinda relate to this experiment too.

The existence of docker and how it was developed and absorbed by people brought nice things too. The kernel had to work out some areas.

The tool itself is OK. I use something like this:

  .
  |-- Dockerfile
  |-- README.md
  `-- rootfs
      |-- etc
      |   `-- sample
      |       `-- sample.conf
      `-- usr
          `-- bin
              `-- sample

  5 directories, 4 files

But I'm not talking about that.

For me it's just interesting; you know, the tool itself doesn't matter, one can do that with any tool.

This experiment was to measure how humans entering an existing field will react... just like language has been shaped.

It was a bit sad that nobody was curious enough to talk about ${FOO_PORT:-1234} and why we should deprecate links, or about "defaults". It was a success, though.


What's the problem with ${FOO_PORT:-1234}?

Sure, it doesn't exist in the Dockerfile language, but you can achieve the same effect in other trivial ways.
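
For example (a sketch - "myserver" is a made-up binary), the usual workaround is to let an entrypoint script resolve the default instead of the Dockerfile:

  #!/bin/sh
  # entrypoint.sh - the shell supplies the default that Dockerfile syntax can't
  exec myserver --port "${FOO_PORT:-1234}"

and then ENTRYPOINT ["/entrypoint.sh"] in the Dockerfile.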


This seems like an embarrassing error in communications for Docker! I wonder what's going on behind the scenes.

There is one positive message for Docker coming out today, which is that they raised $35 million. Yeah, they didn't announce the valuation, so it is probably a down round, but still, getting $35 million to work on your core business is a good thing.

However, the first announcement that TechCrunch wrote about didn't include that at all!

Check out the timeline here. At 8:45 a.m. TechCrunch publishes the first article, about selling off the Docker Enterprise line. Then at 9:21 a.m. TechCrunch publishes a second article - https://techcrunch.com/2019/11/13/mirantis-acquires-docker-e... - with two really large pieces of news. Both that Docker raised the $35 million, and they replaced their CEO for the second time since May.

TechCrunch says: for reasons only known to Docker’s communications team, we weren’t told about this beforehand. It seems like they only learned the full news after publishing the original article, and quickly wrote a followup in the next half hour.

What's going on at Docker to be this confused in the message to the press? Chaos around the leadership change? Close to running out of money and they only raised the round at the last moment? Were they going to sell off the open source component to someone else, but that fell through? Or, boringly, maybe they just thought they clicked "send" on an email that they didn't. I'll keep imagining there's an exciting reason though.


It sounds like a full recap so it's not exactly the sort of fundraise you necessarily want to announce at all. Unless you made your other press release accidentally sound like you were going out of business.


Sorry, but what is Docker core business?


Docker Hub, probably.

It makes sense.


That's going to take a beating from GitHub's repository service, especially when GitHub Actions can automatically build containers and dump them in that repository.


Presumably it already has been taking a beating, from every cloud having this as well with their own lock-in benefits.


GitHub hasn't rolled it out yet, so the beating is yet to come.

If it works for me, I'm switching.


Maybe it signals it wasn't a done deal, or similar? But still, I agree that none of this makes sense. Why not just sell off all the bits? Like you, I'm wondering if that fell through.

A sad end to a onetime tech darling.

Has anyone secured the rights for the book yet?


Well, it's not an end yet. That $35 million isn't nothing! That's why it's a shame that this was messaged so weirdly, a lot of people are going to skim this news and conclude that Docker went out of business.

Plus, open source Docker is a really nice tool. Even if the company tanks, I hope somehow the open source community manages to keep it going, because it would be a shame if the Docker parts of people's infrastructure just decayed over time.


Ok, we've added $35M to the title above.


"Why not sell off all the bits?" is the only clear part of this. No one wants to acquire all of the baggage of a 10 year old company. Dozens of investors, two major pivots, many rounds of funding. Hence taking 90% of the company...the good stuff.


Might be a good idea to steer the rest of Docker (the open-source, less commercial parts) towards CNCF and/or Linux Foundation as a subsidiary. It would be a better fit, and probably grow better as a baseline.

The $35M could keep the organization running while it reinvents itself, though I'm not sure what constraints are attached to the existing funding, what revenue streams remain, or exactly what was and wasn't sold off.


This makes much more sense if you think of Docker as two distinct companies that were awkwardly sharing the same name and corporate entity, and are now being separated.

One makes developer tools, has a huge developer brand and community. It does not generate revenue except for Docker Hub which probably barely pays for itself.

The other sells enterprise products competing directly with Red Hat and VMware, and indirectly with the big cloud providers. It generates meaningful revenue, but probably flat growth, which, considering the huge amounts of VC money invested, makes it a failed business.

The investors probably decided that 1) Docker developer tools and brand still have potential, but 2) the enterprise business failed to deliver, so 3) they are jettisoning the latter and recapitalizing the former- essentially starting over.


Docker Hub probably makes no money. We pay monthly for it, but could easily switch to AWS ECR at this point at a cheaper monthly price. The Hub is nice in that it essentially has unlimited storage, but I suspect this might be an internal struggle for the company. We have no reason to go in and clean up 2-year-old images that are 1GB each. We generate images on each commit - trying to do as much diff reduction as possible, but if we change package.json - boom, puppeteer and everything else all over again. And that's quite often.


But if they're using a deduplicating storage backend, they can shrink down the actual storage on disk across all customers to something quite reasonable.


That only works if Dockerfiles are layered in such a way to maximize caching (many are not and naively pack many changes into a single layer). Yes there are block-level dedupes but that's a transparent feature of most storage systems and not a competitive distinguisher of Hub vs ECR/ACR/Quay.


I believe the person you replied to was commenting on the advantages to Docker, not the customer, and how (blocklevel is what I inferred) dedupe makes the quantity digestible to Docker's pocketbook.


I'm glad somebody invested in Docker again, but I'd like to understand why. What's their business model, once you remove the parts that have been making money in the past?


I don’t think they are betting on an existing source of revenue. They are betting that there are enough users of the free product, and a strong enough brand, that it’s worth trying to build a new business on top of those assets.


I agree that Docker never got branding right. But the whole thing makes even less sense when you see this nugget:

"Mirantis will keep the Docker Enterprise brand alive, though, which will surely not create any confusion."


Since the author of the article was too busy trying to be funny, I will do his job for him and explain why this makes sense.

Docker needs to keep its brand, obviously, or they have nothing to build a new business from.

But Mirantis is buying a business line called “Docker Enterprise”. That name can’t disappear overnight, it would be immensely confusing to customers, and impossible to pull off operationally. What you do in these situations is make an exception to the trademark exclusivity. Usually with a time limit. So, for example, Mirantis would have 2 years until they have to stop using the name “Docker enterprise”.

On the one hand that is confusing to analysts and journalists; on the other hand, developers don’t care about the enterprise product anyway, so to them it won’t be confusing. And this was probably the least messy option.


Docker naming was confusing all along. Think of Docker Swarm, which is 2 (3?) products.


Two products: "Classic" Swarm, and SwarmKit. SwarmKit is the thing we think of as trying to compete with Kubernetes, with a declarative model, and is what people mean by "Swarm". Very few people still use "Classic" Swarm, although it does still work.


Yes, Docker naming has often been confusing. No argument there.


> that were awkwardly sharing the same name

ah like Python 2 and Python 3!


> Update: for reasons only known to Docker’s communications team, we weren’t told about this beforehand, but the company also today announced that it has raised a $35 million funding round from Benchmark. This doesn’t change the overall gist of the story below, but it does highlight the company’s new direction.

This is weird

Edit: Link of announcement : http://www.globenewswire.com/news-release/2019/11/13/1946551...

Excerpt: SAN FRANCISCO, Calif., Nov. 13, 2019 (GLOBE NEWSWIRE) -- Docker today announced it has successfully completed a recapitalization of its equity to position it for future growth, and has secured $35 million in new financing from previous investors Benchmark Capital and Insight Partners. The investment will be used to advance developers’ workflows when building, sharing and running modern applications.


Can someone explain what "recapitalization of its equity" means?


It could mean a bunch of things, but oftentimes it means the previous shareholders were basically wiped out and the company was given a new lifeline to continue operating.


It means they were broke af and in a death spiral because nobody wanted to buy any shares of Docker at any worthwhile price, so instead they created a ton more shares and sold those, and now the organization has capital to do things that create value for all shareholders.


For clarity: Since a ton more shares have been issued, the previous shareholders own much, much less of the company than they once did.


And for clarity, this doesn't get them out of the death spiral; it is a key part of the concept! But maybe they generate value with their second wind.


It can also mean some shareholders voluntarily gave up certain rights/equity.


Technically it just means a significant change in the capital structure. In context usually something like: exchange a bunch of debt for new equity, typically with high dilution for existing shareholders.

So you avoid dying from that debt in the short term, gain a bit of breathing room to do necessary things (e.g. layoffs) and to fix what is broken in your business model before you start running out of money again. Maybe you are lucky and debt load was the biggest part of the problem - but probably not.


It doesn't really mean anything except: we have more money. You might read it as a signal that they didn't want to give details for whatever reason.


It means they were in debt. This helps resolve that so that they are basically paying employees instead of death-spiral debt payments.


I kinda wish Microsoft would just acquire the rest of the company and be done with it. It fits in very well with their Developers, Developers, Developers, Developers focus. Especially in the era of Open Source friendly MS.

VSCode, GitHub and Docker. Three peas in a pod.


What a turn-around Microsoft has had: if you'd suggested to me 10 years ago that something like Docker should be acquired by MSFT, I would have recoiled. Yet here I am in total agreement.


While I think CNCF/the Linux Foundation would be a better fit, I do agree that the rest should probably come under a more protective umbrella. Given the many options, MS is probably one of the better ones. I definitely wouldn't want to see Oracle step in, and while I appreciate Red Hat, they are much slower moving (since IBM's acquisition, and I'm not sure for the better).


My theory at work is that this is arbitrage. Docker wouldn’t sell to Oracle. So instead, Mirantis buys the Enterprise side and in 6 months will sell to Oracle. At which point everyone who has ever looked sideways at a Docker download will be subject to an Oracle licensing audit...


Red Hat has Quay and Podman/CRI-O, which are basically competitors to Docker.


Hyper-V containers and VM instances are already more than enough.


It kind of wouldn't make sense because Azure doesn't support deploying with Docker (it is in preview mode, but it's almost unusable).


> Azure doesn't support deploying with Docker

Huh? Not sure what you mean by this - Azure has a wide range of container support, including:

  - Azure Container Registry
  - Azure Container Instances
  - Azure Kubernetes Service
  - Azure App Service with Web App for Containers
I'm sure there is other stuff I've missed too, and I seem to recall there is some kind of container support for a new spin-off from Service Fabric.

There's also support for deploying Dockerfiles from Visual Studio and Visual Studio Code.


You can also run VS Code as a split-brain system, with the UI in your OS and the extensions and the filesystem inside a Docker container, so that you can, say, run VS Code extensions that rely on *nix utilities.

https://code.visualstudio.com/docs/remote/containers

It's pretty nifty.


That's really cool. I wonder if someone has made a docker-compose file that sets up "every" language's "best" LSP server so you can, with one command, get an IDE that can work with any repo.


I know at least with Web App for Containers, Microsoft explicitly suggests not using it in production because it's in preview mode. I've used Web App for Containers and it's almost unusable, and nothing has changed in the last 4 months.

I have to agree with Microsoft here: Azure is still not completely Docker-ready.


Anything MS does is about driving traffic to Azure and avoiding leaking traffic to AWS/GCP.

In this context, I am not sure how acquiring Docker would do that.


How are VS Code and GitHub about driving traffic to Azure?


I think VS Code and GitHub are mainly about branding and building a new reputation. Microsoft provides first-class development tools (VS Code and GitHub) to millions of developers every day now. Once this becomes the new normal to developers, Azure easily becomes a serious contender for running your applications as well.


First-class, built-in integrations.

While an open API is offered so anything can be integrated, nothing is going to beat custom built-in integrations for user experience. Also, built-in integrations are always a step ahead in terms of time to market. Even if only by a few weeks or months, this really matters.

The example I'd point out is integration of Typescript vs Flowtype in VSCode. The Flowtype plugin maintained by the community was a good effort, but was always quite a way behind the native Typescript integration. There are more reasons for this than just technical, obviously, but that's one of the main ones.

I think, having been a user of both, the awesome native VSCode integration was a huge factor in Typescript 'beating' Flowtype. It'd be a similar idea for Azure too.


For VSCode, I think that Microsoft is introducing extensions for specific MS services. For example, you can run ML experiments directly from VSCode.

I would expect VSCode to slowly become the UI for Azure services.

For GitHub, look at the steps from the acquisition forward. For example, GitHub Actions is CI/CD run on Azure. The next step would be a GitOps service that integrates with AKS. Etc.


> Next step would be a gitops service that will integrate with AKS

That means Weaveworks gets swallowed up soon then[1], with its Flux GitOps for Kubernetes[2].

[1] https://www.weave.works/

[2] https://fluxcd.io/


Something makes me feel that today a lot of hard-working employees lost all their stock options, or their hard-earned cash if they previously exercised them... Mirantis can't possibly have paid multiple billions for that, which means all common stockholders were likely wiped out as part of this "fire sale" (pure speculation on my side).


Probably, yes. Unless they turn it around in a big way and IPO one day, your typical rank and file is probably out of luck here.

The good news is they have a lot of engineering talent so if you’re a hiring manager then now is a good time to begin directing your recruiters to poach aggressively from there.


This seems like a bit of an unusual move at first glance; hopefully more details will come out about Docker's plans.

From this article it sounds like Docker is keeping Desktop and Docker Hub, neither of which make a lot of money (I'd have thought?), so I'm not sure what their plans are to develop those. You'd think that without the enterprise product line, they'd perhaps need to start monetising Docker Hub more...


Docker Hub does make money from paid private repo plans, but I'm not sure how much that is, or how many people are in the "newly restructured" Docker org.


Unless they are downsizing to 10-20 people, I don't see how that generates enough revenue to keep the lights on.

It's weird. Assuming Docker Enterprise was keeping the lights on, why would you sell your cash cow? Maybe the price was too good to turn down. But now Docker Inc finds itself in the same boat as other companies trying to monetize open source without a platform.

If this means we'll get to pay for Docker on the desktop AND it'll get improvements, I'm all for it. But it's a tough situation nowadays (nobody expects to pay for most developer tools).


Reading between the lines, it's pretty obvious that the enterprise business did NOT generate enough revenue to keep the lights on.


And what does exactly?


Funding from investors betting on the future value of a business built on the huge user base of Docker’s free products.


Docker Hub is hosted on AWS and Docker doesn't charge for egress. I wouldn't be too surprised if private registry revenue doesn't even cover AWS egress (even with huge discounts Docker likely has).

Other registries like Bintray or GitHub Registry charge $0.45-$0.50 per GB egress, while Docker Hub charges a much smaller flat fee.


Yeah, I know Hub has some revenue stream, but I'd be fairly surprised if it even covers the cloud bill they must get from all the free repos.

From my experience most companies using private repos use their cloud provider's ones for the IAM integration.


There's a bit more here, but it doesn't really say much: https://www.docker.com/blog/docker-next-chapter-advancing-de...


Sad to see this happen, but honestly their SaaS products (Docker Cloud, Docker Hub, etc.) were absolutely terrible.

The UI and UX felt like some half-assed intern rush job.

Terrible bugs around things like login and teams that just never got fixed.

As if no one really cared.


Can anyone explain what this means for the future of Docker the company?


Yes. Docker the company is gone. It is hollowed out. Docker Enterprise will morph into Mirantis' Kubernetes offering as added features.

Probably the rest of the company will turn into some sort of Kubernetes desktop dev tool.


Docker also announced a $35M raise today: http://www.globenewswire.com/news-release/2019/11/13/1946551...


I doubt that much will change. Docker itself already has a paid platform, Docker Hub, which generates revenue through the private registry. It sounds like Enterprise was a separate branch anyway, so it doesn't seem to affect Docker itself.


This news comes at a time when I discovered that Fedora 31, the latest release, doesn't support Docker anymore.


To give more background, it supports Podman[1], which behaves the same as Docker in many ways. But the support situation changed with Fedora 31 because of CgroupsV2[2]; however, that doesn't mean Docker CE _won't_ ever run on Fedora 31. Follow this GitHub issue[3] for progress; a quick sketch of the Podman/Docker CLI overlap follows the links below.

[1] https://podman.io

[2] https://fedoraproject.org/wiki/Changes/CGroupsV2

[3] https://github.com/docker/for-linux/issues/665
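
To make the "behaves the same as Docker" point concrete, here's a minimal sketch on Fedora 31 (the package and image names are the usual ones, but treat the exact commands as an assumption about your setup rather than gospel):

  # Podman's CLI is deliberately modelled on Docker's, so most commands map 1:1
  sudo dnf install -y podman
  podman pull fedora:31
  podman run --rm -it fedora:31 cat /etc/os-release   # runs rootless by default
  # many people simply alias it while Docker CE support on Fedora 31 gets sorted out
  alias docker=podman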


...the Docker CLI is part of the sale?

What exactly does that leave Docker the company with?


The desktop experience and Docker Hub, apparently. A.k.a. the good bits, imo.


Oh, the macOS app? I've never used that or the Windows app; do you still use the CLI to control them?

Docker Hub is kinda worrying for me, as GitHub, GitLab, GCP, Heroku, and AWS all offer container registries now (and Quay is open source).


The macOS and Windows apps automate the process of running Docker on a non-Linux OS. They set up a virtual machine, volume mounts, port forwarding, and so on, and the Docker CLI then allows you to just "docker run" an image and have navigating to "http://localhost:3000" just work.

Recently the app also gained the ability to run a Kubernetes cluster, which makes spinning one up on your machine to evaluate those workflows quite a lot easier.
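
For example, with Docker Desktop installed the flow is roughly the following (my-web-app is just a placeholder image, and the kubectl line assumes you've enabled the bundled Kubernetes in the Desktop settings):

  # -p publishes the container port on the host; the Desktop app's VM plumbing
  # is what makes http://localhost:3000 reachable from a host browser
  docker run --rm -p 3000:3000 my-web-app

  # once the built-in single-node Kubernetes is switched on in the settings,
  # the usual kubectl commands talk to it
  kubectl get nodes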


Docker hub and Docker Desktop.


Not really sure what got acquired here? Are they just taking the Docker Enterprise business? How can they acquire the Docker CLI? Isn't it open source?


They can acquire the team that's primarily building it. If they stay on board and keep focused on the Docker CLI, then that's effectively the acquisition of an open source project.


And they can take the copyright and the brand and the domain name and ....


Open source != no owner.


Moby is open source; I'm not so sure about Docker Engine and Docker CLI.


Docker CLI is open source: https://github.com/docker/cli

Docker Engine is open-sourced in its community edition: https://github.com/docker/docker-ce


LinkedIn profiles that mention Docker:

Mar 2015: 14,000

Mar 2016: 50,000

Dec 2017: 178,000

Nov 2019: 534,000

Still going up!


I've never even heard of Mirantis - curious to know if I'm alone in this, or if they're better known by others on HN?


Finally they found someone to get sold to. Only 750 customers; that seems low.

And no info on the price.


750 customers for an enterprise business is quite high. They are not counting Docker Hub customers which are probably in the tens of thousands.


Can we please not show yet another container ship every time there is news about Docker? I get it - all news articles need an image. But imagine being a non-tech person trying to follow the news and wondering...


Are there any particularly detailed/good anthologies out there about the founding/fate of Docker after Google opensourced Kubernetes?


Don't Docker Enterprise and Docker Desktop share code and expertise? Isn't this a loss for both of those projects?


It baffles me that software wonder Docker raises so little money compared to unicorns like WeWork or some SF startups.


I wonder why the investors weren't concerned. They've replaced their CEO for the second time since May.


Does anyone have a background story of Mirantis? Where's the money coming from?



When is Docker going to add the basic functionality to see whether a container image is out of date with respect to the registry, without having to re-pull the image?
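
For what it's worth, you can approximate this today by comparing digests, without pulling any layers. A rough sketch, assuming the image in question is nginx:latest and that you have skopeo installed (it's a separate tool, not part of the docker CLI):

  # digest of the tag as you last pulled it
  docker image inspect --format '{{index .RepoDigests 0}}' nginx:latest

  # digest the registry currently serves for that tag (no layers are downloaded)
  skopeo inspect docker://docker.io/library/nginx:latest | grep '"Digest"'

  # if the sha256 values differ, the local image is out of date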


Who keeps Docker hub?


Docker.


> Mirantis will keep the Docker Enterprise brand alive, though, which will surely not create any confusion.

Is this in jest?


This kind of reminds me of the crazy history of the Rolls-Royce brand.

https://en.wikipedia.org/wiki/Rolls-Royce_Motors


I took that line to be sarcasm.


I thought it was quite funny that they snuck in that line in the middle of a serious news article.


As far as I know, Mirantis doesn't use Docker for its own development...


I have regular panic attacks about git vs GitHub vs Gitlab. So confusing!


[flagged]


Why does something have to be going on there?

Isn't it possible that there exists a profitable tech company outside of the US that most of us have never heard of?


Well, their address is in California, but the bulk of their workforce seems to be in Russia. My confusion is mostly about whether this is an American company that has a ton of Eastern Europeans on staff or an Eastern European company with an address in California.


I understand the confusion; I am just not sure if any of this (location/nationality of workforce, location of offices) matters as long as they keep honoring the policies/contractual obligations of Docker Enterprise and are a law-abiding entity.


I've heard of Mirantis but legitimately forgot they were still in business. I think a good question is what happens when one dumpster fire gets put into another dumpster fire?


Also, in fairness to the original poster, from what I recall Mirantis at least used to really play up the bad-boy Russian tech company angle. Always thought it was an odd marketing play.


Is it just me, or does it seem that New York tech companies are really struggling?


Docker is headquartered in SF:

https://www.docker.com/company/contact

We're actually in the middle of a tech renaissance in NYC, especially for software infra: e.g. Datadog, MongoDB.


My bad. I think I got it mixed up with DigitalOcean. Still, WeWork, Stack Exchange... and MongoDB isn't particularly loved anymore.



