One last tidbit about Steve. Steve made his fortune when Concur was sold to SAP. So Docker, obviously, had to use Concur internally. We were told never to complain about Concur in Slack so that Steve would never see it. Well Steve, Concur is a flaming pile of garbage compared to modern alternatives. And that is where Steve was leading Docker. There are a lot of really good people left at Docker. Get rid of Steve, get rid of Steve's buddies, get rid of the sales leadership, and circle back to the product the team they have can build. I think Steve's departure implies he knew he'd never hit his goal of profitability this year. Bon voyage, Steve! Hopefully it's not too late for Docker.
I will say that Docker’s 2.9 on Glassdoor is very low for a tech company.
For example, if I attach a receipt, the "receipt is attached" toggle remains off - you still have to manually change it!
The "cost allocation" stuff is also an unnecessary waste of time, and things often seem to be off by a penny, which causes Concur to complain and prevents submission.
I could go on..
For such a large scale system, I just can't fathom the UX or performance.
Ideally, yes; rationally, given the incentives of the concrete economic system they exist in, no. Managers work for higher-level managers, or at the highest level for the capital owners of the business, not for their employees except to the extent that the employees are also capital owners of the business.
Managers aren't union reps for their subordinates, they are agents of capital. That's, literally, their job.
It absolutely does; just because capitalists are competing for labor doesn't mean that management switches from being agents of capital to agents of labor. Valuable, contested human resources are still resources, not owners.
> Also, the top-down structure is no longer preserved within the Agile's "self-organizing teams".
One of the most frequently reported problems with Agile in practice is that the idea of empowered, self-organizing teams is, even in organizations that pay lip service to Agile development, given only limited effect by management. In any case, that concept applies mainly to how teams deliver on business goals, not to setting business goals, so even ideally it would not prioritize staff opinions over business goals.
Yes, I generally agree, but it goes as follows:
1. The business owners care only about achieving their business goals.
2. To achieve the business goals the owners need to have good employees.
3. To hire and retain good employees, some business decisions should take into consideration the needs of the employees.
So even from a purely economic point of view there should be some balance between 1. and 3.
And overall your purely economic point of view seems too rigid to me. One example that comes to mind is the cultural shift happening at Microsoft. The company made many decisions with little apparent business sense, like open-sourcing many projects and providing free developer tools. But the effect is that developers like the company more. This has measurable effects, like Azure's success or hiring better employees. I think this is an example of a bottom-up success which doesn't fit your top-down viewpoint.
Also, <mild sarcasm warning> if you ship high quality working enterprise software on day one that actually delivers everything your customer needs, how will you bill them for a large “services” team engagement? It’s not unusual for this service money to make up a substantial portion of a mid to large enterprise software company’s revenues. All too often they are building features or fixing bugs that should have been in the original product as sold.
I’ve often sadly joked at my own work that if we shipped working software we’d probably make less money...
That's because “selling to enterprises” is a distinct competency from “making a working product”.
> I guess this is because the decision to use the software is made by managers who don't ever use it.
That's, often, an important part of the problem, but another, perhaps more significant, part is that enterprise-level constraints on software purchasing are often set by managers (or management workgroups with no single responsible party, or even directly by state or federal legislation and/or regulation from outside the purchasing agency) who are (or, in the case of groups, consist of people who mainly are) remote in both time and org-chart distance from the decision to purchase the software and from its actual use. This is perhaps most notoriously true in the public sector, where often the most critical competency for selling to an agency is the ability to navigate the acquisition policies applicable to that agency. In the most complex cases, those policies are driven by both state and federal legislation, by regulation from multiple state and federal control agencies, and by the internal policies of the agency actually doing the purchasing. But any large organization tends to have bureaucratic purchasing rules which, while usually well-intentioned and sometimes legitimately necessary to avoid even worse adverse effects, inadvertently reward vendors who are competent at navigating the particular bureaucratic maze over those who lack such competence, in a way that can at times outweigh competence in delivery.
Members of the software company's staff do week-long rotations working at that business just so they can get a real-life feel for the actual needs of the market, which has been really beneficial.
Belgian news has an item every half-year where some big-shot company type complains about how difficult it is to get (good) IT people. This gets thinned down to "there aren't enough IT people". The reality is that they refuse to pay for the quality employees they NEED, so those people work elsewhere, leave the company, or work remote.
This is the complaint every company I've worked for or met with has raised with me so far, which is kind of hilarious, as they all also unequivocally refuse to pay a reasonable wage.
Also, did you think about telling Steve Singh these comments, or just HR?
Color me confused.
> he didn't actually take any responsibility for his actions. He didn't take a pay cut
In my opinion, selling a PaaS was fundamentally the wrong call when they could have licensed an enterprise version of the engine and taken a commission on every server in existence. This is more a Microsoft or RHEL type business rather than a second rate Pivotal or OpenShift, which in themselves are being mullered by EKS/AKS in enterprise.
The whole pitch towards legacy apps was also a short-termist distraction when investment dollars were all going to cloud-native solutions. Legacy apps are a big business case, but getting companies to invest in their legacy estate is still a tough ask. This angle should have been secondary to competing where they were naturally strong.
1) Is there another company that benefited from that missed opportunity? No. K8S is free and it "makes money" in the sense that it's a great on-ramp for GCP, and therefore GCP is happy to subsidize it.
2) Being the "first" company to break into a new technology or opportunity is no guarantee of success. Look at AltaVista, and how it lost to Google in just a few years, even though AltaVista was already a huge company when Google started.
3) I am not sure that your proposed solution would have worked, at least in terms of making Docker successful. It is the obvious alternative, but hard to be certain that its outcome would have been a better one for Docker (the company).
Considering Docker literally developed the underlying technology, and a lot of the complexity and capabilities of Kubernetes come from its design, they had a tremendous advantage that was squandered.
Google clearly had a huge advantage too since they have been using containers for years. But Docker certainly was in a good place.
Is it though? Aside from cloud vendors (Azure/AWS/GCP) who's the "huge market"?
From memory, Docker and some of its engineers and managers were at loggerheads with Red Hat and Google (see rkt, systemd containers, podman, and k8s).
This forced Docker to commoditise its tech, or else be slowly boiled down into a commoditised component.
Signed, armchair warrior
Their CEO stepped down three years ago:
The new CEO has ONE expertise, which is winding down bankrupt tech companies:
As someone who maintains the container runtime underlying Docker (and contributes all over the stack), in my view there is a lot more innovative "core" engineering happening in the LXC/LXD camp than in the Docker camp. There are far more kernel patches coming out of LXC (and more kernel maintainers developing LXC) than have come out of Docker. And let's not forget, LXC came first to modern Linux containers. There is a lot of work going into Docker, but I guess I put more of an emphasis on OS engineering when determining who is doing more innovative engineering on systems tools.
(Yes, there is Kubernetes but that's not a Docker project. If anything, Swarm emphasises my point. LXD has clustering too and they support real live migration between cluster nodes -- though CRIU has historically been a bit hairy.)
Apple’s in a slightly different position in that they bank more profit than Sun/SGI ever did and are pretty aggressive on COGS, so if/when they lose their exalted position they will have a lot more room to maneuver. This is how Microsoft managed to survive its sag (decline is too strong a word) towards irrelevance and recover.
IBM was in the same situation as Microsoft, but though Gerstner managed to right the ship, his successors were not able to re-ignite growth (to mix metaphors).
> Sun is the loose cannon of the computer industry. Unable to see past their raging fear and loathing of Microsoft, they adopt strategies based on anger rather than self-interest. Sun’s two strategies are (a) make software a commodity by promoting and developing free software (Star Office, Linux, Apache, Gnome, etc), and (b) make hardware a commodity by promoting Java, with its bytecode architecture and [write-once-run-anywhere]. OK, Sun, pop quiz: when the music stops, where are you going to sit down?
Sun had all sorts of good tech, worth paying for, and couldn't figure out how to make people pay for it.
Sun had excellent engineers but no adults in the room focused on selling the tech or making money. They got drunk off the dotbomb cash influx. They saw that they needed to open-source Solaris to properly compete with Linux (and arguably they weren't too late) - but they didn't actually have a plan to make MONEY off that. You can't give away all your software while simultaneously trying to switch to x86 hardware at a time when x86 had already become a race to the bottom...
To be clear: Sun lost a microprocessor war first and foremost. In my opinion, Sun needed to respond to x86 by being even more iconoclastic than the company had the stomach for at the time: by buying AMD ca. 2004 and fighting the Intel cross-patent poison pill in court. So in the end, Sun's problem was arguably too many adults in the room, not too few...
There's always more to the (inside) story. Meaning I have no idea what's going on, so I should stay humble if I can't keep my mouth shut.
James Gosling shared one theory for the downfall of Sun: radioactive packaging of the hot new UltraSPARC-II chips cost the company billions.
I so wish Sun had survived. Jini, JavaSpaces, JXTA, grid computing... I recently had to do some AWS Lambda work (serverless & nodejs) and wanted to kill myself.
Thank you for sharing your views, theories. It's actually therapeutic.
> It was deeply random. Its very randomness suggested that maybe it was a physics problem: maybe it was alpha particles or cosmic rays. Maybe it was machines close to nuclear power plants. One site experiencing problems was near Fermilab. We actually mapped out failures geographically to see if they correlated to such particle sources. Nope. In desperation, a bright hardware engineer decided to measure the radioactivity of the systems themselves. Bingo! Particles! But from where? Much detailed scanning and it turned out that the packaging of the cache ram chips we were using was noticeably radioactive.
They had really good x86 hardware at a time when the post-dot-com-crash wave of companies was getting started, but the sales channels were limited to what you could charge on a credit card on their site, plus VARs and the like, which usually wouldn't sell to startups. Dell got a great deal of that business because they actually wanted to sell, and by the time these companies' hardware fleets got big enough for enterprise-sized orders, they'd long since mastered how to run Dell (or whatever) systems to provide a reliable service.
I don’t reckon any of them failed, they succeeded in an unexpected way.
Sounds like a great way to kill docker completely
Neither of these went down well. Red Hat had that market sewn up, and swatted Docker away like an irritating fly.
You seem to have gotten that mixed up. It was Red Hat that refused to support anything (including their RHEL-based images) on the Docker platform unless it was run on Red Hat's franken-Docker. It was Red Hat that chose not to play fair, not Docker.
For a good open source ecosystem, a commission isn't like rent, it is more like taxes.
There's a balance point where paying someone else to debug and ship fixes for your problems approximates Adam Smith's pin factory.
The value part of an enterprise open source system is in turning around fixes fast and debugging on behalf of a customer.
>> taken a commission on every server in existence
As someone who's worked with Rob before, he gets it that you can't really squeeze money out by force.
However, there'll be enough people who pay you to air-drop committers on inexplicable problems & otherwise mostly pay them well to do what they already like doing - ship open source to the community.
But the enterprise version is the open source version. Red Hat just didn't provide the binaries; now they even provide the binaries (since they acquired CentOS). Since White Box Linux, CentOS, and Scientific Linux were started (which was pretty quickly after Red Hat launched enterprise Linux), they have been selling per-machine support and 'blame us when it does not work' licenses.
A similar approach might have worked, but it is not a given. Red Hat and SUSE arrived on the market when there were no big open-source support companies. Once containerization turned out to be a good idea, the other open-source companies could quickly adopt the same tech and provide support contracts for it (which they have done).
CentOS can be used for whatever you wish, but that's not a RH binary of course.
That said, it is in RH's best interest to have a free version of RHEL that customers can try out and migrate their workloads to, precisely because it's easy to migrate them to RHEL. So yes, in that sense, there's some relationship.
You're correct though that there's a lot more relationship than I previously thought. Thanks for the link! I'll update my knowledge :).
I'm sure Docker did miss an opportunity to reap profits, but I really don't think there's an easy answer as to how they might have done that.
The stories I could tell... Needless to say, looking back at the wasted opportunity is truly depressing.
So many avenues to (huge) success, shut down hard by the wrong attitudes towards the rest of the market.
I don't know much about it beyond my own perception, but I think the cloud vendors are doing as much as possible to seem neutral/safe while usurping business logic at every turn. Things like an API gateway that's specific to a cloud provider seem like nothing more than super-risky vendor lock-in to me. Another example is all the serverless rage, with tech like Lambda@Edge and its equivalents. Who's realistically going to move that type of stuff between providers once they hit a certain scale?
However, I don't have that same cynical view of Docker. For me, Docker has become a safe bet of sorts. I know that anything running in a Docker container is pretty safe in terms of portability between cloud providers, so it becomes the default choice for me whenever possible.
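That portability bet is easy to make concrete. A minimal, hypothetical Dockerfile like the one below (the app file and base image are invented for illustration) produces one artifact that runs unchanged on any provider able to run containers, which is exactly the "safe bet" being described:

```dockerfile
# Hypothetical minimal image: build once, run on any cloud's container service.
FROM python:3.11-slim
WORKDIR /app
COPY app.py .
CMD ["python", "app.py"]
```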
That's great to hear as it means it has served its initial intended purpose.
We take it for granted now, but the mere concept of doing this so easily in 2014 was really exciting stuff, especially when the workflow of build->publish to N cloud services is measured in minutes/seconds not hours/days.
What kind of collaboration could have taken place beyond that?
Anyone familiar with how the swarm and k8s politics played out would know it was not a collaboration at all.
I actually would have liked to see Docker not give up on Swarm so readily. There's absolutely room for a something simpler than k8s.
My guess is that the truth is in the middle :)
In fact you could say Kubernetes was evolved from Docker, much more than it was evolved from Borg.
I assert to this day that the only conclusion to choosing Docker Swarm is to rewrite a worse version of Kubernetes.
The initial project the team worked on was called Beam. After about 6 months and no progress to show, the conversations referenced above happened and this is the result..
Kubernetes launched at the first DockerCon, early 2014. Swarm was launched in beta in Dec 2014.
So, roughly 1 year late to market, 6 months after the initial K8s launch.
But yes agreed this was all completely frustrating at the time, and completely depressing now.
Of course we actively chose to not collaborate on it, even after it matured a bit and had a good community.
Ultimately the valuable bit is the vehicle or what you do with it, not the engine.
Folks get too focused on the deeply primitive tools and miss the larger picture.
In #devops is turtle all way down but at bottom is perl script. - @devops_borat
... via http://github.com/globalcitizen/taoup
Digital had VMS, and lost slowly in the marketplace until they lost Dave Cutler to Microsoft, then VMS started collapsing. Cutler showed what he could achieve with decent backing and a company not wedded to selling expensive proprietary hardware.
Digital had the Alpha processor.
Digital had the StrongARM processor, the fastest commodity ARM chip on the market, and sold it to Intel because what future did ARM have anyway?
Digital had AltaVista.
There's plenty more, but those are some highlights.
Kubernetes isn't that great but it's less of a disaster than the current docker ecosystem.
I'm convinced another player will come in taking the best of both and give us something reasonable to work with. Maybe in 5 years or so...
Istio looks promising. The missing piece has always been convenient inter-container communication. I think something with mesh and gRPC baked into the core for strong service contracts would blow everyone else away.
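As a sketch of what "strong service contracts" could mean in practice: with gRPC, the interface between two containers is declared in a protobuf IDL, so it is explicit and machine-checkable rather than an informal JSON convention. All names in this example are invented for illustration:

```protobuf
// Hypothetical contract between two services in a mesh: the schema is the
// single source of truth, and both client and server stubs are generated
// from it, so interface drift is caught at build time, not in production.
syntax = "proto3";

package orders.v1;

service OrderService {
  // Fetch a single order by its identifier.
  rpc GetOrder (GetOrderRequest) returns (Order);
}

message GetOrderRequest {
  string order_id = 1;
}

message Order {
  string order_id    = 1;
  int64  total_cents = 2;
}
```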
Don't even get me started. Leadership wanted Docker to sell "MTA" (Migrate Traditional Apps), and Docker probably paid a few million to the middleware folks sitting in a room developing "archetypes", instead of building a TCO analysis to position with the customer showing the time and money containerizing those apps would save. Instead, the push from sales leadership was "sell them on the value". What value? We couldn't prove we were providing any unless we did the containerization. And in the cases where we could get services to handle a few freebies, they were always the easy ones that didn't truly show anything. It was and is a horrible miss. I'm not sure if that's what they're still focused on, but it didn't resonate with customers or prospects.
Right before I left, Docker was moving into positioning Docker Enterprise for Desktop. They had ridiculously high limits on seat sales and were going to charge ridiculous premiums per seat. Clearly nobody in leadership realized that the UI provided to bootstrap some simple framework wasn't worth that. But nope... So many wasted resources on products that add no value.
What exactly would an enterprise version offer that would make someone pay a per-server licensing cost?
Orchestration. Compose/Swarm, which Google gave away for free as k8s (in the interest of completeness, I'll add that HashiCorp's Nomad will also orchestrate containers as well as binaries). The idea of Docker monetizing died with that action by Google (and to a lesser extent HashiCorp, although Nomad does have an enterprise version which corps are paying for and using). Google needed something to slow down AWS adoption, and Docker was collateral damage. The rest of the necessary tooling for container management (build, pull, push, local dev management, registries) is fairly trivial.
Now anyone can run Kubernetes (or Nomad), either self-hosted or with a hosted provider, which is great for the industry but not so good for Docker.
 https://github.com/p8952/bocker (Docker implemented in around 100 lines of bash)
Exactly the same model as Red Hat's RHEL; incidentally, Red Hat is the only profitable open-source company ever.
The elephant in the room for Docker is that there's not an effective way to monetise it - Docker Hub subscriptions clearly aren't going to cut it for a company with lots of investment and hundreds of employees.
Prior to all of the Docker as a Service endpoints getting launched, there was an internal pitch for a Docker as a Service program aimed at all of the cloud vendors as part of a Docker ubiquity/available everywhere strategy.
The pushback internally was that we should create these integrations and sell them to the cloud vendors. That was never going to work, but about 3 years past the mark we started talking publicly about 'Docker Editions.' A little late to the party by then...
The reality was bifurcation of effort and lack of alignment.
There are so many bugs I've just stopped caring. Like many devs, I have a bash alias to blow everything away when the daemon gets funky. And it behaves differently enough between platforms that, at least at my company, we've given up on running the same containers between different OSes.
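A sketch of the kind of "blow everything away" bash helper being described (written as a function rather than an alias so it also works in scripts; the name is invented, and note it deletes ALL local containers, images, volumes, and networks):

```shell
# docker_nuke: reset the local Docker daemon state when it gets funky.
# Use with care: this removes every container, image, volume, and network.
docker_nuke() {
  docker ps -aq | xargs -r docker rm -f        # force-remove every container
  docker system prune -af --volumes            # drop images, networks, volumes
}
```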
I really think the endgame will be a VM-based container system backwards-compatible with Docker. At the VM level it's far easier to deal with kernel-level incompatibility, not to mention the constant security bugs that come from sharing a kernel. Somewhat ironically, MS is already doing this in a way, by hosting a Linux kernel in a VM for their Docker implementation. They're only a step away from launching a kernel for each container.
With how cheap RAM is these days, sharing a kernel is pointless. You save maybe 50 MB of RAM per container in return for maddening implementation complexity and worse performance.
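The RAM claim above is easy to sanity-check with back-of-envelope arithmetic, taking the ~50 MB-per-kernel figure cited in the comment as a rough assumption:

```python
# Back-of-envelope cost of giving each container its own kernel instead of
# sharing one, using the ~50 MB/kernel figure cited above (an assumption).
def extra_ram_mb(containers: int, per_kernel_mb: int = 50) -> int:
    """Extra RAM (in MB) consumed by per-container kernels vs. a shared one."""
    return containers * per_kernel_mb

print(extra_ram_mb(100))  # 100 containers -> 5000 MB, i.e. ~5 GB of kernels
```

Even at a hundred containers per host, the overhead is a few gigabytes, which is the commenter's point about RAM being cheap.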
I don't think ram use is the reason to share the kernel - the main advantages are startup time and IO performance.
Startup time is definitely a consideration, but you could lower it to container levels in many cases by keeping a couple of VMs "warm" but idle.
There's no way virtualized access to IO should be better than the host, so the containers should be no worse than that.
With SR-IOV, each VM gets its own network stack, and it's handled in hardware by the NIC.
VM networking is as fast as the host's. Container networking is not.
> Kata Containers is an open source community working to build a secure container runtime with lightweight virtual machines that feel and perform like containers, but provide stronger workload isolation using hardware virtualization technology as a second layer of defense.
Since Docker just uses Linux namespaces under the hood, I assumed it would have different behaviors between Linux, Linux in a VM, FreeBSD Linux compat, etc. Maybe VMs got us used to a level of compatibility that Docker can't provide.
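The "just Linux namespaces under the hood" point is directly observable: on any Linux box, `/proc/self/ns` lists the namespaces the current process belongs to, which are the same primitives Docker composes into a container. A small sketch, guarded so it degrades gracefully off Linux:

```python
# Docker's isolation building blocks are plain Linux kernel namespaces.
# On Linux, /proc/self/ns exposes the namespaces of the current process.
import os
import sys

def current_namespaces() -> list:
    """Return the namespace names of this process, or [] off Linux."""
    if sys.platform != "linux":
        return []  # namespaces are a Linux kernel concept
    return sorted(os.listdir("/proc/self/ns"))

print(current_namespaces())  # on Linux, e.g. includes 'mnt', 'net', 'pid', ...
```

A containerized process would show different namespace IDs for entries like `pid` and `net` than the host, which is the entire isolation mechanism; nothing like this exists on FreeBSD's Linux compat layer, hence the behavioral differences.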
This is pretty much what ChromeOS does to run Linux apps with Crostini. IMO it's pretty fantastic.
I can't stand this kind of marketing drivel; phrases like "increasing their innovation cycles by a factor of more than 10X" are not in themselves quantifiable, and without that or context they can be misleading.
I don't (at all) disagree that the more widespread adoption of containerised application deployment and related workflows has sped up the release and application life-cycle, but I think people should not aspire to talk about such benefits with such 'greasy' wording (especially when it comes to the word 'innovation').
And if I were leading Docker, I'd look in the direction of a built-in virtual-hosting solution, e.g. jwilder's nginx-proxy and dnsmasq. I think that is what's still stopping them from mass adoption.
Having said that, they didn't appear to be doing too well with that either.
I'm not even sure what I'd do. Google and k8s raised the bar so much that it overshadowed Docker the company. If it ceased to exist, Google would take up development just to drive GCP growth.
2. The interesting part of containers is the tooling people have built around them to make it easy to ship and run software stacks.
3. Building all this shit from primitives - downloading your own istio & k8s - is painful and will waste a lot of time and frustrate people.
Go get an opinionated k8s+containers solution that you can plumb into your dev tooling and that will let you "commit code, spin up container fleet" easily, because that's the value: reliably increasing velocity.
OpenShift is one example I've worked with and like. There are others. Don't waste your time and money fucking around with individual bits of the stack.
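On such a platform, the "commit code, spin up container fleet" step ultimately reduces to a manifest the tooling generates for you. A minimal, hypothetical Kubernetes Deployment (image name, registry, and labels are placeholders) is all it takes to declare a small fleet:

```yaml
# Hypothetical Deployment: declare 3 replicas of one container image and let
# the cluster keep that fleet running; the platform fills in the specifics.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.example.com/web:latest   # placeholder image
          ports:
            - containerPort: 8080
```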
I'd like to move these applications to a common platform, to reduce some of the maintenance burden, introduce monitoring, perform security audits, etc.
I vaguely imagine this platform as being self-service, where the user creates a project and points it to a git repository with a docker-compose.yml file, and then a minute later the service is reachable at https://projectxyz.____.edu.
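For that flow, the user-supplied file could be as small as this hypothetical docker-compose.yml (service names, ports, and the database URL are all invented for illustration); the platform would read it from the repository and wire up the routing and TLS itself:

```yaml
# Hypothetical docker-compose.yml a project would commit for the self-service
# platform described above: one web service built from the repo, one database.
version: "3"
services:
  web:
    build: .
    ports:
      - "8080:8080"
    environment:
      - DATABASE_URL=postgres://db:5432/app   # invented example value
  db:
    image: postgres:15
```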
I work at Red Hat, happy to answer questions. We also just released OpenShift 4.0, which brings in all the features from the CoreOS acquisition, like single push button kubernetes and OS upgrades.
For the record, I work on the OCI specs (and maintain runc and image-spec) and would really love it if people actually used the OCI formats and we could freely innovate in an open spec. But that's not really the world we live in.
(I'm aware containerd supports OCI images and most folks now support the runtime-spec. But how many people use containerd directly? Not to mention that since the OCI distribution-spec is creeping along so slowly everyone still converts back to Docker to actually publish the damn things.)
Docker / Kubernetes / Istio.
You need all three for a good micro-service platform.
More complex != better.
I.e. install and standardize on both, but start using features as needed (of course).
I.e. I would rather find out about any architecture issues with Istio sooner than try to bolt it on top of some Kubernetes-only app.
Build serverless workloads and run them on whatever compute is available.
The money you spend on being provider-independent is the very money you save by going serverless.
Leaves the question: Is the risk worth the money saved?
I don't know Singh's reasons behind getting out, but Bearden doesn't seem to play the short game often, he's more of a long-term Open Source strategist.
Kubernetes already doesn't need Docker to run, so migrating wouldn't be a nightmare. There are other image registries everyone can use for the base images.
CoreOS did a good thing by pushing for the OCI as it's made sure this exact situation isn't a disaster.
It'll get picked up.
So using Docker locally is a major pain in the ass for me, even though I use probably the most widely used enterprise virtualization stack. Let that sink in for a while.
Totally agree about the Docker website - it's just a shameful torrent of marketing bollocks.
A company (Docker) selling a product you don't need, because you can simply package your application and its configuration as an OS package, has no future; it's selling snake oil.