Steve Singh stepping down as Docker CEO (techcrunch.com)
211 points by akulkarni on May 8, 2019 | 190 comments



I worked for Docker under Steve. Steve is a nice guy, but a horrible CEO for any current tech company. When I left Docker I told HR how disappointed I was in his response as CEO to the recent layoffs. He went on and on about how this was about saving money to get to profitability in 2019, and he blamed himself, as he should have, for missing that target. But then he didn't actually take any responsibility for his actions. He didn't take a pay cut; he had just hired his buddy Neil Charney for marketing, and that guy should exit stage left right behind Steve. There's lots of leadership at Docker that's bad, and Docker is very misdirected internally. For an entire quarter they kept telling us Microsoft was the path forward. Except most native Windows shops aren't doing a ton of containers. And if they are, they're moving to Azure because of the credits. Then Microsoft kind of sort of vanished from the push. And it was kind of sort of IoT - but Docker Enterprise doesn't run on ARM and they were so far behind. Docker has bad leadership and nobody is steering the ship towards a destination, unfortunately.

One last tidbit about Steve. Steve made his fortune off Concur being sold to Amex. So Docker, obviously, had to use Concur internally. We were told never to complain about Concur in Slack, so that Steve would never see it. Well Steve, Concur is a flaming pile of garbage compared to modern alternatives. And that was where Steve was leading Docker. There are a lot of really good people left at Docker. Get rid of Steve, get rid of Steve's buddies, get rid of the sales leadership, and regroup around the product that the team they have can build. I think Steve's departure implies he knew he'd never hit his goal of profitability this year. Bon voyage, Steve! Hopefully it's not too late for Docker.


I don’t know about Docker but I concur about Concur. It’s 1990s technology that’s only a quarter step above mainframe green screens.

I will say that Docker’s 2.9 on Glassdoor is very low for a tech company.


We have to use Concur at the megacorp where I work - it truly is awful. It's slow, clunky, and often just gets in your way.

For example, if I attach a receipt, the "receipt is attached" toggle remains off - you still have to manually change it!

The "cost allocation" stuff is also an unnecessary waste of time, and things often seem to be off by a penny, which causes Concur to complain and prevents submission.

I could go on..

For such a large scale system, I just can't fathom the UX or performance.


For some reason many "enterprise" software products are clunky and buggy. I guess this is because the decision to use the software is made by managers who don't ever use it.


I think it is more to do with the fact that "enterprise" software vendors have done the boring legwork of ticking all of the necessary multi-jurisdiction legal tick-boxes, implementing that one insane export format, providing reports required just-so by a given regulator, and so on. The value they provide to the business is outside of the software in that sense.


It's good that the managers care for the higher-level "business" goals, but shouldn't they care for their subordinates more? In the end it also contributes to a business metric - employee happiness. When I switched from a corporate job to a startup job, the fact that I went from corporate Outlook and proprietary issue-tracking software to GMail and Github made me happier.


I suppose my point is ultimately that they're not goals, they're requirements, and they are often imposed from the outside by legal or other official bodies. In that context caring doesn't really come into it. In terms of the vendors of shitty enterprise software (and attempting to answer your original question), if you've spent a bunch of time, money and effort making the Venn diagram of certifications and industry accreditations converge into your product, and implementing niche proprietary licensed file formats, you may not have the will, cash, or capability to produce quality software with flavour-of-the-month UX too.


> It's good that the managers care for the higher-level "business" goals, but shouldn't they care for their subordinates more?

Ideally, yes; rationally, given the incentives of the concrete economic system they exist in, no. Managers work for higher-level managers, or at the highest level for the capital owners of the business, not for their employees, except to the extent that the employees are also capital owners of the business.

Managers aren't union reps for their subordinates, they are agents of capital. That's, literally, their job.


This sounds like early 20th century capitalism... I don't think it's very applicable to the current programming industry, where companies have to fight to retain good employees. Also, the top-down structure is no longer preserved within Agile's "self-organizing teams". So in this landscape I think it's very reasonable to take seriously what programmers think about a given piece of internal software.


> This sounds like early 20th century capitalism... I don't think it's very applicable to the current programming industry, where companies have to fight to retain good employees.

It absolutely does; just because capitalists are competing for labor doesn't mean that management switches from being agents of capital to agents of labor. Valuable, contested human resources are still resources, not owners.

> Also, the top-down structure is no longer preserved within the Agile's "self-organizing teams".

One of the most frequently reported problems with Agile in practice is that the idea of empowered, self-organizing teams is, even in organizations that give lip service to Agile development, given only limited effect by management. In any case, that concept applies mainly to how teams deliver on business goals, not to setting business goals, so even ideally it would not prioritize staff opinions over business goals.


> just because capitalists are competing for labor doesn't mean that management switches from being agents of capital to agents of labor. Valuable, contested human resources are still resources, not owners.

Yes, I generally agree, but it goes as follows:

1. The business owners care only about achieving their business goals.

2. To achieve the business goals the owners need to have good employees.

3. To hire and retain good employees, some business decisions should take into consideration the needs of the employees.

So even from the purely economic point of view there should be some balance between 1. and 3.

And overall your purely economic point of view seems too rigid to me. One example that comes to my mind is the cultural shift happening at Microsoft. The company made many decisions with little apparent business sense, like open sourcing many projects and providing free developer tools. But the effect is that developers like the company more. This has measurable effects, like Azure's success or hiring better employees. I think this is an example of a bottom-up-built success which doesn't fit your top-down viewpoint.


There’s some truth in this, but my overwhelming sense is that the previous poster is correct too. Enterprise software vendors almost never sell to their actual end users. As long as that is true, no one should be surprised that the UX on most enterprise software products is truly awful.

Also, <mild sarcasm warning> if you ship high quality working enterprise software on day one that actually delivers everything your customer needs, how will you bill them for a large “services” team engagement? It’s not unusual for this service money to make up a substantial portion of a mid to large enterprise software company’s revenues. All too often they are building features or fixing bugs that should have been in the original product as sold.

I’ve often sadly joked at my own work that if we shipped working software we’d probably make less money...


> For some reason many "enterprise" software products are clunky and buggy.

That's because “selling to enterprises” is a distinct competency from “making a working product”.

> I guess this is because the decision to use the software is made by managers who don't ever use it.

That's, often, an important part of the problem, but another, perhaps more significant, part of the problem is that enterprise-level constraints on software purchasing decisions are often made by managers (or management workgroups with no single responsible party, or even directly by state or federal legislation and/or regulation from outside the purchasing agency [0]) who are (or, in the case of groups, consist of people who mainly are) remote in time and org-chart distance from both the decision to purchase software and the actual use of the software. This is perhaps most notoriously true in the public sector, where often the most critical competency for selling to an agency is the ability to navigate the acquisition policies applicable to that agency, which in the most complex cases involve policies driven by both state and federal legislation and regulation by multiple state and federal control agencies as well as the internal policies of the agency actually doing the purchasing. But any large organization tends to have bureaucratic purchasing rules which, while usually well-intentioned and sometimes legitimately necessary to avoid even worse adverse effects, inadvertently reward vendors with competence in navigating the particular bureaucratic maze over those that lack it, in a way that can at times outweigh competence in delivery.


The company I work for provides software to a very specific enterprise market. One of the most insightful moves the owner/CEO made was starting a business that uses the software right alongside the software company itself (in the building next door).

Members of the software company staff do week long rotations working at that business just so they can get a real life feel for actual needs of the market, which has been really beneficial.


Also execs (both buyers and sellers) choose to underinvest over time.


I will say that rating is likely very close to how I'd rate the company overall during my short tenure there.


Same experience at another megacorp; hated it. The cynic in me has the feeling they use this to turn people off from getting reimbursed - the worse the UX and tech, the less megacorp has to pay out in expenses.


Low marks on Glassdoor are pretty much the norm in European tech companies, especially in countries like Germany.


Maybe they are actually crap. Many seem to be running a sweatshop model - long hours, low pay (yes, really - that's a big difference from Silicon Valley) and stupid perks.


Seconded. I've seen quite a few Belgian and German (tech) companies that don't value good (tech) employees. They'd rather pay slightly below market rate and cheap out on everything. Somehow these companies keep going. People's standards here are low.

Belgian news has an item every half-year where some big shot company boy complains about how difficult it is to get (good) ITers. This gets thinned down to "there's not enough ITers". Reality is that they refuse to pay for the quality employees they NEED, so those work elsewhere/leave the company/work remote.


> "there's not enough ITers"

This is the complaint every company I've worked for or met with so far has leveled at me, which is kind of hilarious as they all also unequivocally refuse to pay a reasonable wage.


Small correction: Concur was sold off to SAP, not Amex. Although we all expected it to be sold off to Amex for a long time :)


What are the good alternatives to Concur? I don't mind it (as far as enterprise software goes)

Also, did you think about telling Steve Singh these comments, or just HR?



Expensify


Doesn't Expensify have a landing page?! https://www.expensify.com

Color me confused.



> Steve is a nice guy

> he didn't actually take any responsibility for his actions. He didn't take a pay cut


I was implying that he was nice personally, as in some CEOs won't give you the time of day. You'd expect someone's actions to align with their personality in a case like that.


Has there ever been a bigger missed opportunity in business than Docker managed to preside over? They could have created a huge business, but the product and go to market strategy has been awful.

In my opinion, selling a PaaS was fundamentally the wrong call when they could have licensed an enterprise version of the engine and taken a commission on every server in existence. This is more a Microsoft or RHEL type business rather than a second rate Pivotal or OpenShift, which in themselves are being mullered by EKS/AKS in enterprise.

The whole pitch towards legacy apps was also a short-termist distraction when investment dollars are all going to cloud native solutions. Legacy apps are a big business case, but getting companies to invest in their legacy estate is still a tough ask. This angle should have been secondary to competing where they are naturally strong.


I agree it was a missed opportunity, but:

1) Is there another company that benefited from that missed opportunity? No. K8S is free and it "makes money" in the sense that it's a great on-ramp for GCP, and therefore GCP is happy to subsidize it.

2) It's never easy to be the "first" company to break into a new technology or opportunity. Look at AltaVista, and how it lost to Google in just a few years, even though AltaVista was already a huge company when Google started.

3) I am not sure that your proposed solution would have worked, at least in terms of making Docker successful. It is the obvious alternative, but hard to be certain that its outcome would have been a better one for Docker (the company).


There are dozens of companies providing kubernetes with various levels of management, and there are also many companies selling addons. It's a huge market, and Docker captured almost none of it.

Considering Docker literally developed the underlying technology, and a lot of the complexity and capabilities of Kubernetes come from its design, they had a tremendous advantage that was squandered.

Google clearly had a huge advantage too since they have been using containers for years. But Docker certainly was in a good place.


> It's a huge market, and Docker captured almost none of it.

Is it though? Aside from cloud vendors (Azure/AWS/GCP) who's the "huge market"?


The cloud market is pretty darn huge, why are you dismissing it? Aside from the public cloud vendors, containers are very much a thing in the private cloud space too (OpenShift, Pivotal, etc).


Exactly. A good model is Puppet Labs: they have an enterprise version and a go-to consultancy. They seem to be surviving, I hope, in this age of the cloud.

From memory, Docker and some of its engineers and managers were at loggerheads with Red Hat and Google (see rkt, systemd containers, podman, and k8s). This forced Docker to commoditise its tech, or rather Docker was slowly boiled down into a commoditised component.

Signed, armchair warrior


> Exactly. A good model is Puppet Labs: they have an enterprise version and a go-to consultancy. They seem to be surviving, I hope, in this age of the cloud.

Their CEO stepped down three years ago:

https://www.oregonlive.com/silicon-forest/2016/09/luke_kanie...

The new CEO has ONE expertise, which is winding down bankrupt tech companies:

http://thedronegirl.com/2018/09/14/airware-closing/


Huge? Don't think so, not now. Name me one of these companies with more than $100M in revenue per year. None. It's still a tiny market.


FYI: AltaVista lost to Inktomi, which then ceded to Google.


> Has there ever been a bigger missed opportunity in business than Docker managed to preside over?

Sun Microsystems


Sun made $200 billion of revenue over 28 years. Yes, Sun missed opportunities -- but (as someone who worked at Sun during the internet boom) Sun definitely took advantage of plenty of them. Or are you making the case that Sun's technological output was so substantial that the $200 billion over the nearly three decades represents a "bigger missed opportunity in business than Docker"?


I would argue that Sun's innovative output was a much larger missed opportunity than anything Docker has developed (then again, hindsight is 20-20). Yes, Docker has a lot of hype and has "brought containers to the masses" but I would argue it's nowhere near as revolutionary as ZFS/Zones/DTrace/etc. If I had been born 10-15 years earlier, I would've hoped to work for Sun. I've never wished to work for Docker.

As someone who maintains the container runtime underlying Docker (and contributes all over the stack), in my view there is a lot more innovative "core" engineering happening in the LXC/LXD camp than in the Docker camp. There are far more kernel patches coming out of LXC (and more kernel maintainers developing LXC) than have come out of Docker. And let's not forget, LXC came first to modern Linux containers. There is a lot of work going into Docker, but I guess I put more of an emphasis on OS engineering when judging who is doing more innovative engineering on systems tools.

(Yes, there is Kubernetes but that's not a Docker project. If anything, Swarm emphasises my point. LXD has clustering too and they support real live migration between cluster nodes -- though CRIU has historically been a bit hairy.)


It always felt to me like Sun had great ideas and vision but was too early or idealistic. Seems like they missed out on at least another $200 billion in revenue for lack of follow-through. Sun Grid is the AWS that never was. Chromebooks are the new Sun Rays.


The tragedy of Sun is that they missed the opportunity to smite System V.


In 1999 I wanted to buy a Sun SPARCstation 5 (I worked with those). It was fabulous how that machine never blocked: super stable, and fast. At that time I had a Linux PC box (Red Hat) at home with IDE/PATA (if I remember correctly), and it was really difficult to get that hardware to perform.


They were good machines, but they were stupidly expensive at that time. The same with SGI. I remember being quoted $18,000 for the equivalent amount of memory I had just put in a Linux box of my own for about $300. Sure, it wasn't really apples to apples... but still. Intel-based machines were eating their lunch for good reason.


They should have targeted a premium around what Apple charges for their hardware, not the absurd multiple they were charging.


Both Sun and SGI had business models built around premium margins and couldn't adapt to the mammals underfoot. This phenomenon is one of the things that The Innovator's Dilemma got right.

Apple’s in a slightly different position in that they bank more profit than Sun/SGI ever did and are pretty aggressive on COGS, so if/when they lose their exalted position they will have a lot more room to maneuver. This is how Microsoft managed to survive its sag (decline is too strong a word) towards irrelevance and recover.

IBM was in the same situation as Microsoft, but though Gerstner managed to right the ship, his successors were not able to re-ignite growth (to mix metaphors).


Yeah, anything was better than ISA and IDE. But a PC with PCI and SCSI was much closer to SPARC/Alpha/MIPS/etc. performance at 1/Nth the price.


No offense to all the great people working at Docker over the years-- but after they came to see us right after they raised their series B, it immediately occurred to me that they had hired _all_ of the wrong people. I don't recall having had that visceral a reaction to any other company. I think from the beginning they didn't really have a plan, and it really showed IMO.


Xerox PARC!


Do you want to elaborate?


https://www.joelonsoftware.com/2002/06/12/strategy-letter-v/ from 2002 talks about it a bit:

> Sun is the loose cannon of the computer industry. Unable to see past their raging fear and loathing of Microsoft, they adopt strategies based on anger rather than self-interest. Sun’s two strategies are (a) make software a commodity by promoting and developing free software (Star Office, Linux, Apache, Gnome, etc), and (b) make hardware a commodity by promoting Java, with its bytecode architecture and [write-once-run-anywhere]. OK, Sun, pop quiz: when the music stops, where are you going to sit down?

Sun had all sorts of good tech, worth paying for, and couldn't figure out how to make people pay for it.


>Sun had all sorts of good tech, worth paying for, and couldn't figure out how to make people pay for it.

Sun had excellent engineers but no adults in the room focused on selling the tech or making money. They got drunk off the dotbomb cash influx. They saw that they needed to open source Solaris to properly compete with Linux (and arguably they weren't too late) - but they didn't actually have a plan to make MONEY off that. You can't both give away all your software while simultaneously trying to switch to x86 hardware at the time when x86 had already become a race to the bottom...


The E10K -- the former Cray Business Systems Division purchased from SGI for a pittance -- made $1.2B in its first year as a product, and probably still stands as one of the most profitable acquisitions in the history of the industry. (And was due entirely to the adults in the room.) Yes, Sun was badly disrupted by x86 -- but not being able to adapt to economic disruption is really not the same as not being able to "figure out how to make people pay for it." (Indeed, those most fixated on immediate revenue are those for whom economic disruption is most difficult to counter.)

To be clear: Sun lost a microprocessor war first and foremost. In my opinion, Sun needed to respond to x86 by being even more iconoclastic than the company had the stomach for at the time: by buying AMD ca. 2004 and fighting the Intel cross-patent poison pill in court. So in the end, Sun's problem was arguably too many adults in the room, not too few...


Buying AMD would have been great.

There's always more to the (inside) story. Meaning I have no idea what's going on, so I should stay humble if I can't keep my mouth shut.

James Gosling shared one theory for the downfall of Sun: radioactive packaging of the hot new UltraSPARC-II chips cost the company billions.

http://nighthacks.com/jag/blog/336/index.html

I so wish Sun had survived. Jini, JavaSpaces, JXTA, grid computing... I recently had to do some AWS Lambda work (serverless & nodejs) and wanted to kill myself.

Thank you for sharing your views, theories. It's actually therapeutic.


For the curious:

> It was deeply random. Its very randomness suggested that maybe it was a physics problem: maybe it was alpha particles or cosmic rays. Maybe it was machines close to nuclear power plants. One site experiencing problems was near Fermilab. We actually mapped out failures geographically to see if they correlated to such particle sources. Nope. In desperation, a bright hardware engineer decided to measure the radioactivity of the systems themselves. Bingo! Particles! But from where? Much detailed scanning and it turned out that the packaging of the cache ram chips we were using was noticeably radioactive.


James only has a fraction of the story there. Yes, the alpha emitter (it was radioactive boron) that had contaminated our supplier's SRAM was a serious contributing factor to the (dreaded, infamous, triggering) e-cache parity error on UltraSPARC-II that itself was a major drag on the business. No, it wasn't the only factor (there were many, sadly -- the e-cache parity error represented multiple failures at nearly every level of the system) and no, the e-cache parity error didn't alone change the fundamentals of the business -- but it definitely didn't help!


There are a few details Gosling left out: Sun trusted those chips enough to omit parity or ECC that would have smoked out the problem early, Sun reversed whatever growing trust it had been earning in enterprise space by blaming the customer and for some time making them sign NDAs to work on the problem, and Sun had engineered their sales so that they only did direct for enterprise sized purchases.

They had really good x86 hardware at a time when the post-dot-com-crash wave of companies was getting started, but the sales channels were limited to what you could charge on a credit card on their site, plus VARs and the like, which usually wouldn't sell to startups. Dell got a great deal of that business because they actually wanted to sell, and by the time these companies' hardware fleets got big enough for enterprise-sized orders they'd long since mastered how to run Dell or whatever systems to provide a reliable service.


The "Innovator's Dilemma" basically. Sun threw gasoline on the fire by giving everything away, but it would have happened anyway.


Java.


Missed opportunity or a contribution to where we are now?

I don’t reckon any of them failed, they succeeded in an unexpected way.


don't make me cry


> they could have licensed an enterprise version of the engine and taken a commission on every server in existence

Sounds like a great way to kill docker completely


They tried that when I worked for an $enterprise. But they refused to support any RedHat-based images running on their engine. They also refused to license any of their products separately, seeming desperate to get a tax on any server on which Docker ran.

Neither of these went down well. RedHat had that market sewn up, and swatted Docker away like an irritating fly.


> But refused to support any RedHat-based images running on their engine.

You seem to have gotten that mixed up. It was Red Hat that refused to support anything (including their RHEL-based images) on the Docker platform unless it was run on Red Hat's franken-Docker. It was Red Hat that chose not to play fair, not Docker.


> Sounds like a great way to kill docker completely

For a good open source ecosystem, a commission isn't like rent, it is more like taxes.

There's a balance point where paying someone else to debug and ship fixes for your problems approximates Adam Smith's pin factory.

The value part of an enterprise open source system is in turning around fixes fast and debugging on behalf of a customer.

>> taken a commission on every server in existence

As someone who's worked with Rob before, I can say he gets that you can't really squeeze money out by force.

However, there'll be enough people who pay you to air-drop committers on inexplicable problems & otherwise mostly pay them well to do what they already like doing - ship open source to the community.


Again I refer you to Red Hat, who have an open source version and an enterprise subscription. The Docker Engine taking a similar route, rather than building another PaaS and bumping heads with Kubernetes, seems an altogether more differentiated and defensible business model.


Again I refer you to Red Hat, who have an open source version and an enterprise subscription.

But the enterprise version is the open source version. Red Hat just didn't provide the binaries; now they even provide those (since they took over CentOS). Since White Box Linux, CentOS, and Scientific were started (which was pretty quickly after they launched enterprise Linux), they have been selling per-machine support and 'blame us when it does not work' licenses.

A similar approach may have worked, but it is not a given. Red Hat and SUSE arrived on the market when there were no big open source support companies. Once containerization turned out to be a good idea, the other open source companies could quickly adopt it and provide support contracts for the same tech (which they have done).


Red Hat and CentOS don't have any official relationship. It's CoreOS you're referring to. RHEL's binaries may be available for some SW, but license-wise cannot be used for production workloads unless explicitly stated.

CentOS can be used for whatever you wish, but that's not a RH binary of course.


Totally official relationships: https://community.redhat.com/centos-faq/


So, it depends a lot on what you consider an 'official relationship'. You cannot pay and be supported on CentOS. If you talk to Red Hat and you say 'we have CentOS running', the first thing they'll talk to you about is how to migrate to RHEL, because no officially-supported products from RH run on CentOS.

That said, it is in RH's best interest to have a free version of RHEL that customers can try out and migrate their workloads to, precisely because it's easy to migrate them to RHEL. So yes, in that sense, there's some relationship.

You're correct though that there's a lot more relationship than I previously thought. Thanks for the link! I'll update my knowledge :).


I was going to say much the same thing. The only reason Docker got so popular outside of Enterprise is because there were no cost barriers - and the reason it became popular inside of Enterprise is because it had become so popular outside of it.

I'm sure Docker did miss an opportunity to reap profits, but I really don't think there's an easy answer as to how they might have done that.


> Has there ever been a bigger missed opportunity in business than Docker managed to preside over? They could have created a huge business, but the product and go to market strategy has been awful.

The stories I could tell... Needless to say, looking back at the wasted opportunity is truly depressing.


You once told me that if we (the company) don't succeed it will absolutely be our own fault.

So many avenues to (huge) success, shut down hard by the wrong attitudes towards the rest of the market.


Well, we did what we could and hopefully learned a bit in the process


We like those kinds of stories around here...


Like the time when Kubernetes, prior to it being announced, could have been a joint collaboration between Docker and Google.


That might have been great for Docker the company, but I wonder how it would have been for Docker the technology. Maybe Kubernetes would have ended up being less open.

I don't know much about it beyond my own perception, but I think the cloud vendors are doing as much as possible to seem neutral / safe while usurping business logic at every turn. Things like an API gateway that's specific to a cloud provider seem like nothing more than super risky vendor lock-in to me. Another example is all the serverless rage with tech like Lambda@Edge and the equivalents. Who's realistically going to move that type of stuff between providers once they hit a certain scale?

However, I don't have that same cynical view of Docker. For me, Docker has become a safe bet of sorts. I know that anything running in a Docker container is pretty safe in terms of portability between cloud providers, so it becomes the default choice for me whenever possible.


> However, I don't have that same cynical view of Docker. For me, Docker has become a safe bet of sorts. I know that anything running in a Docker container is pretty safe in terms of portability between cloud providers, so it becomes the default choice for me whenever possible.

That's great to hear as it means it has served its initial intended purpose.

We take it for granted now, but the mere concept of doing this so easily in 2014 was really exciting stuff, especially when the workflow of build->publish to N cloud services is measured in minutes/seconds not hours/days.


Wasn’t Kubernetes introduced at Dockercon, and entirely built around Docker from day one? Seems like a pretty close collaboration to me.

What kind of collaboration could have taken place beyond that?


The two-ish years of Docker Swarm trying to fight with k8s and the weird tensions around Docker as the runtime in k8s probably wouldn't have happened if Docker had decided to join forces rather than actively compete.

Anyone familiar with how the swarm and k8s politics played out would know it was not a collaboration at all.


I agree that Docker took too long to embrace Kubernetes once it was obvious it had more momentum than all other orchestration projects combined. But nickstinemates suggests that Docker could have and should have “collaborated” much sooner than that, even before Kubernetes had launched. I wonder what that means in practice. What exactly should Docker have done, that they didn’t? And how would it have changed the outcome?


Docker could have never created Swarm and Compose at all. In hindsight if they went all-in on k8s from the beginning then they might now be the leader in enterprise k8s instead of Red Hat and IBM would be buying them. But this wasn't clear at all in 2014-2015.


Personally, I like Docker Compose and Swarm, at the very least for non-production workloads - they're just so much lighter and simpler than k8s.

I actually would have liked to see Docker not give up on Swarm so readily. There's absolutely room for something simpler than k8s.


Hindsight is 20/20. I recall looking at k8s and swarm closely circa 2014-2015. K8s wasn't very good. The networking worked best in the Google cloud, not your usual datacenter. I recall concluding Swarm was the better tech. If history was any guide, k8s would lose steam. The Borg connection... I dunno how much actual code went in there.


Compose existed before k8s, and was created by a small company that Docker acquihired.


Kubernetes was evolved from Borg, which long predates docker, so no, it wasn't built around it. I have to wonder just how much Google knew they were going to squash docker from the time of that announcement, or if there's an even longer play going on here.


You may find this comment on a previous thread an interesting counterpoint: https://news.ycombinator.com/item?id=17580793


Though, as a counter-counter point, a blog post from Kubernetes itself states that it's got a direct lineage https://kubernetes.io/blog/2015/04/borg-predecessor-to-kuber...

My guess is that the truth is in the middle :)


That is a common belief, and a regular Google PR talking point, but it’s incorrect. Kubernetes is only vaguely inspired by Borg, and was created by different people, using different technology, endeavoring to solve a different problem.

In fact you could say Kubernetes was evolved from Docker, much more than it was evolved from Borg.


I am amazed at how Docker bricked on Docker Swarm so hard.

I assert to this day that the inevitable conclusion of choosing Docker Swarm is rewriting a worse version of Kubernetes.


Especially given we used to have Meetups at the office every Thursday night, where at least 1 new Orchestration project was launched/demoed every week.

The initial project the team worked on was called Beam. After about 6 months and no progress to show, the conversations referenced above happened, and this is the result...

Kubernetes launched at the first DockerCon[1], in mid-2014. Swarm was launched[2] in beta in Dec 2014.

So, roughly 1 year late to market, 6 months after the initial k8s launch.

1: https://www.youtube.com/watch?v=YrxnVKZeqK8

2: https://blog.docker.com/2014/12/announcing-docker-machine-sw...


I still rather like some of the output of Beam (libchan); it's just that gRPC came along and it's all history now.

But yes agreed this was all completely frustrating at the time, and completely depressing now.


This developer, Gerhard Lazu, states otherwise regarding Kubernetes as he describes the infrastructure that he built for changelog.com based on Docker Swarm rather than K8s: https://changelog.com/podcast/254


This is definitely true, but as you say orchestration frameworks were a dime a dozen at the time, and Kubernetes, when launched at Dockercon, was barely that.

Of course we actively chose to not collaborate on it, even after it matured a bit and had a good community.


I never understood how it was even possible for a relatively small sliver of the dev stack to command a $1b valuation.


Yep, this is it. Great technology, but it's not _the_ technology for developers, as the past years have clearly shown.


what else is used in production commonly?


"Production" is a very broad set of environments, many of which containers are completely irrelevant for. For some developers, the answer to your question is "ClickOnce" because they're building a Windows Forms application that is critical to their business.


I think folks miss this point all too often. Docker is like an engine. Every vehicle needs an engine but there are a shit ton of engine options.

Ultimately the valuable bit is the vehicle or what you do with it, not the engine.

Folks get too focused on the deeply primitive tools and miss the larger picture.


I can pinky promise you that the majority of teams that are up to date / building net-new stuff use Docker, which is why I was asking for alternatives that are popular (because I'm not aware of any, and I don't think ClickOnce has any significant market share...)


Duct tape, mostly.

In #devops is turtle all way down but at bottom is perl script. - @devops_borat

... via http://github.com/globalcitizen/taoup


> Has there ever been a bigger missed opportunity in business than Docker managed to preside over?

Digital had VMS, and lost slowly in the marketplace until they lost Dave Cutler to Microsoft, then VMS started collapsing. Cutler showed what he could achieve with decent backing and a company not wedded to selling expensive proprietary hardware.

Digital had the Alpha processor.

Digital had the StrongARM processor, the fastest commodity ARM chip on the market, and sold it to Intel because what future did ARM have anyway?

Digital had AltaVista.

There's plenty more, but those are some highlights.


Xerox and the personal computer at the level of what became engineering workstations. The book Fumbling the Future: How Xerox Invented, Then Ignored, the First Personal Computer covers this fairly well as I remember.


I agree it's a mess. We have docker swarm, docker compose, "regular" docker, the Moby fiasco. I'm just waiting for Enterprise Docker Faces to surface at this point.

Kubernetes isn't that great but it's less of a disaster than the current docker ecosystem.

I'm convinced another player will come in taking the best of both and give us something reasonable to work with. Maybe in 5 years or so...

Istio looks promising. The missing piece has always been convenient inter-container communication. I think something with mesh and gRPC baked into the core for strong service contracts would blow everyone else away.


Biggest? Nah. Like so many tech companies, Docker didn't have a business plan. Or, well, they did have one, which they mishandled, based around container orchestration. But then along came Kubernetes, and that was that.


> The whole pitch towards legacy apps was also short termist distraction when investment dollars are all going to cloud native solutions. Legacy apps are a big business case but getting companies to invest in their legacy estate is still a tough ask.

Don't even get me started. Leadership wanted Docker to sell "MTA" (Migrate Traditional Apps), and Docker probably spent a few million on middleware folks sitting in a room developing "archetypes" instead of a TCO analysis to position with the customer, showing the time and money containerizing those apps would save. Instead the push from sales leadership was "sell them on the value". What value? We couldn't prove we were providing any unless we did the containerization. And in the cases where we could get services to handle a few freebies, they were always the easy ones that didn't truly show anything. It was and is a horrible miss. I'm not sure if that's what they're still focused on, but it didn't resonate with customers or prospects.

Right before I left, Docker was moving into positioning Docker Enterprise for Desktop. They had ridiculously high limits on seat sales and were going to charge ridiculous premiums per seat. Clearly nobody in leadership realized that the UI provided to bootstrap some simple framework wasn't worth that. But nope... So many wasted resources on products that add no value.


The core of Docker is just cgroups and namespaces, and, well, they didn't develop those; they're in the Linux kernel and GPL.
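
For the curious, here's roughly what those primitives look like driven by hand. A sketch only (the rootfs path is a placeholder, and the cgroup paths assume cgroup v1):

    # namespaces: an isolated pid/mount/hostname/network view, no Docker involved
    sudo unshare --pid --mount --uts --net --fork chroot /path/to/rootfs /bin/sh
    # cgroups: cap the memory of whatever you put in the group
    sudo mkdir /sys/fs/cgroup/memory/demo
    echo 50M | sudo tee /sys/fs/cgroup/memory/demo/memory.limit_in_bytes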

What exactly would an enterprise version offer that would make someone pay a per-server licensing cost?


> What exactly would an enterprise version offer that would make someone pay a per-server licensing cost?

Orchestration, i.e. Compose/Swarm - except Google gave orchestration away for free as k8s. (In the interest of completeness, I'll add that Hashicorp's Nomad will also orchestrate containers as well as binaries.) The idea of Docker monetizing died with that action by Google (and to a lesser extent Hashicorp, although Nomad does have an enterprise version which corps are paying for and using). Google needed something to slow down AWS adoption, and Docker was collateral damage. The rest of the necessary tooling for container management (build, pull, push, local dev management, registries) is fairly trivial [1].

Now anyone can run Kubernetes (or Nomad), either self-hosted or with a hosted provider, which is great for the industry but not so good for Docker.

[1] https://github.com/p8952/bocker (Docker implemented in around 100 lines of bash)


As opposed to k8s, implemented in around 100 million lines of whatever.


Security features, support, integration with a Docker Data Centre type management console and other complementary tools.

Exactly the same model as Red Hat with RHEL; Red Hat incidentally being the only profitable open source company ever.


There are a bunch of profitable open source companies (SUSE, nginx, Joyent, etc) -- Red Hat just happens to be largest.


Hortonworks was profitable before the Cloudera acquisition. Rob Bearden was CEO of Hortonworks before coming to Docker as well.


Given Docker's (effectively) a packaging format, a packaging and distribution story that's at least on a par with Debian or Red Hat would be valuable. Instead they rediscovered packaging and distribution from scratch, delivering complete sacks of shit like "30% of images on Docker Hub have serious security vulnerabilities" or "whoops, leaked our user credentials".


Nothing. Without any secret sauce, an open source clone of an "enterprise" Docker would have been built virtually overnight.


Docker does have an "enterprise" version (called Docker EE). I'm surprised so few people have heard of it.


Probably because there are so few reasons to use it.


A much better alternative is Rancher. 100% open source.


The problem is that Docker engine alone is not much of a project, you need something higher level to control which containers run on which machines.


That wasn't a likely outcome; containers are too critical to platforms like Kubernetes for those platforms to allow them to be locked up by a company with an enterprise subscription. Also, they aren't hard enough for that to work; there are lots of experts out there on namespaces who would write an alternative container runtime before getting locked in that way.

The elephant in the room for Docker is that there's not an effective way to monetise it - Docker Hub subscriptions clearly aren't going to cut it for a company with lots of investment and hundreds of employees.


In retrospect they should have partnered with a company like DigitalOcean to create a Docker cloud platform.


My job was to create these opportunities and they were there - internal politics prohibited them from getting done.

Prior to all of the Docker as a Service endpoints getting launched, there was an internal pitch for a Docker as a Service program aimed at all of the cloud vendors as part of a Docker ubiquity/available everywhere strategy.

The push back internally was that we should create these integrations and sell them to the cloud vendors. That was never going to work, but, about 3 years past the mark we started talking publicly about 'Docker Editions.' A little late to the party by then..


Docker never had a viable business model, this was obvious to everyone in the space since day 1.


I disagree. From the inside, there were a few obvious business models.

The reality was bifurcation of effort and lack of alignment.


From the outside, there weren’t. App Store model with Docker Hub was never going anywhere without an entrenched platform, which you couldn’t possibly build because you didn’t have the talent (no offense at all here, the market was and is insane) and even if you had, there was too much competition from other players. Licensing wasn’t feasible, not enough value-add, and too many viable alternatives waiting in the wings for you to overplay that hand. And any secondary market that showed promise was already swarming with other companies “integrating” with Docker and making their own value props. What alternatives were there? I never saw one.


DigitalOcean.


IMO docker is the Myspace of containers.

There's so many bugs I've just stopped caring. Like many devs, I have a bash alias to blow everything away when the daemon gets funky. And it behaves differently enough between platforms that, at least at my company, we've given up on running the same containers between different OS.

I really think the endgame will be a VM based container system backwards compatible with docker. At VM level it's far easier to deal with kernel level incompatibility. Not to mention the constant security bugs that come from sharing a kernel. Somewhat ironically MS is already doing this in a way by hosting a Linux kernel in a VM for their docker implementation. They're only a step away from launching a kernel for each container.

With how cheap RAM is these days, sharing a kernel is pointless. You save maybe 50 MB of RAM per container in return for maddening implementation complexity and worse performance.


You should check out LXD. It's containers, but they act like VMs (you have proper systemd inside the container) and the tooling is far more sane than anything Docker has. It's developed by the same folks that work on LXC (and despite what you might've heard, LXC is very good).
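
To give a flavour of the workflow (assuming a stock LXD install; image aliases may differ on your system):

    lxc launch ubuntu:18.04 dev    # boots a full init system, not one process
    lxc exec dev -- bash           # shell in, like ssh without the ssh
    lxc stop dev && lxc delete dev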


I used LXDock to provision a dev environment as a VM-like LXC container. It is now unmaintained, and it changed file permissions when sharing files from host to guest. I switched to Vagrant, which has infuriating startup delays, needs restarts, and breaks inotify-based tools. At least it's cross-platform. Is there an alternative?


> With how cheap ram is these days, sharing a kernel is pointless

I don't think ram use is the reason to share the kernel - the main advantages are startup time and IO performance.


IO is actually better in VMs these days due to PCI support for device sharing. IO sharing in containers uses virtualized software networks that generally don't perform as well as hardware that supports SR-IOV.

Startup time is definitely a consideration, but you could lower it to container levels in many cases by keeping a couple of VMs "warm" but not running.


Containers access disks and devices the same as anything on the host. For networking there's just as many options as with VMs. You can directly pass a specific NIC to the container without any overhead, and you can also use other options that have no more overhead than a regular network interface on the host.
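
For example (the interface name and subnet here are made up; adjust for your network):

    # host networking: no NAT, no veth pair, the container shares the host's stack
    docker run --network host nginx
    # macvlan: containers get their own MAC/IP on the physical segment
    docker network create -d macvlan --subnet=192.168.1.0/24 -o parent=eth0 pubnet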

There's no way virtualized access to IO should be better than the host, so the containers should be no worse than that.


There's more software overhead for container networking because the host needs to maintain NAT between the containers and the host network.

With SR-IOV on VMs, each VM gets its own network stack and it's handled in hardware by the NIC.

VM networking is as fast as the host's. Container networking is not.


> I really think the endgame will be a VM based container system backwards compatible with docker.

Kata Containers?

> Kata Containers is an open source community working to build a secure container runtime with lightweight virtual machines that feel and perform like containers, but provide stronger workload isolation using hardware virtualization technology as a second layer of defense.

https://katacontainers.io/


Kata containers give you all the disadvantages of containers, with none of the benefits.


> And it behaves differently enough between platforms that, at least at my company, we've given up on running the same containers between different OS

Since Docker just uses Linux namespaces under the hood, I assumed it would have different behaviors between Linux, Linux in a VM, FreeBSD Linux compat, etc. Maybe VMs got us used to a level of compatibility that Docker can't provide.


Yes. Different kernels and various incompatibilities between Docker daemons on different OSes have taken away the build-once-run-anywhere aspect that's the whole point of Docker.


> I really think the endgame will be a VM based container system backwards compatible with docker. At VM level it's far easier to deal with kernel level incompatibility. Not to mention the constant security bugs that come from sharing a kernel.

This is pretty much what ChromeOS does to run Linux apps with Crostini. IMO it's pretty fantastic.


I've stopped updating my docker on my dev machine because every time I did I ran into new issues. I'll stick with known issues I can work around.


Hah! Same. As an org we've started updating docker in the same manner we do OS upgrades. Too much risk of breaking everything, which happened to us multiple times


"... our customers are materially reducing the costs of building, sharing and running their applications and they are increasing their innovation cycles by a factor of more than 10X."

I can't stand this kind of marketing drivel; phrases like "increasing their innovation cycles by a factor of more than 10X" are not in themselves quantifiable, and without context they can be misleading.

I don't (at all) disagree that the more wide-spread adoption of containerised application deployment and related work-flows has sped up the release and application life-cycle, but I think people should not aspire to talk about such benefits with such 'greasy' wording (especially when it comes to the word 'innovation').


I loathe this kind of language too - but in Enterprise it's all too often management types that are the decision makers. As long as they keep lapping up this BS, companies will continue to spew it forth.


I just want to say, Docker for OSX is a joke! It's been over 4 years - when is that slow-filesystem issue ever going to get fixed? People are using solutions like NFS/Dinghy, something Docker for OSX should have out of the box. C'mon now.

And if I were leading Docker, I'd look in the direction of a built-in virtual hosting solution, i.e. jwilder-proxy and DNSMasq. I think that's what is still stopping them from mass adoption.
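
For anyone stuck on the filesystem issue in the meantime: the stopgap Docker itself shipped is relaxed bind-mount consistency, which helps some workloads. A sketch (the image name is hypothetical):

    docker run -v "$(pwd)":/app:delegated my-image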


I think Docker's highest priority right now is monetization, which explains their lack of focus on OSX in favor of Windows; more enterprise users theoretically would have led to more money.

Having said that, they didn't appear to be doing too well with that either.


I'm not sure of the right path for monetization, but I am guessing, all the same, that their neglect of the OS X edition of Docker has hindered it. A decent number of Docker users are developing locally on OS X, even if they use another platform upstream. I doubt the "bugs and misfeatures being ignored for years on end" thing is making them particularly eager to just throw money at the company.


> if I were leading Docker...

I'm not even sure what I'd do. Google and k8s raised the bar so much that it overshadowed Docker the company. If it ceased to exist, Google would take up development just to drive GCP growth.


As someone who wants to pitch containerization at my enterprisey company, what is the takeaway message here? Do I still assume Docker/kubernetes is the way to go? Sounds like I don't want to stake my reputation on Docker Swarm? Is there another container platform this community recommends other than Docker?


1. Swarm lost, don't bother.

2. The interesting part of containers is the tooling people have built around them to make it easy to ship and run software stacks easily.

3. Building all this shit from primitives - downloading your own istio & k8s - is painful and will waste a lot of time and frustrate people.

Go get an opinionated k8s+containers solution that you can plumb into your dev tooling and will let you "commit code, spin up container fleet" easily, because that's the value: reliably increasing velocity.

OpenShift is one example I've worked with and like. There are others. Don't waste your time and money fucking around with individual bits of the stack.
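
For contrast, here's the raw-primitives version of "spin up a fleet" (image name hypothetical); the opinionated platforms wire this into your dev tooling so nobody has to type it:

    kubectl create deployment myapp --image=registry.example.com/myapp:v1
    kubectl expose deployment myapp --port=80 --target-port=8080
    kubectl scale deployment myapp --replicas=3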


Thanks for the advice! Is there one opinionated offering that has solid Windows Server support? It looks like OpenShift is working towards it, but it’s not ready yet.


I’d upvote this twice if I could.


I've been wondering along the same lines. I work at an academic institution where small one-off applications are regularly developed for research projects. The technology stack is not consistent across applications and the scientists do not have the motivation to radically change their development habits.

I'd like to move these applications to a common platform, to reduce some of the maintenance burden, introduce monitoring, perform security audits, etc.

I vaguely imagine this platform as being self-service, where the user creates a project and points it to a git repository with a docker-compose.yml file, and then a minute later the service is reachable at https://projectxyz.____.edu.


You are describing OpenShift (https://www.okd.io), a kubernetes distribution that adds on top a lot of common needs like monitoring, log aggregation, git->image build workflow, self-service via a CLI or web console, etc.

I work at Red Hat, happy to answer questions. We also just released OpenShift 4.0, which brings in all the features from the CoreOS acquisition, like push-button Kubernetes and OS upgrades.


Thanks, I thought OpenShift sounded like it. Does the open source version include a web interface as well as the commercial product?


Yeah, Red Hat's products are (almost across the board) 100% open source. No "extra" features that you have to pay for.


OCI+k8s is the standard, but Docker may not be the best OCI implementation for you. Choose which k8s distro you want, then use whatever runtime is included with it so you don't pay double.


OCI is the "standard", it's just that nobody actually uses it. Everyone is still emulating the Docker format, and Docker doesn't even support OCI images (the pull request adding the most basic form of support is 2 years old[1]).

For the record, I work on the OCI specs (and maintain runc and image-spec) and would really love it if people actually used the OCI formats and we could freely innovate in an open spec. But that's not really the world we live in.

(I'm aware containerd supports OCI images and most folks now support the runtime-spec. But how many people use containerd directly? Not to mention that since the OCI distribution-spec is creeping along so slowly everyone still converts back to Docker to actually publish the damn things.)

[1]: https://github.com/moby/moby/pull/33355
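
If anyone wants to actually produce an OCI layout today, skopeo (a separate tool, not part of Docker) will do the conversion, for example:

    # pull a Docker-format image and write it out as an OCI image layout
    skopeo copy docker://docker.io/library/alpine:latest oci:alpine-oci:latest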


Recent containerd versions have CRI support built in, so Kubernetes (Kubelet) can use it directly.


Images are still pulled from Docker registries, and thus there is still conversion to OCI rather than OCI being the primary format. cri-o has been doing the same thing for the past few years.


Yes. Assume Docker/kubernetes is the way to go. In general the sure bet right now is

Docker / Kubernetes / Istio.

You need all three for a good micro-service platform.


I'd hardly call Istio something you "need". Between Docker and Kubernetes you'll have your hands quite full enough already; I'd recommend getting a good grip on those first and avoiding the service mess until you are very convinced your life will be worse if you don't slap Envoy, Mixer, Pilot, Citadel, and Galley on top.

More complex != better.


I think that the OP is looking for a standard/container based enterprise platform, so I would install both since only both provide the complete solution.

I.e. install and standardize on both, but start using features as needed (of course).

I.e. I would rather find out about any architecture issues with Istio sooner than try to bolt it on top of some Kubernetes-only app later.


The service mesh doesn't have to be complex. If you want the value of the service mesh at a fraction of the complexity, start with Linkerd.


There certainly are others, but Docker ‘just works’. Start with a small team and show some value/velocity.


Reconsider it. Unless your enterprisey company is building a platform itself, you don't care about VMs, containers, or MicroVMs.

Build serverless workloads and run them on whatever compute is available.


How do you build serverless workloads that are portable between compute providers?


You don't.

The money you spend on being provider-independent is the very money you save by going serverless.

Leaves the question: Is the risk worth the money saved?


Azure's Function App engine is OSS, so you could go with Azure's FaaS, but retain the option to self-host on VMs or servers anywhere.


OpenFaaS (around 2.5 years since launch) with 200 contributors, 17k stars and around 40 end-user companies in production. www.openfaas.com
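
The workflow, roughly (assuming a deployed gateway and an installed faas-cli; flags may differ by version):

    faas-cli new my-fn --lang go    # scaffold a function from a template
    faas-cli up -f my-fn.yml        # build, push, and deploy in one shot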


Thanks for this! It looks very cool.


How about Google cloud run?


Knative, but it's still immature, like most serverless frameworks.


Most serverless frameworks give you some degree of portability across the major providers. I work on architect (which is currently AWS-only), which is going to add Azure support pretty soon.


Anyone have any clues as to what the real reasons for his departure are?


Looks like the new CEO has a track record of selling open source companies. It's likely Docker will be up for sale in one or two years' time.


Bearden was CEO of Hortonworks since 2012, almost 7 years before the Cloudera acquisition. Before that, Springsource for 4.

I don't know Singh's reasons behind getting out, but Bearden doesn't seem to play the short game often, he's more of a long-term Open Source strategist.


It says something when we consider 4-7 years to be the "long term".


Running an open source business is very different than a proprietary one. The new CEO is filling those gaps.


Well, this happened right after the news about one of docker's databases being compromised, so perhaps he is being scapegoated.

https://news.ycombinator.com/item?id=19763413


That was my first thought, but they should have waited just a little longer so it looks like it was at least a thought-out decision, or put this announcement on hold for 6 weeks so no one's asking this question.


There is practically 0 chance this is the case. Accountability doesn't work that way at Docker.


What happens, hypothetically, if Docker the company shuts down?


Not a lot, really. The core parts aren't actually Docker (containerd and runc do all the work and are not Docker-owned), so it's more a problem of how fast we can get tooling to replace the wrapper around them (that wrapper is Docker, so that's the bit that would be more of an issue). Local development would be hit the hardest, though I'm sure things would carry on as-is for quite a while, since the parts of Docker that are mainly used for building and pushing containers are open source.

Kubernetes already doesn't need docker to run, so migrating wouldn't be a nightmare. There are other image repositories everyone can use for the base images.

CoreOS did a good thing by pushing for the OCI as it's made sure this exact situation isn't a disaster.
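
To make that concrete: containerd already ships its own bare-bones CLI, so the wrapper really is the replaceable part. A sketch (needs containerd installed and root):

    # pull and run an image with no dockerd anywhere in the picture
    sudo ctr images pull docker.io/library/redis:alpine
    sudo ctr run --rm -t docker.io/library/redis:alpine demo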


Revenue is good enough to have that not be a real consideration, just not good enough to meet its potential.

It'll get picked up.


I had a somewhat Kafkaesque experience trying to install the Docker client on Windows recently. I just wanted the standalone executable so I could connect to Docker running on a headless Linux server on the local network. They don't make that available anymore for the latest version and force you to install Docker Desktop, which required two reboots and enabling Hyper-V, none of which should actually be necessary if all you are doing is connecting to a Linux box. It was a garbage experience.

While I was trying, unsuccessfully, to figure out how to get the standalone binary, I noticed that the Docker website has been completely taken over by really low-signal marketing crap, and all of the actual technical substance is now hidden away in the documentation. The way that Docker is flailing around as it desperately seeks some kind of enterprise traction is really a huge turn-off to their core audience.
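
For reference, the bare-client setup I was after is just an environment variable away once you do have a client binary (server name made up):

    export DOCKER_HOST=ssh://user@linux-box    # ssh:// needs client 18.09+
    docker ps                                  # talks to the remote daemon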


The website has really been a bunch of crap for a few years now! I always have to get to it via Google searches, even when I'm looking for extremely trivial pieces of documentation.


Yes! My team has recently moved our services/applications into Docker through Azure... Trying to understand which parts of Docker we needed from their site was an absolute nightmare; it's all marketing stuff and very little on what each 'service' actually does.


Consider installing podman if all you want is a client compatible with Docker.
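
Something like this, assuming your platform has podman packages (it's CLI-compatible for most day-to-day use):

    alias docker=podman    # most docker commands work unchanged
    docker run --rm -it alpine sh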


I've had a similar issue. I must use VMware on my machine for various reasons, and Windows has decided, probably for a good reason, that only one hypervisor can run on the machine.

So using Docker locally for me is a major pain in the ass, even though I use the probably most widely used enterprise virtualization stack. Let that sink in for a while.


If you're on Windows 10, AFAIK you can install and run the Docker client in WSL.

Totally agree about the Docker website - it's just a shameful torrent of marketing bollocks.


"Singh appeared tired, but a leader who was confident in his position and who saw a bright future for his company."

A company (Docker) selling a product which you don't need because you can simply package your application and its configuration as an OS package has no future since it's selling snake oil.



