Hacker News
Docker Desktop no longer free for large companies (theregister.com)
725 points by alanwreath on Aug 31, 2021 | 571 comments



So many people in this thread don’t understand how enterprise decisions get made.

The business license costs $21/month, probably less in reality.

Do you really think that businesses are going to jeopardize the workflows of their $250k/year assets over a very core piece of software for $250/year?

Any alternative has switching costs and risks. Companies will just pay this. I see so many people saying “just do these 10 steps and it’s basically the same”. It just ain’t worth it for $250.


That’s assuming some kind soul in engineering management has the patience and leverage to guide this through 10 layers of purchasing, procurement, finance, legal etc…

Another likely outcome is that it’s “easier” for teams to switch to another tool (easier in that at least they’re not waiting on a third party for approval) and everyone loses a lot of time

Big corporations are not the most efficient beasts for this kind of situation


I've been fortunate enough to work at companies where engineers were trusted to make small purchasing decisions. It works well for a while, but eventually everyone accumulates a lot of random recurring charges and the company cracks down.

$21 is nothing for a one-time spend.

$21 per month per employee is $252/year per employee, and now you also need someone managing all of these licenses and the accounting. Every new employee or team change requires some juggling of licenses, with associated turn-around times before that person can get started.

It's not bad when it's just a couple key pieces of software, but it doesn't take long before every engineer has some mix of 20 different subscription tools and platforms and licenses and you're on the phone with a different vendor every week doing the annual subscription renewal pricing negotiation dance. The sales people know how this works and would prefer to wear you down with endless conference calls until you get tired of negotiating and just pay the new, higher price they're asking.

Soon, all of those "cheap" tools have added up to $1000/month or more per employee with a couple people dedicated to managing these licenses and negotiating with vendors all of the time. And it's terrible.

When the tool isn't easily replaceable, you deal with it. I'm not sure I see that with Docker Desktop, though. When you get a new hire, do you tell them to submit a ticket with licensing and wait until they can get their Docker Desktop license? Or do you simply write some documentation about how to accomplish tasks without using Docker Desktop so you can remove another external dependency? Teams generally gravitate toward the latter.


> requires some juggling of licenses with associated turn-around times

This! I've always said that a big reason FLOSS won on the internet server side is that scaling fast while juggling licenses is just too hard. Especially with the prying eyes of Oracle/MSFT/etc's powerful legal teams and hidden "phone home" code.

Going with a LAMP stack was just the simplest way to keep moving at speed.


I’ve had this very conversation with work colleagues. We can go through the inordinate pain of acquiring a licence, and all that entails, or we can choose an open-source solution and be making progress by the end of the day.

There’s a dozen services I could buy for work that would probably improve things dramatically, for very little cost, but almost nothing is worth the pain of “get sign off. Get sign off again. Fill in paperwork. Wait n weeks. Get more sign off. Wait even longer for finance to do their thing.”


What would have happened if Microsoft had put some clause in volume licenses that said "you can use a system unlicensed for up to 30 days before paying for a license"?

Then engineering can spin up loads of instances to test stuff and scale fast with minimal hassle, and it'll be the purchasing team playing catchup later, no longer in the decisionmaking path.


For non-prod/qa/testing/dev stuff Microsoft provides MSDN/Visual Studio Subscriptions which provide specially licensed versions of their operating systems and software. Everyone who uses the MSDN software must have their own subscription and it can’t be used by real users/customers except in a limited dev/test capacity (e.g. UAT).


People do that with EAs. Once they figure out that it’s happening, they have various means to detect it.


that trick might work once or twice, until a hard policy comes into effect after the company gets burned.


One other big factor: certain other vendors have very aggressive sales tactics which essentially boil down to “buy a bunch of stuff you don’t need or we’ll audit every computer in your company and charge a penalty for anything we can find to quibble with”.

Docker doesn’t need to actually do that to run afoul of policies based on the scar tissue from those other vendors. Simply going from “you can use it without being sued” to “we have to pay people to make sure we’ll win” will increase the perceived cost at many large shops.


Yup, this is the concern. Having been kindly asked by Oracle to remove VirtualBox extensions, this sort of gotcha/conditional pricing feels dangerous.


The big thing for me is the question of the future: they say they currently won't be predatory about it[1] and I have no reason to doubt that the people saying that are being completely honest, but we don't know who will be working there in the future or where the next acquisition/merger will take them.

Without a contract, it's hard to disagree with the policy types who are going to ask what protects the organization if that happens. Once you go down this path even a little, the barriers to entry at large organizations go up since you have to look at it from the perspective of both the upfront cost and possible future cost / off-ramps.

1. https://twitter.com/scottcjohnston/status/143272649295845376...


Docker is a company that won’t exist in a few years, so a promise of leniency now means nothing.

When they are merged into some other big company, that company will look to milk the cow by going after license compliance. It happens every time.


My personal guess was that they get eaten by Microsoft and Docker will be integrated more deeply with VS (Code) and GitHub.


Some years back I tried to pay the $50 for the VBox extensions - small company and I was the only user. They were very confused, almost like they couldn’t understand that I really wanted to pay for something that I judged good value for the money.

They ended up telling me to forget about it.

I guess that perhaps they only want to target large businesses.


How did Oracle discover you were using it?


Our company has a lot of processes that could be streamlined via containerization of build and development environments, across Windows, Mac, and Linux.

But arguing for $252/yr times a thousand developers (in the office I work in, at least; we have others elsewhere) is just untenable. If the value was there for us, then we could get it signed off on, but there's no way to build that value because now it's too expensive to get started.


Someone making $250k/year shouldn't think twice dropping $21/month on a tool that truly makes them more productive.

A year or two ago I bought a personal license to Ubuntu FIPS after being urgently asked to debug an issue w/ Python's OpenSSL bindings. Unsurprisingly it turned out that my company had an enterprise license, and had I known whom to contact (which I didn't), I could have gotten a license key in a couple of days. But why wait days and potentially many thousands of dollars of time when I could buy a license now and get started immediately?

Frankly, it's weird how Silicon Valley employees making obscene amounts of money balk at personal expenses as if they're some minimum wage employee being victimized by their employer nickel-and-diming them. But that's just me. In fact, except at startups, Silicon Valley corporations neither expect this nor even give you credit for taking such initiative.


Because it establishes an exploitative precedent that shouldn't be followed. Because it hides the true cost of a project, which may result in poor decisions later on. Because the cost of a decision/process should fall on the person or company who has the ability to change it. Because using wages to pay for business expenses wasn't part of the employment agreement. Any one of these would suffice.

For personal expenses (meaning expenses for my own hobby projects, not "personal expenses" to mean business expenses paid by an employee as you have used it), $21 would be a quick decision. But using that same $21 to shore up a faulty requisition process? Nope, not at all.


> Because it establishes an exploitative precedent that shouldn't be followed.

It also allows small creators to survive.

I'm working on hybrid mobile apps again after a long break. The number of essential packages in both the Ionic/Capacitor/Cordova and React Native ecosystems that are "looking for maintainers" (think: camera functionality) and have long lists of issues is frankly astounding, given the number of users of said packages.

An expectation to pay so the maintainers can maintain is a good thing in my book.


Except that's not the issue being discussed here. Expecting to pay for software development isn't a bad thing. Expecting the employee to pay out of pocket for business expenses is the exploitative behavior.


The problem is that there's simply no easy fix for these bureaucratic frictions and sub-optimal equilibriums. It's the nature of large organizations--they trade efficiencies in some areas for inefficiencies in others.

I decided long ago not to worry about such expenses because 1) the engineer in me hates this inefficiency and urges me to fix or work around it (depending on your perspective), 2) navigating bureaucratic red tape takes a personal toll, and as someone who is paid well I don't mind at all spending a trivial amount of money for my own wellbeing, even if it's for work, and 3) as someone who has worked in startups and even founded one, I've both been in a position where I was expected to take on such expenses and expected others to do the same (at least as an initial matter[1]).

[1] The dilemma is that unless the purchaser faces some risk of incurring the expense themself, they're not as incentivized to consider the reasonableness of the purchase. The solution is either 1) requiring permission beforehand, or 2) hiring more mature employees who understand the nature of the dilemma and who have already factored this responsibility and risk into their negotiated compensation. The latter doesn't scale, though, which is why large organizations invariably regress to the former.


Unfortunately your policy, in a small way, perpetuates this ‘sub-optimal’ equilibrium by obscuring the true cost of your work from your employer.

You’re making life harder for everyone else!


It's not about $x a month, it's about the friction of dropping any money at all. It could be 10c/year and still annoy everyone, because now they have to go through a procurement process, find someone with a company credit card, wrangle with reimbursement forms, etc.

Docker arguably should be charging more for it. The friction between $0 and $21/month is higher than the friction between $21/month and $60/month in a lot of ways.


This, 1000x. Developers are lazy and generally don't want to go through bureaucratic hassle. In the long run this is just going to hurt Docker when developers start using podman or some other free alternative.


A couple comments earlier people were asking where this stops: $1000 a month? A 3D CAD license with the ability to do FEA is around 30 grand a year, and I’ve seen seats of specialized software cost north of $100k a year per seat, and that’s for people for whom $100k is probably pretty close to their take-home.

So where does it stop? Is $250 OK but $1000 isn’t?

I think we’ve drawn the line just fine: if you want to cover opex out of pocket, cool. But don’t try to put that on people who can’t afford it.


So ignoring holidays, $250k/yr works out to $120/hr.

How long does it really take to set up the VM and port forwarding to have most of what people care about from Docker Desktop locally? A couple of hours. Let's say 3. So the cost is $360.

How long does it take to go through your average large company requisition process? Probably 2 hours over multiple weeks. So that's $84 for the license at the $7 tier, and $240 for the time spent dealing with the corporation's self-inflicted tedium. But hey, $324, still technically cheaper.

But then there's the opportunity cost of waiting two weeks to install fucking docker. I think we can all agree that's worth more than $36.

So I'll be looking at alternative solutions rather than deal with the corporate purchasing bureaucracy. Even if the purchasing manager was reacting in real time I'd probably consider it worth $36 not to deal with the process.
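For anyone weighing that DIY route, here's a minimal sketch of the VM approach. It assumes you already have some Linux VM (Multipass, VirtualBox, whatever) reachable over ssh; the hostname and user here are made up:

    # inside the Linux VM: install the engine (Ubuntu example)
    sudo apt-get update && sudo apt-get install -y docker.io
    sudo usermod -aG docker $USER

    # on the Mac/Windows host: point the docker CLI at the VM over ssh
    docker context create my-vm --docker "host=ssh://dev@my-vm.local"
    docker context use my-vm
    docker ps    # now talks to the engine inside the VM

Note that ports published with -p bind on the VM's interfaces, which is where the port-forwarding fiddling mentioned above comes in.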


For companies it's not that easy though. Employees will accumulate services and never cancel them, or will leave without any record of which services were bought for what purpose, etc. Companies still need proper invoices and have to account for all charges. If they can't, because employees forgot they bought services or left without letting anyone know, the company is on the hook for it.

Having worked with procurement departments I can understand very well why employees should never be able to make purchasing decisions without someone from procurement in the loop.


As someone who went from the PA suburbs and small IT shops (where 75k was a premium salary) to the SF startup scene and Facebook, one of the observations that has stuck with me was the incredible cheapness of my wealthy tech peers. It's hard to explain unless you've been around people who do a whole lot more with a whole lot less.


Maybe they’re wealthy because they aren’t spending all their money on shit their employer should buy. You say cheap, others say “good with money”.


I noticed this exact same thing in my ex-FAANG colleagues. No judgment, just an observation.


> When you get a new hire, do you tell them to submit a ticket with licensing and wait until they can get their Docker Desktop license

that's how it's worked for me - in my last job I spent 3 weeks waiting for a Qt license, Intel compiler license, MSVC license...


Fixing this has to be a great business opportunity. Surely someone is already working on it?


A simple local k8s installation with a simple but effective GUI on top, running containerd instead of docker and with a CLI wrapper for docker commands, would be more than enough for everything I need.
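Something like nerdctl already covers the CLI-wrapper part; a rough sketch, assuming containerd and nerdctl are installed in whatever Linux environment (VM or WSL2) you point them at:

    # nerdctl mirrors the familiar docker UX on top of containerd
    nerdctl build -t myapp .
    nerdctl run --rm -it -p 8080:8080 myapp
    nerdctl compose up    # docker-compose-compatible, in recent versions

The GUI part is the missing piece, which is presumably where tools like the one mentioned below come in.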


My team is working on https://rancherdesktop.io/ and it's scary how closely that matches what we're doing (depending on what you consider to be an effective UI)…


Yeah, but who's going to pay for that development? Especially since the targets are Windows and MacOS.


20 is an extremely low estimate. It is more like the greater of N solutions for N teams, and M solutions for M people.


It's almost as if a manager or engineer shouldn't manage the overall finances of a company or department... Gee, I wonder if there's a job title or something like that that could manage finances like this?

Real talk: Just hire people. The US especially has a huge amount of college educated people who struggle to find a job that matches their chosen education. If you can't find someone that matches the exact profile, pay for an education and training. As a company, don't accept just winging it - especially not if you have enough money.


This poster has clearly worked at the same kind of companies I have. Plenty of them would gladly burn 10x what it would cost to just buy the damn license on engineering man-hours switching to something that's inferior but free. Because it doesn't show up as an expense on the annual budget.

The concept of opportunity cost is completely lost on a lot of business leaders.


A lot of the driver here is not in the moment short-sightedness, but rather a byproduct of the procurement or other finance processes (ironically often instituted with intent to prevent waste and fraud or make the company more efficient).

It’s not just the $250/yr/dev, but rather the requirements to create a new vendor in the ERP morass, to get approvals for an exception to the standards for payment terms (and/or methods), any requirements for vetting vendors, etc.

If you’re selling to an enterprise, don’t charge just above whatever the “employees can put it on their card without approval” threshold is. If you’re going to exceed that, you might as well exceed it by a lot. (If you’re going to make every developer file an expense report every month, I’d readily prefer a lot of command-line typing to filing an expense report… and if I automate that for a lot of my fellow devs, I get to do something fun and be a minor folk hero.)

https://www.joelonsoftware.com/2004/12/15/camels-and-rubber-...


A lot of those enterprises already have Docker on the vendor list though, because of Docker Hub.


I suspect that most people running at the kind of scale where purchasing is hugely problematic are also running Artifactory or similar, not using Docker Hub.


If someone suggests a tool that costs $1k/yr over a free tool that costs $5k/year in extra work, I’m going to die on the free tool hill. Because the $1k/yr tool will disappear when the company goes defunct, or it won’t interoperate with something else and there is no way of fixing it. Or it can’t migrate to the next tool. Or we need to upgrade to an enterprise license because we become 21 developers instead of 20. Or they just bump the cost to $20k for whatever reason. Or the tool won’t work on CI servers because it only works after entering a key in an attended install (yes this is still a thing).

Free tools have a predictable and stable cost.

I have probably been burned more times from free tools over the years, but the scars aren’t as deep. It’s just a shrug and hoping the other project works when the first doesn’t.


> Free tools have a predictable and stable cost.

Unless they suddenly turn from free into a $21/month per person fee.


I think he means free as in open source, rather than free as in freeware. In which the worst case scenario is that you are stuck with the last open-source version, but at least you retain full control over your fork of the code and can add features and bug fixes as you see fit.


Indeed. Proprietary/Closed-source but costing $0 is the worst of both worlds.


Then I'll find the fork and use that. We have already done that a few times. There is a reason we audit all the licenses of open source software we have.


Exactly. I've never understood why capital or cash expense costs are valued so much differently than salary costs. I've had jobs (like any manager) where I have complete leeway on how I "spend" millions of dollars of people's time, but had to go through all kinds of approval to spend even the smallest amount of money.


Another example: if I want a $100 keyboard I need VP sign-off ahead of time for the non-standard expense, but if I want $10,000/mo extra in AWS services I just need any other engineer to approve my change.


Yeah, that's what we decided when figuring this out at our company. Everyone is spending thousands of dollars a month of the company's money (their time). We either trust them to spend our money wisely, or we don't (and they should work elsewhere). So we don't have all these policies. Having said that, these policies are a result of size. You wouldn't spend money poorly when the company is just you and four friends in a garage. But when you're big enough, hiring problematic people is absolutely going to happen and they just see the company's money as a free money pot. We're in a middle ground right now, but, as we grow, are finding it hard to avoid trending more towards the meme corporate policies.


There are a few reasons:

- Reviewing purchases should put some reasonable controls around how much software you can accumulate. Adding to the tech stack has maintenance costs, and you don't really want to do it without some sort of review. In this case, it's not the dollars as much as the fit.

- With SaaS tools in particular, there are often privacy or security concerns. Those teams should review. Probably want legal's eyes on it to get the right terms in place as well. Again, less about the dollars here, and more about the agreement, risk, required disclosures (subprocessor, etc), and those kinds of factors.

- On the spend itself, you'd be surprised how this adds up, especially if folks are signing multi-year deals and not bothering to negotiate licensing. Remember, if it's easy for you and me to buy, it's easy for everyone else to buy too, and it multiplies quickly.

Now, on one-off hardware purchases and such, I agree it should be a lot easier. At least at the startup I work at, anything under $500 is just an expense report, so just make sure your manager is cool with it and go for it. But there are considerations here also around stuff coming in that IT or facilities then needs to support - obligating their time with your purchase is something you need to agree on or avoid.

EDIT: Also as someone else pointed out, grift on purchase orders is easy without any oversight.


A lot of it is path dependency, whereby various practices turn into processes, regulations and laws. On top of this, narratives form that guide people's thinking as to what the right financial decisions are, and people can start to believe these narratives over time without questioning them. Eventually companies can create certain processes or habits on the basis that those narratives need to be true: https://www.epsilontheory.com/what-do-we-need-to-be-true/


Graft is a serious issue with purchase orders; it's much less of an issue with salaries.


A big advantage of using free open source software is that the licensing prices will never increase because the company needs a new revenue stream to support its business model.

Docker Desktop was free, now it's $21/month, what will it cost next year when Docker needs more money?


That depends on how many frogs contentedly stay in the pot.


Licensing prices might not increase, but paid technical support costs could theoretically be unlimited. Especially if you're at the mercy of an open-source software that isn't well-maintained.


I’ve seen companies burn half a million in developer time to save 10k or less several times.

Oh some JavaScript graphing library is expensive. Let’s roll our own!

Heroku meets our needs 100%? Let's spend millions to switch to K8s and have a much worse experience.


The 4 hours I spent learning the basics of d3 and then couple hours a night for a few weeks working through examples (of others and of my own design) really gave me a powerful new tool for charting applications. Rolling your own is difficult to justify, but "learn and use d3 (BSD licensed)" seems an entirely reasonable alternative to a high-priced commercial offering.


How much of that is developers doing it because they want to make something new?


> The concept of opportunity cost is completely lost on a lot of business leaders.

A lot of it is that opportunity cost really isn't the problem of many business units. I spent a week building a crappy version of codecov for our few repos.

Dev won't get in trouble for that because opportunity cost isn't a thing for us as a cost centre.


Unless there is a security audit and the dev has to justify why non authorized software is on the company network.


This.

Buying anything at my organization costs something around $10k. Add your price to this to discover the total we are spending.

That's just the financial cost. The opportunity cost of pulling technical people in to handle the technical details of an acquisition is huge, and it's larger the more differentiation there is in the market.


Heh heh... so true it hurts. Why spend 3 years pushing it past risks reviews and oversight committees when you could just write a crappy version that does what you need?

Legal alone will spend months looking at run-of-the-mill software contracts trying to negotiate absurd concessions from vendors in situations where they clearly have no bargaining power.

We can spend a lot of money on Oracle for their profoundly obtuse products though.


> trying to negotiate absurd concessions from vendors in situations where they clearly have no bargaining power

I got front-row seats as a company with tens of millions in revenue tried to get a payment processor to remove escrow terms from the contract.

Said company got blocked at tier 2 support. They refused to escalate for a company who wasn’t a customer, especially when legal stuff was involved. Tier 2 had a script specifically for people like us.

I regret not bringing popcorn to that phone call.

In short: “Those aren’t our terms; those are the terms of a billion-dollar bank. Sign it or leave.”

They were very much not impressed with our claim to being the second-largest widget maker in the world.


Yep, I’ve been there. I was a consultant in London charging 700 pounds a day, and my hands were tied because of a silly 50 USD license expense; I couldn’t work for a month.

I asked if I could just pay it myself, but was told that would violate the IT department guidelines and possibly an SLA too, so I just sat there for a month, reminding everyone at the daily stand ups I was blocked. I couldn’t believe it.

I was not the only one who was tied in red tape there, it was indeed normal.

This company was making 600 million a year in profit.


Perhaps you're not understanding corporate bureaucracy. Nobody wants to be the manager who gets fired for trying to save $25k by switching from Docker Desktop to {insert random open source project here}. Not only is it not worth the time or the risk, but the engineering manager's exact purpose is to traverse the corporate bureaucracy. It gives them job security. Plus the engineering manager can negotiate big discounts with the vendor and can brag about that on their own performance reviews.


That is not the point he was making. It's about wanting to get the licenses procured but the process being unreasonably laborious so people just don't bother. The problem isn't cost, it's the mess of corporate wastelands.


It's not the manager, it's some department somewhere else that'll take weeks to respond and then you'll have to chat with them about it and they'll be like "i don't see why this is necessary"


It's very impactful though; it'll probably be people who see it as highly visible and career-advancing who manage to muster the 'patience and leverage'.

If the org's been thoughtful in advance, it's almost at the level of an operational risk: all of engineering uses this tool, the tool's licence or pricing might change, and retooling has an associated cost and downtime.


An easier method to deal with monthly payments: create a purchase order with 12 lines, one line for each month with dollar amounts defined. The purchase order serves as the "approval" so Accounts Payable can process it without asking for coding information.

Example: I only have to revisit licensing once a year, to create a new 12 line purchase order. It's still a pain but I prefer purchase orders vs dealing with credit cards. In addition, querying expense history for purchase order lines provides far more detail vs querying a credit card platform. Credit card history tends to go into a black hole.

One possible hitch: some companies might restrict Accounts Payable from receiving purchase orders for separation-of-duties reasons.


You can drill through layers of that crap if you can sell something through aws marketplace or equivalent thing that your company is already set up to spend millions a month.

Not sure how that would work for a desktop tool. It's on them to figure that out, though.


I really doubt that procurement is going to be harder than switching a technology out.


Entirely depends on the organization, I used to think this too but then I encountered some organizations (granted ones with entirely dysfunctional procurement processes) where switching technologies was easier and less risky to schedules than going down the procurement process. The risk of missing deadlines and blowing out schedules is the factor that tends to be on people's minds when procurement is brutally slow.


OpEx is much easier to get than CapEx. That's why so many things are subscriptions now. (Also, Sarbanes-Oxley pushed vendors into subscription models.)


> OpEx is much easier to get than CapEx

Not where I work at. IT opex are a recurring cost item with no visible benefits, so are squeezed dry at every annual budget review. Capex are seen as investments into something new and good and their impact on the bottom line is amortized over several years.


Now I'm curious: Sarbanes-Oxley pushed vendors into subscription models? How? What exactly did it change in the cost-benefit picture?


SOX changed the way that revenue was recognized when a vendor sold a product plus services to implement it. Basically, you had to wait until the implementation services were completed to recognize the product revenue. Time to complete those services could be multi-year and customer-dependent.


> Big corporations are not the most efficient beasts for this kind of situation

What situation, being trapped? I'm not sure what size has to do with it. Are small corporations maybe more ... agile?


Sometimes you can put these things on CC rather than P.O.


Yes, yes they would. And they do. Regularly.

When you've spent enough time in the IT department, you'll see companies demand that you "cut laptop costs" or similar. Because when you're buying 250 laptops a year, the people with the budgets go "OMG, we're spending half a million a year on laptops". So changes are made to the configuration: maybe a smaller SSD, but often it'll be something like less RAM or a slower CPU, because purchasing controls aren't that fine-grained. Great, you've now cut $200 per laptop and the money people are excited to see you've saved $50,000/year.

Do they understand that now there are 250 people, who probably all cost more than $100,000/year each, and who aren't quite as efficient as they could be? Nope. And they don't care. The budget says you get X. The personnel cost is attributed to some other category not to be touched.

Almost no one at a higher level looks at the puzzle and goes "Well we should spend $250/year to get this $250,000 asset productive". They look at the situation and go "We've got 100 developers, ain't no way we're spending $25k/year on this product. It's not in budget."

Lastly, the per-employee cost adds up. It's $11/mo for SSO. $18/mo for an email suite. $19/mo for GitLab. $14/mo for Jira. $5/mo for Confluence. $3/mo just to put SAML on Jira & Confluence. $8/mo for a password vault. $12/mo for Slack. $17/mo for Zoom. $17/mo for Lucidchart. $12/mo for laptop MDM. That's not everything an employee needs, heck it doesn't even cover HR products (Workday? 15Five? Applicant Tracking?), training (probably a couple of those), EDR/AV for laptops, and dozens of other smaller services. Sure, adding another $21/mo isn't going to change the number a lot on an individual basis, but it adds up really fast when you multiply it out across all your developers.
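Just summing that list (before Docker's $21, and before everything it leaves out):

    echo $(( 11+18+19+14+5+3+8+12+17+17+12 ))    # 136, i.e. $136/mo or ~$1,632/yr per employee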

And don't forget. How do you rationalize that your container software costs more than your source code control solution? One of those two things you can very easily live without... and it's Docker.


You basically typed out what I feel.

Let me add something useful here and say that it even has a name:

"subscription fatigue"

I was fine paying Netflix for an all-I-can-watch plan, even if I ended up watching less than a DVD's worth of content, just for the convenience of not keeping a stack of DVDs around.

I'm not fine with paying Netflix, HBO, Disney, my cable provider, NRK (the state-owned broadcaster, mandatory), etc. etc. a monthly or yearly fee.

So I just drop it: it is not like I need to watch it.

Same with tools: I'm happy to bug my company to pay for IntelliJ as long as NetBeans is stuck in its current spot, and we already pay for Jira and Confluence, but I am always seeking out open-source solutions: partially because I'm old enough to realize that $n in subscriptions means $n x 12 x m devs a year, partially because no-cost open source (contrary to popular belief here) is less hassle, and partially because going proprietary feels like paying Danegeld.

Edit:

> Almost no one at a higher level looks at the puzzle and goes "Well we should spend $250/year to get this $250,000 asset productive". They look at the situation and go "We've got 100 developers, ain't no way we're spending $25k/year on this product. It's not in budget."

Probably true in most places and it is sometimes a problem.

But that institutional hesitation is also a powerful defense against every vendor who wants to inject themselves somewhere and then start jacking up the prices.


The places where I worked there's an inverse relationship - the smaller the cost, the harder it is to justify with finances. ($4000 monthly AWS bill for "testing purposes"? No questions asked. $10 wireless mouse? Mission impossible!)


In case you aren't aware, it's easy to explain by the fact that most finance departments are afraid to question your spending for fear of looking incompetent.

So it's less about the $10 and more about: "I understand what a wireless mouse is and it doesn't look mission critical to me."

"No idea what those items on that AWS bill mean, but I'll probably be better off not asking"


If you must exist in this kind of organization, use this to your advantage. Get involved in important projects, set up purchase proposals, then make sure you add in new laptops, cables, mice, extra monitors, and whatever other accessories you need. Do you have a remote KVM attached to all 100 servers? Yes. Will finance care if you add 100 monitors, keyboards, mice, etc.? Nope. Will your vendor happily add those on to the price? Yup. How do you get involved in projects? Find a Director or VP who wants to get something done and say "yes" or "we'll find a way" to whatever it is they want to do. Then do your research and give them a proposal: "We can accomplish X in 24 months with Y headcount and Z equipment budget". If you can cultivate a reputation as someone who "gets things done", eventually you will find the normal rules no longer apply to you. Finance will stop asking questions about your projects.

You have three rational choices: 1. Play the game, 2. Keep your head down, 3. Quit and move to a company that doesn't play those games.

Sitting around complaining that a big company has crappy inefficient processes is like complaining that water is wet. A complete waste of time and makes you look incompetent to other people in the company who are playing the game. These inefficient processes end up optimizing for people who know how to talk the code and cultivate the right relationships. Take advantage of that.


Yep, large companies work like congress.

First you need something core to start a bill around. Let's make a law that makes it easier to buy guns.

But no one is going to vote for that, so let's give it a name you CAN'T say no to. It will henceforth be known as "The Child and Family Home Protection Act".

Great, we have a cool name and we have a law significant enough to send to the floor of Congress. Now let's get enough people to promise to vote for it so we don't waste our time. Oh, Congressman X says that he would vote for it as long as we add another law about funding polar bear research. Sure, whatever, just add it in, we need the votes. Congresswoman Y says she will vote for it if we add a law about requiring masks at church. We need the votes, tack it on. Congressman Z has been trying to get more tanks sent to Afghanistan for nearly a decade; if we add that in, I bet he will vote for our law too.

Then these things get bundled up and sent to the floor where people vote on laws with fun marketing names added to them.

The same thing happens in business. You start off with a core project like a new ERP system. Give it a complex sounding name that no one in accounts payable will say no to. Then we add in a bunch of computers into the budget that we have been trying to get for 2 years. Add a new printer. Throw in some docker desktop licenses for our developers, and then bundle it up and send it to Accounts Payable. Bam, now you have docker desktop licenses and new computers. You're welcome.


You forgot a critical part of bill names - there has to be a terribly forced backronym made out of it.


“2.4GHz Laser-Based Human Interface Device, $10” seems cheap. Approved.


Too cheap. I've been in a few places where getting 700k for something with a boring but plausible name was a no-questions-asked thing, but trying to get a $15 Miro board license was literally impossible. I paid for a lot of tools out of my own pocket to avoid the hassle. I bet I am not alone with this approach.


It's not even about the price or approval. For my mental health, I'll pay for my keyboard and mouse from a local shop rather than get a quote from an approved provider, submit it for approval, submit it to IT, submit the receipt to process expenses, and wait for delivery of approved crappy hardware (with two rounds of corrections because I didn't fill out the forms properly). It's not worth it.


Agree. I have subsidized places where I worked because of this. Bosses often say "oh, don't pay for it yourself", but then when you put in the request for something, months will go by waiting for a certain manager to say yes or no. Face to face or on the phone it's easy to get a "yes, but first send me the proposal/information", only to then get ignored. Yeah, you can keep pushing and finally get the item you need months later, but by then the project is over because you or someone else hacked together or found a free solution... or worse, used a personal credit card.


Heh, well said. It's basically the bike-shed effect: https://en.wikipedia.org/wiki/Law_of_triviality


Exactly. Cost is a proxy for importance (usually). I can’t be bothered to approve your $10 mouse, but I can be bothered to approve your $10k AWS budget.


;D Damn right… I wanted to buy a CSS framework extension for $50 - mission impossible.


I honestly don't think I could get Docker Desktop through our procurement process before the end of the grace period. It's not a matter of "this is peanuts" as much as "we're guaranteed to breach the license terms if we keep using this thing, so everyone has to get off it now." And then once it's gone, we'll limp along with whatever plugs the gap until something else emerges a as a winner, which probably won't be Docker Desktop.


This reminds me of the genius of AWS. The engineering team can just buy whatever they want, no questions asked.


Hey we get asked plenty of questions when the bill comes in :)


That sort of depends on the size of business under discussion. If you are a Fortune 500 with 2,000 engineers who all need licenses... half a million in licensing costs is not always the easiest sell.

Of course, that fortune 500 is going to pick up the phone and demand to pay 1/4 of that (and they'll probably get it). Enterprise sales is fun


*Actual fun may vary.


Many engineers at large companies won't want to bother dealing with the headaches around licensing software and spending money, whether it's $2/mo or $21/mo or $200/mo.

If it's a core part of my job and the best option available, it'd be worth it, but if there's any reasonable alternative, I'll go download that today instead of wading through all of the lawyers and approvals and compliance to use something slightly better.


If you're getting started, sure.

If you already have a live deployment then the company's bigger fear is the risk of switching to a completely new infrastructure, and they'll all of a sudden push the paperwork quickly to stay on their existing codebase.


Even when the purchase process has been streamlined it's still a headache for a piece of software I might only use 1-2 times a year.


It’s not at all about the price. Obviously a corporation can afford that. It’s the sheer dread of even starting the procurement process in your average corporation that your average developer must overcome, that is the barrier.

I’d rather investigate an alternative like running it on a VM than deal with that. Actually I’d rather shave my face with some mace in the dark than deal with that.


>Do you really think that businesses are going to jeopardize the workflows of their $250k/year assets over a very core piece of software for $250/year?

Tech companies? No, they will probably cough up until they have another solution.

IT departments in non-tech companies? Yes. I fully expect a circus there. Many won't have known it was being used, purchasing will have their ego bruised by a company "hijacking" them and won't want to pay, and so on.


> The business license costs $21/month

No, it costs $21/user/month.

> Do you really think that businesses are going to jeopardize the workflows of their $250k/year assets over a very core piece of software for $250/year?

I work for a big enterprise that is currently going through a transition that is simultaneously a cloud transition and a transition to more critically consider tooling with license management overhead (which for a $21/seat license, is probably a significantly larger cost in a big, bureaucratic enterprise than the license cost itself.)

That said, there's a good argument that the things that you get for that $21/seat are things many enterprises will find are still worth the license cost + license management overhead. On the other hand, most of that is stuff that doesn't scale with number of developer seats, so a subscription model that doesn't work on that basis and therefore doesn't impose the kind of license management overhead that per-seat licensing does would probably net Docker more revenue while imposing less total costs on enterprises.


> No, it costs $21/user/month.

Do you guys not realize that I know that? I did the maths standardized to one seat.

I’m also not suggesting that the entire engineering spend for 100+ person companies is $250k/year.

Do the analysis on a per unit cost because that’s how life works: per unit cost of human capital and per unit cost of software vs. per unit value creation.


> Do you guys not realize that I know that?

Did you not read the rest of the post? I think you knew the license fee was per seat; I just think you failed (and still fail) to understand its significance for the actual cost imposed on the business, of which the license fee itself is only a fraction.


It's not the amount of money that is the issue; the issue is that this wasn't budgeted into the project when it was proposed 2 years ago. It is software that falls under category S, which means you can't use overhead funds, it has to be a category S purchase, but you have no category S funds budgeted to the project because you were using free software.

Being so cheap actually complicates the matters even more, since the finance people don't really want to mess with purchases less than $5,000, even though it is their own rule that requires all software to go through them regardless of cost. It just means they won't be willing to help very much.


Exactly. This makes sense.

USD 21 per user/month + bulk discount is nothing.

If companies want to roll their own they can, but most won't. Docker Desktop adds a lot of value if only by removing the hassle for quick os-agnostic development.


What is the value add for Docker Desktop?

In a world where podman exists, what's the point of docker on dev machines anyway?


How about the fact that not all, or even most, dev machines run Linux, which is the only platform podman supports?


I don't use podman, but a quick search shows that you can install podman on Linux, Windows, and MacOS. Are you referring to something else?

https://podman.io/getting-started/installation


> Podman is a tool for running Linux containers. You can do this from a MacOS desktop as long as you have access to a linux box either running inside of a VM on the host, or available via the network. You need to install the remote client and then setup ssh connection information.

Literally the first non-title element in your link. Just because the client is cross-platform doesn't mean the entire solution is turn-key cross-platform.


That's exactly what Docker Desktop does as well: a macOS or Windows client that runs Docker in a Linux VM. There is a really limited concept of containers on Windows, but it's far from great, and since not many people use Windows in production to run apps, it's not really useful.


Well yeah, sure, but Docker for Mac/Windows installs the VM, sets up host-guest file shares, papers over networking and VPN stuff, etc.

I was going to say that installing Podman on macOS/Windows leaves the VM as an exercise to the user, but per another comment, there's podman-machine[1], a new-ish built-in to set up a VM. However, it's apparently already deprecated (?) and recommends simply 'Vagrant' as an alternative, so seemingly setting up the VM is back to being a user exercise for Podman?

[1]: https://github.com/boot2podman/machine


The integration (the Linux VM is transparent to the user) is what makes all the difference.


This is currently its big weakness in my opinion.

Most of the problems that devs are facing with Docker are not actually Docker itself but this layer that tries to abstract the VM. So in the end, it's quite common that you have to fix things in the VM or get rid of it. I don't know if Docker on WSL2 makes matters better; none of the devs on my team using Windows can use it because of the memory usage bug.


Oh yes, whilst Docker was created to help developers to think operationally, it (ironically) ended up helping Developers to not think about operations at all.


If you read the instructions, they basically say that you still need a Linux VM or WSL environment to run Podman in. Which makes it not a complete replacement for Docker desktop, which handles the VM for you. So OP isn't wrong.


Podman 3.3 and up has `podman machine`, which is supposed to do this for you...


Apparently it's already deprecated https://github.com/boot2podman/machine ?


That's an unrelated tool named "podman-machine". "podman machine", as in the subcommand to "podman" proper, is not deprecated.
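For clarity, the built-in subcommand flow looks roughly like this (defaults vary by version; newer releases provision a Fedora CoreOS image for the VM, as far as I know):

    podman machine init     # create and provision the Linux VM
    podman machine start
    podman run --rm -it alpine echo hello    # CLI on the host, container in the VM
    podman machine stop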


So that's a couple hours for the first dev to make a list of WSL setup instructions or a VM template, and 5 minutes each for every dev after.
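A rough version of those instructions, assuming a recent Windows 10/11 build and Ubuntu as the distro (older Ubuntu releases need the Kubic/OBS repo for podman):

    # from an elevated PowerShell prompt
    wsl --install -d Ubuntu

    # then inside the Ubuntu shell
    sudo apt-get update && sudo apt-get install -y podman
    podman run --rm -it alpine echo hello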


Does Mac have any sort of built-in VM framework like Hyper-V?

I did not know that running basic Linux VMs was something you could do without downloading VMware or VirtualBox - which isn't as easy as just running a few scripts (especially in corporate environments where Brew and other tools might not be so readily available to all employees).


> Does Mac have any sort of built-in VM framework like Hyper-V?

Two, actually. Hypervisor.framework [0] to build virtualization solutions on top of a lightweight hypervisor, without third-party kernel extensions, and Virtualization.framework, to create virtual machines and run Linux-based operating systems.

[0] https://developer.apple.com/documentation/hypervisor?languag...

[1] https://developer.apple.com/documentation/virtualization?lan...


Thanks for the tip. I will have to take a look. It'd be nice if there were some github repos out there that leveraged this to make experimenting with Linux distros (including Desktop environments) as seamless as it is with VMWare Workstation - where things like high DPI resolution, copy & paste, etc. just work without too much trouble.


I meant that for windows. I don't have enough experience with mac to give great answers.

> downloading VMware or Virtual Box - which isn't as easy as just running a few scripts (especially in corporate environments where Brew and other tools might not be so readily available to all employees).

Docker Desktop needs the same level of access, because it runs virtual machines. If you could install it on your own, can't you install those on your own? If it was centrally managed, then IT can switch to one of those programs instead.

> Does Mac have any sort of built-in VM framework like Hyper-V?

Yosemite added Hypervisor.framework, I guess?


The fact that you could do something alternative doesn’t mean it’s easy, supported, or streamlined for developers or company tech ops.

I don’t think that thinking like an engineer will help you understand the value add here.


Docker for Mac includes a Kubernetes cluster that’s way better than minikube etc.

Not sure the bare docker daemon VM wrapper has a defensible moat though. Maybe this does more in Windows?
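For reference, once the Kubernetes option is enabled in Docker Desktop's settings, the bundled cluster just shows up as another kubectl context:

    kubectl config get-contexts
    kubectl config use-context docker-desktop
    kubectl get nodes    # single node, named docker-desktop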


Well, I hadn't heard of podman until now, and I imagine I'm not the only one. Does it consistently have functional parity with docker?


It doesn't matter if it has parity of functionality when Docker has grown to the point where it has name recognition with enterprises and a sales team that can engage with these large customers.

It needs to have parity in all other pseudo-layers (3rd party tool support, support plans, OS support, someone to sign a contract with, compliance tools, etc.) We know most of these go unused or have no real meaning to devs, but they unlock enterprise procurement.

I believe podman has a linkage to RedHat which may actually bring all of the things that procurement want to hear, but the question is whether the door is open to RedHat, or not. Procurement departments can be fickle, preferring Oracle for everything or the other way round trying to eradicate Oracle while permitting a combination of others. It's all politics based on previous experiences and opinions in the end.


trey-jones, I apologize, but I cannot reply to you directly.

The biggest showstopper for podman is that it runs entirely in userspace on Linux. Having said that, I use it as a drop-in replacement for Docker and it's only become better in the past year. This is somewhat irrelevant to the Docker Desktop product, as podman doesn't provide a nice packaged up solution, but you can use podman on Windows and Mac as long as you have a Linux host available, either as a guest VM or as a machine _elsewhere_ on the network, see [1]. I only use Linux if I can, and the ability to run images without having to run a daemon with root privileges is a very big bonus for me, but it might not be for you. Now I do wonder, how hard would it be to declare a minimal nixOS VM for running as one's podman host :)

[1]: https://podman.io/getting-started/installation.html
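The remote setup mentioned above is roughly this (host and user are placeholders, and the socket path depends on the uid of the remote user):

    # on the Linux host: expose the rootless podman API socket
    systemctl --user enable --now podman.socket

    # on the Mac/Windows client
    podman system connection add mybox ssh://dev@mybox.example.com/run/user/1000/podman/podman.sock
    podman --remote ps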


> The biggest showstopper for podman is that it runs entirely in userspace on Linux.

As I see it, that's the whole selling point. Need to have something with limited rights or build a container without root? Podman is the way to go.


Not true. Podman runs natively on a Mac, using a Linux VM for containers, just like Docker. `podman machine init` pulls down a VM, after which you can start running, developing, and building the same images that Docker and Kubernetes use.

Just `brew install podman`.

For Windows, Podman is available on many distributions in WSL2.


I agree, what's missing is a nice VM appliance for macOS and Windows.


This is what singularity did in the HPC world (amongst other handy features):

https://singularity.hpcng.org/


This might even be a good selling point - sharing a container host amongst multiple devs means that the hardware for devs can be lighter/cheaper.

Even for myself flying solo I can use my old desktop as a Podman server, conserving RAM/CPU in my development env. Sounds good to me.


> The biggest showstopper for podman is that it runs entirely in userspace on Linux.

Why do you believe that? Docker's "root by default" design decision is the bane of Docker. Podman is even described as Docker done right in that regard.


It doesn't do caching, which disqualifies it from many use cases unfortunately:

https://docs.podman.io/en/latest/markdown/podman-build.1.htm...

> --cache-from

> Images to utilize as potential cache sources. Podman does not currently support caching so this is a NOOP.
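For context, this is the sort of CI pattern that flag enables on the Docker side (the image name is just an example):

    # pull the previous build so its layers can seed the cache
    docker pull registry.example.com/myapp:latest || true
    docker build --cache-from registry.example.com/myapp:latest -t registry.example.com/myapp:latest .
    docker push registry.example.com/myapp:latest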


I use it on my Fedora dev machine and it's pretty good. Still wouldn't replace a mission critical machine with it, as it isn't the primary target for docker containers to run on and it can break more easily.


It will break from time to time. But so does Docker Desktop. Neither Docker nor Podman is suitable for mission critical stuff. Their market is entirely dev machines, and devs can fix it on their own.


> Neither Docker nor Podman is suitable for mission critical stuff. Their market is entirely dev machines, and devs can fix it on their own.

Pretty sure they're used for more than that.


You are using logic.

That's not how corporations work.

I work remotely for a large corporation (over 100k employees) and it wastes on average an hour of my productivity every day on stupid stuff. For example, my office PC becomes unavailable every time it gets an update, at least once a week. They force a restart during working hours. If there is somebody in the office I can call them to reboot it; if not, I have to drive there. Everybody has the same problem. Meetings are disrupted, plans are thrown into disarray, and people sometimes lose an entire day of work.

The problem has been known before covid and has not been solved.

Any normal, logical person would throw resources into solving the problem, but the people responsible just say they are busy, and why is it such a big problem if you can just press reboot?


That's not how it works in my experience. If it costs more than 0 but less than 10k, the pencil pushers at procurement won't even answer your emails...


I think you don't understand how big-company procurement works. Getting legal's and procurement's attention to look at this is more effort than it's worth. The logistics of managing licenses, single sign-on, etc. are a nightmare. Besides, running Docker is already frowned upon by InfoSec and requires special permission. It will never happen where I work. Not because of money, but because it's too much trouble.


> Do you really think that businesses are going to jeopardize the workflows of their $250k/year assets over a very core piece of software for $250/year?

Not just "think" - I know for a fact.

Finance refuses to spend money if there is a "free" alternative, and Architecture rejects all software that isn't OSS licensed without a lengthy review process. It would take us at least 3 months to get approvals and POs for all the devs, if it even got approved, but more likely it would be in "batches" of licenses, further delaying roll-out. So instead the devs are beginning to install Linux VMs to run Docker from, or looking at podman.

People forget that just because there's a "rational" decision, doesn't mean people (or businesses) will choose it. Businesses are run by humans, and humans are irrational and dumb. Path of least resistance wins.


Absolutely.

The pricing model for this is really stupid. A $21/mo subscription is a pain in the ass — they would be better off just selling it perpetual for $1000.

I ran a big enterprise shop for years. Nuisance licensing like this costs a fortune - I’d need to go do legal review, have it tracked for renewals and compliance, etc. Freemium product like this is always coupled with a stupid license enforcement audit that appears and tries to extort you. For a small quantity, overhead may be more than the product.

As an enterprise consumer, it makes me question the viability of the company. $250 a year is priced to allow unit managers to use P-Cards to avoid procurement processes. It’s too cheap to make meaningful amounts of money, so unless it’s a way to shift me to a more intelligent model, I’d have folks assigned to investigate alternatives.


I love the term “nuisance licensing”. Considering how everyone and their grandmother is switching to subscription models I wonder why it didn't come up earlier.


What companies are offering $250k/year for engineers?


TCO ain't paycheck offer

Salary + Taxes (Payroll, etc) + Fringe (Healthcare, etc) + Dev licenses + Training/conferences = paycheck * (n > 1.5)


When you add up salary + benefits + workplace amenities + taxes + software licenses and whatnot, you get there mighty quick.


I don't know what you mean by taxes, my question is about 250k base, excluding bonuses, stock, and excluding the supposed monetary value of benefits/training/travel/home office whatnot


The statement was that the engineer is a $250k asset to the company.

Rule of thumb TCO for headcount is 1.5-2x salary to account for taxes, Medicare, health insurance, equipment cost, licenses, office square footage, stock options, travel, etc.

So a $250k asset from a business perspective is typically someone that makes $125k-$165k.


In the US, companies are responsible for paying 50% of the Medicare and Social Security taxes for each of their employees.


It seems like 200k+ is pretty typical for Engineers with at least some experience even in less hot markets like the Midwest. I know several developers in the Metro Detroit area making more than 200k base.


Are you talking about total compensation where you add base salary, payroll taxes paid by employer, health & benefits, 401K contribution, possible bonus & equity, etc?

I see $160K salary ceilings for general "senior engineers" at the vast majority of non-FAANG companies in a high-COL East Coast city, so I'd imagine the Midwest is $25K less since housing is so much cheaper.

But maybe salaries are really exploding, and I just haven't been paying attention.


I meant base comp, not TC. If you haven't checked out the Team Blind app or https://levels.fyi recently, the compensation right now for experienced developers is unreal.


In Germany this type of experience would be paid 50k. So the license fee will be relatively more costly here.


At least Facebook, Apple, Amazon, Netflix, Google, Microsoft.


Pretty much any company operating in a major US city.


I guess it depends on the enterprise. I can imagine the thoughts of certain managers: Recurring costs? Something that used to be free and now they want money? Pricing per user? More expensive than Office 365?


From my experience: Yes, they may jeopardize it.

If docker and containerization are not yet widely used in the company, a lot of decision-makers will not buy it, because they did fine without it for decades ;)


That price is per person, not total. The highest total I've heard so far is going to cost that company $108k a month, for a development tool.

VCs are shooting themselves in the foot here: it is very obvious that we should never adopt any technical tool backed by VC, because they will eventually try to make us an offer we can't refuse and then go out of business shortly thereafter when their extortion attempt doesn't work.


> That price is per person, not total. The highest total I've heard so far is going to cost that company $108k a month, for a development tool.

Lol yes, I know. My point was the cost of meatware is 3 orders of magnitude greater than the software, and likely less at scale.

$100k/mo would mean 5,000 devs. I highly doubt a company with overheads near/in excess of a billion dollars/year is going to sweat a $1m expense for core infra like Docker.


I think you're correct about existing users at large corporations. Converting all those into paid accounts is a no-brainer.

However, this will massively change the competitive landscape. For companies that haven't yet adopted Docker, this is a huge red flag. This change is going to spur development on open source alternatives like nothing else could.


Nah. We're not 250 people. We use docker, and we won't stop/switch because of it.

This is such a good problem to have. I would love to cut Docker Inc. a check.

And we've basically moved over to garden (a k8s dev env) anyway. But we still use docker plenty.


$21 / user / month - so if you have 100 engineers that's $2100 a month or $25k a year.

Still should be doable for most businesses that size but licensing costs can blow up when you start to have a lot of seats. An annoying thing about the company I work for is that they have a limited number of licenses for things like IDEs, so they ration them. And so I'll boot up an IDE for a language I work in less - like say, PyCharm - and it will stop working because my license got taken away and given to someone else. I'll have to request another one be given back to get working again, which is pretty annoying when I'm trying to get something done. I work mostly with Docker / Kubernetes so if I'm in a situation where my core tools are being constantly taken away, I'll be pretty miffed.

I agree that Docker has every right to charge big companies for this software. Just wanted to point out that the costs can be more than you'd expect.


Local dev environments are used because it's what people are used to, and they're low friction to start using. The problem is that they're actually quite difficult to maintain at scale. Large businesses spend a considerable amount of time debugging local developer environments. Cloud IDEs have been popping up and are finally to the point of being usable (and in some cases even better than a local dev env).

My company was already somewhat interested in checking out hosted IDEs, and today this pushed us to start some proof of concepts to move some teams over to github's codespaces.

So, yes, businesses will adjust their workflows to avoid the cost since many were already interested in improving their workflows and relieving some of the burden on their developer environment tooling teams.


> Do you really think that businesses are going to jeopardize the workflows of their $250k/year assets over a very core piece of software for $250/year?

YES! I've seen it. Stinginess at my shop regularly causes hours and hours of pricey engineering time to be spent on problems caused by under-allocating cloud resources to the point that things fail - or on the refusal to allow some cloud service because it will cost $100/month (despite saving tens of thousands in engineering resources).

It's crazy but I bet it happens many places.


That would be the wise thing to do, but I'm sure there are some ways companies will eff it up anyway. Survival of the fittest I guess.

Management may want some badge of honor for saving a budget line item. Or developers may want to embark on a new and interesting project and successfully convince management it's a good idea, who will agree for a wide range of reasons (not pissing off developers might be one of them).

Both will ignore the risk and considerable downsides. Happens all the time.


Have you ever worked at a large corporation?

I have, and let me tell you, they will spend dozens of hours and thousands of dollars to fight a purchase of $21. I'm really not joking.


That math changes a lot in companies that don't sell tech, as their profit margins aren't as fat. It also changes when you have 1000s of developers.


> Do you really think that businesses are going to jeopardize the workflows of their $250k/year assets over a very core piece of software for $250/year?

Hahahahaha, yes, absolutely. Because if the wrong VP gets this in his head, it's not $250/year, it's "almost half a million dollars over 3 years" for the 500 employees or whatever. I've seen it happen.


I once got told by my manager to reduce the cost of my email inbox on the server and download it to my local machine.

This was email provided and owned by the very company I worked for, and it was costing enough for management to care.

Maybe enterprises have changed since then, but I wouldn't be surprised if stuff like this does start prompting questions, since it's a new cost on a budget.


(Big) Enterprises nowadays have mandatory backups for emails, so downloading e-mails to clients doesn't remove the need to store them on the server.


I work at a large company. I added a USE_PODMAN environment variable for devs to use within an hour of finding out about this
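
Roughly, the idea is something like this, sourced by our build scripts (the exact wrapper will differ; CONTAINER_CLI is just an illustrative name, not a standard):

  # pick the container CLI once, based on an opt-in env var
  if [ "${USE_PODMAN:-0}" = "1" ]; then
    CONTAINER_CLI=podman
  else
    CONTAINER_CLI=docker
  fi
  "$CONTAINER_CLI" ps   # scripts call $CONTAINER_CLI instead of hardcoding docker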

But maybe my company is too large; large enough companies can have smaller teams, other teams will probably go the licensing route


True. Absolutely. But I guarantee you that this headline means that even junior "devops" engineers will have workable alternatives by tomorrow, and can tell you how to implement them with little friction.


Yeah that's great in theory. In practice you have to go to the official route (tm). Here's some of my experiences:

1) Internship at a well known tech corporation. I was on a project that would accelerate the debugging and diagnostics of a machine the whole factory depended on; tl;dr I was making an app to replace an overgrown Excel sheet. To do that we used an open-source library, but the catch was that the maintainer made their money by putting the documentation behind a paywall. At some point the project was going well but the leftover docs from Google Code weren't enough anymore. So it took two months (TWO MONTHS!) to make a $15 USD one-time payment with the company's credit card. Apparently the process took longer due to the fact we had to go through PayPal.

2) Another multi-billion-dollar corporation, less big but still multinational. They spent hours and lots of money on internal propaganda to promote their "core values", which of course included "agility". We wanted to buy a commercial code analysis tool to save time during code reviews and increase code quality of a code base where ~30 devs worked. No brainer, right? We had the R&D department VP's OK to buy the damn thing and ask the mothership for forgiveness later. After 6 months our team leader was still receiving calls and forms to fill out. "Do you really need this? Did you think about other products? Did you get the OK from this person thousands of miles away and this other person who speaks very broken English?" Using the company's internal org charts and salary tables, we estimated that during those six months they had wasted about 5 years' worth of licensing in salary.


Which is why this shady tactic works time and again.


> $250k/year

fuck..i need to find a new job


lol obviously you’ve never worked in a giant bureaucratic corporation.


Please don't post personal swipes or unsubstantive comments.

If you know more than other people, that's great, but then please share some of what you know so the rest of us can learn. If you can't do that or don't want to, that's fine, but then please don't post.

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...


Lol obviously you’ve never risen to a level of management in a giant corporation.

It’s ok. Just keep telling yourself you’re smarter than everyone else.


Please don't respond to a bad comment by breaking the site guidelines yourself. Not cool.

https://news.ycombinator.com/newsguidelines.html


A company with 100 $250k/year engineers would be billed about $250k/year for this license. Any half-competent manager is going to realize that money would be better spent on a single engineer focused on tooling, who can replace Docker Desktop in addition to working on other efficiency improvements.


> A company with 100 $250k/year engineers would be billed about $250k/year for this license

Would love to know how you arrived at this number.

The list price for 100 engineers on the most expensive plan would be $25k/year and I guarantee Docker will do volume discounts.

Additionally, the terms just say you need a paid plan, which presumably could be the cheapo $5/mo plan which would be $6k/year for 100 engineers.


Yes, you're correct, my numbers are completely wrong here. I'm unable to edit the post anymore, unfortunately.


This appears to be cutting off their nose to spite their face. We have a team of 50+ engineers that all use Docker for Mac for daily development tasks, but I suspect that will no longer be true in a rather short amount of time. Frankly, I don't really know if anybody actually uses the UI components for it outside of starting and stopping the engine and for basic configuration of the VM. Everything else that comes with it is just useless cruft for our use cases.

As soon as there is a viable alternative (and I’d be happy to contribute to the effort), I’ll be moving away from Docker for Mac.


For the Mac, just get Canonical’s Multipass (http://multipass.run) and do an apt-get to install Docker into a VM and use VS Code to “remote” to it. It will automatically install the Docker extension inside the Linux VM and you’re set.

For Windows, use WSL2 and do the same.

Both can mount “local” folders, although the setup is obviously different.

You now have a better way to manage containers than ever before.
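
If it helps, a rough sketch of the Multipass route (VM name, sizes, and mount path here are arbitrary, and flag spellings can differ between Multipass versions):

  multipass launch --name docker-vm --cpus 2 --mem 4G --disk 20G
  multipass exec docker-vm -- sudo apt-get update
  multipass exec docker-vm -- sudo apt-get install -y docker.io
  multipass exec docker-vm -- sudo usermod -aG docker ubuntu
  multipass mount "$PWD" docker-vm:/workspace   # map a "local" folder into the VM
  multipass shell docker-vm                     # then use docker from inside the VM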


Can’t say that limiting developers to VSCode is necessarily a step forward.


You don't need VSCode specifically, but it does provide an alternative GUI for managing Docker containers that isn't tied directly to Docker Desktop.

You could use anything to manage the Docker VM... VSCode is just one option.


Well, it does set up everything automagically for you. I can also dig around for my Docker CLI config and the right way to expose the Docker TCP socket to the host, but if you need a quick way to get working, VS Code is it.


Why run Docker inside a VM on a Mac, when you can just run the Linux dev environment directly inside the VM? That's just starting to sound like Docker for the sake of Docker.

Multipass, Qemu, and Parallels can all provide a solid VM on Mac host. All you need after that is your dev environment VM guest image to deploy to the team.

https://wiki.qemu.org/Hosts/Mac

https://www.parallels.com/


Some people here actually want and need Docker features. For me it's the ability to run from a given image and know that I've got _exactly_ the same image that other developers have. Reproducibility.


I might be wrong, but I think his point is that by the time you're running a linux VM for docker, why not go ahead and get the rest of the tooling for free?

Docker can still be run in the VM just fine, for cases where you want a reproducible build environment.

I do this at any company that lets me (and by lets, I mean doesn't explicitly forbid) - They all give me a Mac, and the first (and sometimes only) thing I install is usually vmware fusion, followed by the linux distro of my choice (Arch).


>for cases where you want a reproducible build environment.

Or just create your reproducible build environment as a QEMU VM image instead of a Dockerfile. That way you only have to install a VM image, instead of installing a VM image/OS + installing Docker + building your Dockerfile.


See, I don't really agree here.

Containers solve a different problem than vms. The biggest issues (at least for me) are

1. The second a dev starts using that VM, it's no longer reproducible. The goal of docker is that a developer can create reproducible images as a part of normal development.

2. I won't be running that QEMU vm in production, but I might very well be running the exact same container image in both development and production.


Just the cost of porting one single Dockerfile to anything else is bigger than the Docker licence cost for 1 year.


When I want a very specific version of the image, I use the SHA to pull/run

  $ docker pull hello-world@sha256:7d91b69e04a9029b99f3585aaaccae2baa80bcf318f4a5d2165a9898cd2dc0a1


Or you could tag a little more optimally.


Tags are mutable, digests aren't.


Why? How often do you change tags after you've built a container, and for what reason?


Digests cryptographically guarantee that you get the correct content, which prevents both malicious tampering (mitm, stolen credentials, etc) or accidental mutations. This is why "immutable tags" are a bad substitute and an oxymoron.

There are also better caching properties when using content addressable identifiers. For example with kubernetes pull policies, using IfNotPresent and deploying by digest means you don't even have to check with the registry to initialize a pod if the image is already cached, which can improve startup latency.
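
As a small illustration, one way (of several) to turn a tag you have already pulled into a digest you can pin deployments to - the image name here is just an example:

  docker pull nginx:1.21
  docker inspect --format '{{index .RepoDigests 0}}' nginx:1.21
  # prints something like nginx@sha256:<digest>, which you can use instead of the tag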


> There are also better caching properties when using content addressable identifiers. For example with kubernetes pull policies, using IfNotPresent and deploying by digest means you don't even have to check with the registry to initialize a pod if the image is already cached, which can improve startup latency.

While I agree on the unquoted part, this is also true for human-readable (aka mutable-that-should-be-immutable) tags when that pull policy is set (which is the default for everything that is not `latest`)


With a SHA you shouldn't have to change the pull policy. However, there isn't a need for Always if you have the SHA.


Because you can map your working folder inside it on both Multipass and WSL2, and you can get an integrated editor experience with VS Code, which is what many people apparently want to do (I’m a tmux guy so I don’t care, but I thought I’d provide a user-friendly approach).


On Apple Silicon Multipass actually uses QEMU under the hood. Basically it's just a (very convenient) wrapper


I want to ship a dockerfile in my CI/CD pipeline, not a VM image


Because the end result of a lot of workflows (eg k8s) is a buildable dockerfile, or built docker image for deployment.



I would strongly recommend docker.io from the Ubuntu/Debian repos. This will always be Apache 2.0 licensed. (It's a fork of Moby packaged by the Debian people.)

Docker Engine looks problematic since the license isn't clear at all. For instance, Microsoft didn't include Docker in GitHub Actions; they also forked Moby and packaged it on their own, since they can't comply with the End User Agreement of Docker Engine.
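
For anyone who hasn't done it, the Debian/Ubuntu route is about as simple as it gets:

  sudo apt-get update
  sudo apt-get install -y docker.io
  sudo usermod -aG docker "$USER"   # optional: use docker without sudo (log out/in to apply)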


Regarding the GitHub Docker license thing, this is directly from GH staff (you’re correct): https://github.community/t/what-really-is-docker-3-0-6/16171...


No, you can apt-get docker.io (the repackaged version available for the last 2-3 LTS releases, built from source and with fan networking support). Works for 99.9% of your use cases.


Doesn't work on M1 chips yet.


It's in "beta" right now but it works quite well (you can find the binary in the dedicated GitHub issue). Under the hood it just uses QEMU which in turn uses Apple's Hypervisor.framework for virtualization


I believe this is the issue referenced above: https://github.com/canonical/multipass/issues/1857


This is the best solution. Multipass is also great software.


Why don't you just use the VM directly?


Multipass uses reverse sshfs for mounting into the VM. This is a concept also used by lima now. https://github.com/lima-vm/lima
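
If you want to kick the tires, the quick start is roughly this (assumes Homebrew; the default template ships nerdctl rather than the docker CLI):

  brew install lima
  limactl start            # creates and boots the default Linux VM
  lima nerdctl run --rm hello-world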


Folder mapping, which both options provide.


> (and I’d be happy to contribute to the effort)

Isn't paying their fee also contributing to the effort of what they've put into it so far, and ideally what they'll do to keep it working and improve it over time?


So you have a tool that your team uses daily, but won't pay a small license fee for it? I'm not sure I understand that logic. If the tool worked for you when it was free, it still works for you when you pay a small cost per month. Unless the cost is really that huge, I don't see why that would be a reason to change.


I think it is mostly the issue of dealing with byzantine corporate procurement processes. Also, it's really not at all clear what the value-add of Docker Desktop is. Essentially it is just doing what you can do using the OSS components (running a Linux VM which runs the docker daemon and mounting a Unix socket on the host machine to talk to it). The only "value add" is a UI app on the host machine which I have never once used personally or witnessed anyone else using.

I certainly don't begrudge Docker for trying to create a sustainable business, but as an "open core" model it's really hard to understand what the proprietary extensions are that people will want to pay for. They took a stab at the container orchestration space with Docker Swarm (which you could definitely imagine having an "enterprise version") but that lost out pretty definitively to Kubernetes. So they're left with Docker Hub? That just doesn't seem like it will really be much of a revenue generator. You're either using whatever cloud container registry is available (ECR, GCR, etc.) or you're using a more general artifact repository like Artifactory or GitHub, which can be a container registry as well as a Maven/npm/etc. artifact repository.


Not only is he opposed to paying a license fee, he will contribute to the development of an entirely new, almost certainly worse, tool.


I made https://github.com/lime-green/remote-docker-aws a while ago and I've been happily using it for about a year. It throws docker in an EC2 VM and allows you to call docker as if it were running locally by tunneling and syncing files. The network file systems I tried (sshfs, nfs) were way too slow when used as docker volumes, so it uses a tool called unison which does two-way file syncing (rsync only does one way, which is a problem for something like making django migrations)


> As soon as there is a viable alternative (and I’d be happy to contribute to the effort), I’ll be moving away from Docker for Mac.

I just SSH into my server. The biggest pain about macOS is that it can't easily mount SFTP.


You can mount SFTP with Mountain Duck [1], from the creators of Cyberduck. Costs around $40.

[1]: https://mountainduck.io


I've been doing development in docker, but unrelated to that I did an upgrade to Big Sur and borked the machine for a few days.

Pulling the same projects to my (admittedly quite fast) Linux box in the cloud is night and day for speed in docker with volume mounts. Browserify runs 5x faster, at least. Yarn install is 10x faster.

And it's reliable. Docker's file sharing on the Mac has about a 25% failure rate for any given save being properly picked up by watch with a complete, uncorrupted, updated file.


FUSE/sshfs exists for macOS. Seems to work okay from the little I've played with it.
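
Once macFUSE and sshfs are installed, the mount itself is a one-liner (host and paths below are placeholders):

  mkdir -p ~/mnt/devbox
  sshfs user@devbox:/home/user/project ~/mnt/devbox
  umount ~/mnt/devbox   # when done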


I used FUSE/sshfs quite a lot in the past, and never had many issues (I think most of my issues were with how my editor displayed and refreshed the file list, not with the actual sshfs implementation, and were similar to those on Linux/Windows.)


The Lima project is working on an alternative to sshfs: https://github.com/lima-vm/sshwebdav


It's not completely awful, but it's quite a long way from mounting a natively supported share, and the already convoluted setup process has been made worse by the project going proprietary.


Ouch, really? Cyberduck was always one of my first installs simply due to how much I disliked Finder, but I didn't know things were... that bad.


You’re going to spend scarce engineering resources reimplementing a Docker for Mac alternative, then roll out your immature alternative to 50+ engineers, instead of paying a few hundred dollars a month for a good product and moving on?

It seems to me you would be the one cutting off your nose to spite your face in this scenario.


Assuming that you currently don't need anything beyond the functionality the free plan provides, and assuming all 50 engineers need a license, your "few hundred dollars" is actually $1,250/month just for getting the same as before.

I understand (in some way) the decision Docker made, but I am not sure it is the way to go. However, it is a very hard question, and if I had to pay a monthly fee for each component I'm using to develop a solution, one or the other project would not even start because it's not worth it anymore.


That 50-person team probably costs at least $250,000/month. Are you going to take away a tool that everyone on the team needs to save $1,250?

Or put another way, how much time would you need to replicate what Docker offers for a team of 50 people? If it takes more than 25% of the time of a single employee, then Docker is cheaper (assuming your employee costs $5000 a month, which I guess is a lower bound for an engineer).


No, I am not (that was also not my point, basically). My point is that you are going to pay for a) something you got for free (as in beer) before and b) something you don't (maybe) need/want.

I think it is a very valid question how to monetize Docker (and all the other libraries we are using for free), but I am personally not sure that subscriptions to everything are the solution.

I am sure that this expense should not get in your way if you have 50+ engineers, however if you think that with all expenses…


The reason this move isn't popular is because it seemed like local docker development (for any size corporation) was always going to be free. If I personally had known this was in the cards I would have invested (time, money and effort) into alternatives earlier on. Instead they killed all the competition and are now demanding money. So yeah, this is the first move by Docker that has made me kind of mad at the company.

How does this affect consultants that want to introduce docker to small teams inside large corporations? A lot of scenarios become crappy now.


> Instead they killed all the competition and are now demanding money. So yeah, this is the first move by Docker that has made me kind of mad at the company.

Which alternatives did they kill? The Podman tool ecosystem is doing fine and is closing in on being a complete replacement, and Docker Swarm hasn't exactly killed k{number}s.


"k{number}s" means k8s, right? Is this a reference or something?


k8s, k3s


Personally I think just running Portainer as a container is a viable alternative to Docker Desktop. But I never really used the UI much, so perhaps there are features I don't know of.


Unless there's podman or similar for local dev, you'd still need Docker Desktop to use it on Windows/MacOS.


$21/month/user is nothing in a business setting.


Between getting that approved and paid by the company or just using another tool, I will use another tool.


Yes, it's nothing for a single tool, even with hundreds or thousands of devs. But in total it quickly adds up for a lot of tools.


I tried getting podman working as an alternative to Docker, pointing at a Linux server, and ran into issues. I'm hoping the kinks get worked out and I can move over.
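
For reference, the rough shape of what I was attempting (user, host, and socket path are specific to my setup, and the server needs its podman socket enabled):

  # on the Linux server: expose the rootless podman API socket
  systemctl --user enable --now podman.socket
  # on the laptop: register the connection and talk to it remotely
  podman system connection add devbox ssh://user@linux-host/run/user/1000/podman/podman.sock
  podman --remote ps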


We have been using podman here for all of our developers and have yet to find anything that behaves differently from docker. Perhaps you are using a more obscure feature.


I think podman is a lot easier than docker. You can stop all containers easily (without xargs)

  podman stop -a

or you can mount the current directory

  podman run -v .:/mnt

or you can mount while you build

  podman build -v /dir:/dir

or you can work entirely without root and have the same user on the host also in the container:

  podman run --userns=keep-id

(useful when a directory is mounted into the container but the application refuses to run as root)


I've been using Minikube's docker-engine and haven't missed Docker for Mac for some time now.

Minikube sets up a Linux VM using the macOS hypervisor.

It even has a convenience command to configure docker-cli/docker-client.

  $ minikube docker-env
    export DOCKER_TLS_VERIFY="1"
    export DOCKER_HOST="tcp://192.168.65.11:2376"
    export DOCKER_CERT_PATH="/Users/wibble/.minikube/certs"
    export MINIKUBE_ACTIVE_DOCKERD="minikube"
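
To point the local docker CLI at that daemon, you eval the output in your current shell:

  $ eval $(minikube docker-env)
  $ docker ps    # now talks to the dockerd inside the minikube VM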

For corporate situations where MITM proxies are used, you can inject/trust custom CAs using

  $ minikube start --embed-certs
https://minikube.sigs.k8s.io/docs/handbook/untrusted_certs/


Copy Paste install instructions for macOS: https://gist.github.com/rmetzger/e556bfda8082bceeae6a32e7e02...


Yeah, I used minikube for local dev on k8s services and ended up just using it for all Docker stuff after a while. It is slightly less ergonomic than Docker for Mac though, especially with respect to DNS and network issues. For instance, the minikube VM and any containers running in it cannot by default use the host machine's VPN, so if you have to connect to an external service over your corporate VPN then you need to do some extra config (which isn't very well documented) to make it work. And the setup I ended up using was to use the VPNKit socket that was part of Docker for Mac to make it work. Now VPNKit is also OSS so I'm sure you can get it to work without Docker for Mac at all, but it's also not trivial.


Before Docker Desktop there was a solution called Docker Toolbox that worked exactly like this. The only problem is that occasionally internal corporate networks will use the same IP range, and you have to customize that by building your own docker engine.


But what minikube backend are you using for this? The preferred one is Docker and all the others are also paid on Mac.


> all the others are also paid on Mac

Hyperkit is open source software that works on macOS.

https://minikube.sigs.k8s.io/docs/drivers/hyperkit/

Virtualbox is also a free (as in beer, and mostly libre) driver that works on all of windows/linux/macOS


Beware of VirtualBox. While part of it is free, it's not very useful without the extension package. This package is easy to download on the same website as VirtualBox, but... it's not free.

Even better, Oracle tracks the IPs that download this extension, and after a suitable amount of time they will come knocking on your company's door asking for an insulting amount of money (e.g. more expensive than VMware), or you get sued. You need to read the fine print of the additional EULA, printed in really small letters on the VirtualBox website, to figure out that the extension isn't free. It's almost a honeypot tactic. Scummy.


I don't know how Macs fare, but on Linux the extension package is not really a big deal - it mostly adds RDP and some faster USB modes, and USB passthrough is marginal at best anyway.


Hyperkit is docker for mac's backend though, so... whatever bugs that upset people are probably still present.


Wait, so you're running your app on virtualized Linux inside Docker inside Linux inside Virtualbox inside native MacOS?


That's a reductive way to phrase it, but more or less yes.

It's arguable if the container is "virtualized linux" as they all share a single linux kernel. In reality there's one virtual machine, one linux kernel, and many linux userspaces (one per container), which is kinda the whole point of containers.

Over docker+linux, the virtual machine is the only additional layer.

fwiw, I personally don't use macOS, so I've only got virtualized linux (containers) run by docker running on linux running on my hardware.

Are you trying to make a point or something here? Like, yes, we've built layers of abstraction that include different types of virtualization (VMs and containers), and they compose. Is that all you're observing?


> Are you trying to make a point or something here?

Nah, just curious/intrigued by how these stack.

OS-level virtualization is very much a thing. It'd be interesting to compare this to the approach taken by Docker Desktop for Mac. I bet they do something quite similar (hypervisor-based virtualization like VirtualBox) - nothing fancy like WSL1, which I believe runs a sort of "tortured" Linux kernel inside the NT kernel.


WSL1 didn't run a Linux kernel at all - it was implementing the Linux user-space API over the Windows NT kernel. Well, some of it - not enough to run Docker, for example.

Docker on Windows and Mac does the same as what is described above - it runs a Linux VM and runs the docker server inside that, and then does a little magic to expose native OS paths and so on to that VM. On Windows, it uses WSL2 by default now, but WSL2 is also a Hyper-V VM in the end, with some Windows magic to blend it more nicely in Windows workflows.


That’s how it has to work when there’s a kernel mismatch from host to guest. You’re implying more layers than there actually are.

- MacOS running a hypervisor

- A Linux VM with Docker installed.

- A Linux container running on that VM's kernel.

Containers on Linux aren’t virtualized (normally, you could use runV I suppose if you wanted). The only overhead is the extra disk space to extract the root fs of the container image and the namespacing.


You can run systemd in podman or LXD containers.

LXC was the first container implementation on Linux and uses full Linux systems similar to a VM.


It's spinning pinwheels all the way down


I use Linux and I have no idea what y'all are talking about.


I’m also using Hyperkit w/ minikube, and after some heavy setup automation it works pretty great. What I worry about, though, is what I’m going to do when I switch to a Mac w/ Apple Silicon. AFAICT Hyperkit is x64-only.


I am using "hyperkit"

Available options:

  --driver='': Driver is one of: virtualbox, parallels, vmwarefusion, hyperkit, vmware, docker, ssh (defaults to auto-detect)


Hyperkit, which is incidentally also what Docker for Mac uses.


Overall if they want to charge for their product that's fine. I just hate the model of releasing a free or really permissive application, waiting for widespread adoption, then tightening the clamp. For what it's worth, they've lost my business there.


I want to coin it as “embrace, extend, extort.”


Same with Telerik Fiddler recently. Good piece of software for debugging network requests on Windows.

Was free for as long as I've known it existed. Telerik was recently bought by 'Progress' (ironic), the software was rewritten in Electron, and it now charges a subscription to use it.

Glad HTTP Toolkit is now available free for 'hobbyist' tasks - https://httptoolkit.tech/


I'm the author of HTTP Toolkit! Just ran into this by chance, glad you like it :-D

I should mention here: not only is the core product all free, it's also completely open source, even including the paid bits (https://github.com/httptoolkit). And those Pro features are completely free for all contributors to the project.

I've tried to set it up so I couldn't run off with it and force everybody to start paying even if I wanted to, but any suggestions for further improvements there very welcome.


What a coincidence. It's a really nice bit of software. Thanks for creating it!

I was very, very impressed when I opened the Android mode and my Genymotion emulator just opened automatically with the VPN app and connected.


I love you.


Oh Stavros you flirt.


You know it bb :flutters eyelashes:


Thank you for the recommendation! I've been looking for a good Fiddler alternative, I'll have to check this out.


There is still Fiddler Classic though.


Wow great rec!


Very accurate for a lot of companies like this lately. Consider it coined.


Nah, newer generations rediscovering the concept of shareware and trial/demo versions.


The (big) difference is honesty. You know you should pay at some point in future if you use shareware/trial/demo and find it useful.


Not really. Shareware, Trials, Demos all come with the expectation that if you want to utilize them fully you will eventually need to pay.


Docker desktop was never really free, as in free software, was it? If so, then it was always a proprietary app and they were always in control. IE. the clamp was always tight.


It was free of charge.


Which is why being free of charge isn't really the point of free software.


But not everyone cares about 'libre' software, or thinks the simple descriptive term 'free' should be co-opted in discussions like you are.


I often get laughed at when I say we should back up every tool/repository/package through a proxy and use only that in our internal processes. I also get scolded for insisting that our Dockerfiles only call scripts a developer can use on their own machine or in any other container manager once the "too good to be free" tech of the decade goes full Oracle on us.

Then everyone panics when a critical build stops working because some apt repository of some decades-old distro is unplugged, or some shell script piped directly to bash goes dark (or someone with a bit of security common sense rightfully has a panic attack), and we have to salvage it using some ex-employee's backup images.

This is also why I just don't say I do devops, because it gets to a point where the "devops guys" are just the people you give the dirty jobs nobody wants to do.


It used to be called shareware.


This seems like a bit of a footgun from Docker Inc. Those on Linux will just run Docker Engine (the open source part) directly, or move to alternatives like Podman. Docker Desktop only really has value on macOS and Windows, and there it's only because nobody wants to manage the glue to set up a Linux VM. Given the cost, I suspect many will choose to do that glue work themselves, and I wouldn't be surprised to see an open source project spring up to do that.

Everything else is handled by other parts of the ecosystem already, image registries both private and public, orchestration, etc.


It’s not the users who will be paying for it. Enterprises will bend over and take this 100%

Good move by Docker, financially speaking. They have little to lose.


It's a short-sighted move that will kill Docker Desktop. It's not that these large corps don't have the money for this, it's how money is allocated in companies. Instead, now all hobby projects in large corps get killed fast and early, because the hobbyist knows their project is doomed if the company isn't going to go for a new bill.

A whole bunch of scenarios die now.


I agree that it seems self-destructive. I use Docker Desktop at work for a one-off side project that I run manually every once in a while. Using a container for it helps keep things maintainable compared to a full VM that needs full maintenance. If I have to get formal approval and a purchase to continue using it then the most likely outcome is this side project stops completely. And with it my excuse to gain professional experience using Docker.


...if a company is incapable of allocating money to pay Docker, then why should Docker care whether that company uses the product or not?


You can't figure out why a company selling a product might want to care about losing users and market share?


"Selling" and "market share" is pretty generous when talking about people who are not and will never give you any money. Sure maybe they'll pay you in exposure, but personally I wouldn't bet much on that


The hype/buzzword driven development surrounding micro services/containerization has hit middle America and enterprises spend dumb amounts of money on related projects. I can see them spending more money on Docker Desktop with no difficulty, because the incentive is not to save money.


macOS is the hard one to solve. It does a lot of magic things in the background and Docker even created their own "distro" / VM build system, linuxkit, that went on to be useful in a lot of other places to make it work.

A lot of macOS developers imo seem to have more knowledge in their specific domain and less in how to wire up a VM to look seamless. They'll need the docker CLI to work with the local filesystem to keep a lot of existing Makefiles functional, so I see a bunch of companies coughing up money in the short term just for that.

Docker Desktop on Windows itself proves quite well that WSL2 works fine for this use case.


Honestly, I don't see a reason to keep Docker for Mac installed on my computer. I haven't run a container workload locally in I don't know how long and I haven't built a container locally in even longer. It's just taking up space on my laptop and bugging me to update what seems like constantly.


There is a Docker Desktop for Linux? What does it do?

Why would I go out of my way to set up Docker differently on my dev machine compared to my servers? That seems like a recipe for failure.


On macOS and Windows, using docker is a pain because you have to run a Linux VM and set up networking and everything.


Nope, there isn't (at this time, at least).


I personally support Docker Desktop for Mac for an organization of 250-300 engineers.

I have been supporting it for 2 years now. Been through all the Docker Desktop upgrades, performance issues, everything. I have researched docker performance on Macs running k3d + k3s + istio and a bunch of microservices. I have had to jump into the internals of the Docker daemon, the docker cli, and networking to solve how docker networks are provisioned for various proxying issues.

1. Docker dragged their feet on native performance for file syncing. We have to enable it selectively just so that it doesn't bog the machine down.

2. When running, it gets the CPU to 75-80C, causing the fan to run non-stop at 3000 rpm at least. It is definitely impacted by the bad MacBook Pro design, which is terrible at airflow and heat sinking.

3. We were on unstable for a bit to test the new file syncing approach. Docker dropped that in stable and said "deal with it"

4. The paid forced upgrade notification means that I can't peg the Docker Desktop version for the whole org at a certain version.

5. Right after we switched from unstable to stable, the next minor version was a breaking change.

6. Number 4 would be fine if Docker kept to their guarantee of stable being stable. They do a terrible job of being backwards compatible. The current stable we had was 3.3.1. With the constant minor upgrades, and pushing people, some people went to 3.6.0 (the latest as of yesterday, Aug 30). This broke everything inexplicably, with just a VM error where k3d would keep crashing. I downgraded everyone back to 3.3.1 to get teams unblocked while waiting for me to find a fix.

7. Finding a fix usually involves waiting for Docker to prioritize something, but at this point I don't trust that Docker knows what it is doing.

I am currently pushing for Linux laptops, hosted dev environments and reducing the need to run distributed monoliths. We shall see.


I hope you do get the Linux laptops through. I just joined a company that made an exception for me to use Linux and I haven't felt more valued ever. I never want to use another OS again.


I think it's unreasonable to expect so much from a company that doesn't make money.

The non-desktop docker product on its own is crazy good. I think it's reasonable to expect docker desktop to improve once docker actually makes money from it and can afford to hire more engineers to work on it.


If Docker wants to grow up, maybe they could start with replying to support tickets from paying customers. I have a 10 day old open ticket with no reply.


Hah, tell me about it. We were unable to give them more money (buy more seats), and our urgent support request was open for a week. Turns out (from a moment's console snooping) it was a straightforward REST call that was missing a body parameter, so a simple 'curl' fixed the issue for us. But I wonder... how long they were actively hurting their business? And how did a serious bug like that get into production? What does that say about their systems?


Hey sorry about that, can you send me the ticket details justin @ docker.com and I can look into it.


Policy success is directly dependent on how we handle requests for exception. Granting exceptions undermines people’s sense of fairness, and sets a precedent that undermines future policy. In environments where exceptions become normalized, leaders often find that issuing writs of exception—for policies they themselves have designed—starts to swallow up much of their time. Organizations spending significant time on exceptions are experiencing exception debt. The escape is to stop working the exceptions, and instead work the policy.

Larson, Will. An Elegant Puzzle: Systems of Engineering Management (p. 122). Stripe Press. Kindle Edition.


Thanks for posting, ordered a copy just now


Another point of genius is right after the above section on exception debt.

It was in that era of my career that I came to view management as, at its core, a moral profession. We have the opportunity to create an environment for those around us to be their best, in fair surroundings. For me, that’s both an opportunity and an obligation for managers, and saying no in that room with my manager and CTO was, in part, my decision to hold the line on what’s right.

Larson, Will. An Elegant Puzzle: Systems of Engineering Management (p. 123). Stripe Press. Kindle Edition.


I have a better idea. How about you look at EVERY open ticket, starting with those from paying customers?

EDIT: Wow, they actually did this and got back to me - thank you!


if they're smart they just looked at all the 10 day old tickets


maybe 10 +/-1 for time zones


So, like many companies, successful support consists of yelping at the appropriate public forum, be it Twitter or in this case, HN. Anything the public doesn't see: "due to unexpected call volume, you'll wait at least ten days before hearing from anyone". All the while the company forgets that the complaining customer isn't the only one reading. The rest of us are reading a live account of what company's customer support looks like.


Not really comparable to the problem with Google you're pointing to. Google's services are free, here we have tickets from paying customers.

I've never opened a ticket with Google as a paying customer, so I don't know for sure; but unless someone comes along and confirms the same crap happens even if you pay, we can't really compare those two cases, and Docker is worse for the simple reason that it demands payment yet ignores paying customers.


Fwiw, while there probably isn't a _good_ public relations response here ... N=1, when I see a company publicly managing escalation via public shaming, it inclines me to steer purchasing decisions away from them in the future.


My thought as well.

If I have to tweet-storm to get someone to look at my support ticket, there is no real support.


It should not have to work like this.


Agree 100%. I don’t want to have to resort to Twitter or HN to get a ticket worked. Fuck that, hire some staff, work on your enterprise support.


> hire some staff

And that's why they are scaling back on free plans.


This has also been our experience with the company.


I can no longer edit this comment but I just wanted to update that Docker the company has really made huge strides in the last few days to rectify our experience. So if you're having a hard time, reach out. My feeling speaking directly with some Docker people is that they're proud of the company and believe in its future.


They have core problems open from years ago they just ignore. This is their normal mode of operation.


Since they've buried it a little:

"Specifically, small businesses (fewer than 250 employees AND less than $10 million in revenue) may continue to use Docker Desktop with Docker Personal for free. The use of Docker Desktop in large businesses, however, requires a Pro, Team, or Business paid subscription, starting at $5 per user per month."


Anyone know how they plan to enforce this? Audits into the IP space connecting to hub.docker.com? Maybe arbitrary device OS detection a la

  (nmap -O $local_subnet | grep -ci 'Macbook') > 250


They won’t need to. The number of 250+ engineer businesses that would risk running unlicensed software is small.


I don't think it's 250 seats, but 250 employees. Lots of fairly low-tech businesses (such as restaurant or retail chains, or universities) may have fewer than a dozen docker users but still cross that total threshold.


Well, that makes it even cheaper.


well it is AND: "AND less than $10 million in revenue"

Basically, most companies with ~50 people probably have $10 million in revenue annually, considering wages and buildings and the stuff you need for 50 people...


(In the US, not in developing countries)


Businesses that would knowingly risk this? Small.

Businesses that would unknowingly risk this because some engineer just went and installed Docker Desktop because they couldn't be bothered chasing this through management and procurement? Well..


Maybe this is common knowledge, but I saw an ad recently for a company that offers money to snitch on your employer for using unlicensed software, or not paying for "free-for-personal-use" software


This comment was originally posted to https://news.ycombinator.com/item?id=28368997, so it's quoting the corporate press release, not the current article. We've since merged the threads.


Unless I am missing something this is pretty huge. Every company I have worked at that has issued MacBooks has had development environment instructions which outline using docker desktop (since it is the simplest solution). Given this headline every one of those companies would have needed to get licensing for that.

As others have stated: I am okay with attempting to monetize your work, but increasing prices like this (especially from free to a pretty pricey per-head subscription model) doesn't sit well with me. There doesn't seem to be much differentiation between the tiers besides how many employees you have or how much revenue you make, and that is not my favorite basis for charging.

Does this relate at all to the forced upgrades that were pushed earlier this year?


Of course it does. Else you could downgrade to a previous release with different licensing and not have to pay...


Rancher Desktop is an open source container management and Kubernetes desktop app.

https://rancherdesktop.io/

Disclosure: I work on Rancher Desktop. Feedback welcome.


Feedback:

1. There's very little "getting started" info here, you seem to assume everyone already runs kube everywhere else and already has workloads ready to go.

2. Not sure if this is feasible, but I'm looking for something that solves the Docker Desktop problem! I want something that can port map to a local port for testing, I want something that I can map a local folder to in order to store job input/output.

3. I tried starting it, and I'm already running Docker Desktop. It didn't seem to start a healthy kube cluster, and actually did nothing for me but just said it was waiting for the cluster. It might have been attempting to connect to old Docker kube clusters that I'm no longer running. Did I just need to wait longer? It wasn't clear.


You guys (by you guys I mean you and Docker, Inc.) would do yourselves a huge favor by not spiting the Linux devs who invented the technologies you build your tools on.

Where's the Linux version? Give it to me in Snap, AppImage, Flatpak, deb, or rpm, whatever you want. Just offer something. We'll take care of the rest.


The whole reason this (and Docker Desktop) are used is that Docker and K8s do not run natively on macOS and Windows.

If you’re using Linux already, most of this stuff is as useful as nipples on a breastplate. You could theoretically run an emptied out husk of the app on Linux, but there are much better tools for working with the tools directly.

So I’d be greatly surprised if any Linux kernel hackers are miffed about this.


I'm not sure the whole reason for Docker Desktop is that Docker and K8s don't run natively. I mean, someone could create a Linux VM and get them running right through there. The tools exist to do this.

There are even programs like minikube that can get you Kubernetes in a VM on Mac.

There is something else to it that people want and that translates to Linux, I've learned. They want an easy button with an easy UX. There are a lot of people who are like that.


Right and when you're a corporation it cannot be overstated how important it is to coalesce around universal solutions that get up and out of the way with as few steps as possible. Handing new developers a handbook of incantations to get going is very fragile. Handing those same developers one executable with a big Go! button is much easier to get right.

One example from my last job was having a shell.nix in the root of every project folder that a developer could nix-shell into, containing everything they needed, same versions and all, to get going with that project.


When you are a <10 person small development agency where juniors come to work it is critical. The subject of https://rietta.com/blog/dockerized-cost-savings/. Paying over and over again for new devs to "get set up" was devastating the budget.


Thanks for the feedback. A Linux version is in the roadmap for this fall. I've had several discussions on it in the past week.

Part of this was due to priorities and part of it was technicalities. For example, do we put it in a VM so that way someone can easily blow things away and we don't touch the base system? We had to come to some direction on what we wanted to do there. Now that we have that idea we need to finish up one thing on Mac that will translate over to Linux.

The Linux side will be based on Lima[1] just as the Mac side is.

Earlier today I had a discussion on the packaging format.

[1] https://github.com/lima-vm/lima


Thanks for the update! It's refreshing to see more turnkey GUI competitors in this space coming from larger corporate names.


I installed this and could not get networking going again in WSL 2 until I uninstalled it. I was sad.


It’s open source, you could probably port it yourself.

I somewhat agree with your viewpoint, but given that Windows 10 is generally just Windows 10 and OSX is OSX, while Linux could be anything from Red Hat to Alpine to a Raspberry Pi, I understand why devs wouldn't support it


Seems interesting, but the name conflict with https://rancher.com/ is _very_ confusing. Is Rancher Desktop associated with the linked company?


Rancher Desktop is being built by Rancher (which is now part of SUSE).


Thanks - that wasn't obvious from the Rancher Desktop site.


I'm hitting https://github.com/rancher-sandbox/rancher-desktop/issues/56... but will look again in the future.


Can I run Docker Compose and Docker Swarm with this, or is only Kubernetes supported?


Does this work with Docker compose files? For local development at the moment I mostly use docker-compose and for production using AWS Fargate so the kubernetes functionality is a bit wasted on me really


Ok, I gave it a try. It's given me two K8s errors before any meaningful container work can be done. Not going to waste further time given a first run experience this bad. I'm interested in investing in my tools, not alpha-testing.


Probably shouldn't run tools in alpha status then


We use Docker Desktop for the Mac at work. (Large company)

Docker for Mac absolutely sucks. If they’re going to force everyone to pay they better start fixing bugs.

They recently stopped allowing skipping a release unless you pay, and then promptly shipped a point release with a showstopper bug.

I literally asked IT for a Linux VM/Cloud machine yesterday for development because my Mac is dead in the water due to a bug. It’s time efficient to develop on the Mac if it works, but the overall experience is terrible compared to Linux on the desktop IMO.


I recently switched jobs and made sure I wasn't going to get a Mac again just because of Docker Desktop. At my last job, we had an application that did some very strange things with the Docker API. It would regularly crash or lock up the VM, or hit subtle correctness bugs in networking.

I get the problem they are trying to solve is extremely difficult. I don't think I'd do much better trying to seamlessly ship a very Linux-centric API on Mac and Windows. They have my sympathy but that doesn't mean I'll use the product given a choice.


I'm the only person at my company who asked for a Linux machine. And I got it. Everyone else is on an M1. I said no. I don't want to use a Mac ever again. I had a Mac for 2 years at my previous company, one of those 2018 monstrosities. Never again.

I asked for a P14 AMD Ryzen 7 5850H ThinkPad. It's still en route. But until then I'm using my Dell G5 SE Ryzen 7 4800H laptop with Manjaro. Best dev experience I've ever had at a company. The dev machine adds to the package IMO. If you're not getting the tools you want, you're not being paid enough, ever.


I've found Docker Desktop to be equally awful with Windows. You'd think they'd care about giant swathes of the market like that.


> I've found Docker Desktop to be equally awful with Windows. You'd think they'd care about giant swathes of the market like that.

The fact that they don't care, and yet you (or if not you, others) still use it, succinctly explains why they do not care.

If they have a shitty, buggy client for Mac/Windows, and people complain about it but still use it, then they have no incentive to care.


It's hard to complain too much about a free product that is a sideline to the company's main business... Oh wait, we just lost the free version and it's now the company's main monetization scheme? Well now I care a lot about the little annoying bugs I've been dealing with for the last 3 years.


Actually, I did like jandrese did, and got our solutions out of docker.


I use it only when some required tool is only available via docker. It is not a choice for us for any of our development.


I've found it to be really good on Windows 10 Pro, even with 6 year old hardware.

I've been using it full time since 2018 and it's been nothing but really fast and as stable as you can ask for given how complex of a tool it is. It rarely crashes (maybe once every few months) and I've built thousands of images across many different tech stacks.


We de-dockerized our Windows deployments because it was causing no end of headaches for the end users.


Wish they fixed the issue where it uses all available RAM even when running no containers yet.


That’s not an issue though. That’s just how virtual machines work. You’re carving out a chunk of your system for the docker Linux VM that runs your containers.

You can open up the docker app and configure a smaller amount of ram if it impacts your host OS


No, that's not how it works with WSL2 as the backend. You then cannot configure a smaller amount of RAM in the docker app, it's greyed out. One can limit the RAM that WSL has, but that's not really helpful when docker steals all of it. (And WSL2 supports dynamic allocation of memory anyways, so it's supposed to return unused memory to the host)

So you are wrong. For those of us affected by the bug, it's a big issue.


You can configure the max memory in wsl 2 with .wslconfig file.
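
For reference, a minimal sketch of what that file can look like (the values below are just illustrative, not recommendations):

  # %UserProfile%\.wslconfig
  [wsl2]
  memory=8GB
  swap=2GB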


Yes, but Docker will eat whatever I give it, leaving nothing for the actual containers or other stuff in WSL.


Linux considers unused RAM to be wasted RAM. WSL 2 addresses this with a Linux kernel change that right now is insiders only. I expect it to land with Windows 11.


That's not how virtual machines work on Windows. Even Linux virtual machines use dynamic memory. You assign a minimum, maximum, and a startup value. When the machine needs more RAM, Windows gives it to it. When the machine releases it, it's available for other purposes.


When running no containers? I've found that it's a problem when one is running (the solution to that is here[1]), but I've not experienced it when nothing is running.

[1] https://blog.simonpeterdebbarma.com/2020-04-memory-and-wsl/


>They recently stopped allowing skipping a release unless you pay, and then promptly shipped a point release with a showstopper bug.

Whoa, really? Is this written up somewhere?

My first "WTF" with docker was in Fall 2015 when we dockerized our app and had it nicely set up so we could tell employees "run this command and the app Just Works" ... and then they introduced a breaking change to the format of docker compose files so it just mysteriously stopped working in the middle of the day.


They might be referring to this? https://github.com/docker/roadmap/issues/183


Thanks! Great HN discussion about it:

https://news.ycombinator.com/item?id=26547268


Yeah, that's probably not gonna happen. At the scale Docker is operating at now, the reason the Mac app sucks is that the problem is really hard. They already have the resources to throw at it, and this is the product we have.

This is purely a $$$ move (which is fine) but we shouldn't expect an order of magnitude more work going into the product as a result of this move, imo.


If you expect me to pay for it, it better work well enough to not be a blocker to my critical job path workflow.


But you're not actually paying for it. Your employer is.


They are still looking for their business model, because they have no money. They have been unprofitable for their whole existence.


Our team uses it too, but we don't even use the UI - it's just to get the daemon started on startup. Is there a way to do this easily on the Mac without using Docker Desktop?


you could check something like ubuntu multipass


I have to restart Docker for Mac multiple times a day. I'm surprised there hasn't been a community driven open source alternative yet


Singularity ( https://singularity.hpcng.org/user-docs/master/introduction.... ) is a platform that lets you create and run containers. Source is on github : https://github.com/hpcng/singularity


podman-remote with a Linux VM?


> They recently stopped allowing skipping a release unless you pay

I honestly thought that's a bug. That is so ridiculous if that's intentional. I agree with your post and have similar experiences with Docker for Mac


After they introduced that feature I hit a bug where it would try to force an upgrade to version "null" and then crash. I ended up having to uninstall and reinstall it to get things usable again.


The Docker experience is (and has been) subopar on MacOS. With the way Apple Silicon is headed, I don't really have much faith that the situation is going to get better, and I really wonder if the Mac client is even a priority for them at this point.


Docker for Windows isn't particularly stellar either. For instance, you have to actually log in to a machine to have a docker image running. Additionally only one user on a machine can run the host application at any given time.

I have no idea how this is popular.


That isn't the case on Windows Server, and on Windows Desktop SKUs, having a logged in user is normal and expected.


Actually I'm talking about Windows Server. Do you have any guides? Because when I try to open the Docker GUI it kicks everyone out.


I got bitten by a Mac bug a couple of months ago: The latest version of Docker desktop didn't work for something (don't remember anymore) so I had to revert to a previous version and work in that for several months now.


In my experience, minikube (hyperkit) performs better than DockerForMac Kubernetes, if we ignore mounting of macOS folders to the VM.


Note that Docker Desktop and Docker Engine are separate products. Docker Desktop is the desktop application package that makes Docker user-friendly on macOS and Windows. Docker Engine, the container runtime itself, remains free:

> No changes to Docker Engine or any upstream open source Docker or Moby project.

If you develop on Linux, no changes are needed.


It's not trivial to run Docker natively inside a WSL2 environment - at least my attempts to install plain Docker strictly inside Ubuntu running in WSL2 always ended with Ubuntu trying to reach some Docker-related .exe. I did learn some fun facts about Linux in WSL2 along the way - it doesn't have systemd installed by default.


I've never had a problem with it. I've been using docker engine in WSL2 for a couple years.

I install `docker.io` via apt and it's good to go, except that package has, on some Ubuntu versions, been missing the /etc/init.d/ startup script.

I build my WSL2 environments via Dockerfile. You can see everything here:

https://github.com/SeanTAllen/wsl-environments/tree/main/ubu...

Using that dockerfile I can then export the file system as a tar (https://wiki.seantallen.com/notes/docker-export-filesystem/) and import into wsl using the wsl import command.
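
For anyone wanting to try the same thing, roughly this (assuming an Ubuntu WSL2 distro; no systemd, so the init script does the work):

  # inside the Ubuntu WSL2 distro
  sudo apt-get update && sudo apt-get install -y docker.io
  sudo service docker start          # no systemd in WSL2 by default
  sudo usermod -aG docker $USER      # optional: drop sudo after re-login
  docker run --rm hello-world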


Well, the installation process seems to have changed in the last 2 years. Installing `docker.io` is not enough to get Docker running in WSL2 anymore.


I was able to get podman setup in WSL2 pretty easily following this:

https://www.redhat.com/sysadmin/podman-windows-wsl2


How would that work if you're using WSL? Docker for Desktop uses WSL but creates its own separate VM (if you can call it a VM).

Would I be able to install and run Docker inside Ubuntu's WSL distro to avoid paying for Docker for Desktop?


Yes, but you'd have to connect the Docker CLI running in Windows to the engine inside Ubuntu (not hard), and then you wouldn't be able to mount stuff in Windows into Docker containers via relative paths (you'd have to start them with /mnt/c/...). If neither of those things matter for you (like if all of your project code is inside your WSL VM), then it's totally fine.


I do all my work under WSL, and run Docker engine in WSL and it works perfectly. 100% headless.

I may have had to expose the Docker socket for VS Code containers support to work, but that wasn't any pain, and secured with TLS.

Never needed Docker Desktop, which seemed like a bloated mess.


You can connect to a remote Docker engine instance over SSH, which is easier to setup than exposing the Docker socket over a TCP port.

So install the client inside WSL and the engine on a Linux VM.

EDIT: https://raesene.github.io/blog/2018/11/11/Docker-18-09-SSH/ was a blog I wrote when that feature landed, AFAIK it works the same way now :)


Or just use it inside WSL2, which already is a Linux VM?


I've never actually tried installing Docker engine in WSL2... might work I guess :)


Enable WSL2, then you can just install the docker provided by your distro package manager. For example, I am using docker packaged by Arch Linux, and it works as expected.

If you need to use the `docker` command under PowerShell, exposing the docker socket to the Windows host would probably work. I didn't try it as I don't need it.


You could configure Docker Engine in Ubuntu to expose a network socket, and configure Docker CLI in Windows WSL to use that network socket: https://wiki.archlinux.org/title/Docker#Daemon_socket
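
A rough sketch of that setup (port and hostnames are assumptions; keep the TCP socket bound to localhost, since it's unauthenticated):

  # inside the WSL2 distro: have dockerd listen on localhost TCP as well as the unix socket
  sudo dockerd -H unix:///var/run/docker.sock -H tcp://127.0.0.1:2375 &

  # from Windows (PowerShell, with a docker CLI installed), point the CLI at that socket
  $env:DOCKER_HOST = "tcp://127.0.0.1:2375"
  docker ps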


You probably can, there's nothing about containers that shouldn't work on WSL2


Interestingly, I think this may be a boon for kubernetes.

We've been managing all our infrastructure with docker / compose, and it's been great. But one of the key advantages is unifying the dev & prod environments. Now lately we've been outgrowing the docker solution so k8s is on the radar, but one of the things holding me back is losing the unified prod/dev experience.

So the question has been, take the hit and suck up all the bugs, confusion, duplication etc. that come from having these separate, or move everyone over to k8 and have to deal with the complexity on the developer side?

Well, this decision now definitely tips the scales - there's a distinct advantage to going all in on k8s because we can run it up and down the stack and not be constantly hassled by licensing and software restrictions.


Looking at the installation instructions for Docker on Mac/Windows, what is the expected way to install the Docker Engine without installing the Desktop bundle?

From https://docs.docker.com/engine/install/binaries/#install-cli...

> The macOS binary includes the Docker client only. It does not include the dockerd daemon.


Docker Engine only runs on Linux.

Docker for Mac/Windows sets up a Linux VM using macOS/Windows native virtualization via the open-source HyperKit/VPNKit abstractions maintained by Docker-the-Company and the community. That VM runs Docker Engine (dockerd), and all interactions (docker CLI commands, shared volumes, networking, etc.) are proxied into that VM.


So unless I'm missing something important, why not just use docker engine directly on a wsl2 instance?


I’m not a Windows user but AIUI just running dockerd in WSL2 misses some of the volume sharing and networking niceties. Nothing that couldn’t be replicated though


Is that working? Does wsl2 provide more than a shell?


WSL2 uses a full Linux VM running under HyperV.


WSL2 can get weird when you start trying to install software with low level virtualization and file system features. YMMV. I’d use it to install apps, but I wouldn’t be confident it’d work with Docker. Even if it did initially work, eventually you’ll hit a problem for which there is no googleable answer & good luck with that.


Docker Desktop installs dockerd in a WSL2 instance these days instead of using VirtualBox so I'd assume it works pretty well now.


Current Docker on Windows detects if you have WSL2 or not, and gives you the option of just installing docker in WSL2 + configuring the Windows docker tools to manipulate the docker daemon running in WSL2.


But what does it run as PID1? I think not systemd?


Last I recall, docker desktop on windows explicitly recommended WSL2 over Hyper-V or whatever based setups.


Yeah, that's the biggest issue; right now Docker Desktop is the only supported way of installing Docker on Windows: https://docs.docker.com/engine/install/#supported-platforms That's literally the only reason I use it.

There's probably a fairly simple way to run Docker directly in WSL, but a lot of documentation is going to need to be updated to point to that method.


> There's a standards conversion going on where we can trace the provenance of each and every layer of the image, we can start signing those layers, and with that metadata, we can start doing automated decisioning, automated reporting, automated visibility into what's been done to that image at each step of the lifecycle.

Docker's CEO is being disingenuous. When you deploy a Docker container, you specify the image ID. The ID looks like a SHA-256 digest and even starts with the string 'sha256' but it is an arbitrary value generated by the docker daemon on the local machine. The ID is not a hash of the image contents [0, 1]. In other words, docker images are not content-addressed.

Since docker images are not content-addressed, your image registry and image transfer tools can subvert the security of your production systems. The fix is straightforward: make an image ID be the SHA-256 digest of the image contents, which is the same everywhere: on your build system, image registry, test system, and production hosts. This fix will increase supply chain security for all Docker users. It is massive low-hanging fruit.

Now Docker will add image signatures without first making images content-addressed. Their decision makes sense only if their goal is to make money and not make a secure product. I cannot trust a company with such priorities.

[0] https://github.com/moby/moby/issues/39247#issuecomment-49697...

[1] https://github.com/distribution/distribution/issues/1662

EDIT: Added another link.


The fact that images are not content-addressed is very surprising to me. I just always assumed they were because… why wouldn’t they be? I bet a large proportion of other devs assumed the same.


That's because the parent comment is completely wrong.

Images have a reference (e.g. "ubuntu:20.04"), they have an ID inside docker (random string), and they have a digest.

All image data is stored by digest. Even when you fetch an image reference it is looking up the digest of that reference and fetching that digest and that digest is verified.

An image manifest contains the digests of the image config and the digest of all the layers, the image config also stores the digest of all the layers. This is how all the data is traversed from the registry.
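
If you want to see that for yourself, a quick sketch (the digest below is a placeholder, not a real value):

  # show the registry digest recorded for a pulled image
  docker image inspect --format '{{index .RepoDigests 0}}' ubuntu:20.04
  # pull/pin by digest instead of a mutable tag
  docker pull ubuntu@sha256:<digest>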


I will add, once the image is on disk indeed the integrity is not verified so it can be modified after pull either maliciously or by accident.

Once pulled, the content addressed layers are extracted into the storage driver (overlay, btrfs, whatever).


Podman project recently introduced a lot of content addressing features. Some are still experimental though.


They are if you use the digest rather than the id.


You can specify the digest of a base image when building a new image. You cannot specify the digest of a built image when starting a container.


A built (as in `docker build && docker run`) image does not have a digest because the digests are for the tar+gzipped versions of the layers (+ the image config and manifest).

The layers are not tar+gzipped until you attempt to push them, at which point the digest is calculated.


You seem to be mixing up image ids which are not content addressable identifiers and image digests which are.


Image IDs are not content addressable identifiers and I need them to be. How is that a mixup?


So is Docker going to now maintain all the base images themselves or do they rely on the community to provide those for free?

Also, announced on 31.08, effective 31.08 (albeit grace period…)


I think you wrote the same date for both announced and effective. The effective date is the 31st of January, as far as I can tell. The whole point of Docker monetizing Docker Desktop is so that they can continue to be funded to maintain the base images themselves. That's the primary selling point of Docker Hub.


No, the original link was: https://www.docker.com/blog/updating-product-subscriptions/.

The page reads as follows:

> These new terms take effect August 31, 2021, and there is a grace period until January 31, 2022 for those who require a paid subscription…

So: announced 31.08, effective 31.08 (albeit grace period).

To add to this, I received their email with this announcement after 2pm CEST on 31.08. This must have been an unplanned decision.


Curious to see what that means for Windows containers. Microsoft is heavily recommending Docker there, but asking people to license another thing from a third party just to be able to use a feature on their expensive Windows licenses seems somewhat on the nose for them to push.


Maybe this is just a play to get Microsoft to acquire Docker lol


Docker have never done the one obvious thing to monetise - an upsell and enterprise support for the engine.

Trying to be a poor man's Pivotal was a stupid strategy, and developer tools are an awkward market too.

I’m convinced if they charged $10 per engine per month they would have kept all of the goodwill and momentum and been the next VMWare.


They tried that ("Docker Enterprise Edition") years ago, with some minor differentiation on features only available in EE... but for $62-300/node/month. This is now the part Mirantis owns, current Docker is the developer-focused side.

https://www.docker.com/blog/docker-enterprise-edition/

https://web.archive.org/web/20171118161452/https://www.docke...


The engine on Windows is nonfree (both cost and freedom).


I don't really see Docker having that much of an alternative - they have to figure out some way to make money and none of their other attempts have worked.

That being said, since the core docker tech is free, this will surely just cause some free alternative to DD to spring up in the relatively short term, and accelerate the move away from Docker in general, which has been happening for a while.


"Large" is a bit of an interesting statement. Companies with $10m in revenue are very common, and are often smaller companies. Software is all about leverage. A very small team can create a lot of leverage with the right tools to make a very strong product and get to $10m ARR without necessarily having many people.

It seems like the real cost to this change is the goodwill from smaller companies + teams that are now realizing they'll have another expense dropped on them. Except the expense is a previously free product with no real improvements, at least from what I can tell.


I see this as an opportunity to play with Podman. I don't fall into the category where I'll need to pay, but Docker Desktop has long been somewhat user-hostile.

The forced updates I HATE. It's my machine, I get to pick the version of software it runs. And I've had enough bad updates from Docker to get jittery when it pushes one on me.


Additionally this change is effective starting August 31st 2021 - i.e. now.


(CTO of Docker here) there is a grace period until 31 January next year, we understand that this is a change and people need time to sort out payment.


> and people need time to sort out payment

Or their removal of Docker


… isn’t it effective January 31st then?


It sounds to me like if you start now you have to pay (if you're a large enough company), but if you are already a "customer" you have until January.


Sorry, it is a bit confusing: the overall terms and conditions update is as of now, but the part about paying has a grace period; obviously we want people to know now what will apply. The terms are not very different from the previous terms (although I did get the old no-benchmarking clause removed; I don't know why we had that there).


> I did get the old no benchmarking clause removed, I don't know why we had that there

Your company is probably not going to fare well in this thread, but thanks for this! No benchmarking clauses are gross. Glad to see someone with the means to remove one do so.


I would guess it's to do with the abysmal performance before WSL2.


Curious how bound I'd be to these terms if I just don't upgrade Docker Desktop. I'm not even signed in to dockerhub and most of our containers are on an Azure private registry.


I can see Microsoft or Amazon stepping into this space. If GitHub Desktop is anything to go by, getting [MS][AWS] Container Desktop as a replacement for Docker Desktop would be a no-brainer. 90% of what Docker Desktop does is already available in VS Code in some form; just the janky GUI is missing.

Somebody else already did the hard work to get it going with WSL2, so why not cut out the 3rd party you don't need and put it together so the command lines are compatible and your existing tooling works? AWS and Microsoft both have skin in this game and Docker Desktop would be right in their sights.

On something like Debian, you don't even get a desktop gui and you don't miss it either. No reason why WSL2 or a mac shouldn't be exactly the same.



Unfortunately, it looks like work is also underway to deprecate and remove `docker-machine`: https://github.com/docker/roadmap/issues/245


This is likely the most realistic path forward for most developers using MBPs.


It'll be interesting to see how well this works out for Docker, I have a feeling they'll lose quite a bit of custom but convert some to this model.

I'd guess a lot of people will just use Docker engine on a Linux VM with the CLI on Windows/Mac as that'll work just fine and is open source.

This was kind of inevitable though, ultimately Docker had to find a revenue stream somewhere. Docker Hub must be massively expensive to run and developing docker's product isn't free either...


There has been a huge push in the community to switch away from Docker anyway. The warning signs from the company have been there for a while, and there are several container engines, systems, UIs and other management tools not built on Docker.

This will accelerate those efforts.


Can you link me to a single one of those alternatives? It must be equally easy to use.



Podman isn't a replacement for Docker Desktop (the products in question here).

Podman is closest in function to docker engine, which is still open source.


My original comment was not just about Docker Desktop. Their shenanigans around the Docker Desktop commercial product have cost them a lot of goodwill, and support for their other offerings.

This is why podman and others are looking to replace the Docker engine, which, while still open source, is tainted by the actions of the parent company.


Is Docker for Mac fixed yet? Last time I checked there were issues with disk mounting that really affected performance.


As weird as it might seem, Microsoft would probably be the best company to acquire Docker at this point.

They could probably turn this around a bit into a free with Windows, pay for Linux (at the 250 employee level - possibly up the employee count to 500 though).


Nah, they should just build a vscode plugin that makes it easy to manage docker running inside WSL2. Bam, Docker Desktop is a beached whale.


> free with Windows, pay for Linux

Isn't Docker Desktop (i.e. the part being sold) just a client that sets up a Linux VM to run the free part?


I've been blandly pushing for podman at work.

Maybe this time management will start listening.

Plus, frankly, podman works reliably well in rootless mode: no more messing around with docker-in-docker, and you can be root in your container without needing pretty much any privileged resources.

And since podman 3.x you can use docker-compose by just pointing it at the user's rootless podman socket.

And podman can act like a dumb Kubernetes and start pods and deployments from the YAML definitions.
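
Roughly what that looks like in practice (assuming podman 3.x and a systemd user session; the YAML filename is a placeholder):

  # enable the rootless podman API socket
  systemctl --user enable --now podman.socket
  # point docker-compose (or the docker CLI) at it
  export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
  docker-compose up -d
  # run pods/deployments straight from Kubernetes YAML
  podman play kube deployment.yaml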


You can use a remote docker host pretty easily. I've been playing with qemu on my mac since I read the announcement regarding docker for mac no longer being free (as in beer). I have port 5555 forwarded to 22 on the qemu vm so I can ssh into it. You can use whatever you want for virtual machines of course. Qemu seems nice these days but requires a bit of fiddling to get working. Virtualbox or vmware are probably easier. And this works just as well with cloud based virtual machines or any kind of remote host.

Assuming you have docker and ssh running on your remote host:

  # point docker on your mac to your remote machine
  export DOCKER_HOST=ssh://dude@localhost:5555
  # run nginx or whatever you want
  docker run -p 8080:80 nginx
  # forward the port locally
  ssh -L 8080:localhost:8080 -p 5555 dude@localhost
  # open http://localhost:8080 in your browser
There are all sorts of things you can do beyond this to make this more seamless but this is pretty sweet already.


I think that the functionality provided by Docker needs to be a first class citizen in Linux, CLI and user interface included, not just the backend. I don't like having my workflows tied to a private company's whims. Podman is a viable alternative, and while it's tied to Red Hat, it's still better than Docker imho.


Systemd had something called "systemd-nspawn" back when Docker was at its peak.

Not sure if it's still around.


For everyone who is against this change, can you please write up why?

For a ton of small companies (anyone making $10 million or less per year) nothing is going to change and DD is still free to use.

If you're at a big organization with, let's say, 200 developers, chances are your company makes hundreds of millions of dollars a year. Even Docker's most expensive business plan would cost you 200 * $21 = $4,200 a month.

Payroll for your 200 developers will likely be over 3 million dollars a month. How can you be upset with paying 4k a month? That's almost nothing relative to other expenses.

Realistically I'm surprised Docker is charging so little for their business plan. Making 4k on 200 developers at a 300 million+ company is not asking a lot.


I don't get enough out of docker desktop for mac to be worth the 21 dollars a month, personally. It manages a VM and the port mappings/exposure of docker sockets on my behalf. That is something that can be replaced fairly easily and not cost me 5-20 dollars a month.

Add to that some of the decisions in the past year, like removing the ability to opt out of updates, and the issues that pop up when I don't expect them (crashes, file systems, etc.), and I am more inclined to find other solutions.


I'm part of a company that doesn't need to change a thing because of this.

I want to move out of Docker services because of the "The new terms take effect on August 31, 2021" part of the email I've just received, even if it's followed by a "with a grace period until January 31, 2022".

I'm OK with them trying to get money. I'm not OK with them changing things overnight.


You jumped from $10m revenue to a 200-developer company with a payroll of $3m a month in your example.

There are tiny companies with $10m revenue (remember, revenue isn't margin and certainly isn't profit). A company could easily have non-employment expenses be 90% of its revenue, so we are talking about $1m or 7-8 person company there on decent salaries. A far cry from the 200 devs you give as an example.

However as to "why" - because docker's precise value proposition is its ubiquity and universality. The exact reason people have adopted it is because everyone can run it, no matter who, no matter where. So this compromises the main value proposition of Docker. People will now find alternatives because if I can't distribute my application using docker and know the person at the other end can run it (because now they need a license that they don't have) then it lost virtually its whole point to me.


> A company could easily have non-employment expenses be 90% of its revenue, so we are talking about $1m or 7-8 person company there on decent salaries

Can you give a few real world examples where a $10 million / year revenue company with 7 employees would have difficulty paying $147 a month (or $49 if they went for the $7 / month instead)? With the $7 / month plan (if you only care about DD), the entire annual cost for all 7 devs is less than hiring 1 developer for 1 day at a normal US dev salary.


Ok, so you're asking about a different point now: the difficulty of paying. In that case the difficulties arise because of corporate gatekeepers, licensing stewards and general policies governing software licensing. Typically my organisation would not approve this sort of purchase without a business case and justification - not least because we are a not-for-profit and any money not going to our cause is scrutinised heavily (the thing donors absolutely hate the most is the idea that their money doesn't get to the cause they donated to and instead goes into a sinkhole of funding commercial companies' bottom lines).

Obviously one payment is not too big, but as soon as the policy allows one it allows all such things, so it's effectively opening the gate to all kinds of micro-payments that quickly build up and become entrenched as "essential".

Here's a similar analogy: does your company pay for your parking? Why not? It's small compared to your salary, right? And it definitely helps you get to work, be more efficient, etc. Well, it's not just about the parking; it's that it represents a class of purchase that, if allowed, would tilt the scale towards a massive number of similar types of expenses. So in fact most places will have blanket policies disallowing small purchases.

Another question: since the price for Docker Desktop already got arbitrarily changed with no notice, why would you believe that it won't go up in the future? Or get more restrictive in other ways? Once a company executes bad faith one time, continued manifestations of that have to be considered as a risk.


With Docker, even if 1 dev spent 2 days coming up with a perfect solution that would allow all 7 devs to move away from DD without wasting 1 second of productivity you're still losing out vs sticking with DD at their new annual rates. To me that's a very strong business case.

> Another question: since the price for Docker Desktop already got arbitrarily changed with no notice, why would you believe that it won't go up in the future?

Personally, I'll worry about a future notice when it happens. A meteor could wipe out all of humanity tomorrow but I try not to think of "what ifs".


Because procurement processes suck and developers don't want to deal with them. When it's something from Microsoft, Google, Amazon, it's not a problem because those deals are handled at a level that developers don't interact much with and are ingrained as business critical. There's no way we're going to have a contract with Docker by January so I fully expect a "Please uninstall Docker Desktop" email long before that.


What if you asked the person who would write that email to instead ask Docker if they can extend the grace period for you until you can get a contract set up?

Since it's unclear if / how Docker can enforce their TOS I'm guessing they would be happy to extend it because the other avenues lead to you not using DD or using it without paying.


> What if you asked the person who would write that email to instead ask Docker if they can extend the grace period for you until you can get a contract set up?

I think OP's principle is that it will probably just be easier to switch tools than to push the $42k annual spend through the organisational mud to get it approved (depending on how muddy the mud is).

This is particularly true for a single developer that wants to start using docker desktop, if the rest of the org isn't already using it.


Similar to siblings, our medium sized enterprise has a procurement process best described as ‘avoid if possible’, with the last product we onboarded taking over 15 months from start to finish for around a $50k annual cost. Our likely move will be away from docker because we can’t afford to be paralysed for that long waiting for bureaucrats to make a decision.

If docker wants to play like this they will need to be willing to partner with a ‘real’ services company who is already on a range of procurement panels to get a foot in the door, at least in my part of the world. If we could pay this as part of our MSA with a vendor we already use, $50k a year wouldn’t be so bad, but adding a new vendor and pushing that uphill? Not likely for us.


Because we use containers to share images outside our organization. This reduces the accessibility and thus the value of the entire Docker ecosystem.


Extortion. They built an entire community of docker users and then this. It's one thing if we all knew they were Oracle. It's another thing for them to turn into Oracle after capturing mindshare.


so, when's everyone switching over to Podman?


If you're doing $10M in ARR, how much engineering time are you going to spend to switch compared to paying Docker a few thousand dollars a month? Your spend on cloud and Slack (or other comms) is likely far higher. You're probably spending more on mobile/cell business service.

"Docker attempting to monetize users of its product who can easily afford the cost." I mean, the terms seems reasonable, and wouldn't you rather support Docker vs IBM (Redhat->Podman)?

Nothing changes for users who aren't making money using Docker, but I suppose you could still spend your time switching to podman on principal.


I'll preface this by saying I'm an IBM employee. That being said your comment rubbed me the wrong way...

> wouldn't you rather support Docker vs IBM (Redhat->Podman)?

Podman is an open source product and Docker is not. I'd much rather support an open source project. And what's wrong with "supporting" IBM anyways? Did they hurt you in some way???


> And what's wrong with "supporting" IBM anyways? Did they hurt you in some way???

They are a dysfunctional consultancy masquerading as a technology firm, running on inertia. They are not to be supported. (Also, my genuine condolences)

https://news.ycombinator.com/item?id=24228972

https://news.ycombinator.com/item?id=26532125

https://news.ycombinator.com/item?id=26869877

https://news.ycombinator.com/item?id=22224782

https://news.ycombinator.com/item?id=23268191

https://news.ycombinator.com/item?id=27706128

https://news.ycombinator.com/item?id=24471903


FWIW, we've divested/spun off the "consultancy" part. And not every part of IBM is bad, there are a ton of great developers and teams that work here, no condolences needed. I quite enjoy doing what I do here. Lots of innovation in multiple areas, but I guess if you have to drink the startup koolaid prevalent here on HN, be my guest.


I'm not at a startup nor drink the koolaid [1]. I am a consultant, so I get to see how the sausage is made across a wide variety of orgs. In my long tenure in tech (20+ years), I have arrived at evangelizing and encouraging engineering first and data driven organizations; in my experience, that provides the best environment for technologists to have autonomy, while pursuing mastery and purpose (which, hopefully, enables some amount of fulfillment alongside financial compensation). IBM is not such an org, hence my comment(s), but there are startups, enterprises, and a fat middle of SMB businesses that truly are innovative and can demonstrate results to back up that description of themselves.

TLDR I want the best experience for my fellow technologists and engineers.

[1] https://news.ycombinator.com/item?id=28181703


Docker Desktop is not open source, but the Docker container engine is. Also, runc, the actual container runtime, is not only open source but was created by Docker, and it is also what gets used by podman. podman is very nearly just a fork with the daemon and socket removed, which would not have been possible if docker hadn't been open source.


Many people in many ways


Docker is not really a product; Docker is a company, and within it there are several products, some open source and some not.

The Docker Engine is Apache-licensed and open source.


I'm part of a large company and I have no influence over what most other people do. My projects within the company are small so whenever these sorts of things happen, it rarely translates into the company spending a bunch of money to provide the product across the company. At best, I may be able to convince a manager to buy it for 2-10 people on my team.


Licensing isn't perfect; rather, it is the least-bad way of attempting to extract a reasonable amount of revenue from the users of your software, who are themselves realizing value or benefit from its use. SaaS is popular because the exchange of value between producer and consumer (and the ownership and responsibility model) is much more clear (imho). Open source tooling might be a better fit based on your org's needs and your use case.

Solving for the intersection of building and maintaining tools people desire and those building said tools eating and paying rent is hard.

(no affiliation with docker)


My tiny part of a division of my last company made $40 million USD per year in revenue. We had ~40 employees. Getting the funding for using something like this came from a few levels up and would be in no way guaranteed.


I admit Docker will likely have to tweak their licensing model while also building relationships where there is some wiggle room for how licensing is handled (perhaps accept credit card payments from corporate users that they can expense to sidestep procurement). "Call Us For Pricing"


At my institution the "adapt open source" vs "buy" balance is also affected by the high effort of making a purchase happen. My bet is that things will get hung up on an exclusive acquisition justification, at which point the IBM/RHEL sales team will come in with a "solution" using podman, buildah, etc. I've quit DD just now to try those tools out.


We do accept credit card payments. All our pricing is on the website https://docker.com/pricing - the Business plan will be available by credit card soon as well.


This seems like a false economy. Docker adds insane value for us (similar number of tech employees), and while I don't like price hikes based on things other than value add features, I certainly want Docker to exist in five years. Or get bought by Hashicorp, perhaps.


as soon as a viable alternative to Docker Desktop for Mac exists I am done with this company forever (and they seem to be anticipating that)


Just use multipass https://multipass.run and folder mount.
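
Something along these lines, roughly (the VM name, sizes and mount path are arbitrary):

  # launch an Ubuntu VM, mount the current folder, install docker inside it
  multipass launch --name docker-vm --cpus 2 --mem 4G --disk 40G
  multipass mount "$PWD" docker-vm:/workspace
  multipass exec docker-vm -- bash -c "curl -fsSL https://get.docker.com | sudo sh"
  multipass shell docker-vm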


This. This is what I do (except I use VS Code to remote to the Multipass VM)


I've used Docker for years and never touched Desktop. What's indispensable about it?


It's not great, at all. But at least on Mac it's a lot easier to get going with Docker for Mac than it is to roll your own with e.g. VirtualBox. I assume it's the same on Windows.


I just use whatever `brew install docker` gives me. They don't call that Docker Desktop, right? I thought that was some kind of GUI thing of theirs—I do all my dockering from the command line, which looks the same across Mac and Linux except when (rarely, these days) the virtualization the Mac implementation uses leaks through.


The key is the virtualization. I think (!) with `brew install docker` you've got to set up a VM and get Docker running inside it, yourself. Docker Desktop for Mac does that, and implements filesystem and networking integration for you.

Most people like the convenience of that, if not the performance or (now) the cost.


The closest I've come to having to manually set up anything with a simple `brew install docker` is making sure my shell sets the env vars correctly. It automatically sets up the VM, and has since I started using it years ago.

(but, it's possible that what I'm using is also considered Docker Desktop—I just associated that term with their GUI thingy [and I think it includes some kind of sys tray widget?], which I've never used)

[EDIT] oh no you're kinda right, I think I recall having to run one command, post-install, on older versions, to set up the VM, though I don't think you still have to and that was all still handled for you, you just had to tell it to do it. `docker-machine create default` or something like that, was enough for 99% of use cases. Don't have to even do that, now, though, IIRC.


Honest question: what features of DfM are you using and what alternatives have you tried that don't work for you?


Why do you need Docker Desktop? Can't you just use the command line? I mostly use Docker on Linux and even then I've almost switched everything over to Podman.


The Docker CLI can't do anything without the Docker daemon. The daemon (and containers) only runs on Linux. On Mac, it needs to run inside a Linux VM.

Before Docker Desktop, you would need to create a VM with Docker and connect to that. Docker Desktop makes that smooth and wraps it in a nice UI.


Podman doesn't do what docker desktop does. They are not the same thing at all.


When it's actually 100% compliant in its APIs, especially regarding podman-compose and the socket API.


If you want an open source alternative, just use Docker Engine, it's still open source.

You can install the docker client inside WSL/OSX and connect over SSH to a docker CE instance.
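
For example, with a reasonably recent docker CLI (the context name, user and host below are placeholders):

  # create a context pointing at a remote Docker Engine over SSH
  docker context create my-remote --docker "host=ssh://user@my-docker-host"
  docker context use my-remote
  docker ps   # now talks to the remote engine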


The reality of this is that Docker is setting themselves up as an enterprise software business, like the one they spun out a short while ago.

You as a developer won't be involved with purchasing Docker subscriptions. Instead they'll have sales teams that approach your IT department, who will pay for support reasons and pre-install Docker Desktop on all company hardware.

That’s why this is only focused at larger companies. This gives IT departments someone to call when a developer reports a problem.


Honest question: Besides IT endpoint management, why does our industry continue to develop software that is leaning more and more towards containerization on Mac OS?

I’ve been a Mac user for 20 years and do a lot of docker and Kubernetes work. I recently started developing on a Linux machine that was a fourth of the price and a lot less burden for my day-to-day work.


I mean, your "Besides IT endpoint management" comment is the primary reason that most of the jobs that I have worked at won't let me get a linux machine.


"or higher than $10m in annual revenue" .. that isn't necessary a large company. And it says nothing of profit.


Countdown started until Microsoft adds their own docker to Windows.


A better strategy to me would've been to keep it free and tightly integrate it with Docker Hub to push people towards Docker Hub services. This software is already installed on most Windows computers that need to use Docker and provides a perfect opportunity to promote Docker Hub and any of their other services.


Docker Desktop with developer environments would be a great value add if it supported Windows, macOS and Linux. As it is, we have developers in the company using Linux workstations so our Docker subscription is just for a registry.

We'll be moving soon given no forthcoming Linux client.


I wonder how many people just use Docker Desktop as a nicely packaged installer/VM manager? I know I don't use any of the other included tools, so I can't see why I'd use Docker Desktop on Linux myself over just installing docker from my package manager (or podman in my case).


Hi, we have requests for Docker Desktop Linux, please upvote https://github.com/docker/roadmap/issues/39 and we are looking at the details of what we need to do to implement this.


Thanks for listening, Justin. Looking forward to updates. I know it must be tough facing a lot of adversity from the community. I hope you guys continue playing to your strengths, improve customer support (number 1 in my book) and continue beefing out your product portfolio so companies like the one I work for can build healthy relationships with Docker, Inc.


If you are on Linux and using only the open-source bits (that's what I do) and have subscription for the registry, why would you be moving anywhere? What does this change bring that I am missing? As I understand it the change only affects Docker Desktop, which is for MacOS and Windows.


It's not this change in particular, it's that you can get paid image registries with better customer support at a lower price point and higher availability. Docker needs to add value to their bare registry product, otherwise they will be outcompeted by larger companies that can offer registries as part of a larger product suite.

Unfortunately, Docker's most valuable addition, developer environments, is only for two of the three OSes used most commonly by developers in a corporate environment. No company is going to adopt a feature that can only be used by two-thirds of its workforce.


Their product and pricing page is extremely vague and full of dark patterns, and doesn't really describe what "Docker Desktop" even is. Can I use the CLI without downloading Docker Desktop? Can I launch the daemon and interact with it via the API?


There are a lot of fair points here. We (a big company not fitting in the personal/free tier of the subscription) were using Docker Desktop (on Windows 10) to build Windows (Server) targeted images. From the various comments, it seems like it's really not a common use case. The question I have is: how is one supposed to proceed with this specific use case if the subscription is ruled out for various reasons? Switch the dev environment to Windows Server? Drop the Windows container ball and switch to Linux-based ones? Are we the only company doing this kind of stunt? (And getting some vendor-lock-in vibes right now?)


Received an unsolicited mail from them outlining the new terms, with no way to unsubscribe.


On the one hand, I'm sad that I probably have to uninstall docker desktop because I only use it for small side projects, on the other I understand Docker Inc's need to monetize as a for profit company.

I do have a genuine question though. Can a company just change their pricing structure and make it effective immediately (I understand they have a grace period here)? I guess for free tiers they probably can, because the users have never paid them, but what if I'm a paying customer? Could Docker simply say, sorry, we have changed our pricing and from the next billing cycle (or tomorrow) you have to pay 100% more? Could they legally do something like that?


It's already obvious how successful Docker is in terms of consumption, but this thread makes it even clearer.

Look at how many complaints there are, and people still use it.


Anyone here running the Docker Engine in WSL2?

I've been using Docker Desktop for years, and thankfully it's been more stable for the past year or two - before that it was a shitshow.

It works great for me now, but I don't actually need or use any of the GUI features - it's more or less a glorified installer for Docker Engine. Not sure if it has to do any magic for mounting Windows folders to work?

Anyway, curious if it's possible to just run Docker Engine in WSL2 myself, if there are any gotchas, and if mounting Windows folders "just works"?


Reading this I was thinking I would wake up tomorrow and go back to using `docker-machine` and a separate VM for development on my work mac. Interesting timing that they started the work to deprecate/remove `docker-machine` three weeks ago: https://github.com/docker/roadmap/issues/245


What stops any developer at a large org from using Docker Desktop through their personal license/account? How is such a restriction going to be enforced?


Kind of annoying that Docker Hub and Docker Desktop are both sold in one package.

As someone who uses Docker Hub (the public repos) but not Docker Desktop (their Windows/Mac proprietary apps), it pisses me off to have to pay for crap software I don't use or care about.

Then again, the ecosystem did end up being built to be centralised around one single repository, so we did all kinda buy into this.


Tech companies should not be able to offer something for free and then later charge for it. It should be illegal and prosecuted as anticompetitive behavior. It's not at all fair to potential competition, who can't beat free and therefore don't enter the marketplace. It's just a way to try and capture a whole market unfairly. Then when the price goes up you can end up with regretful consumers who may have chosen another option if not for the misleading pricing.

If they want to do this, it should have to say in big bold font "this is free for now and we will charge for it later, you will have X months of warning before being required to pay at that time".

In some non-software industries this is already considered illegal behavior: https://en.wikipedia.org/wiki/Dumping_(pricing_policy)

I'd personally even go so far as to say that there should be some legislation around using very cheap loss-inducing prices for extended periods of time to capture the market and then jacking them back up. Like how Ubers used to cost ~30% of what they do now in San Francisco. But that's a lot harder to specify as there are legitimate use cases for sales and loss leaders.


I don't get it. There still seems to be a free version, which includes bundled docker engine, right? I think this is the only part of docker desktop that most devs need. Private repos were always a paid feature AFAIK.

The concerning text written is "Limited image pulls per day". What's the limit here?


I wonder how much Docker is paying Snyk.io for the 200/month local vulnerability scans with Snyk?


My team and I use Docker Desktop and I don't think anyone signs in. We just use it to install the docker cli tools on the Mac.

We're a small dev team, but it seems like nothing is changing at all for those of us who don't sign in to Docker.


It's free anyway if you sign in and your company has fewer than 250 employees and less than $10M in revenue.


I'm a bit puzzled about what costs money and what doesn't, as this is the first time I've seen the name "Docker Desktop".

I use docker on linux, mainly executing "docker build", "docker run", etc.

Does it still cost anything if I do that at a 1k+ employee company at work?


Docker Desktop is the only way to run Docker "natively" on Windows and MacOS (I say "natively" because it's really using a linux VM behind the scenes.)

So if you're on Linux, nothing has changed (yet).


The article says there are no changes to the command line tool. This is the first time I'm hearing of Docker Desktop as well.


God forbid a company charge money for software they make. I don't understand why people are so angry considering they're still allowing free use outside of businesses.



Thanks!

Although that thread was posted earlier, I think we'll merge it into this one, on the principle that corporate press releases tend to make worse HN submissions. This is something of an exception to HN's original source rule.

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...


Companies with more than 250 employees or $10 million USD in annual revenue must pay a monthly subscription to adhere to the new terms of service.


Well, that's one way to drive Podman adoption.


Company has half a million employees. Most are developers. Many use Docker daily.

No way the company is fronting these costs.


It's shameful that this product, which was once free, is now going to be charged for, even if it's only for larger businesses.

I wonder how sustainable it is for Docker to be like other open source entities and rely on consistent donations from major corporations for income.

I also wonder if this will impede Docker adoption in the coming months. I guess only time can tell.


Shameful? What's shameful about it? Software entitlement is outrageous. It's the only field where people expect professionals to keep doling out labor for free and then complain about the free stuff.

If it isn't already abundantly clear to you: free software isn't sustainable. It's built on the backs of people who provide it for whatever reason they choose.

This "be a beggar so I can continue to have free stuff" mentality has got to go.


Shameful may have been a harsh word; disappointing is more appropriate.

The disappointing part is that a product that was once free and distributed in abundance now requires licensing, starting immediately, for small businesses. There is a transition gap, but the policy is effective starting today. There were no added features of value; it was just a change of price from nothing to something.

I think the new enterprise features that they are boasting about, which are probably where they would end up making most of their money, could have sufficed, as this new policy is going to be difficult for them to enforce.

I'm sure docker desktop originally being free contributed to them being this popular. It made using containers for development super easy on windows and Mac.

Now that they have the huge user base, they're in a good position to dictate terms in their favour whether we like it or not.


[flagged]


If you continue to break the site guidelines like this, we will ban you. We've warned you multiple times before, and this is seriously not cool—regardless of how right you are or feel you are.

https://news.ycombinator.com/newsguidelines.html


> The only thing you’re saying here is that you don’t recognize its value and you don’t want to pay for it. No, wait, you don’t want businesses—fully capable organizations who can pay—to pay for it.

No, I communicated my point clearly: I find the shift disappointing.

I understand that a business needs to be viable, but I'm commenting on the fact that they probably wouldn't have the mass adoption that they have now if Docker Desktop had had today's licensing structure from the beginning.

The whole internet basically runs on Linux, which is largely supported by monetary donations and companies making source code contributions, so let's not forget that alternative altogether.


How do you even install Docker Engine on Windows without WSL2? Same goes for macOS.


If you're referring to Docker Desktop on Windows, you can use the Hyper-V backend instead of WSL2. MacOS uses HyperKit API to spin up a Linux VM to run it. There is no native engine for Windows or MacOS.

https://docs.docker.com/desktop/windows/install/


Sure, but I've never seen a non-Docker Desktop installer for any of the tools like the CLI, Compose etc., and I can't seem to find one now either.


https://download.docker.com/win/static/stable/x86_64/

Not an installer, and doesn't include docker-compose, but this is what you're talking about, I think.

Only supports Windows containers.


Surprised no one like Microsoft has stepped in to buy Docker out-right.


So using the CLI is still free on Mac, just not the gui desktop app?


I believe "docker desktop" on mac includes all the various plumbing to get the docker cli working transparently (vs. running docker yourself in a VM)


That’s surprisingly cheap


Just use Linux desktop


Linux containers ought to update and extend their product subscriptions too.


“ the Docker Desktop updated terms only apply to Mac and Windows “


RIP


right move


I think HN needs to update their algorithm. If there is a large number of upvotes and flags, flags should count as votes from some point on. More people need to see this post and discussion needs to happen instead of pushing it off the front page in less than an hour.


If this isn't front page headline news on HN then something has gone very wrong, agreed.


I think since flags' effect on ranking has become more known, more people are using them as a post downvote, too.


What a strange sentence... As if "Docker" is some third party. They're referring to themselves in the third person.

Be warned


Meh, it's just for the headline. If someone shared the article and the title is scraped, "we're" isn't as self-explanatory and requires you to look at the URL for context.

The article itself uses "we"


"Docker is Updating and Extending Our Product Subscriptions"

It's plain wrong. You can't refer to yourself as Docker AND as "we"


They just have to delete the word "our" in the headline and all would be fine. This is just weird.



