Would it kill companies to be honest and upfront about these sorts of issues? I feel like "Hey, we're unable to pay our bills unless we can better monetize our product" comes across a lot more honest and trust-worthy than this "We're improving user experience! Trust us!" pride-and-accomplishment nonsense that everyone keeps regurgitating. We're not preschoolers, the Internet can spot marketing slogans a mile away.
Being dishonest like this is one of the fastest ways to lose customers. People can spot it a mile away.
They should make it optional. Have the email collection form prominently there and ask users if they want to sign up to receive the newsletter, free tutorial, free e-book, whitepapers, etc.
Then also have a "No thanks, take me to the download".
While email to download is fine in most marketing contexts, it is NOT fine for open source products. If they want to collect emails they need to offer something in addition. It doesn't even have to be that much. A free e-book on how to use Docker or something.
The download page could also have a "Please support us by ..." section.
I think Canonical does a pretty great job of this.
If I download an Ubuntu Server ISO, I get my download immediately, but the page has a nice prompt to register for a whitepaper to get the most out of my new server product.
If I download the Ubuntu Desktop ISO, I still get my download immediately, but additionally I see some nice prompts about donating to support their operations.
Everything about both flows inspires trust that they aren't trying to withhold my download for the sake of selling consulting services or soliciting donations.
Even the MySQL community download has a small link at the bottom, "No thanks, just start my download.", so no registration is required, despite two huge buttons that scream at you to sign up or sign in to an Oracle account.
The MySQL download page is how I ended up with 3 Oracle accounts, because I always forgot I had one. The small "no thanks, just download" link is (was) very deceptive.
Desktop end user: I donated to Ubuntu, and to Debian via SPI, and a bit to OpenBSD when they had their funding wobble. Since 2015 I've been using Slackware and bought a DVD subscription (it turns out that the Slackware BDfL wasn't getting much of the income from the sale of the DVDs or merchandise so I donated again recently).
We are talking the price of a hipster coffee per fortnight here, but a few thousand people putting that on a recurring payment adds up.
For organisations like Docker I'm wondering if the RStudio model would be viable? The 'enterprise' subscription is something like $995 a year and can thus be budgeted for &c.
I also sometimes buy apps even if I'm not really sure I'll keep using them (sometimes just to encourage them to continue),
and keep subscribing to a newspaper even if I don't often read subscriber only content (since they sometimes have some great investigations && I want to support them but don't want to disable Adblock).
What I don't do:
- most monthly subscriptions that aren't payment for an actual service.
They're indeed giving away Docker for free, but part of the reason I hypothetically want Docker in the first place is that I trust the people distributing it to not do anything underhanded with their flagship tool's position as a product which runs thousands of other businesses' infrastructure.
There is a certain degree of trust involved in using a product like Docker, since it is so critical to a business's operations, and I think a lot of people feel like using any kind of tracking (like mandatory registration to download) erodes that trust. We're forced to sit here and wonder what other restrictions might come in the future, or what other information they might start requiring for use of their product... and uncertainty never pairs well with infrastructure tooling that tends to be very important and very long-lived within an organization.
So you have a complete attitude of entitlement that your preferred software must be delivered to you on your terms according to common convention. Nobody is permitted to act otherwise or we're gonna have ESR storm their office and take names. You're not a Berkeley grad are you?
Your thinly-veiled derision doesn't mask the fact that you are being willfully ignorant of the intent of the person you're replying to. You're not here from Reddit, are you?
It's not about entitlement, but it is about common convention. Docker's offering isn't unique enough for them to betray that convention.
The point of making something open source is to benefit from collaboration. Is not making the argument, "locking software distributions behind a login wall is harmful," simply a form of collaboration?
Does offering the source not qualify as offering something? What is the closed source company offering that you are ok with giving your email away?
Really an honest question. I find Docker useful, and was also put off by the email/login request, but why are they getting so much hate compared to every other company, just because of this?
Because a closed-source company is open about being a for-profit commercial entity that is trying to make a dollar first and foremost.
Docker is "revolutionising infrastructure" or something. It doesn't have "make heaps of money" as its primary goal. Partly this is because it's open source. There's an expectation that open source is also "for the greater good".
The cardinal sin of our times is hypocrisy. Being a money-worshiping greedy capitalist bastard is fine, as long as you're open that that's what you are. Pretending to be altruistic while actually being greedy will generate all the hate.
But it's not one or the other. It's perfectly legitimate to start a company with the goal of improving the world, while also needing to make some profit so you can continue to be a functioning company. No greed or bastards involved.
What does being open source and being altruistic have to do with each other? Personally, I think they enabled the login so that new users would be able to use Docker Cloud after installing Docker, without needing to create an account. But still, being open source allows you to verify security, be transparent about product design/intentions, possibly extend or customize it for your needs, etc. There is a lot of value there that shouldn't warrant so much hate.
You may not feel this way, but think of downloading Docker as a non-developer. You are probably following some tutorial and really have no idea what you are actually doing. What if a container you are downloading is dangerous, or becomes dangerous? What if the version of the platform you have eventually becomes risky due to a hack/0day? You are basically downloading an entire OS/execution environment that makes it seamless to run a whole stack while you do nothing. How would the company email you, warn you, or send you some basic info that could really help you or make your experience better? This isn't some H&M mailing list out to take your money. This is real; marketing aside, maybe they actually care?
> What does being open source and being altruistic have to do with each other?
The "open" in "open source" is about encouraging cooperation and collaboration. And not using lock-in or patents or walled gardens to obstruct competition.
If the altruistic aspect is still not obvious: many projects encourage a gift economy by accepting donations.
Astroturfing is really not compatible with what you called "be transparent about the product design/intentions"
I would only ask someone being so harsh what they have personally contributed to a project, directly or indirectly (supporting an existing project), before being as harsh or judgmental as this, especially toward a product like Docker, which probably revolutionized an industry. To clarify, I mean it popularized an entire paradigm of running software, not that it was necessarily the first.
I agree. There's nothing intrinsic to Open Source that means "for the greater good". And Open Source is beneficial even when done by greedy corporate bastards. There's even an argument that by crowdsourcing pull requests for free, an Open Source company is actively being greedy and capitalist.
However, the kind of mindset that enjoys being a greedy capitalist bastard finds it very very hard to accept the Open Source philosophy - it's all fear-based, "do unto others before they do unto you" and so "if they can rip my code off, they will", because that's what they'd do. I've experienced way too many hard conversations about open-sourcing code with this type of person.
So there tends to be a correlation between Open Source software and a co-operative mindset that would find this type of coercive marketing bullshit to be evil and reject it. This correlation becomes an expectation.
None of that is related, proven by the simple fact that Docker and other open-source for-profit companies exist and have already contributed significantly to the industry.
This was a bad marketing-driven move, but that's all it is.
IMHO Free software done right: http://onehouronelife.com/ (except the call to action is below the fold :-P)
Warning: past the home page, there is some possibly NSFW content. The game defaults to having some cartoon nudity (although there is a non-nudity mod) when the players haven't made clothes, so you might see some pictures of that if you dig around.
It's a game. He puts all the code (and assets) in the public domain. You can go to github and download it and build it for free if you want. But on the website: $20 please. He tells you exactly what you get: lifetime server account, all future updates, full source code, tech support.
Although the forum is not exactly a haven of mature discussion (in fact, it's downright awful at times), I've not even seen one complaint that "I could have got it for free". In fact, there have been several discussions where people say, "$20 is too high. Is there any way to get a discount?" and the reply is "You can download the code for free and play on these free servers". Inevitably the person says, "But I want the official version. I guess I'll pay the $20".
No idea how much money he's made so far, but for a 1-person indie game he's done astonishingly well: https://onehouronelife.com/newsPage.php?postID=377 (a description of sales in the first 2 weeks, last March). According to other posts he's made in the forum (which I can't find), sales have continued to be brisk.
If you want to charge for downloading the official build of free software, then do it. Even the FSF will cheer you on (as long as you include source code ;-) ).
The game itself is really fun and griefers are much rarer than you might imagine from reading the forums. But yeah.... It's absolutely nuts there sometimes. The developer is a massive free speech advocate and doesn't mind hosting this horrible crap. But what's insane is that he gives moderator ability to some of the worst offenders. So I just don't know what's going on in his head sometimes.
There used to be a list of alternative servers, but it seems to be gone now :-( Possibly nobody is hosting one any more. I'm tempted to do it myself, but I'm in Japan so the lag would be unacceptable anyway (I'd be playing by myself, which I do anyway...). But it's an option. If you can find a group of people to play with, it can be quite fun just to run your own server. It takes very little CPU in my experience.
It's the issue with discussion platforms where speech is not regulated: they tend to attract people who have been kicked out of other platforms, even if they are not really interested in the main focus of the forum.
> The paradox states that if a society is tolerant without limit, their ability to be tolerant will eventually be seized or destroyed by the intolerant. Popper came to the seemingly paradoxical conclusion that in order to maintain a tolerant society, the society must be intolerant of intolerance.
Which in practice just means that everyone labels anything they don't like "intolerance", bans it and toddles off congratulating themselves on how tolerant they are.
I have no idea what the solution is. I suspect anyone coming up with one would win all the Nobel Peace Prizes from now until the end of time. I do think it's a useful rule of thumb that if you're not finding tolerance excruciating and infuriating at times, you're not really doing it.
This has helped me to view the GNU GPL in a new light. That is, to ensure freedom, we curtail the freedom to limit the freedom of others with respect to software.
> The moral of the story is: if you’re against witch-hunts, and you promise to found your own little utopian community where witch-hunts will never happen, your new society will end up consisting of approximately three principled civil libertarians and seven zillion witches. It will be a terrible place to live even if witch-hunts are genuinely wrong.
Then take a free AWS virtual machine and host it :).
Just check the traffic regularly (only 15GB of outgoing traffic is free; incoming traffic doesn't cost anything). You can set an alarm at 1 USD.
Or get the smallest instance from Scaleway, which is so far the smallest one with a dedicated hardware core (not a thread) on reasonably modern chips, at 2 EUR (~2.5 USD)/month, with unmetered 100Mbit, 1GB RAM and some 25GB of fast SSD. If you know any cheaper ones, let me know. They even have 3 USD/month bare-metal ones with slightly higher specs, but using a Marvell ARM chip and only supporting exotic NBD storage.
Heck, if you'd be fine with an EU server, contact me, I'd sponsor it, including a short subdomain.
Interestingly, the game itself has anti-griefing features built in. Communication is limited until your character has survived quite a while, and until you're grown up you're completely dependent on the help of other players who are already at the adult stage. If you die due to a negligent mother, you're back very quickly. If you get a griefer child, it's your call whether to feed them.
The game overall has some very interesting features around community and cooperation, and rogue griefers are disincentivised because the way the game scales seems to inherently require cooperation between strangers.
Neat, that makes me want to check it out. I'm pretty tired of communities that are so busy virtue signaling what is "toxic" that actual discussion is hindered. Those types of communities tend to be overly ban-happy to anyone who speaks against the views of the mods/admins too.
Out of context from the rest of your post, but about this "below the fold" thing... The page is fugly and horrible, but the links are interesting reads.
For reference and so people can be outraged without RTFA, here was the original close comment:
-- joaofnfernandes (2 months ago)
"I know that this can feel like a nuisance, but we've made this change to make sure we can improve the Docker for Mac and Windows experience for users moving forward.
As far as I can tell, the docs don't need changes, so I'll close this issue, but feel free to comment."
90% of changes just make things better for the user. 10% make things worse, but they’re obvious and honest and we’re okay with it.
And then 1% is the stuff like this. The Netflix/Qwikster "you're paying less, but actually more" debacle. The EA "sense of pride and accomplishment". The Netflix (wait, why are they here twice?) show recommendations that supposedly aren't ads, which is happening right now.
Opt-out phone-home, telemetry, crash reporting, quality analytics, whatever you call it. That's my prediction.
My Synology personal file server started nagging about enabling telemetry and needing a privacy notice to store my files on my drive on my network. Screw that.
I struggled to upgrade my graphics card drivers on an old gaming PC just the other day because I'd forgotten my "GeForce Experience" password and had to reset it.
My thoughts exactly. What a weird turn of events when it's easier to install Nvidia drivers on Linux using your distro's package manager than it is on Windows!
At least thanks to GDPR these companies now have to tell you and (in some cases) explicitly request your specific consent, rather than just doing it behind your back.
I use a Synology personal server, but an older one. I have been considering upgrading to a more modern version. Do you remember about when this nagging began? Was it related to a firmware upgrade or was it "right out of the box"?
At least for me the prompt isn't a one time thing, unless you consent to enabling telemetry. I get asked whether I want to enable it all the time when logging in.
Yes, you can disable it forever [1]. Oh and there's actually a tiny "skip" on the nag screen in dull contrast, more user hostility. It's the concept I object to, though: If I pay for it, I should not be the product. And a privacy notice on something that's by definition supposed to be private.
If you like the Synology software stack you can also use XPenology, which is pretty much DSM but for non-Synology devices. Linux-based, so btrfs instead of ZFS.
As soon as you see nonsense business speak like that, you know things are going downhill.
See also: Netflix, and Twitter lately.
I really don't know why they bother uttering or writing those kinds of words. They contain no information, nobody is fooled by the lie. It is a waste of their time.
I thought the same way until I worked with lots of types of people. The vast majority seem to much prefer BS, even blatant BS to honesty that is even the least bit blunt. I think part of it is that since everyone exaggerates, actual blunt truths are assumed to also be a positive spin, which would put their reality in the toilet.
I wouldn't call them idiots. They just are not as exposed to the concept of online privacy as a lot of us are.
We might be on the other side of table in many other ways, or idiots as you put it. Like the way my doctor friends avoid some OTC drugs that I never even think twice before taking, or some food, or some ready made edibles. I have a friend in textile industry and when he buys clothes it's a whole new level and makes me wonder what the hell I have been wearing so far. It amazes me how he sees through all those "Giza cotton" tag-lines and gimmick features of breathability and what not that are usually followed by a (™).
Can we all not learn everything from The Internet? No, we can't and it does not make any of us an idiot.
I wasn’t talking about lacking domain expertise in a given area, I think obviously. I’m talking about something much broader and universal, but I’m still sorry for causing you offense, that wasn’t intended.
And moreso, if you want to make money just charge money for something. Don't do scumbag stuff like collect emails to sell to marketers. Make something people are willing to pay for, and charge them money for it.
Doesn't Docker have Docker Enterprise or something like that which they charge money for? So the email harvesting is really just in addition to charging money.
This is a socially acceptable reason to collect emails, and it should be openly labeled as "do you want to receive information about Docker products and services?" or the like, like non-annoying companies do.
The metamorphosis from startup to company is complete. The people who would have been straightforward with you have been replaced by endemic office-dwellers.
In Docker's defense they don't really barrage you with marketing. This isn't a situation where if you create an account you're going to get emailed every day about "5 tips on moving your Enterprise application to Docker!".
I've had a Docker Hub account since Docker Hub was a thing and the only content I really ever get from Docker is a weekly newsletter (which you can opt out of) and notifications about the platform itself (such as any downtime reports, etc.).
I do think it's a bad idea though, mainly because for newer people getting into Docker it's a barrier of entry to overcome. I'm very suspicious of anyone asking me to register for things like this. On the other hand, I don't have insights that Docker has, so to make such a bold move, they probably have a plan.
> Although they may not barrage you now, there is no telling what the future holds with stunts like this.
I think the future is pretty predictable.
In the off chance they just wake up and start slamming you with unsolicited marketing then you can click unsubscribe in the footer of their email and you'll never see another email from Docker again.
But really, I don't think Docker is foolish enough to do that. They've spent a lot of years building up their brand and business, and aren't reckless enough to put all of that at risk by relentlessly emailing their users with marketing agendas (if that's what they wanted to do they could have been doing that for years).
Docker already knows that almost everyone uses the free community edition anyway, so they really have nothing to sell us, except maybe Docker Hub private repo access. Anyone who downloads Docker already knows the benefits of using it, so they don't need to sell us on Docker as a technology. What are they going to market to us?
Lastly, let's not forget that the Docker for Windows / Mac clients have let you log in to Docker Hub for a long time now, and nothing bad has come of it (no unexpected marketing attempts).
> In the off chance they just wake up and start slamming you with unsolicited marketing then you can click unsubscribe in the footer of their email and you'll never see another email from Docker again.
No, you see tons of email from everyone Docker sold your "Guaranteed Live And Active" email address to, once it verified liveness and activity by you clicking the "Unsubscribe" link at the bottom of the email. And that's assuming Docker doesn't just keep spamming you, secure in the knowledge you're reading their earnest missives and care enough to respond to them personally and by hand.
> But really, I don't think Docker is foolish enough to do that. They've spent a lot of years building up their brand and business, and aren't reckless enough to put all of that at risk by relentlessly emailing their users with marketing agendas (if that's what they wanted to do they could have been doing that for years).
If they're suddenly in a different financial position, or change leadership, or for any of a number of different reasons, they could indeed go off a cliff like that.
Agreed. It is extraordinary that intelligent, honest discourse is so rarely employed when companies explain things like this. It baffles me why MBAs, sales and business types find it so hard to understand how appalling intelligent people find this style of discourse. Surely many MBAs, sales and business types are intelligent and empathetic people?
To me this seems to be a 'when all you have is a hammer, everything looks like a nail' type of issue, in the sense that they try to apply PR and marketing tactics from mass marketed consumer products to a niche product for professionals.
Stuff like this makes me feel better about focusing on Ansible Container instead of Docker. You can use it to create multiple different types of containers without being married to Docker itself.
Is this a viable alternative to Docker? I’m about to launch a fairly large new project and had planned on going with Docker but this definitely causes me concern.
IMO Docker is a dead end; it essentially ended up being a glorified ZIP file. The real solution to what Docker was trying to do (reproducibility) is what Nix does, and if Nix is not the solution then something in that direction is.
In Nix, you're basically describing the whole dependency tree of your application, all the way down to libc. When you build your application, it builds everything necessary to run it.
The great thing about it is that your CDE is essentially identical to your build system, and the builds are fully reproducible; it takes over being a build system, package manager and, as mentioned, CDE.
They went even further (I have not explored that part myself yet) and used the language to describe the entire system (NixOS), which makes it look like a configuration management system is no longer necessary; Nix is also used for deployment (NixOps, which I also haven't tried).
If you are into containers you can still deploy into systemd lxc containers, or even create a minimalistic docker image.
The disadvantage is that there is a significant learning curve, it's a new language, and it is a functional, lazily evaluated language. The language is not really that hard, but many people are not used to functional programming. It is especially popular for deployment of Haskell code, since the language is also functional and lazily evaluated.
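To make that concrete, here is a minimal sketch of what a Nix derivation looks like (the package and file names are made up for illustration, not taken from any real project):

```nix
# default.nix -- hypothetical minimal package; names are illustrative
{ pkgs ? import <nixpkgs> {} }:

pkgs.stdenv.mkDerivation {
  name = "hello-tool-0.1";
  src = ./.;                    # build from the local source tree
  buildInputs = [ pkgs.curl ];  # every dependency, down to libc, is an explicit input
  installPhase = ''
    mkdir -p $out/bin
    cp hello.sh $out/bin/hello-tool
  '';
}
```

Running `nix-build` on this evaluates the expression and builds the full dependency closure, which is where the reproducibility comes from: the same expression yields the same environment on any machine.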
A good alternative to Docker is podman, the CLI built on top of libpod (https://github.com/containers/libpod). It has the same API as Docker but lets you build and run containers without needing root permission.
You can try LXC containers from Ubuntu; this is what Docker was initially based on. The main difference is that LXC runs an init in the container, so you get a standard multi-process OS environment, while Docker containers are single-process environments.
We have been working on Flockport [1] that supports LXC containers and provides orchestration, an app store, service discovery and repeatable builds. It's still in early preview and we have not started proper outreach but it may be worth looking at.
Ubuntu also provides the LXD project that provides some orchestration across servers.
As a data point, Docker itself - in Swarm mode - doesn't yet do IPv6 to any decent level natively.
It seems possible to get IPv6 working through alternative orchestration though, e.g. there's a guide on getting it working with Kubernetes and Calico.
But if you're looking for something that's production-grade IPv6 - e.g. people can work out WTF is wrong when problems hit - it's probably not there yet. At least, not for small teams, as far as I can tell. ;)
It's not just the dishonesty, but the way the github issue was closed just like that after just providing an improper, half-assed solution that still doesn't address the core problem.
> Would it kill companies to be honest and upfront about these sorts of issues? I feel like "Hey, we're unable to pay our bills unless we can better monetize our product" comes across a lot more honest and trust-worthy than this
The problem is that it's almost never that.
It's not "we're unable to pay our bills". It's "we've got more money than we need already, but we think we could get a lot more this way".
You think, say, Netflix is a struggling business and that's why they're putting in more ads than before? No. In a capitalist system, leaving money on the table is increasingly unjustifiable as the amount you're leaving grows. Docker is absolutely in that same situation.
Exactly. The reasoning behind this decision is obvious:
"Hey, look at all these downloads CE is getting. We need to start following up with these users to try and promote Enterprise and other products. Start capturing emails at the point of download."
Yea, it's like when the auto mechanic says it's "unsafe" to not replace your brake rotors when you change your pads, and that he won't replace your brakes unless you pay for rotors too. Just say that the job is too small to be worth it unless you replace the rotors.
The internet does not always throw a fit every time a company tries to add a little monetization to what is an essentially free service.
The community is expressing a desire for companies to be honest and upfront about these sorts of issues (i.e., monetization). Refer to the post you responded to for more information.
Docker isn't a service. It's easily replaceable software that is dominant due to the network effect. All the hard work was done by the Linux kernel before Docker existed.
Do companies always have to gaslight users by suggesting that ads, trackers, malware and other features are "user-enhancing"? Monetization is OK, and in fact is a good thing in many cases, but trying to couch it in BS irritates many people.
The internet doesn't care about companies trying to improve their revenues, particularly when it comes to "free" products. It does care about a great deal when it's lied to.
Haha yes. The truth is that every critic of your service will do that if you monetize. It's definitely one key thing to take care of when you try to switch to generating money. You need to have a good story around how you're doing it.
The guys who act like "it's only about the lies" are usually not decision makers, but it's important to have enough of a story that the decision makers don't get their thoughts contaminated by the perennially negative.
I think Docker will be fine with what they're doing. This is a storm in a teacup. But they should've bundled it with other features like auto-updates or something.
Docker Hub[1] is also blatantly in breach of the GDPR. Wording on the pop-up:
> We and our advertising partners use cookies on this site and around the web to improve your website experience and provide you with personalised advertising from this site and other advertisers in AdRoll's network. By clicking "allow" or navigating this site, you accept the placement and use of these cookies for these purposes.
It’s not a modal, but supposedly ignoring it opts you into the tracking, with the only choices being “Allow” or “Learn More” and the [x] button also being labelled “Allow”.
IANAL, but it’s not informed individualised consent if there’s literally no opt-out, and there’s not a lawful basis unless advertising-cookies are suddenly the enabling technology behind downloadable containers.
I’d report them to the Information Commissioner‘s Office myself if I didn’t think they were about to fold anyway, after their piss-poor sunsetting of Docker Cloud and painting a target on their own back for a few adbucks.
The opt-out is to navigate away and not use their service. Which matches the GDPR - if you need the data to create a contract, like 'we use your data in exchange for your use of our site', then you can keep it.
> there’s not a lawful basis unless advertising-cookies are suddenly the enabling technology behind downloadable containers.
Yes they are. Advertising cookies are how those downloadable containers are provided. That's an enabling technology. It wouldn't exist otherwise in the technology ghetto of the EU.
Your legal analysis is incorrect. From the UK ICO's guidance¹:
> The ‘consent’ is a condition of service
> If you require someone to agree to processing as a condition of service, consent is unlikely to be the most appropriate lawful basis for the processing. In some circumstances it won’t even count as valid consent.
> Instead, if you believe the processing is necessary for the service, the better lawful basis for processing is more likely to be that the “processing is necessary for the performance of a contract” under Article 6(1)(b). You are only likely to need to rely on consent if required to do so under another provision, such as for electronic marketing.
> It may be that the processing is a condition of service but is not actually necessary for that service. If so, consent is not just inappropriate as a lawful basis, but presumed to be invalid as it is not freely given. In these circumstances, you would usually need to consider ‘legitimate interests’ under Article 6(1)(f) as your lawful basis for processing instead.
And in regards to tracking specifically:
> You are also likely to need consent under ePrivacy laws for most marketing calls or messages, website cookies or other online tracking methods, or to install apps or other software on people’s devices.
The GDPR does not accept lack of action - dismissing dialogs, ignoring them, etc. - as consent. You have to give clear, free and affirmative consent.
You basically have to have a modal "do you consent to tracking? [yes] [no]" dialog. Which obviously nobody who does tracking wants to do, but that's kind of the point.
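As a sketch of what that means in code, here is a hypothetical consent gate (the function and action names are my own, not from any real library): only an explicit, affirmative "allow" enables tracking, while dismissing or ignoring the dialog leaves it off.

```python
from typing import Optional

def tracking_allowed(user_action: Optional[str]) -> bool:
    """Hypothetical GDPR-style consent check.

    user_action is "allow", "deny", "dismiss", or None (dialog ignored).
    Only a clear affirmative act counts as consent; inaction does not,
    and an [x] button relabelled "Allow" would not be informed consent.
    """
    return user_action == "allow"
```

Every input except the explicit "allow" falls through to "no tracking", which is exactly the behaviour the "Allow or Allow" pop-up avoids implementing.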
>Docker Hub is also blatantly in breach of the GDPR.
Truth is, no one cares. GDPR is an overreach designed to shake down American mega-corps. Docker has no money so the EU isn't going to do anything to them.
>I’d report them to the Information Commissioner‘s Office myself if I didn’t think they were about to fold anyway
I'm sure they're inundated with complaints from unsuccessful companies trying to shoot down their biggest competitors already. Adding one more to the pile is only going to waste your time and that of EU regulators.
> GDPR is an overreach designed to shake down American mega-corps.
The GDPR is the result of mega-corps (American ones in particular) not giving two shits about how their users' personal data is handled. Cry all you want now that the milk's spilled, it won't change the fact that this legislation was not conjured in a vacuum, but as a response to the way corporations behave when not obliged to care about personally identifiable information.
> Docker has no money so the EU isn't going to do anything to them.
A formal reprimand might suffice. Contrary to the naive American view I see here on HN, EU data regulators don't immediately try to shut you down by barging into your company's office with a SWAT team.
> I'm sure they're inundated with complaints from unsuccessful companies trying to shoot down their biggest competitors already.
How sure? 100%? 50%? Less? What are you basing your assertion on?
> Adding one more to the pile is only going to waste your time and that of EU regulators.
There's a characteristic nearly all government departments share: they may be slow, but they're steamrollers. They'll get to you eventually.
This is just inaccurate. GDPR is derived from warranted concern over rampant data abuse. And it's actually much easier to make a startup GDPR compliant than it is to overhaul a large company with rigid systems already in place. If anything, GDPR favors startups.
It hurts small startups trying to perpetuate the same blatant disregard for human rights as American startups have done in the past. It doesn't hurt small startups that are privacy-aware and treat their users with respect.
Not giving users a way to delete their accounts was never okay. Tracking user behavior without consent was never okay. Holding users' data hostage was never okay. Not giving people a way to correct the data you keep about them was never okay.
US startups have been playing on easy mode by getting to ignore human rights and just follow the local letter of the law even when going international.
If anything you'd think HN "classical liberals" would love this as it evens the playing field, allowing for fairer competition between already privacy-aware EU companies and the previously unfairly advantaged US companies entering the EU market. Of course this assumes you think privacy and data ownership should be protected as human rights in the first place.
> Not giving users a way to delete their accounts was never okay. Tracking user behavior without consent was never okay. Holding users' data hostage was never okay. Not giving people a way to correct the data you keep about them was never okay.
Sure. If being GDPR compliant just meant not doing those things, it wouldn't be a problem. But with GDPR you now have to spend time (=money) understanding what GDPR means (probably with a lawyer's help) and ensuring that you are in fact compliant. "I try to protect users' privacy" isn't good enough when the EU could effectively put you out of business if you aren't. You'll have to deal with Data Access Requests, many of which are from trolls. You may need a DPO, which might require hiring someone. I'm all for protecting privacy, but the GDPR adds quite a bit of burden, which large corporations will be able to eat but which will set back smaller companies. Really, medium-size companies are in the best position, since they have the resources to meet GDPR obligations but don't have to do massive overhauls like the big corps do.
As a tool for building containers it's cumbersome, has bizarre and frustrating limitations, and has issues that haven't been addressed in years. You can't use semver for tags, it eats up all your disk space and you have to manually GC it, etc... Multi-stage build support is basically useless, and invariably you end up writing convoluted bash scripts to get the thing to work.
Caching is terrible pretty much across the board. How many petabytes of data are wasted every day re-syncing apt-get?
As a runtime engine it's basically dead for production use. If you ever try to use it you'll quickly discover that it has tons of problems. It locks up, orphans processes, stops responding to commands, forgets about containers, etc. And it's not safe to run arbitrary containers. You'd be surprised by how many companies using Kubernetes gave up using docker a long time ago.
Please save yourself a lot of heartache and just use containerd.
As a concept, containers never really lived up to their potential. "A Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application." As long as that application runs in linux.
As a company Docker is a failure. All their cloud stuff is clunky, poorly thought out, and at this point largely irrelevant. I'll be surprised if it lasts 5 years. Kubernetes won. Those cute, cuddly characters and the Docker name are basically all they have going for them, and all these things, the Moby rename, the requiring login to download, etc... are the death throes of a dying company. Docker is the next jQuery.
People don't realize just how easy all this stuff will be to replace. Google already did it. Like 5 times. (gVisor, jib, kaniko, ...) It was probably some intern's side project.
Microsoft should just get it over with and acquire the company. It would be the perfect cherry on top to their Github cake.
> Please save yourself a lot of heartache and just use containerd.
Part of the appeal of Docker is the ease with which developers new to the concept of containers can pick it up. I cannot say the same for containerd. Try a search for containerd tutorials and compare what you find with docker tutorials.
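The ergonomics gap shows up right in the CLI. A rough side-by-side (the task ID "demo" is made up; flags are from recent docker/containerd releases):

```shell
# docker: one friendly command, image name can be short
docker run --rm -it alpine sh

# containerd's low-level ctr tool: pull first, fully-qualified image
# reference required, and every task needs an explicit ID
ctr images pull docker.io/library/alpine:latest
ctr run --rm -t docker.io/library/alpine:latest demo sh
```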
> People don't realize just how easy all this stuff will be to replace.
Think carefully about why that is. If it were easy, people would already have understood it. So, no: understanding how replaceable the container components are is not easy, and that is exactly why people have not realized it.
The issue with many of Docker's competitors is that they appeal to Docker experts. Their abstractions and terminology are not straightforward.
A lot of hyperbole in your comment is unnecessary and/or undeserved. Of course I realize that this is HN where k8s, rkt and containerd are used largely by the visiting audience. But don't forget the dark matter developers.
Although I've not seen much on K8s + containerd without Docker in terms of tutorials, it is a very new space, and (IIRC) cri-containerd is also pretty new.
I'm not aware of many k8s distros running anything other than Docker by default, at the moment.
IBM Cloud has 1.11+containerd in production (https://twitter.com/estesp/status/1029739247606145025), GKE has containerd as an "alpha" option during k8s cluster creation; Azure has stated a direction to move to containerd for AKS (and their OSS deploy tool already supports containerd on cluster creation)
Cool, I guess I'm not surprised that IBM Cloud are leading the way there :) are you aware of any on-prem options shipping Containerd as a default so far?
I know our own on-prem (IBM Cloud Private) has containerd in testing and is aiming for a late-year release where containerd is the primary engine. I'm not as up to speed on who else is offering on-prem K8s, but I do know minikube is also getting containerd integration as we speak.
Awesome, I had a play with getting containerd running without Docker a little while back and it all worked, just required a bit more effort than Docker :) It'll be interesting to see what happens as more k8s options adopt it
In my opinion docker is not a good fit for the problems it's trying to solve.
On the developer side I'm trying to build an artifact of my application which can run on servers. I'm probably using Windows or macOS as my development environment, and I'm probably targeting Linux. So docker is a tool I can use to create a container to achieve my goal.
But it doesn't do a good job of that. The dockerfile format can be very frustrating to work with. And all the terminology sets a very high barrier of entry for usage. Yes, it's all googleable, but sometimes I think the solution is harder to use than the problem it set out to solve.
Go, Java, .Net, even javascript with Node, all do a much better job of making it easy for developers to build cross-platform applications. Without the need to learn an arcane file format and clumsy cli tool.
As an example of that frustration, building a cross-platform Go app is as easy as setting an environment variable. No additional configuration is required and there are no additional tools you need to install. The binary that is produced has everything it needs to run and can be deployed without having to use specialized tools.
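Concretely (a sketch assuming the Go toolchain is installed; the package path and output name are invented):

```shell
# Build a Linux/amd64 binary from macOS or Windows. The target platform
# is selected purely by environment variables prefixed to the command.
GOOS=linux GOARCH=amd64 go build -o ./bin/myapp ./cmd/myapp
```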
Whereas building a Go app in a container with Docker means you end up retrieving all the dependencies every time and you get no package caching for builds so they take 10 minutes. For any large project you will inevitably break out your docker files into separate steps with some sort of make file. You will then bang your head against a wall for days trying to get docker in docker to work for your CI build system.
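The usual workaround exploits Docker's line-by-line layer cache: copy the module manifests before the source so the dependency download is cached across builds. A sketch (the Go version, paths, and app name are illustrative, not from the original comment):

```dockerfile
FROM golang:1.11 AS builder
WORKDIR /src

# Copy only the dependency manifests first: this layer, and the
# download step below it, stay cached until go.mod/go.sum change.
COPY go.mod go.sum ./
RUN go mod download

# Source changes invalidate only the layers from here down, so an
# edit no longer forces a full re-download of every dependency.
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o /bin/app .

# Multi-stage: ship just the static binary, not the whole toolchain.
FROM scratch
COPY --from=builder /bin/app /app
ENTRYPOINT ["/app"]
```

It works, but it is exactly the kind of cache-gaming boilerplate the parent is complaining about.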
Is that really the best we can do?
But docker is also about isolation, file formats, conventions for deployments, protocols, etc. And yes, it's a step forward in many ways. Containers are great for allowing the ops side to focus on running an application instead of making sure it's configured properly.
But on so many of these dimensions docker has had major problems. It's not really safe to run arbitrary containers, so the isolation is an illusion. Because of the way the tooling works the containers are way too big and contain way too many unnecessary dependencies - which end up being security liabilities because they aren't upgraded often enough.
But ultimately docker turned out to be way too low level. It's why something like Kubernetes exists. It's a much higher level way of describing how an application should work.
So docker is getting pressure from both sides and that's why I think its days are numbered. The formats and conventions will stick around, but once they start requiring people to pay for the local development tool, everyone will move on and we'll stop talking about docker at all anymore.
Wow, I'm so glad that I'm not the only one with this opinion. Every single time I've had to use docker I've had to slog through inane design decisions which seem to make everything harder than it needs to be. Everything is inconsistent, there are several commands which seem to essentially do the same thing but subtly do not, and so on. It's like every design decision has just been made by a single developer who likes to come up with new and "smarter" ways to do things.
I wouldn't go so far to call it a mess but there is some truth to it. Naively, one might expect the following use cases:
- test-drive even the most complex deployments on your laptop
- spin up all the cool admin tools you ever wanted without wasting months of precious life time
- reproducible setups
- everything is always up to date
But as you say, caching is at best in a mediocre state. If you indeed want to run heavyweight servers (like ELK), be ready to set aside a terabyte of space just for images, unless you want to GC all the time when tweaking the configuration.
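And that GC really is manual. The cleanup amounts to running these docker CLI subcommands by hand (assumes a reasonably recent client; they exist since roughly the 1.13/17.x era):

```shell
# Show disk usage broken down by images, containers, and volumes.
docker system df

# Remove all images not referenced by at least one container.
# Expect to re-download layers the next time you build or pull.
docker image prune -a
```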
Also the laptop thing doesn't work unless you either have a very beefy machine or just work with very lightweight software. In reality Docker can be used for these use cases:
- Test-driving some heavy weight or complex server software
- CI to some degree
I'm starting to think good old Unix tools in combination with automation tools like Chef, Ansible, ... are the way to go - or even just plain .deb files...
At Datadog we are using containerd. We've heard of other companies using cri-o.
There's also a lot of mixed deployments of container vs native out there. It seems Kubernetes is popular but not many companies with a large number of servers are willing to bet the farm on it, so they may only run a subset of their services with it (stateless, or test environments)
You can use Docker, just be ready to account for the instability. With proper detection and strategies to evict bad nodes you can build a reliable platform out of it. (though for stateful things you may end up with a real mess on your hands)
We've seen issues with containerd too, but at least so far it seems more stable.
Basically every piece of docker is being replaced. The runtime (containerd, crio), the tools to build containers (google has several), the server to host images (ecr, gcr, etc)... it's weird to call it docker when none of the components actually use docker anymore.
The gui and installer is nice for local development, so I guess it has that going for it.
+1 for containerd (+ cri-containerd, which is included by default now), it's served really wonderfully (meaning I didn't have to touch it much) as the base for k8s machines I've bootstrapped recently -- I don't even install Docker anymore (as in literally Docker the daemon is not installed or running, just runc + containerd).
The replacement of docker is a good thing IMO (though probably not a good thing for Docker the company) -- it's one of the main benefits of the kubernetes hypetrain, the development of C*I (Container <something> Interface) has been great for the ecosystem.
I personally find docker's CLI way more ergonomic however.
Wtf are you talking about? I've been running docker in production for 3 years (30+ microservice platform) with literally 0 of those problems. Caching works, hence why you need to clean your disk space. And multi-stage builds are awesome in many circumstances. It sounds like you don't really understand how to use docker all that well.
You've never had docker do anything weird in production?
It didn't happen often, but we definitely had problems. We're running kubernetes clusters with several hundred nodes.
Caching is very crude. When building, it's based on lines in the dockerfile, which means adding a dependency means redownloading everything. You also can't mount a directory for builds.
Multi stage builds are very limited in what they can do and often aren't powerful enough to implement efficient builds. You end up either having 20 minute builds or complex make files to work around the inefficient default workflow.
FWIW an intelligent caching mechanism should not require manual cleanup. Thankfully kubernetes does this for us in production... but the crazy 1 GB images you end up with for a moderately complex python app make it hard. (especially when people use :latest and then there are 12 versions of the app laying around)
On our CI server, I setup an apt-cache container, and have a 'base' image derived from bitnami/minideb that sets the container's apt proxy to apt-cache. There's also Squid for HTTP caching, Archiva for Java artifacts, and Devpi for Python ones. So, sure, changes will require re-downloads, but they're pretty fast since everything's local. For getting multi-stage builds to work nicely, I just use a Makefile, defining separate targets for 'builder' images and 'runtime ones', and COPY artifacts from the former into the latter to form the final application images.
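The builder/runtime split described above might look roughly like this (a sketch; the image names, Dockerfile names, and artifact path are all invented for illustration):

```make
APP     := myapp
BUILDER := $(APP)-builder

.PHONY: builder runtime

builder:
	docker build -f Dockerfile.build -t $(BUILDER) .

runtime: builder
	# Dockerfile.runtime is assumed to start FROM a slim base and use
	# "COPY --from=myapp-builder /out/myapp /myapp" to pull the
	# artifact out of the builder image built above.
	docker build -f Dockerfile.runtime -t $(APP) .
```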
> As a concept, containers never really lived up to their potential. "A Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application." As long as that application runs in linux.
I like the concept of containers as a lightweight alternative to virtualization. I even use containers (FreeBSD Jails, specifically) for some particular use-cases where I need stronger isolation than separate UIDs, but don't need a full-blown VM. I don't mind this use-case at all.
I really dislike the reality of containers which seem to be "Our deployment procedure and dependencies are so insane so there is no hope of packaging this as an RPM or DEB, so here's an entire userland for you".
As an example of papering over a crazy deployment/dependencies nightmare: I remember seeing a project[0] which used four Docker containers to apply some machine learning based automation to Hue lights. Two of the containers are basically infrastructure pieces (RabbitMQ and Cassandra), one container was dedicated to the machine learning piece and one acts to tie the other three together.
I have no idea why this project needs to run four separate operating system instances to do this job. If I were building it I'd do it as one application, so no need for RabbitMQ, and I'd use an ORM to let the user choose which database is most suited. I'd have an SQLite database as a reasonable default. Maybe I'd have an option to publish stuff to some kind of message queue so that it could be consumed by other systems.
Don't get me wrong, I love the idea behind this project and want to try it out, but it really feels like the author went container-crazy because they could and didn't stop to think about whether they should.
My fellow employed engineers have been moving towards Docker for over a year; slow progress because there's so many problems that Docker introduces whose solutions are convoluted at best.
That's on top of the security auditing nightmare if you ever decide to use stuff from Docker Hub.
> "As a concept, containers never really lived up to their potential. A Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application. As long as that application runs in linux."
This is how Erlang releases have worked for 20 years
I never really got how a shipping container on a whale makes for a good logo, unless it's just meant to be reflective of the apparent reliability:
> the "whale" your container is riding on may just fucking dive underwater for hours at a time, but hey containers are designed to be in the ocean.. Wait, they're not?
The level of toxicity in the github issue's conversation is so astounding that I must say something. People on the internet are people. I hope that we don't talk to people in "real life" the same as we do when we make toxic comments online. Complain, write letters/emails to Docker, make your opinion heard, etc. But remember that you are dealing with a fellow human-being, who has their own life and own emotions. Treat them with the same respect you would like to be shown when people are unhappy with you.
The comments in that github issue reflect very poorly on us as a professional community to the point that I'm embarrassed.
I agree with you. BUT, even from a pure software hosted on github point of view, he didn't respond appropriately - The person who opened the issue had a legitimate concern - WHY.
The maintainer not only skipped addressing the core issue, which was that you need an auth-wall to download the setup files, he also closed the issue with a very dishonest answer without addressing the WHY. He even said the docs don't need to be changed. WHAT? It's a gray area, but to me personally it's an unethical move to not mention such a drastic change in the docs. Unless they hoped this would either pass unnoticed or be accepted. But in this case it didn't, and the community held them accountable.
If you notice, most of the comments aren't personally attacking him, but rather suggesting alternatives or just reiterating the core issue once again.
But what I DO agree with you on is the issue page slowly turning into a reddit thread, which I fear could lead to personal attacks, etc.
But, do you know what would have stopped all this? Just a simple apology/honest discussion about this change and perhaps, actually talking about a solution. That's all.
One person called the company "jerks" without singling anyone out. Another posted a rage meme to express their displeasure. The overwhelming majority of the hundreds of comments were even more restrained.
Perhaps I'm missing some of the more "choice" comments, but as a whole I have far more of a problem with the official corporate-speak than the reactions to it.
Two wrongs don't make a right. "Jerks", as a comment, is purely subtractive. We're all adults. There are plenty of critiques to their action and their defense. Let's use our words to make reasoned arguments rather than just lashing back.
"Jerk" is a pretty mild word to express dissatisfaction.
I would maybe ascribe it to a mismatch in expected decorum, hardly call it toxic. Like, do you expect a church or an informal conversation among peers? Or if you will, a cathedral or a bazaar? :)
And it was just one comment .. "We're all adults" goes both ways, let's not fall all over ourselves because of a little mildly bad word. Personally I don't think that's very constructive either, in the sense that it lights the fire under something that distracts very much from the actual topic.
They called it an astounding level of toxicity. Whatever your thoughts on the expected level of decorum, that seems a tad hyperbolic to call out over usage of the word "jerk". Such hyperbole is definitely not helping, and in my personal opinion, actually more "subtractive" than generically calling a group of people that did something you don't like "jerks". For one thing, it's not astonishing. It just isn't.
> “Jerk” is a pretty mild word to express dissatisfaction.
I certainly wouldn’t defend it. It’s not toxic because “jerk” is a bad or particularly strong word by itself, it’s toxic because calling someone a jerk is a personal attack, and it’s a judgement call that is purely mean spirited and doesn’t address any concerns. It doesn’t explain the frustration, it’s escalating things in a negative way, and it’s an insult. That is not a socially acceptable way to express dissatisfaction.
It's not a personal attack if it is directed at a company or the actions of a company rather than at a person.
Its purpose is not to explain the frustration but to signal the intensity while clarifying the attribution: the person making the comment believes that Docker acted knowingly against their interest and does not buy the excuse that Docker put forward. I challenge you to express the same concepts with the same clarity in five times as many words.
I contend that name-calling is not an escalation over doublespeak.
Finally, whether or not something is socially acceptable is up to society, not to you.
> It's not a personal attack if it is directed at a company or the actions of a company rather than at a person.
Splitting hairs perhaps? “Jerks” is intended to be an insult no matter how many people you’re talking to or about.
> Its purpose is not to explain the frustration but to signal the intensity
Sure, that’s a plausible assumption, but it doesn’t help the conversation, nor make it okay or socially acceptable to hurl insults. Are you certain that was the purpose? Have you clarified that with the author of the comment?
Perhaps a better way to signal intensity is to explain what material impact the decision has on their workflow and daily lives. What is the cost in terms of time or money, or something else?
> whether or not something is socially acceptable is up to society, not to you.
Yeah, that’s correct. Did I claim it was up to me? I stated a fact, not my personal opinion. Throwing insults around is not socially acceptable, according to society, not me.
Exactly, we are all adults, and this kind of non-answer marketing-speak - "we've made this change to make sure we can improve the Docker for Mac and Windows experience for users moving forward" - is dishonest. I, as an adult, would definitely call someone who gives me this kind of sideways answer a jerk.
Pulling this kind of crap on customers also reflects poorly on us as a professional community. Which other profession steals so much personal data from others through manipulation and trickery?
So all in all it looks like we're all part of a low-standards professional community that should be perpetually embarrassed of itself.
As a professional, you should realize that you are not a customer of Docker, you are a user that provides them no value.
They have little revenue and an overvaluation to justify, while being attacked from all sides by competitors (Google, AWS, RedHat, Pivotal). Docker is in a bad position and they are desperate.
I took a quick look at the thread on github and did not see that much toxicity. Maybe I missed most of it, but IMO (which seems to be shared by others) Docker is seriously wrong here. Saying that login wall is a change to "improve user experience ... moving forward" is wrong. Whoever is saying this knows this is not true. Calling such an obvious case of BS when we see it is not treating people with disrespect.
On "write letters/emails to Docker" instead, no thank you. If I suspect the company of some dishonesty (as I now do based on the thread at github), public shaming may be more effective and do most good. This discussion needs to be public.
For the _most_ part I don't see what was so toxic about it.
But compare it with the corporate response. Doing this kind of thing without warning to loyal customers and then being dismissive about it is also very toxic.
I disagree with the decision, and I am a Docker user, but I'm not sure I agree with calling them "customers."
For the record, I skimmed the discussion too, and I don't understand "astounding level of toxicity" either.
What do you call a person who dismisses your valid concern and declines to respond to perfectly plain, honestly formulated questions in good faith? That's a jerk.
All the comments were aimed at the company not the people who work there. Corporations aren't people no matter what the Supreme Court says and they don't have feelings.
Lying, especially for company profits or personal gains, is easily one of the most toxic and despicable things to do and people who do it so casually don't deserve a lot of respect.
So yeah, my real life response to that would be very similar.
It appears that people on GitHub hate bullshit. And why wouldn't they? Most of them are engineering types. Docker added bullshit to the action chain of acquiring their only product. Then to really seal the deal, they lied about the reasoning with some marketing fluff bullshit. On GitHub of all places. Things that engineers appreciate: good data, transparent systems, no bullshit. Docker is 0 for 3.
They have badly misread their users, and they are paying for it. There's not much else to the story.
People react when they face an issue that affects them. Tone policing like this says nothing but that you value the appearance of civility more than you do the harm people are complaining about. This is a fairly trivial example. But this is the same mentality that goes into telling NFL players not to kneel, so it's worth calling out.
When you focus on style instead of substance you draw attention away from the core of an issue. It's an effective tactic when it's what you want to do, but anyone who wants to use it should stop and think about why they want to obscure the point instead of addressing it directly. If it's because you don't have a valid objection, perhaps you don't have a stake in the discussion and you should think twice about entering it. Digressions that address the civility of the participants always serve to defend the status quo. Is it a status quo worth defending?
I laughed when I got to the "guess I'll die" image and the commenter quoting Big Shaq though. You know something's controversial when the github issues thread devolves into image macros and memes.
Wow. The reaction to your completely reasonable comment shows just how susceptible the "smartest people in the world" are to joining an old fashioned pile-on, or at least justifying it.
> Treat them with the same respect you would like to be shown when people are unhappy with you.
Why do you think everyone deserves respect?
I prefer the brutal honesty of the internet rather than the fake civility you advocate for. It cuts through the static and gets to the core issue. My experience has been that people dislike harsh comments because many times it contains the truth and they don't want to be confronted by the truth.
Also, instead of crying that the world is harsh, why not toughen up? When did it become fashionable to be so soft and weakminded? Especially over something so silly as github comments?
Personally, I feel the people who are turning the internet into a toxic mess are people like you who attack speech. If you don't like harsh comments, don't read them. What's so hard about that?
Besides, everyone has different levels on what they consider toxic. I and nobody I know considers "jerks" a toxic word. Why should everyone lower themselves to your definition of toxic?
With all the complaining and name-calling, I'm quite surprised that nobody has proposed forking the project and maintaining a tracking-free version. Certainly, it would appear as if there's a demand for such a project.
What project? I'm assuming most of this commentary is regarding the Docker for Mac and Docker for Windows products which are an assembly of a significant number of open source projects: OCI's runc, CNCF containerd, CNCF Kubernetes, CNI provider, the docker/cli project as well as docker compose and kubectl binaries built for the host OS, the Docker CE engine built and packaged in a VM run by LinuxKit, xhyve, DataKit, VPNKit (all open source projects). I guess you could fork all those, but since many of them are not even controlled by Docker I'm not sure what the purpose would be.
What I hope is relevant from the long list of projects that I just mentioned is that a company has spent a significant # of engineering years assembling, packaging, and supporting that combination in a way that makes it dead simple to do container-based development on non-Linux systems; mostly focused on developer laptops. No one else has that capability. It is a wide open field if anyone else wants to spend that same effort and time assembling a popular and free product that makes all that work together seamlessly on a Mac or Windows system.
I am not saying I don't have an opinion on whether it's good or bad to make people sign in to download this free product. That is the prerogative of the company that controls the product, and market forces will determine whether people will put up with such additions/changes. I of course would love to see direct downloads not impeded by such a change, but that's just my opinion. The silliness of HN is revealed when people start listing a bunch of other totally unrelated projects (cri-o, rkt, containerd) which don't provide any of the functionality of Docker for Mac or Docker for Windows. I say that as a huge proponent and maintainer of containerd. Again, if there is any other offering that makes that possible out of the box for Mac and Windows-based developers, then people are free to get behind that. To me, the only alternative is to throw a VM together with Docker, Kubernetes, and whatever else you want and hack together the scripting and updating to make it work for you, and neither Docker nor anyone else is preventing or impeding anyone from doing just that.
The pressure to fork Docker went away over the last year with the development of podman[1] and buildah[2].
There's some interest in the idea of wrapping podman/buildah into something that can be consumed by Windows and Mac users in a similar fashion to how Docker is right now. But it'll take some time to pull that off.
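For anyone curious: podman's CLI deliberately mirrors docker's, so on Linux it is close to a drop-in replacement today. An illustrative transcript (assumes podman and buildah are installed; the image tag is made up):

```shell
# Same verbs as docker, but daemonless (and rootless-capable).
podman pull docker.io/library/alpine
podman run --rm docker.io/library/alpine echo hello

# buildah builds an image from an ordinary Dockerfile, no daemon needed.
buildah bud -t example/myimage .

# Many people simply alias it and carry on.
alias docker=podman
```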
How do you generate profit if you built something wonderful but the competition already built everything around it that you wanted to sell later (Kubernetes & Co)? User data.
Well, yes and no. They've said why they've done this:
> we've made this change to make sure we can improve the Docker for Mac and Windows experience for users moving forward.
but at the same time, that explanation is clearly bollocks. Something I realised and find very helpful to remember is this:
> If someone gives a reason for something, and the reason is clearly bullshit, then it means the person giving the reason has a hidden agenda which is likely to be negative for the explainee. - "Will's law of corporate bullshit"
Here's how it works. People do stuff for a reason, for instance I ate lunch because I was hungry. I have opened the windows because it is hot and I like the breeze.
It is usually easy to match the action with the reason given, there is no suspicion here, there is no cognitive dissonance.
So let's take the example in question, Docker moving downloads of their software behind a login. Without attempting to guess at their motivations it seems clear that this is a very inconvenient thing to do for end users. As someone has pointed out, the steps to download the software are nearly doubled, and there are fears of getting corporate spam.
So OK, that's the action, what's the reason given?
> we've made this change to make sure we can improve the Docker for Mac and Windows experience for users moving forward.
Well, that's clearly bullshit, right? It isn't possible to match the reason given with the action. It's not going to allow for a better experience for end users.
Let's apply the logic. Company does something -> Reason given is bullshit -> there is likely a hidden agenda that is bad for the explainee.
So we have arrived at a situation where we are pretty sure that the hidden reason for Docker to make this change is negative. We don't know exactly what yet (we can speculate), but we are pretty sure it's negative.
So you are right, "We don't know why Docker did this", but we can be fairly certain it's not going to be for the benefit of us end users.
> > we've made this change to make sure we can improve the Docker for Mac and Windows experience for users moving forward.
> but at the same time, that explanation is clearly bollocks
Whenever a corporation/someone explains their decision is to "improve the experience for our users" as the major reason, without explaining how exactly the decision relates to an improved experience, it's usually disingenuous[0].
I'm actually curious if there are counterexamples to this rule.
It really is annoying to have to log in to download the binaries, and being dishonest with the user base is not a good way to build great company-customer relations.
But all this hate and frustration being channeled at the Docker team over this issue looks like people bandwagoning: somebody raised an issue in a respectful manner, and others see it as an invitation to just shit on people, like "oh they fucked up, let's give 'em hell until the people responsible curse the day they were born".
It is really poor behaviour. I don't believe any of these haters would talk to people like this if confronted with them face to face.
It seems like Docker is in a really awkward place as a company. Their strategy was clearly to get everyone on the container hype train, then monetize by selling the production orchestration / runtime. But Kubernetes happened, and GCP / AWS have rolled out competition in all the other supporting systems needed: container registries, build pipelines, etc. What is the actual pitch for Docker EE now? Their marketing is pushing hard on 'security', but otherwise it seems like they are just selling a pretty UI for K8s. I don't see that being enough to support a company that has taken 250M in funding.
The k8s community is moving full steam ahead toward replacing docker containers with OCI-compliant cri-o/podman. Docker had a good run with a great idea, but it doesn’t make sense to leave all of this vendor-agnostic tooling reliant on Docker (the company).
I don't think that all the k8s distros are looking at cri-o, I think that many are considering cri-containerd instead, which kind of makes sense as they're all using containerd, so whether you need a whole separate program to manage that interface is debatable.
I wonder whether Google greenlighted significant investment into Kubernetes because they saw Docker Inc as a threat to Google Cloud and wanted to kill it early.
It's not precisely the same playbook as Embrace, Extend, Extinguish and it's been played out significantly less evilly (is that a word?) and more openly, but, well, they embraced Docker, extended it with K8s and are now well on the path towards "extinguish".
Whilst I have no inside knowledge that's not the way I think this has developed.
Docker got in as a container runtime, without any orchestration capabilities; then K8s came along to do orchestration but didn't provide its own container runtime, so it used Docker.
After that there was some tension as Docker wanted to move "up the stack" to provide features like orchestration (with Docker Swarm), but the k8s community saw that as unnecessary; they wanted a simple container runtime to sit under their orchestration layer.
Now we have options like cri-o or cri-containerd which are likely in the medium term to take over from Docker as the container runtime underneath Kubernetes installs. I'd expect that Docker will see more use in dev/test environments where full scale clusters are not required.
nobody who wants to make money likes being at the bottom of the stack (as it's where commoditization happens). Moving up means you have a bit more lock-in (via specific business requirements), and/or provide services that can be bought by stakeholders (vs a tech choice by the "lowly" engineers).
This is why i want my infrastructure to be owned by a non-profit. Just maintain the commodity infrastructure, no fancy, shmancy value-adds.
It's way more likely that Google was targeting Amazon with Kubernetes. AWS is the leading cloud provider, and commoditizing cloud computing with containers would level the playing field and maybe even tilt it toward the creator of the tool.
It seems to have worked, at least a bit, since Kubernetes has gained adoption. AWS is still ahead, though.
In any case, "to make sure we can improve the Docker for Mac and Windows experience for users moving forward" is yet another example of the official-sounding-yet-bullshit, vague and meandering language that seems to permeate everything these days. (Is there a specific term for it? "Business-speak" doesn't have enough of a negative connotation for the "We did it this way and you will like it. If you don't, fuck off." that they really want to say.)
FWIW, as far as I know, Docker for Windows and Docker for Mac really are closed source freeware. My honest opinion is we should move away from it.
As for what to move to... That's definitely an open question. Docker for Windows is definitely one of the best developer experiences I've had on Windows because they put actual engineering work into it. All the same, I still found it buggier and less supported than Docker for Mac. At least on Windows, I wish we could combine the cool LXSS work with a development-only Docker implementation.
Ultimately, Docker should not be synonymous with containers anyway. Future versions of Kubernetes will not use Docker and instead run their own containers on top of libcontainer by default, as I understand it. I also feel rkt has a much nicer design than Docker, doing away with the daemon aspect of it. Hoping to see more development in the future.
> one of the best developer experiences I've had on Windows because they put actual engineering work into it
Like, this is exactly what they're talking about, right? This is the user experience that they've improved because VCs have been willing to give them money because they show increases in MAUs.
Am I crazy here? You seem to be complaining about exactly the thing that you like
The app phoning home to do analytics is fine with me. I just want the download links so I don't have to sign in to download the app. I will almost definitely sign into the app so I can use the features where signing in is an actual value add. (Also, if they ever removed the direct links, it would really suck for automated deployments.)
But really, I don't care that strongly. I do, however, wish to use, if possible, an open source solution. Why? Because I had a problem with Docker for Windows and I couldn't debug it. As I understand it, this is actually pretty similar to the reasoning behind Linux being developed.
I want that too, but there's unfortunately nobody likely to do that work for free.
(I find it annoying that you have to hunt for the download link too, tbf. I just figure that, OK, that's how they make their $ to build their mostly excellent software)
I think the problem is who controls the container world. Companies like Facebook and Google do not build their businesses on developer tools, but they collectively benefit with open source. Docker may very well not benefit much from open source because their business is containers and sharing the secret sauce does not make them more money.
Er, any links about the Kubernetes changes? They spent multiple releases adopting and moving to CRI, meaning any CRI implementation can be used in k8s where docker(shim) was previously used. This includes containerd, crio, kata, pouch, gvisor, and at least three more whose names I forget.
Hardcoding another container runtime would be disappointing to see and highly surprising.
Also, since no one anywhere in this thread is talking about it... You can get Docker's container execution engine without any of the "docker". Docker has spent significant time and presumably money splitting out `containerd`, and I'm surprised it's not been mentioned in this thread so far. It's compatible with the Kubernetes CRI, etc, etc.
Sorry, what I meant was that the default CRI implementation would be based on containerd. This is as I understood it as an outsider and may not be true.
If you want lightweight virtual machine functionality I would recommend LXD/LXC.
If you want app virtualization and packaging, I have been pretty intrigued by what I have found out experimenting with `snaps`.
The biggest issue for me that I have hit with LXD/LXC is that host-to-container mount sharing is not as easy...which means I had to do some workflow alteration for moving from `docker` containers to `lxd` ones for existing projects...but otherwise I have been really happy with LXD.
Edit: And just to be clear, host-to-container mount sharing is possible, I just had to work at it and slightly alter my workflow to get the best solution.
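For anyone hitting the same wall: LXD exposes host-to-container mounts as "disk devices". A rough sketch of what worked for me (the container name and paths here are made up, and for unprivileged containers you typically also need a uid/gid mapping before files are writable from inside):

```
# Attach a host directory into a container named "dev" (illustrative names/paths):
lxc config device add dev projects disk source=/home/me/projects path=/mnt/projects

# For unprivileged containers, map your host uid/gid into the container
# so the mounted files aren't owned by "nobody" inside it:
lxc config set dev raw.idmap "both 1000 1000"
lxc restart dev
```

The idmap step is the part that usually needs the "workflow alteration" mentioned above; check the LXD documentation for your version, as the exact knobs have changed over releases.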
Well, what I really want is to develop, test and utilize containers as part of my development process. For example, doing work in Docker containers allows reproducible, hermetic builds (importantly, it allows this, though it certainly doesn't guarantee or force it.)
Many containers that I develop or use will also be used in a Kubernetes cluster or at least as part of CD, so most importantly I want a local setup that mirrors to a degree what the remote setups look like.
Snaps provide something a bit different, more geared toward end users. I have no huge opinion regarding them, the isolation aspect is novel but unfortunately a bit of a PITA as of today.
LXD is more like virtual machines than containers, though. Also, in context of this thread, it doesn't solve much that Docker doesn't; after all, Docker for Linux is in fact fully open source, and even if it wasn't, Linux has no shortage of mature container implementations (like rkt.) For a developer on Mac or Windows, it'd be nice to have a universal dev tool, even if only to handle Linux containers.
> Linux has no shortage of mature container implementations (like rkt.)
As someone who is a maintainer of the core of what actually runs containers under Docker (runc), LXC (and by extension LXD) is _the_ most mature container implementation on Linux. I've worked with the LXC folks and I am constantly impressed how on-top-of-everything they are.
> As for what to move to... That's definitely an open question.
Is it really? I find that Linux is an excellent development platform, and also an excellent home-computer OS. I've been using it exclusively in a professional software-development context for over a decade now (prior to that my employer made me use Windows in addition to my Linux box), and I couldn't be happier.
I highly recommend https://github.com/codekitchen/dinghy for MacOS. It predates docker for Mac but offers most of the same features while mostly being "just docker".
The issue in the post talks about "Docker for Mac" and "Docker for Windows" which is basically what Docker for Desktop is.
The links you provided are just for the docker binary (which is, mac/windows they're just the docker client –not sure about what the linux binary contains). They're not the same thing.
The Docker binary for Linux appears to also be behind a login wall, at least if you follow the most obvious navigation links.
(Even then, it's not trivial to find, because the standard Linux version of Docker isn't mentioned at all in the "getting started" section of the site. You have to know that they apparently call it "Docker Engine" now.)
It's not a good move, but I don't understand the vitriol expressed on Github, calling the developers jerks etc.
Our company has never paid Docker for anything, but we've benefited greatly from having Docker images be the output of our backend build jobs. I never have to dig into classpath errors from conflicting JARs any more, or deal with out of date JDKs living on QA and production machines. I can deploy a dependent service that I know nothing about without digging through code and documentation. Yes it has its issues but the positives far outweigh the negatives.
I understand that it's a frustrating move and the wrong path to go down for monetization, but can we also recognize that it's a company that has done a lot for the community and is struggling to secure its future, instead of calling them names?
> I don't understand the vitriol expressed on Github
Some people see a change making the UX worse promoted as a change that improves the UX, and feel it's an honest mistake that will be corrected by straightforward, constructive feedback.
Other people see the same thing and think the dishonesty is intentional. Docker would hardly be the first company to do this [1] - but some people may have seen Docker as 'not like those other companies' and be disappointed to discover they are.
Your summary doesn’t line up with the linked-to GitHub issue. “Straightforward, constructive” feedback was provided by the issue creator and was seemingly ignored.
Cheers to that. We recently had to support a customer stuck in a Linux distro from a few years ago... this was an older guy who liked how things were made back in the day and who hadn't upgraded his system because nobody uses good ol' init scripts anymore in Linux...
We had to spend the best part of a week creating by hand a binary of our product that could run in his distro, including versions of system libraries with critical security patches.
Docker the company and docker the tooling may have issues, but those Docker images are a life-saver at times.
Can anyone post an experience using rkt [0]. I'd love an alternative to using Docker for containers, though I'm not sure how well a community has evolved around rkt.
Docker for Desktop (which is what the issue text describes) is not the same thing as rkt so it would not be an apples to apples comparison. You can still install docker with "apt-get" on Linux. https://docs.docker.com/install/linux/docker-ce/ubuntu/ Docker for Desktop is the GUI experience that runs the docker engine in a VM on your machine with some port forwarding magic etc. rkt equivalent would be containerd probably.
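For reference, the apt-based route looks roughly like this (a sketch of the steps from the linked Docker docs; verify against the current page, since the repo setup occasionally changes):

```
# Add Docker's apt repository and install the engine on Ubuntu:
sudo apt-get update
sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository \
  "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install docker-ce
```

No login required anywhere in that flow, which is part of why Linux users are less affected by this change.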
Yes when it needs tweaking on every update and repeatedly fails. It's obnoxious. It was immature technology in my personal experience. Maybe others have had a better experience but I was turned off by numerous issues.
Please check the Open Containers Initiative [0]. That lists a number of runtimes. There are now entire tool chains developed independent of anything from Docker Inc.
Why do you believe that churn is a good thing, especially for a thin wrapper around Linux namespaces? There should not be much this is actually doing, so lots of commits would be worrying.
The GitHub issue and a sibling comment are folks asking this exact question. Why? Lingering bugs? Waiting for features? That the question is even open is the concern.
And again, compare that to my existing tooling, which doesn't have that question hanging over it and has momentum.
Evaluated against docker (a recent move for us) but didn't choose it. We also looked at "roll your own" type solutions, but they required expertise our team didn't have.
Last commit 19 days ago. Code commits merged into master seem to be around every 3 months for the last couple years.
That seems healthy to me.
As for documentation, it seems to have monthly or bimonthly updates.
The question in your issue is that there is a lack of activity... It came about three weeks after a commit, and two weeks before another.
The issue just seems to be that rkt prefers their mailing list to GitHub issues, though they do pay attention to both. And I don't see a problem with that.
I'm surprised to see rkt mentioned so much. This thread has a very 2017 vibe and doesn't seem to account for m(any) of the changes in Kubernetes (CRI), docker/containerd, anything with regard to what RH is doing as they bet more and more on this space. Anyway...
Red Hat's crio is a much more obvious contender in this space (my issues with RH "marketing" of crio vs containerd aside). A slightly different scope, but even Kata's future seems more promising than rkt's.
Or, just use containerd if you just want a clearly OSS engine with the weight of docker behind it. ('containerd' is Docker's container engine, effectively)
I can't believe people are still taking and using technologies from 2017 /s
To be fair, this thread is also a testament to the sad state of affairs in the container industry. How can you even choose a technology confidently? It seems even more madness than the JS world.
Sometimes I feel like a dinosaur with my Vagrant boxes and my dedicated server. With threads like this I feel great.
Rkt didn't get much attention in 2017 at all. I only said that because 'rkt' was often portrayed as the anti-docker years ago, and it's really no longer a good way to look at the playing field.
It wasn't about making fun of rkt or anything. Also, I don't know, CRI has been in the works for ages at this point, as well as crio. It just seems like people who aren't paying attention in this space remember HN threads from 18-24 months ago where people thought rkt was going to be the savior from docker. It's just pretty out of date.
The comment about vagrant and a dedicated server embody an attitude that is just depressing to me. Yes, if you stick with old, functional technologies, you won't need to pay attention and/or learn anything new. And that's fine, if you're happy with it. There's a reason the rest of us moved on. Learning about CRI and knowing that rkt is dead and crio is the alternative to containerd is not that crazy of knowledge to have, or maybe I'm underestimating how much I know in the k8s space.
The slight concern I would have with rkt, alongside some other of CoreOS' projects is that their future Post RedHat acquisition isn't super clear.
In the case of rkt, AFAIK Redhat's container runtime (Cri-o) uses containerd and runc under the covers, and I've not seen any indication that they'll change that, so not too sure where rkt fits in that landscape.
If your goal is to just use containers, you don't need Docker. You can use containerd, lxc, nspawn, or use clone directly. Docker is an extremely bloated piece of software, and you can probably get by with just a little knowledge of Linux APIs and a solid understanding of your project's requirements.
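To make the "just use the Linux APIs" point concrete, here's a rough sketch using util-linux's `unshare(1)` rather than raw `clone(2)`. Flag availability varies by util-linux version, and whether the unprivileged variant works depends on the host having user namespaces enabled, so treat this as illustrative only:

```
# Privileged: new PID, mount, and UTS namespaces over a prepared root filesystem
# (the /srv/rootfs path is hypothetical -- e.g. a debootstrap output):
sudo unshare --fork --pid --mount-proc --uts chroot /srv/rootfs /bin/sh

# Unprivileged, via user namespaces (if enabled on the host):
unshare --map-root-user --uts sh -c 'hostname demo; hostname'
```

What Docker adds on top of this is image management, layered filesystems, networking, and an API daemon, which is exactly the "bloat" the parent is weighing against rolling your own.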
If you're frustrated with Docker's slow pace of development and are looking for a good alternative, have a look at Singularity containers. They're interoperable with Docker Hub and OCI-compatible images, and offer a much better experience for HPC, machine learning and big data environments. Their GPU support is top notch. They don't run on as many platforms as Docker, but are aiming to grow into cross-platform support. Sylabs is a small team in Austin, TX that does a lot of the development work, and they are quickly gaining adoption in academia. It's available on Cedar and Graham in Compute Canada (a large supercomputing cluster that serves most of Canada). We use Singularity on all our lab machines as it is much easier for IT admins to manage, and doesn't require sudo access.
It's important to mention that Singularity does not cover the same functional scope as Docker, and therefore isn't a good alternative for all use cases. It's much better for some, close to useless for others.
To make it short:
- huge pro: it relies on user rights and doesn't require root, so it's easier to trust for an admin, and to use for a non-admin;
- huge con: it provides no isolation (it's not that it does a bad job at doing it: it doesn't even try, by design).
There are probably many other differences, but that's what has struck me the most.
Don't get me wrong: I've only had positive feedback about it; it just isn't a one-to-one replacement for Docker.
Is there any overhead due to the virtualization layer, relative to bare metal? Scientific computing usually has much more stringent requirements for CPU/GPU performance than the average web startup. If the performance is good enough, containerization would be a great match for the domain, as you’re often shipping code to government-funded clusters you have little control over.
Docker is too heavy for golang, which is all I use scaled horizontally anymore. I can easily create a binary for any operating system. With dep (and even more so vgo), reproducible builds just really aren't a problem in golang. And when I've got a virtually unlimited number of green threads I can spin up with goroutines, why do I need to containerize to scale horizontally?
Since Go programs are statically linked, is even something like Alpine required? Isn't it possible to just build the Go binary from the "scratch" image? Maybe throw glibc in there.
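It can, yes, as long as cgo is disabled so the binary truly has no libc dependency. A hypothetical multi-stage Dockerfile (module path and binary name are made up; this needs Docker 17.05+ for multi-stage builds):

```dockerfile
# Stage 1: compile a fully static binary.
FROM golang:1.10 AS build
WORKDIR /go/src/example.com/app
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o /app .

# Stage 2: empty base image, no distro, no glibc.
FROM scratch
# CA certificates are often the one distro file you still want,
# if the app makes outbound TLS connections:
COPY --from=build /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

With `CGO_ENABLED=0` there is no need for glibc at all; the usual gotchas with `scratch` are missing CA certs, timezone data, and the absence of a shell for debugging.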
Annoying but not the end of the world IMHO. In any case, I have a dockerhub account so I can publish containers there; so I might as well use it. Similarly I have a Github account and am actually a paying customer as well. The bottom line is I have accounts left right and center for things that I depend on professionally. Some of these I even pay for but most of them I don't pay a penny.
This is a closed source product distributed for free. They are well in their rights to charge money for it even. Given that and given how important this product is for my day to day routine (I run lots of docker stuff all the time), and given that docker has been contributing a lot of OSS code in this space, I think this is a relatively small and entirely fair price to pay.
I don't agree with the sense of entitlement in this thread. Docker providing a free community edition of a product they've built that allows running docker on top of Mac/Windows is not charity but a means for them to up-sell their commercial solutions. That's the only reason it exists. I'm grateful that it exists and I hope that they can make this work so I can continue to depend on this.
An alternative is getting a linux laptop and running pure open source versions of whatever containerization you require.
I also can't delete my account myself after having created it. I believe the best way to operate is to make something as easy to delete as it was to create...
"Customers may view, update or change their registration information by logging in to their accounts at www.docker.com. Requests to access, change, or delete your information will be handled within 30 days."
"Questions regarding this Privacy Policy or the information practices of the Website should be directed to privacy@docker.com or by mailing Docker Privacy, 144 Townsend Street, San Francisco, CA 94107."
Particularly in this specific line of business. Would you use a management platform that pretends to delete containers and associated files but doesn't?
Docker is my perfect example of a OSS being poisoned by excessive money.
OSS needs money to survive but too much of it (a too heavy burden to turn a profit) can poison it leading to significant issues being ignored because either the paying customers are not affected or because the issue is an integral part of the business model.
I wish someone were able to fork docker and strip off all of the extra fluff (mainly service stuff), but I don't think anyone without deep pockets can support such massively complex software.
Risking a derail: how is Chocolatey nowadays? I really liked the idea but stopped using it because most of the packages were way behind the current versions.
Chocolatey is in the same space as the AUR (Arch User Repositories) in my opinion. It can be a hit or miss, but usually a hit. The support is fantastic when there's a bunch of users consuming the project as many users will contribute fixes to the package (e.g. packages for devs like node, cURL, git, etc). On the other hand, less popular packages will occasionally suffer from bitrot, though that all depends on the maintainer and whether or not the package has been configured to update automatically upon new releases.
Anything popular is up to date. But many windows programs have their own update methods anyway. So in some cases, you install the older version, but then update when asked.
Ubuntu for Raspberry Pi requires an Ubuntu login to continue installing it, after formatting and setting the network up. It won't let you boot into a working setup until you create an account on their website. Kind of hard if you don't have access to any other device.
I work for Canonical on snapd, so can provide some background here.
You're probably describing Ubuntu Core instead of classic Ubuntu. The UX there is oriented for devices, and it was cooked to avoid default passwords in an environment in which the device often will have no display. So once you boot, the device is in a running state, and the brand (manufacturer) that cooked the image has the choice of allowing individuals to login or not. In addition to a store account, the brand can also offer a "system-user" assertion, that is a signed document that you can present devices to get a system user in. That assertion may detail remote login, SSH keys, and also a hashed password for independent logins. That only works once on the device, though, for obvious reasons.
For generic Ubuntu Core devices the "brand" is Canonical, and for those devices you can get an assertion signed and with it log into any number of devices you want. That procedure may be done over USB storage, for example. Just insert a USB key into the device and your user credentials will be setup, even if it's completely offline. Again, that only works once on the device. If you lose the keys the device will need to be factory-reset.
I really want to like this, and conceptually the idea of it being a simple command is appealing. That said, I feel like writing a shell script, and especially their example, is really off-putting. For one, the flexibility it offers means people are going to use that flexibility to do weird stuff; second, a shell script far too easily captures dependencies on its environment, again making things non-reproducible. If you allow people to put local paths, dependencies on local weirdnesses and so on in their build scripts, they will; I've seen it happen time and again.
> Like any other business, Open Source needs funding.
That's a valid comment... for a different discussion. The actual issue here is that Docker's business model is falling apart, with alternative container runtimes (cri-o, rkt, etc.) and alternative orchestrators (mostly k8s) eating their lunch. And all of those are open source too.
I wonder how requiring login to download helps a company in general? Docker says "improve experience for users" but is collecting download counts for each accounts important?
All they need to do is take every e-mail address that's not @ gmail/hotmail/outlook.com/other free providers, zip them through something like the Clearbit API to flesh out info about the person, and then forward promising prospects on to the sales team for some personal attention.
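The first step of that pipeline is trivial to sketch. Here's a toy version of the free-mail filter (the Clearbit enrichment itself would be an API call, omitted here; the file names and addresses are made up):

```shell
# Keep only corporate-looking addresses; drop well-known free-mail domains.
cat > signups.txt <<'EOF'
alice@bigcorp.com
bob@gmail.com
carol@startup.io
dave@hotmail.com
EOF
grep -viE '@(gmail|hotmail|outlook|yahoo|aol)\.' signups.txt > prospects.txt
cat prospects.txt
```

Anyone with a corporate email domain lands in `prospects.txt`, ready for enrichment and a sales follow-up, which is exactly why a login wall in front of a download is worth real money to a company like this.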
I don't even mind companies doing this, as long as they're honest about why they're asking me to register.
Hi, I head up Docker Developer Relations. There are plenty of ways to get Docker CE without logging into Docker Store. See my comment on GitHub for more details.
As someone pointed out in the Github issue, Docker Toolbox is legacy. I'm not sure how I feel about recommending outdated software to Mac and Windows users. It's also slower and not the default way to get Docker.
It's in Docker's interest to capture as much value as they can from their products, their calculus is changing. It's their decision to make, however passing it off as in the best interest of users is a bit of a whopper and not the best way to communicate it. I don't think entitlement to other people's work is the best response though.
The outrage is 90% from the fact that they call it "improve the user experience going forward".
If they had just said "we need some leads/emails to monetize this thing a little more, hope you understand. Use a throwaway email or the well-hidden direct download if this change makes you uncomfortable", nearly no one would have complained.
While some people might be reacting based on simple entitlement, I suspect that for the majority this triggers a strong reaction because it is a warning sign. If Docker were a big company, it'd just be part of the normal drift toward customer abuse. Because Docker is still a relatively small company, it looks like a desperate attempt to stay afloat through increased monetization. The unintended side effect is that it makes me less inclined to purchase anything because I don't want to bet on a sinking ship.
Docker is just a thin wrapper around Linux (API + namespaces). Dockerfiles frequently install a whole userspace (such as Alpine's or Debian's) as a base, to then add proprietary bits. Basically, Docker can be seen as a GPL circumvention device. People aren't entitled so much as they're frustrated as they realize they've given too much control to an inessential piece of commercial software.
Many distros (I’d guess almost all of them) bundle proprietary software. The Nvidia graphics drivers are the first example I can think of. (This could be out of date since I haven’t dealt with it in several years)
Kubernetes will eventually just kill docker. It's the best way to run docker in the cloud right now and keeps getting better. Plus it abstracts docker with pods. Better support for OCI/Rkt and docker will just die.
This is probably the straw that broke the camel's back for Docker. Not that anyone expected it to stick around for that long anyway, they've had a terrible history of failure and poor design choices.
Well, I suppose it is time to realise that Docker has perhaps become a bit too dominant and that the ecosystems of which it is a part, might benefit from there being more alternatives.
You'd be pleased to use docker-registry, then! We use Artifactory, which runs something docker-registry compatible (maybe it's running docker-registry itself?) for our artifacts.
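If you don't have Artifactory handy, the open-source registry itself is a one-liner to run locally. A sketch (the image name `myimage` is a placeholder, and this assumes a running Docker daemon; a bare localhost registry like this has no auth or TLS, so it's for dev use only):

```
# Run the open-source registry, then tag and push an image into it:
docker run -d -p 5000:5000 --name registry registry:2
docker tag myimage localhost:5000/myimage
docker push localhost:5000/myimage
```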
guix environment --container [ list of packages ] -- command
I haven't looked back since they added that in the last version. Configurable network namespace sharing or isolation, configurable fs namespace sharing or isolation, configurable environment variables within the container, and the option to "pack" the entire thing as a tar.gz to deploy.
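Putting those pieces together, a session might look like this (package names are illustrative; check `guix environment --help` on your version, since these flags were still evolving at the time):

```
# Spawn an isolated container with only the listed packages, sharing the network:
guix environment --container --network --ad-hoc python coreutils -- python3

# Pack the same set of packages as a relocatable tarball for deployment:
guix pack -f tarball python coreutils
```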
Correct me if I am wrong, but all components of Docker CE are open source, and one could therefore still download the source and build the binaries oneself, right?
I really don't see the issue of requiring hoops to get precompiled binaries as long as there is a way to get the source and build the binaries yourself.
I noticed this today! Thankfully I was able to dig up their instructions for Ubuntu installation which just depend on apt, and they still worked fine. You can practically hear the Docker devs fighting with PMs/business guys about this one.
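For reference, the apt-based route mentioned above looked roughly like this at the time (paraphrased from Docker's Ubuntu install docs, no Docker Store login involved; verify against the current docs before copying):

```shell
# Add Docker's official GPG key, then register the apt repository
# for the current Ubuntu release:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository \
  "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable"

# Install Docker Community Edition from that repository:
sudo apt-get update
sudo apt-get install docker-ce
```

Distribution packages (`apt`, `dnf`, etc.) sidestep the website entirely, which is why many commenters here never hit the login wall.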
https://linuxcontainers.org/lxd/ <--- you're welcome.
If you don't want to depend on a producer who obviously prefers money over user happiness or quality, just use truly open source things. Ultimately, that's the attitude that leads to them selling your information, or maybe worse (whether they do it now or not is irrelevant). Docker is your infrastructure: a nice spot for a backdoor once it closes itself off further from the public. Whether they're doing it now or not, I'd look for another road to this Rome you seek.
I have never installed Docker through any route other than a Linux distribution's package manager. Why would anyone use the download in the first place?
I had the tab open for a while, so I didn't see their reply. So no, not "necessary," just accidental. Sorry to have caused you such mental anguish by answering your question, which would have been answered if you had read any of the thread or the linked GH issue.
I'm very curious how this develops. It was already a bit strange to notice that one of the several container technologies available got significantly more visibility thanks to a smart marketing team and targeting Mac and Windows users. Personally, I'm using LXC and related technologies and see no benefit in switching to Docker whatsoever. Eventually most people run their containers on Linux anyway, and the LXC's way of doing things seems very native and natural in this environment.
I'm really sorry for the proprietary OS users; it's really a shame that they have to give their email to Docker Inc. now, if they haven't already (Docker Hub, anyone?), every time they install Docker on a machine (once per install?). All this crying, it makes me wanna cry :'(
So you have to log in to Docker Store on Windows and on Mac. I sort of get the frustration. Using one of these systems for development is a painful experience on its own, no need to make it more miserable.
You can download Docker from many other sources without a login. And they are only putting the Docker Community Edition behind the email paywall as pretty much every open source/freemium company does these days.
We decided to spend a minute of your time for registration to improve your experience (and another minute that it will take to unsubscribe from spam later).
But I think it's only fair. Those developers who dislike this probably do the same to their users.
Bit of a cliché, but nothing is free. You are choosing to use technology provided by a for-profit company. Community Edition does not mean it's owned by the community. It's absolutely irritating to have to create an account and login when you just want to try something, but if you're invested in a technology and the company that provides it (at no direct monetary cost to you), I don't think it's worth getting stroppy about. Plenty of other more sinister and cynical things happening in tech right now to get riled about
It does in fact ask the user to log in before giving them a download link. The files themselves don't appear to be behind a login wall, since people are posting them far and wide. But the website very much implies that you need to register.
This is a link specifically from the Docker store. Download pages are also linked from multiple places in the documentation, binaries are installable via Docker's package repositories, and generally available far and wide. This is a misleading, sensationalist headline, i.e. clickbait.
Are you sure your login wasn't saved and you weren't actually logged in? :) I had to install Docker for a new project at work recently and was forced to create a login and sign in (as I was not previously registered). I guess direct downloads are still possible, since people are posting links, but navigating through the Docker website does not present you with those options. Super disappointing.
I'm 100% positive I did this last week because I explicitly remember asking my colleague if we need to create an account or not. He said no but it appears the site has changed.
It appears that it did change in the last few days, because I had a conversation with a teammate and remember asking about signing up for an account, and he showed me how to download without one. However, today I cannot find the same navigation path, so I think something changed.
Good. Maybe we can get back to writing software, rather than spending our time on pointless deployment scripting, tools, and reinventing wheels. Or have developers interested in containers spend their efforts on standardized, rather than Linux-only, solutions. Docker has IMHO never made any sense for the use cases people seem to have in mind, as it really doesn't isolate you from anything (neither from the host OS nor from Docker itself). Docker, Inc. touting the security aspect doesn't hold merit, given that Docker must be run as root and containers can't use host system permissions, etc.
> Good. Maybe we can get back to writing software, rather than spending our time on pointless deployment scripting, tools, and reinventing wheels.
Not sure what you're getting at here. Since our team switched to Docker/Kubernetes, I've spent considerably less time on pointless deployment scripting and reinventing wheels.
I understand the rationale for filing this issue, although I personally disagree with it... Docker for Mac is great freeware, and I don’t mind logging into Steam and Mac Store for my other apps, so I don’t have an issue with doing the same with Docker. But for someone who greatly cares about not having to give their email, I can see how it can be annoying to see the policy change after they started using the software...
Either way... Holy shit is that github thread full of negativity and entitlement! Pretty shocking. Even if you’re unhappy with something, that’s no way to treat a team that is giving you a tool for free!
>Either way... Holy shit is that github thread full of negativity and entitlement! Pretty shocking. Even if you’re unhappy with something, that’s no way to treat a team that is giving you a tool for free!
I think the fact that you focus on the "entitlement" and that I focus on the fact that the maintainer gave a bullshit, misleading answer to an important question, is related to the fact that you don't mind having to log in to download a tool and I do.
While I agree there seems to be a significant amount of negativity on display, I'm not sure I'm shocked by it. The ticket response by joaofnfernandes is remarkably tone deaf and reads as fairly dishonest.
Literally. The first version required you to register by phone or mail in order to get the full game.
If you mean the most recent version... they know exactly which commercial retailers purchased the game, and in what quantities. They might not know exactly which consumer ended up with those games, but those consumers were not their customers, the retailers were.