Being dishonest like this is one of the fastest ways to lose customers. People can spot it a mile away.
They should make it optional. Have the email collection form prominently there and ask users if they want to sign up to receive the newsletter, free tutorial, free e-book, whitepapers, etc.
Then also have a "No thanks, take me to the download".
While email-to-download is fine in most marketing contexts, it is NOT fine for open source products. If they want to collect emails, they need to offer something in addition. It doesn't even have to be that much. A free e-book on how to use Docker or something.
The download page could also have a "Please support us by ..." section.
There are so many better ways to go about this.
If I download an Ubuntu Server ISO, I get my download immediately, but the page has a nice prompt to register for a whitepaper to get the most out of my new server product.
If I download the Ubuntu Desktop ISO, I still get my download immediately, but additionally I see some nice prompts about donating to support their operations.
Everything about both flows inspires trust that they aren't trying to withhold my download for the sake of selling consulting services or soliciting donations.
Actually you have to click past a "please donate some money" nagging dialogue.
It may be a little annoying (at least if you download ISOs all the time), but it's obvious what they are asking for, not to mention why.
And it's definitely not dishonest, so it doesn't tarnish their reputation as a trustworthy company.
We are talking the price of a hipster coffee per fortnight here, but a few thousand people putting that on a recurring payment adds up.
For organisations like Docker I'm wondering if the RStudio model would be viable? The 'enterprise' subscription is something like $995 a year and can thus be budgeted for &c.
I also sometimes buy apps even if I'm not really sure I'll keep using them (sometimes just to encourage them to continue),
and keep subscribing to a newspaper even if I don't often read subscriber only content (since they sometimes have some great investigations && I want to support them but don't want to disable Adblock).
What I don't do:
- most monthly subscriptions that aren't payment for an actual service.
Whether I agree with the decision or not, I am confused by this sentiment. Aren't they offering Docker for free?
There is a certain degree of trust involved in using a product like Docker, since it is so critical to a business's operations, and I think a lot of people feel like any kind of tracking (like mandatory registration to download) erodes that trust. We're forced to sit here and wonder what other restrictions might come in the future, or what other information they might start requiring for use of their product... and uncertainty never pairs well with infrastructure tooling that tends to be very important and very long-lived within an organization.
It's not about entitlement, but it is about common convention. Docker's offering isn't unique enough for them to betray that convention.
The point of making something open source is to benefit from collaboration. Is not making the argument, "locking software distributions behind a login wall is harmful," simply a form of collaboration?
Really an honest question: I find Docker useful, and was also put off by the email/login request, but why are they getting so much hate, compared to every other company, just because of this?
Docker is "revolutionising infrastructure" or something. It doesn't have "make heaps of money" as its primary goal. Partly this is because it's open source. There's an expectation that open source is also "for the greater good".
The cardinal sin of our times is hypocrisy. Being a money-worshiping greedy capitalist bastard is fine, as long as you're open that that's what you are. Pretending to be altruistic while actually being greedy will generate all the hate.
Would you spend time out of your day to contribute to software that requires your users to sign up for someone else’s spam list?
The "open" in "open source" is about encouraging cooperation and collaboration. And not using lock-in or patents or walled gardens to obstruct competition.
If the altruistic aspect is still not obvious: many projects encourage a gift economy by accepting donations.
Astroturfing is really not compatible with what you called "be transparent about the product design/intentions"
However, the kind of mindset that enjoys being a greedy capitalist bastard finds it very very hard to accept the Open Source philosophy - it's all fear-based, "do unto others before they do unto you" and so "if they can rip my code off, they will", because that's what they'd do. I've experienced way too many hard conversations about open-sourcing code with this type of person.
So there tends to be a correlation between Open Source software and a co-operative mindset that would find this type of coercive marketing bullshit to be evil and reject it. This correlation becomes an expectation.
This was a bad marketing-driven move, but that's all it is.
Warning: past the home page, there is some possibly NSFW content. The game defaults to having some cartoon nudity (although there is a non-nudity mod) when the players haven't made clothes, so you might see some pictures of that if you dig around.
It's a game. He puts all the code (and assets) in the public domain. You can go to github and download it and build it for free if you want. But on the website: $20 please. He tells you exactly what you get: lifetime server account, all future updates, full source code, tech support.
Although the forum is not exactly a haven of mature discussion (in fact, it's downright awful at times), I've not even seen one complaint that "I could have got it for free". In fact, there have been several discussions where people say, "$20 is too high. Is there any way to get a discount?" and the reply is "You can download the code for free and play on these free servers". Inevitably the person says, "But I want the official version. I guess I'll pay the $20".
No idea how much money he's made so far, but for a one-person indie game, he's done astonishingly well: https://onehouronelife.com/newsPage.php?postID=377 (a description of sales in the first 2 weeks last March). According to other posts he's made in the forum (which I can't find), sales have continued to be brisk.
If you want to charge for downloading the official build of free software, then do it. Even the FSF will cheer you on (as long as you include source code ;-) ).
I love this, never occurred to me that people might mod nudity out of a game instead of into it.
Some players are really toxic. Twitter is a peaceful, loving place compared to that. No way I will spend my free time in this game.
There used to be a list of alternative servers, but seems to be gone now :-( Possibly nobody is hosting one any more. I'm tempted to do it myself, but I'm in Japan so the lag would be unacceptable anyway (I'd be playing by myself, which I do anyway...) But it's an option. If you can find a group of people to play with, it can be quite fun just to run on your own server. It takes very little CPU from my experience.
> The paradox states that if a society is tolerant without limit, their ability to be tolerant will eventually be seized or destroyed by the intolerant. Popper came to the seemingly paradoxical conclusion that in order to maintain a tolerant society, the society must be intolerant of intolerance.
I have no idea what the solution is. I suspect anyone coming up with one would win all the Nobel Peace Prizes from now until the end of time. I do think it's a useful rule of thumb that if you're not finding tolerance excruciating and infuriating at times, you're not really doing it.
Heck, if you'd be fine with an EU server, contact me, I'd sponsor it, including a short subdomain.
People are downloading just over 30GB/day (each weekday) per server, and the servers only seem very lightly loaded.
If the source for whatever runs the needed forums can work on ARM64, then these cheapo servers seem pretty decent so far. :)
the game overall has some very interesting features around community and cooperation, and rogue griefers are disincentivised because the way the game scales seems to inherently require cooperation between strangers.
Hacker News is pretty good at this; I very seldom see toxic people here, yet people argue all the time when they don't agree.
This actually makes me curious: what draws you to this heavily moderated forum here?
It was the first or second thread I skimmed. No, thanks.
-- joaofnfernandes (2 months ago)
"I know that this can feel like a nuisance, but we've made this change to make sure we can improve the Docker for Mac and Windows experience for users moving forward.
As far as I can tell, the docs don't need changes, so I'll close this issue, but feel free to comment."
"In our quest to improve the service for you, the user, we're making it worse.
It was great before. It's going to be terrible now, but you're going to love the changes.
We asked our investors and they said you're very excited about it being less good, which is great news for you."
90% of changes just make things better for the user. 10% make things worse, but they’re obvious and honest and we’re okay with it.
And then 1% is the stuff like this. The Netflix/Qwikster "you're paying more but getting less" debacle. The EA "sense of pride and accomplishment". The Netflix (wait, why are they here twice?) "show recommendations that aren't ads" thing that's happening right now.
Which makes sense if you've got a pile of money but are trying to bootstrap a streaming service.
I expect most of that will disappear when it comes up for renewal and they've built out their original content (in the same way it did for Netflix).
What did I miss?
i.e. "We do not give one single fuck what you think about this decision, and will not be reading any of your replies."
Opt-out phone-home, telemetry, crash reporting, quality analytics, whatever you call it. That's my prediction.
My Synology personal file server started nagging about enabling telemetry and needing a privacy notice to store my files on my drive on my network. Screw that.
1. Control Panel -> Info -> Device Analytics.
As for Synology, I did opt out of that as well.
It'd be great if this crap was opt-in... since the industry doesn't agree, I propose we make it law.
Docker being purchased by Oracle and then users being subject to audits for the audacity of running Docker.
We all know why. There isn’t a benefit to the end-user.
see also, netflix, and twitter lately.
I really don't know why they bother uttering or writing those kinds of words. They contain no information, nobody is fooled by the lie. It is a waste of their time.
We might be on the other side of the table in many other ways, or idiots as you put it. Like the way my doctor friends avoid some OTC drugs that I never even think twice before taking, or some foods, or some ready-made edibles. I have a friend in the textile industry, and when he buys clothes it's a whole new level and makes me wonder what the hell I have been wearing so far. It amazes me how he sees through all those "Giza cotton" taglines and gimmick features of breathability and what not that are usually followed by a (™).
Can we all not learn everything from The Internet? No, we can't and it does not make any of us an idiot.
I've had a Docker Hub account since Docker Hub was a thing and the only content I really ever get from Docker is a weekly newsletter (which you can opt out of) and notifications about the platform itself (such as any downtime reports, etc.).
I do think it's a bad idea though, mainly because for newer people getting into Docker it's a barrier to entry. I'm very suspicious of anyone asking me to register for things like this. On the other hand, I don't have the insights that Docker has, so to make such a bold move, they probably have a plan.
Although they may not barrage you now, there is no telling what the future holds with stunts like this.
I think the future is pretty predictable.
On the off chance they just wake up and start slamming you with unsolicited marketing, you can click unsubscribe in the footer of their email and you'll never see another email from Docker again.
But really, I don't think Docker is foolish enough to do that. They've spent a lot of years building up their brand and business, and aren't reckless enough to put all of that at risk by relentlessly emailing their users with marketing agendas (if that's what they wanted to do they could have been doing that for years).
Docker already knows that almost everyone uses the free community edition anyway, so they really have nothing to sell to us, except maybe Docker Hub private repo access. Anyone who already downloads Docker knows the benefits of using it, so they don't need to sell us on Docker as a technology. What are they going to market to us?
Lastly, let's not forget that the Docker for Windows / Mac clients have allowed you to log in to Docker Hub for a long time now, and nothing bad has come from that (no unexpected marketing attempts).
No, you see tons of email from everyone Docker sold your "Guaranteed Live And Active" email address to, once it verified liveness and activity by you clicking the "Unsubscribe" link at the bottom of the email. And that's assuming Docker doesn't just keep spamming you, secure in the knowledge you're reading their earnest missives and care enough to respond to them personally and by hand.
> But really, I don't think Docker is foolish enough to do that. They've spent a lot of years building up their brand and business, and aren't reckless enough to put all of that at risk by relentlessly emailing their users with marketing agendas (if that's what they wanted to do they could have been doing that for years).
If they're suddenly in a different financial position, or change leadership, or for any of a number of different reasons, they could indeed go off a cliff like that.
In nix, you're basically describing the whole dependency tree of your application all the way to libc. When you build your application it builds everything necessary to run it.
The great thing about it is that your CDE is essentially identical to your build system, and the builds are fully reproducible; it takes over being a build system, a package manager, and, as mentioned, a CDE.
They went even further (I have not explored this myself yet) and used the language to describe the entire system, called NixOS, which makes a separate configuration management system look unnecessary; Nix is also used for deployment (NixOps, which I also haven't tried).
If you are into containers you can still deploy into systemd lxc containers, or even create a minimalistic docker image.
The disadvantage is that there is a significant learning curve, it's a new language, and it is a functional, lazily evaluated language. The language is not really that hard, but many people are not used to functional programming. It is especially popular for deployment of Haskell code, since the language is also functional and lazily evaluated.
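For the unfamiliar, the shape of this is roughly the following: a hypothetical minimal `shell.nix` (the package names here are illustrative, not from any particular project):

```nix
# Running `nix-shell` in a directory with this file gives every developer
# the same toolchain, pinned by the nixpkgs revision in use. Everything
# in the closure below these inputs, down to libc, is described too.
{ pkgs ? import <nixpkgs> {} }:

pkgs.mkShell {
  buildInputs = [ pkgs.go pkgs.sqlite ];
}
```

The same expression language then scales up from a dev shell to a whole machine definition in NixOS.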
We have been working on Flockport, which supports LXC containers and provides orchestration, an app store, service discovery and repeatable builds. It's still in early preview and we have not started proper outreach, but it may be worth looking at.
Ubuntu also provides the LXD project that provides some orchestration across servers.
It seems possible to get IPv6 working through alternative orchestration, though. E.g. there's a guide on getting it working with Kubernetes and Calico.
But if you're looking for something that's production-grade IPv6, e.g. where people can work out WTF is wrong when problems hit, it's probably not there yet. At least not for small teams, as far as I can tell. ;)
Sounds pretty married to me.
The problem is that it's almost never that.
It's not "we're unable to pay our bills". It's "we've got more money than we need already, but we think we could get a lot more this way".
You think, say, Netflix is a struggling business and that's why they have to put in more ads than before? No. In a capitalist system, leaving money on the table is increasingly unjustifiable as the amount you're leaving grows. Docker is absolutely in that same situation.
"Hey, look at all these downloads CE is getting. We need to start following up with these users to try and promote Enterprise and other products. Start capturing emails at the point of download."
1) Users think they are free
2) They are not actually free
The result is of course stuff like this.
The community is expressing a desire for companies to be honest and upfront about these sorts of issues (i.e., monetization). Refer to the post you responded to for more information.
The guys who act like "it's only about the lies" are usually not decision makers, but it's important to have enough of a story that the decision makers don't get their thoughts contaminated by the perennially negative.
I think Docker will be fine with what they're doing. This is a storm in a teacup. But they should've bundled it with other features like auto-updates or something.
It’s not a modal, but supposedly ignoring it opts you into the tracking, with the only choices being “Allow” or “Learn More” and the [x] button also being labelled “Allow”.
IANAL, but it’s not informed individualised consent if there’s literally no opt-out, and there’s not a lawful basis unless advertising-cookies are suddenly the enabling technology behind downloadable containers.
I’d report them to the Information Commissioner‘s Office myself if I didn’t think they were about to fold anyway, after their piss-poor sunsetting of Docker Cloud and painting a target on their own back for a few adbucks.
> there’s not a lawful basis unless advertising-cookies are suddenly the enabling technology behind downloadable containers.
Yes they are. Advertising cookies are how those downloadable containers are provided. That's an enabling technology. It wouldn't exist otherwise in the technology ghetto of the EU.
> The ‘consent’ is a condition of service
> If you require someone to agree to processing as a condition of service, consent is unlikely to be the most appropriate lawful basis for the processing. In some circumstances it won’t even count as valid consent.
Instead, if you believe the processing is necessary for the service, the better lawful basis for processing is more likely to be that the "processing is necessary for the performance of a contract" under Article 6(1)(b). You are only likely to need to rely on consent if required to do so under another provision, such as for electronic marketing.
It may be that the processing is a condition of service but is not actually necessary for that service. If so, consent is not just inappropriate as a lawful basis, but presumed to be invalid as it is not freely given. In these circumstances, you would usually need to consider ‘legitimate interests’ under Article 6(1)(f) as your lawful basis for processing instead.
And in regards to tracking specifically:
> You are also likely to need consent under ePrivacy laws for most marketing calls or messages, website cookies or other online tracking methods, or to install apps or other software on people’s devices.
You basically have to have a modal "do you consent to tracking? [yes] [no]" dialog. Which obviously nobody who does tracking wants to do, but that's kind of the point.
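The crucial property can be sketched in a few lines (hypothetical names, not any real consent library): silence or dismissal never counts as a "yes".

```python
# Hedged sketch of GDPR-style consent gating. The stored choice is
# "granted", "denied", or None (the user was never asked or dismissed
# the dialog). Only an explicit "granted" ever starts tracking.
def maybe_start_tracking(stored_choice, start_tracker):
    if stored_choice == "granted":
        start_tracker()
        return True
    # None and "denied" both mean: no tracking. Silence is not consent.
    return False
```

Under this logic an [x] button or a "Learn More" click leaves the choice as None and keeps tracking off, which is the opposite of the behaviour described above.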
Truth is, no one cares. GDPR is an overreach designed to shake down American mega-corps. Docker has no money so the EU isn't going to do anything to them.
>I’d report them to the Information Commissioner‘s Office myself if I didn’t think they were about to fold anyway
I'm sure they're inundated with complaints from unsuccessful companies trying to shoot down their biggest competitors already. Adding one more to the pile is only going to waste your time and that of EU regulators.
You mean, like Google continuing to compile Location statistics while assuring users they're not?
"designed to shake down American..."
Or, it's not just a scam after all ... for whatever reason, some places in the world feel a need to protect themselves from US ...
... and they're actually trying to protect their citizens. Unlike 'our representatives' (hah!) in the US Congress.
You will, if you have EU customers.
> GDPR is an overreach designed to shake down American mega-corps.
The GDPR is the result of mega-corps (American ones in particular) not giving two shits about how their users' personal data is handled. Cry all you want now that the milk's spilled; it won't change the fact that this legislation was not conjured in a vacuum, but as a response to the way corporations behave when not obliged to care about personally identifiable information.
> Docker has no money so the EU isn't going to do anything to them.
A formal reprimand might suffice. Contrary to the naive American view I see here on HN, EU data regulators don't immediately try to shut you down by barging into your company's office with a SWAT team.
> I'm sure they're inundated with complaints from unsuccessful companies trying to shoot down their biggest competitors already.
How sure? 100%? 50%? Less? What are you basing your assertion on?
> Adding one more to the pile is only going to waste your time and that of EU regulators.
There's a characteristic nearly all government departments share: they may be slow, but they're steamrollers. They'll get to you eventually.
And yet it hurts small startups that don't have the resources to become fully GDPR compliant more.
Not giving users a way to delete their accounts was never okay. Tracking user behavior without consent was never okay. Holding users' data hostage was never okay. Not giving people a way to correct the data you keep about them was never okay.
US startups have been playing on easy mode by getting to ignore human rights and just follow the local letter of the law even when going international.
If anything you'd think HN "classical liberals" would love this as it evens the playing field, allowing for fairer competition between already privacy-aware EU companies and the previously unfairly advantaged US companies entering the EU market. Of course this assumes you think privacy and data ownership should be protected as human rights in the first place.
Sure. If being GDPR compliant just meant you don't have to do those things, it wouldn't be a problem. But with GDPR you now have to spend time (= money) understanding what GDPR means (probably with a lawyer's help) and ensuring that you are in fact compliant. "I try to protect users' privacy" isn't good enough when the EU could effectively put you out of business if you aren't. You'll have to deal with Data Access Requests, most of which are from trolls. You may need a DPO, which might require hiring someone. I'm all for protecting privacy, but the GDPR adds quite a bit of burden, which large corporations will be able to eat, but which will set back smaller companies. Really, medium-size companies are in the best position, since they have the resources to meet GDPR obligations but don't have to do massive overhauls like the big corps do.
As a tool for building containers it's cumbersome, has bizarre and frustrating limitations, and has issues that haven't been addressed in years. You can't use semver for tags, it eats up all your disk space and you have to manually GC it, etc. Multi-stage build support is basically useless, and invariably you end up writing convoluted bash scripts to get the thing to work.
Caching is terrible pretty much across the board. How many petabytes of data are wasted every day re-syncing apt-get?
As a runtime engine it's basically dead for production use. If you ever try to use it you'll quickly discover that it has tons of problems. It locks up, orphans processes, stops responding to commands, forgets about containers, etc. And it's not safe to run arbitrary containers. You'd be surprised by how many companies using Kubernetes gave up on Docker a long time ago.
Please save yourself a lot of heartache and just use containerd.
As a concept, containers never really lived up to their potential. "A Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application." As long as that application runs in linux.
As a company, Docker is a failure. All their cloud stuff is clunky, poorly thought out and at this point largely irrelevant. I'll be surprised if it lasts 5 years. Kubernetes won. Those cute, cuddly characters and the Docker name are basically all they have going for them, and all these things (the Moby rename, requiring login to download, etc.) are the death throes of a dying company. Docker is the next jQuery.
People don't realize just how easy all this stuff will be to replace. Google already did it. Like 5 times. (gVisor, jib, kaniko, ...) It was probably some intern's side project.
Microsoft should just get it over with and acquire the company. It would be the perfect cherry on top to their Github cake.
Part of the appeal of Docker is the ease with which developers new to the concept of containers can pick it up. I cannot say the same for containerd. Try a search for containerd tutorials and compare what you find with docker tutorials.
> People don't realize just how easy all this stuff will be to replace.
Think carefully about why that is. If it were easy, it would have been understood. So, no - understanding the replaceability of the container components is not easy. That is why people have not realized it.
The issue with many of Docker's competitors is that they appeal to Docker experts. The abstraction and terminology are not straightforward.
A lot of hyperbole in your comment is unnecessary and/or undeserved. Of course I realize that this is HN where k8s, rkt and containerd are used largely by the visiting audience. But don't forget the dark matter developers.
containerd is just a building block. You'll actually be looking for a Kubernetes tutorial, which the internet is sprawling with.
choco install kubernetes-cli
(You'll need Hyper-V, however.)
I'm not aware of many k8s distros running anything other than Docker by default at the moment.
On the developer side, I'm trying to build an artifact of my application which can run on servers. I'm probably using Windows or macOS as my development environment, and I'm probably targeting Linux. So Docker is a tool I can use to create a container to achieve my goal.
But it doesn't do a good job of that. The Dockerfile format can be very frustrating to work with. And all the terminology sets a very high barrier to entry. Yes, it's all googleable, but sometimes I think the solution is harder to use than the problem it set out to solve.
As an example of that frustration, building a cross-platform Go app is as easy as setting an environment variable. No additional configuration is required and there are no additional tools you need to install. The binary that is produced has everything it needs to run and can be deployed without having to use specialized tools.
Whereas building a Go app in a container with Docker means you end up retrieving all the dependencies every time, and you get no package caching for builds, so they take 10 minutes. For any large project you will inevitably break out your Dockerfiles into separate steps with some sort of Makefile. You will then bang your head against a wall for days trying to get Docker-in-Docker to work for your CI build system.
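The usual workaround (a sketch, not the poster's actual setup) is to order the Dockerfile so the dependency download sits in its own cached layer, invalidated only when the module manifests change:

```dockerfile
# Multi-stage build: dependencies are fetched once and cached until
# go.mod / go.sum change; only the final binary ships in the image.
FROM golang:1.22 AS build
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download          # cached layer: skipped on pure code changes
COPY . .
RUN CGO_ENABLED=0 go build -o /out/app .

FROM scratch
COPY --from=build /out/app /app
ENTRYPOINT ["/app"]
```

This helps the rebuild-time complaint, though it does nothing for the Docker-in-Docker CI pain.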
Is that really the best we can do?
But docker is also about isolation, file formats, conventions for deployments, protocols, etc. And yes, it's a step forward in many ways. Containers are great for allowing the ops side to focus on running an application instead of making sure it's configured properly.
But on so many of these dimensions docker has had major problems. It's not really safe to run arbitrary containers, so the isolation is an illusion. Because of the way the tooling works the containers are way too big and contain way too many unnecessary dependencies - which end up being security liabilities because they aren't upgraded often enough.
But ultimately docker turned out to be way too low level. It's why something like Kubernetes exists. It's a much higher level way of describing how an application should work.
So docker is getting pressure from both sides and that's why I think its days are numbered. The formats and conventions will stick around, but once they start requiring people to pay for the local development tool, everyone will move on and we'll stop talking about docker at all anymore.
I am looking for features similar to Norton Ghost, but faster than waiting for installers every time I build.
- test-drive even the most complex deployments on your laptop
- spin up all the cool admin tools you ever wanted without wasting months of precious life time
- reproducible setups
- everything is always up to date
But as you say, caching is at best in a mediocre state. If you indeed want to run heavyweight servers (like ELK), be ready to have a terabyte of space just for images, unless you want to GC all the time when tweaking the configuration.
Also the laptop thing doesn't work unless you either have a very beefy machine or just work with very lightweight software. In reality Docker can be used for these use cases:
- Test-driving some heavy weight or complex server software
- CI to some degree
I'm starting to think good old Unix tools in combination with automation tools like Chef, Ansible, ... are the way to go - or even just plain .deb files...
What are they using instead? cri-o? rkt?
There's also a lot of mixed deployments of container vs native out there. It seems Kubernetes is popular but not many companies with a large number of servers are willing to bet the farm on it, so they may only run a subset of their services with it (stateless, or test environments)
You can use Docker, just be ready to account for the instability. With proper detection and strategies to evict bad nodes you can build a reliable platform out of it. (though for stateful things you may end up with a real mess on your hands)
We've seen issues with containerd too, but at least so far it seems more stable.
Basically every piece of docker is being replaced. The runtime (containerd, crio), the tools to build containers (google has several), the server to host images (ecr, gcr, etc)... it's weird to call it docker when none of the components actually use docker anymore.
The GUI and installer are nice for local development, so I guess it has that going for it.
The replacement of docker is a good thing IMO (though probably not a good thing for Docker the company) -- it's one of the main benefits of the kubernetes hypetrain, the development of C*I (Container <something> Interface) has been great for the ecosystem.
I personally find docker's CLI way more ergonomic however.
Is there another way to define what the container should look like?
There are also tools like jib which can build containers without docker: https://github.com/GoogleContainerTools/jib
It didn't happen often, but we definitely had problems. We're running kubernetes clusters with several hundred nodes.
Caching is very crude. When building, it's based on lines in the Dockerfile, which means adding a dependency means redownloading everything. You also can't mount a directory for builds.
Multi-stage builds are very limited in what they can do and often aren't powerful enough to implement efficient builds. You end up either having 20-minute builds or complex Makefiles to work around the inefficient default workflow.
FWIW an intelligent caching mechanism should not require manual cleanup. Thankfully Kubernetes does this for us in production... but the crazy 1 GB images you end up with for a moderately complex Python app make it hard (especially when people use :latest and then there are 12 versions of the app lying around).
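For what it's worth, the same layer-ordering idea plus a slim base usually keeps a moderate Python app far below 1 GB. An illustrative sketch (file names here are assumptions, not from any specific app):

```dockerfile
# Slim base + manifest-first copy: the pip layer is cached until
# requirements.txt changes, and code edits only rebuild the final COPY.
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "main.py"]
```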
That's on top of the security auditing nightmare if you ever decide to use stuff from Docker Hub.
I like the concept of containers as a lightweight alternative to virtualization. I even use containers (FreeBSD Jails, specifically) for some particular use-cases where I need stronger isolation than separate UIDs, but don't need a full-blown VM. I don't mind this use-case at all.
I really dislike the reality of containers, which seems to be "Our deployment procedure and dependencies are so insane that there is no hope of packaging this as an RPM or DEB, so here's an entire userland for you".
As an example of papering over a crazy deployment/dependencies nightmare: I remember seeing a project which used four Docker containers to apply some machine learning based automation to Hue lights. Two of the containers are basically infrastructure pieces (RabbitMQ and Cassandra), one is dedicated to the machine learning piece, and one ties the other three together.
I have no idea why this project needs to run four separate operating system instances to do this job. If I were building it I'd do it as one application, so no need for RabbitMQ, and I'd use an ORM to let the user choose which database is most suited. I'd have an SQLite database as a reasonable default. Maybe I'd have an option to publish stuff to some kind of message queue so that it could be consumed by other systems.
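To be concrete about the "one application" alternative: a hypothetical sketch (all names here are invented, not the actual project's code) of the same event flow as a single process, with SQLite as the reasonable default store instead of Cassandra and no RabbitMQ hop:

```python
# Hypothetical single-process design: events go straight into an
# embedded SQLite store, and the ML step reads from it in-process.
import sqlite3

def init_db(path=":memory:"):
    """Create the event store; SQLite as a reasonable default."""
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS light_events ("
        "  id INTEGER PRIMARY KEY,"
        "  light TEXT NOT NULL,"
        "  state TEXT NOT NULL,"
        "  ts REAL NOT NULL)"
    )
    return conn

def record_event(conn, light, state, ts):
    """Replaces the message-queue hop: just a local insert."""
    conn.execute(
        "INSERT INTO light_events (light, state, ts) VALUES (?, ?, ?)",
        (light, state, ts),
    )
    conn.commit()

def recent_states(conn, light):
    """Feed for the in-process ML step, ordered by timestamp."""
    rows = conn.execute(
        "SELECT state FROM light_events WHERE light = ? ORDER BY ts",
        (light,),
    )
    return [r[0] for r in rows]

conn = init_db()
record_event(conn, "kitchen", "on", 1.0)
record_event(conn, "kitchen", "off", 2.0)
print(recent_states(conn, "kitchen"))  # ['on', 'off']
```

Swapping SQLite for another database behind an ORM, or publishing to an external queue, could then be optional features rather than mandatory containers.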
Don't get me wrong, I love the idea behind this project and want to try it out, but it really feels like the author went container-crazy because they could and didn't stop to think about whether they should.
This is how Erlang releases have worked for 20 years.
I never really got how a shipping container on a whale makes for a good logo, unless it's just meant to be reflective of the apparent reliability:
> the "whale" your container is riding on may just fucking dive underwater for hours at a time, but hey containers are designed to be in the ocean.. Wait, they're not?
The level of toxicity in the github issue's conversation is so astounding that I must say something. People on the internet are people. I hope that we don't talk to people in "real life" the same as we do when we make toxic comments online. Complain, write letters/emails to Docker, make your opinion heard, etc. But remember that you are dealing with a fellow human-being, who has their own life and own emotions. Treat them with the same respect you would like to be shown when people are unhappy with you.
The comments in that github issue reflect very poorly on us as a professional community to the point that I'm embarrassed.
The maintainer not only skipped addressing the core issue, which was that you now need to pass an auth-wall to download the setup files, he also closed the issue with a very dishonest answer without addressing the WHY. He even said the docs don't need to be changed. WHAT? It's a gray area, but to me personally it's an unethical move not to mention such a drastic change in the docs. Unless they hoped the change would either pass unnoticed or be accepted. But in this case it didn't, and the community held them accountable.
If you notice, most of the comments aren't personally attacking him, but rather suggesting alternatives or just reiterating the core issue once again.
But then, what I DO agree with you on is the issue page slowly turning into a reddit thread, which I fear could lead to personal attacks, etc.
But, do you know what would have stopped all this? Just a simple apology/honest discussion about this change and perhaps, actually talking about a solution. That's all.
One person called the company "jerks" without singling anyone out. Another posted a rage meme to express their displeasure. The overwhelming majority of the hundreds of comments were even more restrained.
Perhaps I'm missing some of the more "choice" comments, but as a whole I have far more of a problem with the official corporate-speak than the reactions to it.
I would maybe ascribe it to a mismatch in expected decorum, hardly call it toxic. Like, do you expect a church or an informal conversation among peers? Or if you will, a cathedral or a bazaar? :)
And it was just one comment .. "We're all adults" goes both ways, let's not fall all over ourselves because of a little mildly bad word. Personally I don't think that's very constructive either, in the sense that it lights the fire under something that distracts very much from the actual topic.
They called it an astounding level of toxicity. Whatever your thoughts on the expected level of decorum, that seems a tad hyperbolic to call out over usage of the word "jerk". Such hyperbole is definitely not helping, and in my personal opinion, actually more "subtractive" than generically calling a group of people that did something you don't like "jerks". For one thing, it's not astonishing. It just isn't.
I certainly wouldn’t defend it. It’s not toxic because “jerk” is a bad or particularly strong word by itself, it’s toxic because calling someone a jerk is a personal attack, and it’s a judgement call that is purely mean spirited and doesn’t address any concerns. It doesn’t explain the frustration, it’s escalating things in a negative way, and it’s an insult. That is not a socially acceptable way to express dissatisfaction.
Its purpose is not to explain the frustration but to signal the intensity while clarifying the attribution: the person making the comment believes that Docker acted knowingly against their interest and does not buy the excuse that Docker put forward. I challenge you to express the same concepts with the same clarity in five times as many words.
I contest that name-calling is not an escalation over doublespeak.
Finally, whether or not something is socially acceptable is up to society, not to you.
Splitting hairs perhaps? “Jerks” is intended to be an insult no matter how many people you’re talking to or about.
> its purpose is not to explain the frustration but to signal the intensity
Sure, that’s a plausible assumption, but it doesn’t help the conversation, nor make it okay or socially acceptable to hurl insults. Are you certain that was the purpose? Have you clarified that with the author of the comment?
Perhaps a better way to signal intensity is to explain what material impact the decision has on their workflow and daily lives. What is the cost in terms of time or money, or something else?
> whether or not something is socially acceptable is up to society, not to you.
Yeah, that’s correct. Did I claim it was up to me? I stated a fact, not my personal opinion. Throwing insults around is not socially acceptable, according to society, not me.
So all in all it looks like we're all part of a low-standards professional community that should be perpetually embarrassed of itself.
They have little revenue and an overvaluation to justify, while being attacked from all sides by competitors (Google, AWS, RedHat, Pivotal). Docker is in a bad position and they are desperate.
On "write letters/emails to Docker" instead, no thank you. If I suspect the company of some dishonesty (as I now do based on the thread at github), public shaming may be more effective and do most good. This discussion needs to be public.
But compare it with the corporate response. Doing this kind of thing without warning to loyal customers and then being dismissive about it is also very toxic.
For the record, I skimmed the discussion too, and I don't understand "astounding level of toxicity" either.
What do you call a person who dismisses your valid concern and declines to respond to perfectly plain, honestly formulated questions in good faith? That's a jerk.
So yeah, my real life response to that would be very similar.
They have badly misread their users, and they are paying for it. There's not much else to the story.
When you focus on style instead of substance you draw attention away from the core of an issue. It's an effective tactic when it's what you want to do, but anyone who wants to use it should stop and think about why they want to obscure the point instead of addressing it directly. If it's because you don't have a valid objection, perhaps you don't have a stake in the discussion and you should think twice about entering it. Digressions that address the civility of the participants always serve to defend the status quo. Is it a status quo worth defending?
Why do you think everyone deserves respect?
I prefer the brutal honesty of the internet rather than the fake civility you advocate for. It cuts through the static and gets to the core issue. My experience has been that people dislike harsh comments because many times it contains the truth and they don't want to be confronted by the truth.
Also, instead of crying that the world is harsh, why not toughen up? When did it become fashionable to be so soft and weak-minded? Especially over something as silly as github comments?
Personally, I feel the people who are turning the internet into a toxic mess are people like you who attack speech. If you don't like harsh comments, don't read them. What's so hard about that?
Besides, everyone has different levels on what they consider toxic. I and nobody I know considers "jerks" a toxic word. Why should everyone lower themselves to your definition of toxic?
What I hope is relevant from the long list of projects that I just mentioned is that a company has spent a significant # of engineering years assembling, packaging, and supporting that combination in a way that makes it dead simple to do container-based development on non-Linux systems; mostly focused on developer laptops. No one else has that capability. It is a wide open field if anyone else wants to spend that same effort and time assembling a popular and free product that makes all that work together seamlessly on a Mac or Windows system.
I am not saying I don't have an opinion on whether it's good or bad to make people sign in to download this free product. That is the prerogative of the company that controls the product, and market forces will determine whether people will put up with such additions/changes. I of course would love to see direct downloads not impeded by such a change, but that's just my opinion. The silliness of HN is revealed when people start listing a bunch of totally unrelated projects (cri-o, rkt, containerd) which don't provide any of the functionality of Docker for Mac or Docker for Windows. I say that as a huge proponent and maintainer of containerd. Again, if there is any other offering that makes that possible out of the box for Mac- and Windows-based developers, then people are free to get behind it. To me, the only alternative is to throw a VM together with Docker, Kubernetes, and whatever else you want and hack together the scripting and updating to make it work for you, and neither Docker nor anyone else is preventing or impeding anyone from doing just that.
There's some interest in the idea of wrapping podman/buildah into something that can be consumed by Windows and Mac users in a similar fashion to how Docker is right now. But it'll take some time to pull that off.
I think many who posted on the Github issue will later regret the tone of their reaction. The treatment of the Docker employee was particularly nasty.
I'll take a guess: Money.
How do you generate profit if you built something wonderful but the competition already built everything around it that you wanted to sell later (Kubernetes & Co)? User data.
Well, yes and no. They've said why they've done this:
> we've made this change to make sure we can improve the Docker for Mac and Windows experience for users moving forward.
but at the same time, that explanation is clearly bollocks. Something I realised and find very helpful to remember is this:
> If someone gives a reason for something, and the reason is clearly bullshit, then it means the person giving the reason has a hidden agenda which is likely to be negative for the explainee. - "Will's law of corporate bullshit"
Here's how it works. People do stuff for a reason, for instance I ate lunch because I was hungry. I have opened the windows because it is hot and I like the breeze.
It is usually easy to match the action with the reason given, there is no suspicion here, there is no cognitive dissonance.
So let's take the example in question, Docker moving downloads of their software behind a login. Without attempting to guess at their motivations it seems clear that this is a very inconvenient thing to do for end users. As someone has pointed out, the steps to download the software are nearly doubled, and there are fears of getting corporate spam.
So OK, that's the action, what's the reason given?
Well, that's clearly bullshit, right? It isn't possible to match the reason given with the action. It's not going to allow for a better experience for end users.
Let's apply the logic. Company does something -> Reason given is bullshit -> there is likely a hidden agenda that is bad for the explainee.
So we have arrived at a situation where we are pretty sure that the hidden reason for Docker to make this change is negative. We don't know exactly what yet (we can speculate), but we are pretty sure it's negative.
So you are right, "We don't know why Docker did this", but we can be fairly certain it's not going to be for the benefit of us end users.
> but at the same time, that explanation is clearly bollocks
Whenever a corporation/someone explains their decision is to "improve the experience for our users" as the major reason, without explaining how exactly the decision relates to an improved experience, it's usually disingenuous.
I'm actually curious if there's counterexamples against this rule.
Bullshit makes the flowers grow.
Maybe not to you.
It really is annoying to have to log in to download the binaries, and being dishonest with the user base is not a good way to build great company-customer relations.
But all this hate and frustration being channeled at the Docker team/this issue is people just bandwagoning: somebody addressed an issue in a respectful manner, and they see it as an invitation to just shit on people, like "oh they fucked up, let's give 'em hell until the people responsible for this curse the day they were born".
It is really poor behaviour. I don't believe any of these haters would approach people like this when confronted with them face to face.
And here’s a little info on why we’re using cri-o at SUSE: https://www.suse.com/c/cri-o-container-runtime-on-suse-caas-...
(Disclaimer, I work in the same company, but not in the same department/group)
or keep using docker's dev client, it can target a locally running k8s now.
I wonder whether Google greenlighted significant investment into Kubernetes because they saw Docker Inc as a threat to Google Cloud and wanted to kill it early.
It's not precisely the same playbook as Embrace, Extend, Extinguish and it's been played out significantly less evilly (is that a word?) and more openly, but, well, they embraced Docker, extended it with K8s and are now well on the path towards "extinguish".
Docker got in as a container runtime, without any orchestration capabilities, then K8s came along to do orchestration but didn't provide its own container runtime, so they used Docker.
After that there was some tension as Docker wanted to move "up the stack" to provide features like orchestration (with Docker Swarm), but the k8s community saw that as unnecessary; they wanted a simple container runtime to sit under their orchestration layer.
Now we have options like cri-o or cri-containerd which are likely in the medium term to take over from Docker as the container runtime underneath Kubernetes installs. I'd expect that Docker will see more use in dev/test environments where full scale clusters are not required.
nobody who wants to make money likes being at the bottom of the stack (as it's where commoditization happens). Moving up means you have a bit more lock-in (via specific business requirements), and/or provide services that can be bought by stakeholders (vs a tech choice by the "lowly" engineers).
This is why i want my infrastructure to be owned by a non-profit. Just maintain the commodity infrastructure, no fancy, shmancy value-adds.
It seems to have worked, at least a bit, since Kubernetes has gained adoption. AWS is still ahead, though.
In any case, "to make sure we can improve the Docker for Mac and Windows experience for users moving forward" is yet another example of the official-sounding-yet-bullshit, vague and meandering language that seems to permeate into everything these days. (Is there a specific term for it? "Business-speak" doesn't have enough of a negative connotation for the "We did it this way and you will like it. If you don't, fuck off." that they really want to say.)
As for what to move to... That's definitely an open question. Docker for Windows is definitely one of the best developer experiences I've had on Windows because they put actual engineering work into it. All the same, I still found it buggier and less supported than Docker for Mac. At least on Windows, I wish we could combine the cool LXSS work with a development-only Docker implementation.
Ultimately, Docker should not be synonymous with containers anyway. Future versions of Kubernetes will not use Docker and instead run their own containers on top of libcontainer by default, as I understand it. I also feel rkt has a much nicer design than Docker, doing away with the daemon aspect of it. Hoping to see more development in the future.
Like, this is exactly what they're talking about, right? This is the user experience that they've improved because VCs have been willing to give them money because they show increases in MAUs.
Am I crazy here? You seem to be complaining about exactly the thing that you like
But really, I don't care that strongly. I do, however, wish to use, if possible, an open source solution. Why? Because I had a problem with Docker for Windows and I couldn't debug it. As I understand it, this is actually pretty similar to the reasoning behind Linux being developed.
(I find it annoying that you have to hunt for the download link too, tbf. I just figure that, OK, that's how they make their $ to build their mostly excellent software)
Hardcoding another container runtime would be disappointing to see and highly surprising.
Also, since no one anywhere in this thread is talking about it... You can get Docker's container execution engine without any of the "docker". Docker has spent significant time and presumably money splitting out `containerd`, and I'm surprised it's not mentioned in this thread so far. It's compatible with the Kubernetes CRI, etc, etc.
edit: Here's an actual link. https://kubernetes.io/blog/2018/05/24/kubernetes-containerd-...
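To illustrate the point (a rough sketch, assuming containerd is installed and its daemon is running):

```shell
# containerd ships its own low-level CLI, ctr; pulling and running
# an image with no docker daemon involved at all:
sudo ctr images pull docker.io/library/alpine:latest
sudo ctr run --rm docker.io/library/alpine:latest demo \
    echo "hello from containerd"
```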
If you want app virtualization and packaging, I have been pretty intrigued by what I have found out experimenting with `snaps`.
The biggest issue for me that I have hit with LXD/LXC is that host-to-container mount sharing is not as easy...which means I had to do some workflow alteration for moving from `docker` containers to `lxd` ones for existing projects...but otherwise I have been really happy with LXD.
Edit: And just to be clear, host-to-container mount sharing is possible, I just had to work at it and slightly alter my workflow to get the best solution.
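For anyone hitting the same wall: the basic mechanism is a `disk` device, plus an idmap tweak if you need host file ownership to line up inside the container. A sketch (the container name and paths are made up):

```shell
# Share a host directory into an LXD container as a disk device.
# "mybox", "projectdir" and both paths are placeholders.
lxc config device add mybox projectdir disk \
    source=/home/me/project path=/mnt/project

# Optional: map host UID/GID 1000 to the same IDs inside the
# container so files written from either side stay writable.
lxc config set mybox raw.idmap "both 1000 1000"
lxc restart mybox
```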
Many containers that I develop or use will also be used in a Kubernetes cluster or at least as part of CD, so most importantly I want a local setup that mirrors to a degree what the remote setups look like.
Snaps provide something a bit different, more geared toward end users. I have no huge opinion regarding them, the isolation aspect is novel but unfortunately a bit of a PITA as of today.
As someone who is a maintainer of the core of what actually runs containers under Docker (runc), LXC (and by extension LXD) is _the_ most mature container implementation on Linux. I've worked with the LXC folks and I am constantly impressed how on-top-of-everything they are.
Is it really? I find that Linux is an excellent development platform, and also an excellent home-computer OS. I've been using it exclusively in a professional software-development context for over a decade now (prior to that my employer made me use Windows in addition to my Linux box), and I couldn't be happier.
Oooh, thanks for this, hugely useful.
The links you provided are just for the docker binary (on Mac/Windows that's just the Docker client; not sure about what the Linux binary contains). They're not the same thing.
(Even then, it's not trivial to find, because the standard Linux version of Docker isn't mentioned at all in the "getting started" section of the site. You have to know that they apparently call it "Docker Engine" now.)