Containers Will Not Fix a Broken Culture (acm.org)
170 points by signa11 on Feb 11, 2018 | 74 comments



So many companies I've interviewed for are rushing toward microservices and containerization as the cure to all problems. The problem is that the champions often have no clue what any of this means.

I recently spoke with a company that had no testing whatsoever for a large production app. When I asked about it, they proudly said "Oh, we do CI. We have Jenkins!" Any tests? "We're going to add them after we move to microservices. Moving away from our monolith is top priority because monoliths are difficult to debug."

I see a ton of companies shitting all over best practices and then chanting buzzwords to pretend that they're all about it. That, or gross misunderstanding of any concept behind buzzwords.

X company uses Docker. We should use Docker. "Um. This code runs on an FPGA." "Does it run Docker?"


If it weren't containers it would be a new programming language, framework, or another agile methodology. Your argument has very little to do with containers.

Containers are just a better tool for writing OS configuration scripts. (If your team is full of Chef experts then it's not "better" for your team, but for a lot of teams it is).

What you're saying applies a lot more to microservices, which are a fundamental architecture choice. Containers aren't, they're just better than a tangle of bash scripts which create stateful VMs. And the problems you're describing apply no matter which tools a team uses.

Remember that you can use containers without complicated orchestration or microservices. I think a better argument would be to untangle these three things and describe how each one can solve certain problems or make the problem worse, and under which conditions.


> "If it weren't containers it would be a new programming language, framework, or another agile methodology. Your argument has very little to do with containers."

Agreed, containers are just one way to package software, they aren't the be-all-end-all when it comes to making software modular.

One example of a modular design abstraction that does not rely on containers is the data access layer ( https://en.wikipedia.org/wiki/Data_access_layer ). The idea is that you can design a service that sits on top of a data store (whether that's an RDBMS or otherwise) and encapsulates the core business logic that you want the applications in your business to adhere to. This data access layer can potentially be shared by different applications. The implementation of this does not rely on containers. Also, just in case "data access layer" (DAL for short) seems like a corporate IT term, I'd say the best tool I've ever seen for building DALs is GraphQL.
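As a rough illustration (the type and field names here are invented, not from any real system), a GraphQL-based DAL might expose a schema like this, so every application goes through the same business rules instead of querying the database directly:

```graphql
# Hypothetical DAL schema: applications query orders through this
# layer instead of hitting the data store themselves.
type Order {
  id: ID!
  status: OrderStatus!
  total: Float!
}

enum OrderStatus { PENDING SHIPPED DELIVERED }

type Query {
  # The business rule lives here, once, rather than in each client app:
  # only orders visible to the given customer are returned.
  ordersForCustomer(customerId: ID!): [Order!]!
}
```

Whether the resolvers behind this schema run in a container, a VM, or bare metal is invisible to the applications consuming it, which is the point.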

http://graphql.org/


>> Containers are just a better tool for writing OS configuration scripts. (If your team is full of Chef experts then it's not "better" for your team, but for a lot of teams it is).

No, not really. You could argue that Dockerfiles are part image-provisioning script and part process-environment specification, but I think you'd still be missing the main advantage. Dependency isolation is the thing that usually gets touted, but that's only part of the picture. After all, you can isolate dependencies now by baking images. That works great; it's well proven and reliable. But the VM that runs a single boot image can potentially run dozens of different containers, and using an orchestration platform you can easily and quickly shift those loads around, scale up and down, reconfigure and redeploy, all with far less overhead than deploying an image to a VM requires. Containers didn't become a popular tool because they don't add value. The use case for them has been clear for over four years now.


> Containers didn't become a popular tool because they don't add value. The use case for them has been clear for over four years now

Well, yes, they did become popular despite not providing anything substantially new. The main value of containers is that a programmer who works on networked software doesn't (initially) need to understand how to configure the network, which is a dumb idea in itself. All the other things containers add boil down to distributing a tarball of a whole operating system so you can run it in a chroot.

From where I stand it seems that programmers didn't want to learn how to build, distribute, and configure software with OS packages, so they invented their own binary packages system.


The irony here being I had to port our production RPM (RHEL-based) build system to Docker just so it could have a reasonable API and be anything close to maintainable.

Edit: the extreme portability and "free" concurrency were just a bonus.


Containers are not OS configuration scripts. They still need a host to run on, and they still need networking managed. I'm not sure how you figure that one out.

Chef experts likely deploy your underlying hosts and set up other required services (e.g. load balancers, state).

Meanwhile, you can provide a generic "container" that will run whatever you want and expose consistent ingress/egress points, so the "Chef experts" can run it without caring what it is they're running.


Not your point, but I guess it may be a little different for the Chef experts if container pipelines are defined with Packer.io using Chef provisioners.


I agree you shouldn't always go for microservices, but you should always have separate, well-defined domains with well-defined boundaries - even in a monolithic application.

I'm a C# guy, so I'm going to speak in C# terms. I don't see any reason a monolithic solution shouldn't be made of C# projects where all of the classes are "internal", with a "public API" that is either a single class or interface.

That gives you the optionality of either extracting the project into an in-process NuGet package or creating a microservice later when it makes sense. It also makes merge conflicts less likely, and it lends itself to easier testability. In the last year and a half, I've been combining and separating projects from one application to another between microservices, NuGet packages, and even Git subtrees as it made sense.
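A minimal sketch of that layout (all names here are invented for illustration): one public interface plus a factory form the project's entire surface area, and everything else is internal, so other projects in the solution physically can't couple to the implementation.

```csharp
namespace Billing
{
    // The project's only public types: its "API".
    public interface IInvoiceService
    {
        decimal CalculateTotal(int invoiceId);
    }

    public static class BillingModule
    {
        public static IInvoiceService Create() => new InvoiceService();
    }

    // Implementation details stay internal; swapping this project out
    // for a NuGet package or a microservice later only has to preserve
    // IInvoiceService, not any of this.
    internal class InvoiceService : IInvoiceService
    {
        public decimal CalculateTotal(int invoiceId)
        {
            // ... real lookup and business rules would go here ...
            return 0m;
        }
    }
}
```

Tests can target `IInvoiceService` through the factory, which is what makes the later extraction into a service relatively painless.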


I agree. Programming languages give us so many ways to organize domains without the increased deployment complexity and rewrites that microservices bring. I wonder why microservice proponents talk as if those approaches don't exist or aren't equally valid. There used to be this term called "refactoring" which has gone out of fashion.


Too many people would use the term "refactor" when they really mean "rewrite".

As far as microservices, one benefit is that it makes blurring the lines between domains much harder. A mediocre developer can easily go into a well designed code base and make a mess of it quickly. In a microservice setup, their mess is mostly contained to one domain at a time.


> Too many people would use the term "refactor" when they really mean "rewrite".

I am a fan of "refactor from zero".

Yes, you're right. People do often mean rewrite. I suspect that using "refactor" instead comes from working in environments where rewriting is seen as akin to proposing sacrificing babies to Satan, but refactoring is a daily event.


There's still a distinction to be made. A rewrite usually means "one day we will turn off this 'legacy' system and replace the whole thing at once", but a "refactor from zero" can clearly involve running two systems at once for a while, slowly offloading functionality until you have replaced each part. The former is usually such a bad idea that it's worth clarifying that you're not doing it.


I agree completely, and I implemented a microservice-like architecture for just that reason. My team consists of junior devs and contractors. I didn't want bad design to infect the system.

But I made sure we had an easy-to-use CI/CD solution, orchestration, and service discovery.


Complex social systems are highly prone to fads. In technical areas, we call this "cargo culting".

The reason is that a deep understanding of the problem is hard and expensive. A proposed solution, particularly one that's being widely adopted in other places, has the surface appearance of a potential solution, but it is difficult to tell in advance whether or not this is true.

Hence: software, programming, technical, and management fads.

This also appears in clothing, music, diet, arts, and language (most especially dialects and/or slang).

The foundations are in information theory.

https://www.reddit.com/r/dredmorbius/comments/62uroa/clothin...


Another problem particularly bad for software developers:

Once we've solved a problem once, we 'understand' it. But we don't want to solve the same problem again. And even if we do show up for another one of these we have to somehow explain it all to people who won't believe it til they see it anyway. It's boring, thankless work.

For instance I've done things that look a lot like CMSes many, many times. I can predict pretty accurately what the bosses will be pissed about in 6 months. I'm only surprised by a production issue if it's actually dumber than I thought we could possibly be. Yeah, of course that broke. I've been warning you for months.

But if I had a nickel for every time I said "You really don't want to do it that way, do it this way", and people actually listened, I wouldn't be able to afford a cup of coffee. Only the Jr devs listen. The rest think they're special and will avoid the problems that everybody runs into.

At this point, I should probably have my head examined for showing up again. I have resolved that next time I will work on something where I can make all new (to me) mistakes and have a chance to learn. But here's the rub: that's probably exactly what 90% of my coworkers were thinking when they joined this project.


I am coming to a very, very sad realization that teams that 'need a rewrite' probably don't deserve them. The desire for a do-over is a little childish to begin with, but the fact that you can't find a route from A to B means you lack a useful combination of imagination and discipline.

From my personal experiences and those of my peers, I don't think you can trick people into discipline by rewriting the application and then letting them in after you've "fixed everything".

First, you are most assuredly deluding yourself about your own mastery of the problem space. The problems you don't see can kill you just as badly as the ones you do. Second, the bad patterns will sneak back in the first time you are distracted. Which will be almost immediately, because you just made assurances about when major pieces of functionality will be ready to use.

If the team has enough discipline already, you can start refactoring the code to look more like what you wanted. By the time your rewrite would objectively ship you'll be a long way toward it already (and maybe discover some even cooler features along the way.) Refactoring is the Ship of Theseus scenario. You get a new ship but you still call it by the old name.


There's a classic equation:

Culture > Strategy > Process

CI/Containers/Microservices come under process. Without a culture that fosters a solid engineering strategy that supports and enriches them, those processes will die on the vine.

Every. Time.


As a systems engineer, I struggle with this virtually every day. We're called 'DevOps' by most and anytime we encounter a new problem, everyone invariably screams for containers. Containers aren't a magic bullet.

My favourite example is when our AWS TAMs offer a solution, knowing we have ZERO pipeline/infrastructure setup for supporting containers. They always push containers. We don't use containers; stop forcing them down our throats. We've tried, we've been burned, VMs work for us. Stop!

When did containers become perceived as the end-all solution? I see their value and uses, but they don't meet our needs, so why have we started ignoring the right solution for the job? I see this everywhere I go.


You need containers to run on Kubernetes my dude. And running on Kubernetes is critical.


Yes I know you were being sarcastic.

But you don't need k8s or containers for orchestration.

I chose Hashicorp's Nomad (I'm the dev lead for our company) precisely because I didn't want to commit to Docker from day one, but I did want to leave that option open. Nomad works with everything - Docker containers, jar files, shell scripts, raw executables, etc. - and is dead simple to set up: one self-contained executable under 20 MB that works as a client, a server, and a member of a cluster. Configuration is dead simple if you use Consul.
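For illustration, a minimal Nomad job file for a plain executable might look roughly like this (the job name, path, and resource numbers are invented; this is a sketch of Nomad's `exec` driver, not our actual config):

```hcl
# Hypothetical Nomad job running a raw executable - no Docker required.
job "report-generator" {
  datacenters = ["dc1"]

  group "app" {
    count = 2

    task "report" {
      driver = "exec"   # could later become "docker" with few other changes

      config {
        command = "/usr/local/bin/report-generator"
      }

      resources {
        cpu    = 200   # MHz
        memory = 128   # MB
      }
    }
  }
}
```

The driver line is the whole point: the orchestration layer stays the same whether the task is a binary, a jar, or eventually a container.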


Reminds me of this: http://www.mongodb-is-web-scale.com/

"Docker, docker, docker!"

Edit: I'm not knocking either of these products - I actively use MongoDB in production. I like Docker/containers/Kubernetes and have used them for various projects. I just take issue with how people have started ignoring common sense, like: we don't have the tooling in place to support this product, or: it doesn't meet our business needs.


I think orchestration is critical. Kubernetes has obviously "won", but we're getting a lot of wins from running on ECS that we'd have to roll back and reimplement on K8S, at least until EKS becomes available.

(honestly couldn't tell if you were being sarcastic, so assumed you weren't)


If you aren't currently struggling to manage a large pool of servers / services, then you don't need Kubernetes. Not yet, at least. It's over-complicated overkill if you don't need it yet. So are containers, actually, but the bar is a lot lower for them.


Rants like this are commonplace at my "devops" team as well. Docker is being pushed down everyone's throats and next up is Kubernetes.


It could be worse-- Your AWS TAMs could be pushing Lambda functions


This is really off-base, misses the point, and is another form of the "you don't need containers" criticism which has become very tired at this point.

This is mostly a critique of microservice architectures, not containers. If that were the main point I'd have little disagreement.

> Someone in security is weeping for the unpatched CVEs...

> ...the heavyweight app containers shipping a full operating system aren't being maintained at all...

This is just wrong, it's the opposite of that. Never have I had more up-to-date operating systems, programming languages, and frameworks than when I started using containers. It's just so damn easy, especially if you use `FROM python:3` instead of `FROM python:3.6.2`. It auto-updates every time you deploy.

> There is no substitute for experimentation in your real production environment; containers are orthogonal to that...

They're not orthogonal to it, they're a really useful way to get very, very close to production. The maxim isn't untrue, but again, I sleep better than I ever have in my life because I know that these problems are now rare for me. The difference between my local, staging, and production is tiny. I haven't encountered such an issue in over a year.

All of the problems in the article are true no matter what tools you use to build and deploy. The author focuses a lot on developers' desire to go off in a corner and build their own little world. That's still a risk if you're using Ansible or Chef.

Bottom line: writing a Dockerfile is the most powerful way I've ever found to define your OS's configuration in code. Stop discouraging people from trying it just so you can make grand arguments about the types of problems every engineering team faces.


It’s not off-base, just outside of your personal experience. This shows in your first comment about updating when you deploy: think about a large and not especially functional environment where there’s stuff written by vendors, contractors, acquisitions which have been folded into various areas, etc. They might not be deployed very frequently at all, and nobody in ops is sure which can handle a rebuild, whether the upstream images for all of those containers are still getting updates or whether upgrades will break something else, etc. They track Red Hat CVEs but not Alpine, etc. The corporate security scanner looks at the base OS but doesn’t know how to introspect containers – or it’s Red Hat’s and doesn’t know how to handle anything which doesn’t use rpm, etc. Performance is similar: containers add a layer of complexity which requires a lot of tools and practice to change – arguably for the better but many places weren’t doing so well even before the problem got harder.

I still think containers are the best answer to a ton of operational needs but it’s absolutely true that better tooling is needed for a bunch of problems, and this is repeating the classic hype cycle where it’s being billed as a cure-all when in fact it’s still going to require time, staffing, and a commitment to do the job well.

As an example, start with the easiest problem: say you want to prove that you’ve installed the latest OpenSSL patch. On traditional servers, this is a well solved problem. If you’re using Docker, your options are to buy a commercial offering with a support contract or, if your purchasing process is dysfunctional, build something around Clair, which has a bunch of usable but not great tools. If you’re the ops person looking at that, you’re probably thinking this just made your life worse even if there’s the promise that in some indefinite future it could get better. I’m hoping that the OSS community starts rounding out rough edges like that because it’s definitely an enterprise adoption barrier.


How does "It auto-updates every time you deploy" fit with "The difference between my local, staging, and production is tiny"? It sounds like you have little control over your dependencies, and the differences between your local, staging, and production environments are the newer versions of your dependencies which you haven't tested against.

I like the idea of being able to precisely control both my code and all of my dependencies, so that I know for certain that I'm deploying exactly the same overall system that I tested. Containers are much better for that than the old way, because you could never be certain that your OS and system software were exactly the same in production as they were local and staging. But to achieve that precision, you need to use precise version numbers, and you need to install your dependencies from a local repository to be really sure.


I think that by "It auto-updates every time you deploy." the OP meant - every time you build an image to deploy. But that exact same image goes through testing environments. (test -> staging -> prod)

Local may test with something that's a point release further ahead, but in that case you'd find the issue when testing and pin Python to a specific 3.6.x until you resolve the problem and start rolling again.

There's also nothing preventing you from building images with precise control over the versions of your dependencies. You can do it in the image the same way you'd do it anywhere else: specify your own repos and use lock files, or whatever your language allows.


You're right, I didn't make that clear.

What you're missing is that the beginning of the "deploy" process is building the image on your local (or on CI). That's when the update happens. Then you test it on staging, and if all is well you deploy to production.

If there's a problem it's easy to change your Dockerfile from "python:3" to "python:3.6.2" in order to go back to what you had. Or stick with "python:3.6" if you only want security patches. Or, if you want to miss out on those security patches in order to guarantee more stability, go with "python:3.6.2" and decide when to test and deploy an upgrade.
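To make the trade-off concrete, those three pinning strategies look like this at the top of a Dockerfile (3.6.2 is just the example tag from this thread; only one FROM would actually be used):

```dockerfile
# Tracks the latest Python 3.x: picks up new minor versions and
# security fixes automatically on every build.
FROM python:3

# Tracks only patch releases of 3.6: security fixes, fewer surprises.
# FROM python:3.6

# Fully pinned: maximum reproducibility, but upgrades (including
# security patches) only happen when you change this line yourself.
# FROM python:3.6.2
```

The further down that list you go, the more stability you get and the more responsibility you take on for deciding when to test and ship upgrades.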



No, I am more talking about a standard process of applying security patches (and/or bugfix patches). I'm countering the claim in the article that using containers somehow makes software orgs more prone to "unmaintained" OS's.

I'm saying that has little to do with containers, and if anything containers make it a lot easier to get security updates.


> How does "It auto-updates every time you deploy" fit with "The difference between my local, staging, and production is tiny"?

It builds once, and then that very container with those dependencies gets pushed through testing, staging, and production. No more changes happen after the build.


Ok, that definitely works, assuming you set your process up that way. In my mind, "deploy" starts with build and targets a single environment. Eg: deploy to staging, deploy to production. I've been working with too many clients these past few years who do it that way. One has a real TeamCity CI setup, but they rebuild for production too.

Way back in the day, I helped push my company towards a process where build artifacts were tied to specific commit ids (SVN, back then) so that everything that reached production could be traced back through QA and Development. So, basically the process you described. No containers back then, of course, and no VMs either. We had real servers in our server farm.


I thought the very basis of continuous integration/delivery was the same build moving through various stages of readiness and environments until it’s delivered to production, automatically and continuously. In short, if it’s called continuous delivery, that’s about the only way the process could be setup with containers and still be called CD. Or am I mistaken?


I happened to get thrust into a large org's ops management position 4 years ago and got to be on the bleeding edge of usable Docker and Kubernetes. I'm right with you on this.

I was lucky enough to get to slog through this stuff hard and have been building containerized development and operations systems as a huge amount of my work ever since. I get the author's position and I think it's likely the case for most people if they are asked to "Just use containers." It takes planning, knowledge, and a pretty multiclassed skill set to put together great container operations but even from day one when I realized I could isolate node (way before nvm) it was a godsend.

I barely run anything on my base system anymore. Everything I put together is now a cascading series of helm charts that easily deconstruct into bare metal deploy. The developers on my team are able to move fast with it because I stay ahead of it and make sure the tools are usable and documented before they are even thinking about them. I can take really obtuse customer integrations and quickly come up with solutions that don't create friction because of how fast I can break them and "infra as code" their way into our stack so no one has to deal with the fact that the API is garbage. I deploy things with health and liveness checks, I get reporting across the board of usage. Anything I want to flight to the world or internally is authenticated through our LDAP/Directory/GHA and I don't need a server troll to administer it.

I fully understand people not wanting to use them and just stick to what they know, but containers are amazing and I use them at micro to macro scale. Like you, my code has never been so up to date.

It's fun to write a glib article about how you don't like things that are happening. Great if you don't wanna learn them soup to nuts, but to dismiss their value so absolutely really misses a ton of opportunity. Even if it takes many, many pain-in-the-ass weekends to get fluid with it.


> It's just so damn easy, especially if you use `FROM python:3` instead of `FROM python:3.6.2`. It auto-updates every time you deploy.

There's an issue with that. You're trusting whoever builds the python:3 image to actually update it and be secure.

There are a couple of high CVEs in the python:3 image, including a 10:

https://security-tracker.debian.org/tracker/CVE-2017-17458

https://security-tracker.debian.org/tracker/CVE-2017-17499

Then there are a bunch of other medium and low CVEs, mostly from ImageMagick, which is kind of a shame to include if you really don't need it. Same goes for that 10 for Mercurial if it's useless to your project too.

You are best off receiving a base image from a trusted source, e.g. a set your organization maintains, or a distribution you trust that provides just the OS. Grab the most minimal set, then add your application on top of that. Make sure you go through a check to ensure you're not adding any vulnerabilities yourself.
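As a sketch, that approach might look like this (the registry, image name, and paths are made up; the point is a minimal trusted base plus only your application, with none of the extras like imagemagick or mercurial):

```dockerfile
# Hypothetical org-maintained minimal base instead of a fat upstream image.
FROM registry.example.com/base/debian-slim:latest

# Add only the application itself - every extra package is more CVE surface.
COPY app /opt/app/
CMD ["/opt/app/run"]
```

Scanning then only has to cover the small base your org already tracks plus whatever you added yourself.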


I'm the lead dev for our company, and for phase 1 of our implementation I chose Nomad for orchestration, a bunch of "micro-apps" (single-purpose executables), and Consul for configuration and service discovery. I chose that combination for ease of use and flexibility - Nomad works with everything from raw executables to containers.

Of course, the consultants came along later and scoffed at the simplicity of our process - our deployment is basically one step in our continuous delivery pipeline: copy the bin/release directory to the destination folder.

I tried to get them to articulate a business case for us to use containers. They couldn't come up with one.

Then my manager, someone I really respect for his technical acumen, finally gave me one: if we go to containers, we don't have to provision servers on AWS. We can use AWS Fargate. Lambda isn't an option; we have long-running processes that make more sense as apps.

I wanted to do Docker anyway eventually, just to add a bullet point on my resume, and as the dev lead I could have, but it felt unethical to make a choice that wasn't best for the business. Now I think it's the right way to go.


"Then my manager, someone I really respect for his technical acumen, finally gave me one: if we go to containers, we don't have to provision servers on AWS. We can use AWS Fargate. Lambda isn't an option; we have long-running processes that make more sense as apps."

Yeah, Fargate may be an inflection point: a lot of discussion of Docker ignores the fact that you need orchestration to make it work in prod, and orchestration is overhead.


Sounds like your consultants were not sold on docker/containers as you already supported them. They wanted to lock you into a managed service so that you keep using them.

If they really cared about containers they would have helped you continue using Nomad/Consul, as that's a fine combination.


I'm not opposed to a managed service, and yes, Consul+Nomad is great. But I like the idea of not having to manage VMs. We will probably move to a hybrid approach.


Isn't debugging a bunch of small executables annoying?

How do you debug/trace the flow of execution between them easily?

You can't even get a good stack trace inter-executable, can you?


Our architecture is hub and spoke. Each executable maps data from an external source to a common domain model stored in Mongo. We use a central API as the hub, which guarantees a consistent write model and validates the model based on MS's data annotation attributes. We create a correlation ID that tracks data between input and output (another executable). We use Serilog for structured logging. The hub also publishes an event when the model changes.


Containers are not there to solve a problem. They are there to produce buzz, then busy engineers, then bills, all packaged in a way that overloaded (and sometimes quite unskilled) managers can show something to their bosses.

And in that regard Containers are successful as hell. That's why we have a religion around them now. You can hate it but you can't really ignore it if you need to pay rent and work in the industry.


I downvoted because the first claim is so badly wrong. Containers solve several problems which many places struggle with, the most important of which is having a common baseline for every project. Having a clear ops/dev responsibility boundary, tools, auditing, documentation, etc. which is identical for every team, every language, and every environment is huge in any organization over the smallest startups.

That’s a shame because the rest of your comment is right: that space is on the red hot point of the hype curve and there’s a ton of money being spent so someone can brag that they’re doing the same thing as the cool kids at Google or Netflix when in reality that’s like saying you’re an Olympic marathoner because you bought the same shoes. It’s profitable now but I wonder whether we’re due for some backlash.


Containers may solve some problems, but they're hardly the only solution and the others seem (note: personal bias) simpler to me.

> common baseline

"Here is the standard image with standard packages; if you need something else, add it to Chef/Ansible/whatever-we-use"

> responsibility boundary

"Ah, this binary is in /opt/$COMPANY - go ask the devs why it's broken"

> which is identical

So use configuration management (Chef, Ansible, whatever) for the system and tarballs/packages for your stuff, rather than shipping whole system images per-project?


First, this is engineering so there's no right answer for everyone. If deploying things using the configuration management tool of your choice works for your project and staffing, it's the right choice.

Here are some complications for the traditional approach you mentioned:

1. Containers provide a standard interface for managing things in any language: that means you don't have to know that this Java team used Tomcat, someone else used Jetty, the Python apps use mod_wsgi or gunicorn depending on when they were written, etc. Yes, Chef/Ansible/etc. can coordinate that too but you have to maintain conventions for editing shared configuration, permissions, storage, firewall ports, dealing with religious arguments about systemd, etc. That's especially true if you're using vendor or open-source apps where it's really nice not to have to spend time repackaging something where someone at Oracle, Atlassian, etc. use the LSB docs as rolling paper rather than reading material.

Again, having done it for years I'm not saying this cannot be done but it's refreshing not to ever have a talk about user IDs, using a sane config .d layout on RHEL, etc. again. That brings me to:

2. Coordinating changes or updates: again, yes, it's always possible any way you choose to do it but it's really nice not to have to maintain changes in multiple branches for teams at various stages of upgrading, deal with special cases, etc. Shoveling everything into a container means that the only team which deals with those questions is the one best equipped to answer them. That's especially nice when the problem is something like upgrading a common distribution package and it'd take a non-trivial amount of time versus no time to answer the question of whether your backport will break something else used somewhere in your company.

3. This is similar: “Ah, this binary is in /opt/$COMPANY - go ask the devs why it's broken”. In simple cases, yes, that's easy but once it gets more complicated — “this gets slow every Tuesday night”, “we're getting sporadic disk full errors”, etc. — it's really nice to have an inside/outside division which is consistent across every system and every project. Following the LSB rules gets you a lot but that's not universally followed so you're going to have to spend time on exceptions, arguing with vendors, or getting upstream open source projects patched. Again, that's all valid work but some times you don't have the time to spare for.


> not to ever have a talk about user IDs, using a sane config .d layout on RHEL, etc. again

until you have to maintain/debug/etc. N^N combinations/branches of common code used within the different self-produced containers to accommodate all of this 'freedom'...


Could you explain the scenario you have in mind? If you're having trouble maintaining convention across teams producing different shared components, that seems likely to cause problems no matter how you end up deploying things.

(This is similar to the argument for containers with developers who aren't great sysadmins: they're usually making changes anyway and this way they don't have root on the host, etc.)


I upvoted because you are explaining why you downvoted. My argument is that they don't solve a new problem. Where there are already solutions it's fine (e.g. multiprocessing), but where there aren't solutions yet (e.g. low-bandwidth cluster storage, or simply networking) it doesn't add any value. And instead of solving any of the hard problems, it adds a layer of abstraction on top of them that makes it harder for the everyday engineer to solve them.

Pretty similar to the article's point of view, but not that it hides cultural problems under a tech layer, but technical problems.

And yes, for the pure developer it might be better, because he can now focus on his software alone. But someone needs to maintain the infrastructure the software runs on, and that job just got harder.

(this is a copy of a comment, but it was meant to be in this path of the comment tree not the other one)


You're getting downvoted because of the impression of a sort of caustic nature of the comment, I suspect, but I agree with your real point.

I put containers and "orchestration" like Kubernetes right up there with "Big Data", Kafka, and a bunch of other technology that is the current fad. All of these have a legitimate use case. Odds are the use case of anyone reading this comment isn't one of them. But because of the terribly broken interview process and bandwagon effects, engineers feel compelled to force them into the development process in order to bypass filters (human and automated) on their resumes and keep a sort of social cachet among their peers.


It's really funny how you describe my job perfectly. It started as Hadoop bandwagon, and now it's Kafka/Big Data/Kubernetes.

The basic idea of Kubernetes is really great. But its development isn't really finished yet, and it's already bloated as hell. Add that most of the stuff also runs on OpenStack, which has the exact same problem, and developers who still don't understand that this is not a silo-build system, and you end up in an environment where 100+ people work only on getting your stuff installed, with the same control you would have with a single developer, ssh, and init scripts.

PS: my comment isn't even in the negative anymore. It often happens that the first 1-2 views give negative votes and then it gets upvoted quite far. Not sure why. From my perspective I write as objectively as possible. But I don't really want to think about it too much. For some people my thoughts seem valuable enough, so it's fine.


> "I put containers and "orchestration" like kubernetes right up there with "Big Data", Kafka and a bunch of other technology that is the current fad."

I'd put "cloud computing" in that list. The only explanations I can think of as to why it's taken off so quickly are:

1. Not all companies are large enough to hire their own sysadmin(s).

2. Mid to large sized companies can have byzantine budget bureaucracies, making it easier to invest in short term fixes rather than considering long term savings.

However, there are also large tech-savvy companies using cloud computing, and paying through the nose for the privilege; that's the part I really can't understand.


I downvoted it, because it's an opinion about things some people do and has nothing to do with the technology. You can use it well, you can use it badly. If you claim "They are there to produce buzz, then busy engineers, then bills", then you haven't seen a problem they solve. That's fine. Just don't try to tell people there is no such problem.


I think it has everything to do with the technology. Specifically the technology solves problems most (almost all) organizations don't have, yet their adoption is practically ubiquitous as is their requirement to appear on a resume to get past gatekeepers. If in the end the technology is used more as a means to "produce buzz, then busy engineers, then bills" than to solve problems for their legitimate use cases it hardly is an error to point it out.


Are you saying almost all organisation don't need to solve the problem of consistent deployment artifacts and easily reproducible testing/dev environment? Because these are some of the problems containers can solve. (not the only problems, and they're not the only solution - but they can be a solution to those specific cases)




Containers solve one problem: if your team is bad at packaging and you have dependencies you aren't aware of.


They really did solve a lot of problems for us, but I don't think we over-engineered our solution, which I think is an easy rabbit-hole to dive into.


i am sure there are folks who solved a lot of their problems by writing monolith apps. they just don't walk around preaching their religion.


Yeah, I would agree there's quite a disconnect between "rubber hits the road" and the "Gospel of Docker".

This was magnified when I attended Dockercon in Austin last year. As a very small team that has used Docker (and ECS) to solve some problems, I was excited to learn more. It became quickly apparent that I was at the wrong conference for that: it felt like an enterprise vendor party where everyone was passing around cups of kool-aid.


Yeah, the kool-aid stuff is really annoying. Btw, have you checked out the details of systemd slices? I somehow think they're nearly the same as Docker, just without the hype. Just practical technology at the level it's supposed to be.
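For anyone who hasn't seen them: a slice is just a named cgroup with resource limits that services can be assigned to. A minimal sketch (the unit name and limits here are hypothetical):

```ini
# /etc/systemd/system/webapps.slice (hypothetical)
[Unit]
Description=Shared resource limits for our web applications

[Slice]
MemoryMax=2G
CPUQuota=150%
```

A service then opts in with `Slice=webapps.slice` in its `[Service]` section, which gets you resource isolation without any image-distribution machinery.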


Well Linux is quite a monolith despite day-1 criticism about that fact. They call their style "pragmatism" and in some regards the market seems to validate it.


Given developer x and developer y, each with adequate technical skills but poor teamwork abilities, there's some business value in being able to employ both given that the cost of dealing with their poor teamwork doesn't exceed the business value they can each deliver.

In the past we might have had to decide to forego employing one or both. Now we kind of have the option of giving each dev their own "playground" via containers, and not actually expecting either to improve their teamwork skills. Again, as long as the cost of supporting the container infrastructure is lower than the business value each dev can deliver, it's a net win.

In practice I'm not sure if containers can really deliver on this promise, but it's a very seductive idea.


The same is true for VMs, BSD jails, or even chroot().

I think the issues being pointed out here are more about the whole 'Linux container ecosystem' (statically built images, automated system orchestration, etc.), primarily in an operations context.


If an enterprise adopts a technology that allows them to go faster, but doesn't change any of their processes to make them go faster, they have effectively thrown money down the drain. I'm sick of seeing tech companies eagerly switch out one tech for another without at least _trying_ to address the people and process problems.

Stop trying to reduce costs. Work out your cost of delay, and focus on getting things done more quickly and more effectively, not more cheaply and more efficiently.


I'm not sure what the point of this piece is. It could have stopped at "Complex socio-technical systems are hard; film at 11" and been just as informative.


Technology can't fix socio-economic problems, but it can shift the landscape of what is possible.

Old-school software engineering is very much about reports, forms, documents. Each tries to gather to itself every instance of some kind of information. Here is the Customer Requirements Document. Here is the Software Requirements Document. Here is the Software Design Document. Here are the Software Test Plans.

These days most places work out of an issue management system. JIRA tickets, Github issues, stories in Pivotal Tracker, whatever. The work is broken into small chunks with their own lifetime. We don't wait while all the requirements pile up before opening the dam and letting them flow downstream in a batch. Each goes when it's ready to go.

This is not possible without the right tools.

That's a lie. It's totally possible. With 1980s word processing and spreadsheets you could absolutely do everything Tracker or JIRA do. You could use stacks of 3x5 cards to track thousands of items across dozens of teams.

But you probably don't.

The tooling lowers the threshold of the possible, in a social and economic sense.

No, containers aren't miraculous. Of themselves, they do nothing to fix other problems. But they make it possible to achieve improvements that are more expensive and difficult in other ways. They lower the barrier of possibility. The landscape of alternatives shifts, mountains become hills.

I've been on both sides of that divide now. As a consulting engineer I saw projects rapidly iterating but not able to deploy ("ops are too busy right now"), leading to dozens of handsomely-billed hours being squandered in meetings, workarounds, emails, chats, and phone calls trying to get the code into any kind of production. I've also seen projects where deployment took an hour or two to set up and that was that. People got on with the job. And a major difference was the platform.

One more thing.

> Development teams love the idea of shipping their dependencies bundled with their apps, imagining limitless portability. Someone in security is weeping for the unpatched CVEs, but feature velocity is so desirable that security's pleas go unheard. Platform operators are happy (well, less surly) knowing they can upgrade the underlying infrastructure without affecting the dependencies for any applications, until they realize the heavyweight app containers shipping a full operating system aren't being maintained at all.

This is a problem buildpacks have solved for well over a decade on multiple independent PaaSes.

Disclosure: I work for Pivotal, we do some stuff with containers, but we sorta focus on the parts on top of and before them.


Well, as somebody already pointed out, docker/containers are useless. Combine them with a useful system like Kubernetes, and they can be really useful.


We used them in a web agency that has lots of different projects running on different stacks, and we have to go back to old projects from time to time. Having your dependencies, versions, setup, and even build tools all referenced in a compose file made it really easy and quick to switch projects or get a new developer set up. Simply git clone, docker-compose up, and voilà. No more messing around with installing different and conflicting dependencies on the same machine, and much faster, easier, and more stable than the vagrant/ansible combo we used before that.
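A minimal sketch of what such a compose file might look like for one of those legacy projects (the stack and versions here are invented for illustration):

```yaml
# docker-compose.yml (hypothetical legacy project, pinned to the stack it shipped with)
version: "2"
services:
  web:
    image: php:5.6-apache
    ports:
      - "8080:80"
    volumes:
      - .:/var/www/html
    depends_on:
      - db
  db:
    image: mysql:5.5
    environment:
      MYSQL_ROOT_PASSWORD: devonly
```

With that checked into the repo, `git clone` plus `docker-compose up` really is the whole onboarding story.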


i don't know about "useless".

scientific computing, for instance -- you may have a weird set of dependencies for some code that you want to deploy on 200 slightly heterogeneous nodes, once, and hopefully never again; but it's vitally important that people in the future have the possibility of replicating your computation.

containers are the perfect solution for this. in fact, in my experience they essentially do fix a broken culture around reproducibility of computational experiments.
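In practice that means pinning the base image and every package version so the exact environment can be rebuilt years later. A sketch (all version numbers below are illustrative, not real pins):

```dockerfile
# Hypothetical experiment image; everything is pinned for replicability
FROM debian:9.3
RUN apt-get update && apt-get install -y --no-install-recommends \
        python3=3.5.3-1 \
        python3-pip \
        libopenblas-base=0.2.19-3
COPY requirements.txt /opt/experiment/
RUN pip3 install -r /opt/experiment/requirements.txt   # requirements.txt lists exact versions
COPY analysis/ /opt/experiment/
CMD ["python3", "/opt/experiment/run.py"]
```

Archive the built image alongside the paper and the "works on my cluster" problem mostly disappears.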


100% wrong. Containers normalize deployment, making the underlying tech stack irrelevant. This is their first and still primary use, and they work really well for it. There is nothing wrong with one container per system, just like you used to do with standard deployment.


> docker/containers are useless. combine them with a useful system like kubernetes, they can be really useful.

Not exactly. For example, running your core infrastructure via Docker containers makes "ordinary" maintenance/disaster recovery really easy.

We run e.g. JIRA and Confluence in Docker... and it's a breeze to make a consistent backup and restore: stop the application and DB container, rsync delta to Netapp, restart the containers and the Netapp automatically does a snapshot. Daily, fully consistent backups with only 5 minutes of downtime - and because a restoration is as easy as "rsync the data out of the Netapp/LTO and do a (documented) docker run command with appropriate mounts" it's easy to verify that the backup actually works.

Prior to using Docker, testing the backup involved setting up a server, copying data around and manually importing it. Needless to say it was not done very often. But now we can apply this policy to all our services, plus we can test version upgrades with minimal effort compared to setting up the software from scratch.

Oh, and upgrading the software versions itself is also easy: rsync delta to backup, stop container, quick rsync again to ensure consistency (e.g. lock files, not flushed DB writes), stop & remove old docker container, run new docker container w/ new version, check if upgrade went OK - which it always did so far, but in case it did not, it's a matter of minutes to do a full rollback.
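As a rough sketch, that upgrade flow might look like this as a script (paths, container names, mount points, and the image tag are all hypothetical; adapt to your own setup):

```shell
# Hypothetical upgrade runbook for the flow described above.
rsync -a /srv/jira/ /mnt/netapp/jira-backup/      # bulk delta while the app is still up
docker stop jira jira-db
rsync -a /srv/jira/ /mnt/netapp/jira-backup/      # quick final pass for consistency
docker rm jira
docker run -d --name jira \
    -v /srv/jira:/var/atlassian/application-data/jira \
    -p 8080:8080 atlassian/jira-software:7.8.0    # new version
# Rollback: stop the new container, rsync the backup back into /srv/jira,
# and docker run the previous image tag.
```

The whole thing is a handful of commands, which is exactly why testing it regularly is cheap.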

The downside, of course, is that we depend on the container author(s) to provide regular new builds which incorporate not only new app versions but also updates for the packages of the "OS" inside the container. For example, if the threat vector is something like "PHP/Java does an RCE when passed a certain HTTP header", on an "old school" system I'd do an apt-get update && apt-get upgrade and that's it - with Docker I either depend on the container vendor or have to roll my own Docker images with e.g. "FROM <vendorimage>; RUN apt-get update && apt-get dist-upgrade && apt-get clean"...
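That roll-your-own rebuild really is only a few lines; a sketch (the vendor image name is a placeholder):

```dockerfile
# Hypothetical wrapper image: vendor app as-is, OS packages refreshed
FROM vendor/appimage:1.2.3
RUN apt-get update && \
    apt-get -y dist-upgrade && \
    apt-get clean
```

Rebuild and redeploy this on a schedule and the "unpatched base image" problem mostly reduces to an ordinary patch cadence.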



