While it's true that some of our customers have an army of consultants, the vast majority of our customers don't use consulting at all, or use it only very infrequently. If you don't want a lot of customization, have the right number of people, the right people, and realistic expectations, you don't need consulting at all.
When people want a highly customized experience (often for the wrong reasons), or want to be in production after 2 months when their people have no experience in kubernetes and didn't do adequate testing (load, fault tolerance, etc.), it will cause a lot of problems. But that's the case for everything, not only for kubernetes.
In my opinion (mine, not Red Hat's), getting someone to deploy it with you the first time, showing how it's done and why things are done that way, and then running a workshop explaining the basic concepts has great value, saves a lot of time and isn't expensive.
Neither Helm nor Istio is part of the core.
Kubernetes is more than what's in the Kubernetes core GitHub repo; it's also the collection of tools that go along with Kubernetes that "average users" expect to just be there. For example, an ingress controller, Helm, and likely soon cert-manager and external-dns too.
The split into multiple projects actually makes the landscape harder, whereby people and distributions have to make choices about the set of expected versus the set of provided features.
A great example of this I saw was an internal thread here at SUSE (I work on their Kubernetes product) where ingress controllers were being compared. Despite the Ingress data model actually being in the core, features like the nginx-ingress-controller's ability to handle arbitrary TCP and UDP ports on the same external IP were being seen as a reason to choose that controller over some of the others. I can't argue the feature is useless, because it's not - it's very useful - but it's a great example of how things can very easily become conflated, even for people who should know better! A feature being part of the core just means it's part of a minimal Kubernetes deployment, while a complete Kubernetes deployment includes much more than core.
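To make that concrete: the feature in question isn't expressed through the core Ingress resource at all; it's configured via a controller-specific ConfigMap. A rough sketch (namespace, port and service names are hypothetical):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services        # referenced by the controller's --tcp-services-configmap flag
  namespace: ingress-nginx
data:
  # external port 5432 -> Service "postgres" on port 5432 in namespace "default"
  "5432": "default/postgres:5432"
```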
Are you asserting that any modern production environment that takes less than 2 months to set up is bound to cause problems? Or the whole company/startup/project?
If the former, I'd say there are many tech startups who would disagree, or at the very least, point out that it doesn't matter because they'd cease to exist during those 2 months without an MVP in production.
The time intensive phase during the first Kubernetes deployment is changing the mindset of the engineering team and everyone involved in IT.
Kubernetes IMHO is pretty much like moving from college into work life and adapting to the fact that requirements for a decent adult life are very much different from a college life.
One might reasonably expect "everything" to include traditional deployment methods that require no changing of mindsets (not that that's likely a factor in my nascent startup example).
Think being a startup where deployments most likely will be manual and undocumented, upgrading to an initial automation using some scripting, transitioning to CI/CD, etc.
So from various experiences at companies of all sizes I would actually support such a broad claim, since companies are basically transitioning all the time through new phases to greener pastures. That of course assumes that the company values constant improvements and uses an iterative process for growth and change.
If you have an existing production environment which works well after years of improvement and stabilization, with people who have a lot of expertise in it, you won't get that in 2 months: not the stability and quality, nor the ability to troubleshoot it.
Startups start from scratch, don't have big workloads, and there isn't so much pressure to have a high quality service from day one because you don't have that many people consuming your services.
It's certainly in line with technical new hire productivity "ramp up" time estimates (typically 3 months?), which I would guess are a similar (if individual) case of change-of-mindset.
All of which roll into a single metrics server (Prometheus) and dashboard (Grafana).
But application-level metrics, for example on the JVM, behave exactly the same whether it's in a container or not. You can still pull JMX metrics.
Setting up all of the special networking tricks to get to the bloody JVM in a container can be annoying. Especially because I guarantee those weren't set up until they were needed.
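For what it's worth, the usual trick (sketched below with a hypothetical image name and arbitrary port) is to pin the JMX and RMI ports to the same known value so a single port can be published from the container:

```bash
# Sketch: run a containerized JVM with remote JMX reachable from outside.
# The hostname must be whatever address the JMX client will actually dial.
docker run -p 9010:9010 my-java-image \
  java \
    -Dcom.sun.management.jmxremote \
    -Dcom.sun.management.jmxremote.port=9010 \
    -Dcom.sun.management.jmxremote.rmi.port=9010 \
    -Dcom.sun.management.jmxremote.local.only=false \
    -Dcom.sun.management.jmxremote.authenticate=false \
    -Dcom.sun.management.jmxremote.ssl=false \
    -Djava.rmi.server.hostname=127.0.0.1 \
    -jar /opt/app/app.jar
```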
But yeah, most of my complaint is with folks that dropped the old way of making applications because they wanted to use the new tooling. Completely missing all of the "hidden" use cases the existing applications were made with.
The instructions are accurate, it's been easy to learn, and does not require an army of consultants. We will be moving 1000's of apps to multiple data centers globally with a team of around 6 people. It took 100s of employees years to build and maintain infrastructure for those apps.
Note - Consultants are not needed at all, they simply help speed up the process with expert knowledge.
The actual cluster deployment is not a mess at all. Even if done with a mere Bash script, it can be done in something like 20 lines of code.
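As a rough illustration (not anyone's actual script), a kubeadm-based bootstrap really can fit in roughly that much Bash, assuming kubeadm, kubelet and a container runtime are already installed on every node:

```bash
#!/usr/bin/env bash
set -euo pipefail

# On the control-plane node
kubeadm init --pod-network-cidr=10.244.0.0/16   # CIDR must match the CNI plugin you pick

# Make kubectl usable for the current user
mkdir -p "$HOME/.kube"
sudo cp /etc/kubernetes/admin.conf "$HOME/.kube/config"
sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"

# Install a CNI plugin (manifest depends on the plugin chosen)
kubectl apply -f <cni-manifest.yaml>

# On each worker node, paste the join command printed by `kubeadm init`:
# kubeadm join <control-plane-ip>:6443 --token <token> \
#   --discovery-token-ca-cert-hash sha256:<hash>
```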
I can only recommend actually trying Kubernetes, because what you wrote reads more like you are still in the state of "wtf is this and why should I touch this crap?" but have not yet tried it.
I am not disagreeing that it's complex and full of buzzwords, but every day the situation is getting better and better.
This is effectively what is being sold now but using Kubernetes and Containers instead of Application Servers and J2EE
K8s lets you build “containers” out of whatever existing software happens to be laying around, rather than having to write your services from the ground up for being run in a container.
Now try troubleshooting it.
And again the monitoring and auditing on Kubernetes is significantly better than trying to roll your own. There are detailed logs and metrics up and down the stack.
Clearly, you haven't run much at scale or experienced many problems.
Troubleshoot a network issue through an ingress, kube-proxy and then down to the pod.
Troubleshoot an issue with kube-api or etcd when you have no access to either the master or the etcd cluster.
Troubleshoot a load issue on a node related to IO.
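Even the first-pass commands for scenarios like these assume working credentials and a lot of context; a sketch of starting points only, with hypothetical resource names:

```bash
# Ingress -> kube-proxy -> pod path: check backends, endpoints and proxy logs
kubectl describe ingress my-app
kubectl get endpoints my-app
kubectl -n kube-system logs -l k8s-app=kube-proxy   # label as used in kubeadm clusters

# Control plane health, if the API server answers at all
kubectl get --raw='/healthz?verbose'
kubectl -n kube-system get pods                     # apiserver/etcd pods, when self-hosted

# Node IO pressure: needs metrics-server, then drop to the node itself
kubectl top node my-node
# iostat -x 1   (run on the node over SSH)
```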
Take the standard cloud application stack: Every app needs an artifact (disk image), a running service, an instance template, a group of VMs running using those templates, networking and a load balancer.
Before kubernetes, you had to automate by writing terraform scripts which mutate physical infrastructure as you apply.
With kubernetes, you POST a bunch of well-defined resources (container spec for the service, pod spec for the instance, pod as a VM, deployment as the group of VMs, services and ingress for traffic). Every physical cloud resource has a clear API mapping. Basically you save in kubernetes what you need as a first-class API resource. And then, kubernetes responds to what you saved by changing infrastructure to match.
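A minimal sketch of that mapping, with hypothetical names, might look like this:

```yaml
apiVersion: apps/v1
kind: Deployment            # the "group of VMs"
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels: { app: my-app }
  template:                  # the "instance template"
    metadata:
      labels: { app: my-app }
    spec:
      containers:
      - name: my-app         # the artifact: a container image
        image: registry.example.com/my-app:1.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service               # stable addressing / load balancing inside the cluster
metadata:
  name: my-app
spec:
  selector: { app: my-app }
  ports:
  - port: 80
    targetPort: 8080
```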
You can do the same for practically any kind of infrastructure resource. Want an S3 bucket? Duh, POST an S3 bucket resource and you can write a controller to react and fulfil that bucket.
Kubernetes comes with a built-in set of functionality that fulfils a certain core set of infrastructure using containers. But you are not restricted to that. In theory, you could POST a VM as a resource, or an Instance Group as a resource, and you can write a kubernetes controller to fulfil those resources.
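For instance (purely hypothetical names), teaching the API server about a bucket-like resource is just another POST; the controller that fulfils it is yours to write:

```yaml
# Sketch: a CustomResourceDefinition for a "Bucket" resource.
# A custom controller (not shown) would watch these objects and
# create the actual storage behind them.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: buckets.storage.example.com
spec:
  group: storage.example.com
  scope: Namespaced
  names:
    plural: buckets
    singular: bucket
    kind: Bucket
  versions:
  - name: v1alpha1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              region:
                type: string
```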
Why? Because APIs are more powerful than tools. APIs allow a different axis of infrastructure evolution, even if you distribute the control to everyone. By modeling these as APIs, you can bake in a huge amount of infrastructure intelligence into the API. Want to enforce different, code-driven resource-based policies, linting, sane defaults, organizational context? Yeah, make the controller do just that. If you just provide a tool to every engineering team, you basically lose any form of cross-cutting orchestration, and you lose the ability to evolve infrastructure in a separate axis independent of the tool that each team uses.
Basically, kubernetes is "Infrastructure as an API". Thinking about it as a "new application server" kinda hides the whole point of it.
After: Write YAML then run `kubectl apply`
We could probably bootstrap our entire product from scratch in a couple of hours if it disappeared off the face of earth.
“Infrastructure as a config” is a very powerful concept. I’d say that’s what the big power of k8s is.
There will be specialisation once again, because it makes absolutely no sense to make people whose main job it is to think about use cases, business logic and user interfaces to also deal with stuff like this.
In this way, devs will be doing ops, but not as ops for any given prod environment, rather encoding operational tasks in reusable software.
Ops themselves will bifurcate, one towards configuring prebuilt controllers, others towards troubleshooting problems in production. The middle ground will hollow out.
But more broadly, Kubernetes in the hands of most developers that are doing non-platform level work is a step backwards and will be disastrous. It's far too low level, it's like, hey this Python/Ruby/Java thing was great, but have you seen this new bytecode linker and assembler?
New platforms and runtimes built ON TOP OF Kubernetes are probably going to be fine. But that's going to take a while.
Everyone who comments about DevOps should have read "The Phoenix Project" first, so that they understand what you just said. It's not about consolidating workloads onto generalists, it's supposed to be about breaking down the silos in an organization, nothing more!
You are right. What I should have said is that using DevOps as an excuse to make application developers do complex infrastructure design, automation and maintenance as a side job is an unsustainable practice.
The idea that this will kill "devops" is ludicrous, because devops doesn't actually replace job titles. We have a "Devops engineer". He's an ops engineer, with "dev" in front of his title because he codes in perl and python (...like an op). Visual Studio is a "DevOps Environment". I don't know what this means. It likely means that it can provide end-to-end integration in your development environment, which was previously an IDE. The idea behind devops being a raised layer of abstraction is fine, but we don't really need a new word for it, and as history has shown, raised abstractions do not lower. We're going to end up with "devops" until we find a new buzzword.
There are, however, a number of people in the field that are all ops and have no development abilities. There is less and less of a need for those kinds of people.
This has been hashed about at length, but the inverse is also true. Developers must learn to operate their software in production.
I very strongly disagree, unless what you mean by "no development abilities" is merely an inability (or refusal) to write or modify (maybe debug) code. There is, I believe, a very important distinction here, one that is routinely lost and that has all but totally devalued Ops skills in the minds of many tech hiring managers who have exclusively programming backgrounds.
I was on a phone interview today, in fact, lamenting, along with my interviewer, that there exist such sysadmins who don't even write scripts or automation. He suggested that they've started to call themselves "IT", and, IMO, that may well be a more apt term even than "Ops". My own experience is that, back in the dark ages, a traditional sysadmin not only had to be proficient at scripting but also at some amount of C, just to port open source tools between various proprietary Unix versions, especially with new releases.
That does not, however, bestow upon me, anything that I could call, with a straight face, "development abilities". I hold in my head none of the best practices having to do with programming that have accumulated over the 3 decades of my career, for example. Instead, I hold the Ops best practices. Being able to implement FizzBuzz in bash is neither necessary nor sufficient for software development.
> Developers must learn to operate their software in production.
I disagree here, as well, on behalf of developers.
I think you'll find having to learn what is, essentially, a completely separate, new, engineering discipline, in addition to software development, severely cuts into their productivity.
Unless I bring production to the developers, almost like bringing the upstairs downstairs in the cartoon pushbutton house. I do actually feel it's a responsibility of Ops as part of Devops culture to create as frictionless (for their needs) a simulation of production as possible for developers. The practice has all sorts of benefits, including eliminating "works on my machine" debugging issues, that has been one of the rallying cries for containers.
Not everyone does the same job you do, even if they might share the same job title. I'm proficient in bash and python, have been learning rust, and know enough c to get around, but there have been many years in my career where my actual sysadmin duties required little more than being able to do 'while true; do x; done'
That's very much not been the case over the past half decade, but there were times in the 2000s where nothing was necessary beyond some basic bash scripts. No writing, modifying, or debugging of anything I would actually call code was needed.
I've also done hundreds of interviews for sysadmin positions. I'd say the minority of them were proficient beyond the very basics of bash. Most of them seemed to be doing well enough in their existing roles to not have been fired.
That's true about any title across the entire computer industry (and maybe all technology), but it doesn't say anything about the categories those jobs fall into and skills/experience associated with those categories.
> my actual sysadmin duties required little more than being able to do 'while true; do x; done'
If you mean that was the level of the complexity of the scripting/automation required, there's nothing wrong with that. Consider, however, trying to do even that, without any scripting at all.
Presumably something like 'do x', up-arrow, return, up-arrow, return, ad infinitum. Apparently, that's the kind of person that's moved into "IT" that was being described to me, though this was pure hearsay.
> That's very much not been the case over the past half decade, but there were times in the 2000s where nothing was necessary beyond some basic bash scripts. No writing, modifying, or debugging of anything I would actually call code was needed.
That would be comfortably (up to a decade) past the dark ages to which I referred. Please don't misinterpret my mentioning having had to work with C as being in contrast to basic bash scripts.
My point was actually quite the opposite, that it's all code, but that it's a form of working with code that is (at least subjectively) different from software development.
As someone doing the hiring for such things, people looking to get hired seem to have a different view of things. I don't need someone with an advanced degree in CS and mastery of algorithms to produce API endpoints and orchestration bits.
As for having software developers handing off their software for someone else to operate, well, that's an anti-pattern that inhibits throughput and software quality. There's quite a bit of data around this topic. I would encourage you to read this book:
I'll grant you that it's difficult to get there in a traditional IT org, but once you do get there, the results are glorious.
No dissonance from my end, and I'd say that other than the very narrow overlap of being able to "write code", that's quite distinct from what, for example, my Ops skills are.
To be fair, I did think of some other potential relatively narrow overlaps, such as revision control systems, strace/dtrace, and debuggers, but the latter is a bit tenuous, since it's a niche/vintage skill. Even with VCSes, I don't need, know, or use most features (and miss features like "svn export").
> As for having software developers handing off their software for someone else to operate, well, that's an anti-pattern that inhibits throughput and software quality.
You may be implying a false dichotomy (or maybe I'm reading one in that you didn't intend). I don't think anyone, myself included, is advocating in favor of the "throw it over the wall" anti-pattern that the original Devops cultural movement advocated so vehemently against.
I absolutely agree that developers should remain responsible for the operation of their software in production and, to whatever extent necessary, be involved in that operation. I'm also saying that if "whatever extent necessary" isn't remarkably minimal, it indicates something seriously wrong with the software, with the production-like development environment Ops is providing (i.e. implementation of Devops culture), or both.
My main worry is that specialization won't return soon and my Ops skills won't be enough to keep me employed.
> I've been doing Ops for 20 years and even back in the 90's I barely had to understand what a Make file was doing. I've certainly never had to do much actual dev work.
Although I ended up being pretty comfortable with Make, I wouldn't characterize what I did with it as actual dev work, either. The vast majority was tweaks of existing makefiles to get them to work on new platforms.
I'm not sure even the devs I worked with back then had as much exposure to Make as did the release engineers (part of QA).
> My main worry is that specialization won't return soon and my Ops skills won't be enough to keep me employed.
The latter part may end up being true regardless, in which case Ops skills will eventually be lost, to the detriment of the industry.
I doubt it, however, because even in the vast majority of Devops-as-a-title job postings, no matter how much emphasis is placed on automation, building internal tools, CI/CD, "dev", or "infrastructure as code", Ops skills (some of which those arguably were or overlap with) are still key. Otherwise, they'd just be hiring developers and having them perform these functions.
All that said, I don't know what the right answer is for staying employable.
I know it's not merely waiting for (a subset of) skills from the 90s to become valuable enough again. That would be doubling down on what might be too narrow a specialization. However, even making sure one has at least some expertise in as broad a range of Ops skills as possible, including networking, databases, and especially enough coding to enable routine (if basic) automation, may not be enough.
Maybe the answer is to acquire just enough dev skills to get hired into those hybrid roles and then either wait until the Ops part becomes important enough to be full time, or deliver so much more value with Ops than dev that the latter responsibilities get transferred to someone else. I have, so far, not done this, both because it feels slightly dishonest and because, like in the footnoted situation, I fear it leads to lower productivity/quality on the Ops side.
Judging by some comments I've seen here, this also happens but has not yielded a large set of examples of successful outcomes (including keeping those developers happy and productively developing).
As someone trying to be primarily a dev who keeps getting roped into doing ops stuff for deploying my stuff, my experience is the exact opposite, and in particular putting Kubernetes on the end as the least complicated technology seems outright crazy to me, like, seriously, what on Earth are you talking about? You literally can't bring up a hello world web app without basically knowing everything you needed to know to bring it up on bare metal, and everything you needed to know to bring it up on a VM (whose crazy virtual networking stuff now seems like simplicity itself compared to Docker/Kubernetes), and everything you need to know to bring it up in Docker, and everything you need to know to bring it up in Kubernetes... and you need some sort of solution around building Docker images, too, so you're still going to need to know that, too.
There's all kinds of reasons this is necessary and there's all kinds of reasons why provisioning a new hardware server, blasting a Linux distro on to it, and just running your server is a bad idea. By no means am I proposing we need to go back to some glorious past, where losing a single hard drive sector means "oh well, guess the database is gone! Recovery plan? Our recovery plan is to not need the database anymore!" I'm just saying, it's crazy to think this is all getting simpler. Try looking at this through the eyes of a fresh grad sometime.
Granted, I'm a bit crabby that literally every time I go to deploy something new, on a roughly 18 month time frame (the intervening time being maintenance and feature development), I have to go learn something else for my one-off task. (I don't think devops manages to beat the Node/JS world for churn, but it's probably in second place.) But still, it's getting more and more complicated, not less, and I don't even sense that we've passed an inflection point on the complexity reducing yet.
Those "all kinds of reasons" aren't necessarily valid, though, because it's not actually possible to go back to that caricature of the past that you describe.
The reason it's not possible is that some things actually have gotten simpler, at least from a user's perspective.
Automatic bad-block relocation has been standard for effectively forever. Even line-speed hardware RAID has been affordable for so long that it's not even a question. Hardware, in general, is routinely vilified as being a nightmare, when the reality is that it's boringly reliable and, more importantly, has predictable enough failure rates that there are simple, standard engineering solutions around the failures (often already baked in).
That "inflection point" may well just end up being peeling away all the abstraction layers to discover that simplicity underneath works just fine, since we don't live in the 20th century any more.
The reason for this is the enormous cost benefits (it's super cheap, you only pay when your code actually runs, no upfront costs, no passive server costs, no overprovisioning or underprovisioning of compute resources, etc.).
The serverless approach has its place just like everything else; it’s not a panacea though.
I think containers are the future, but we need the idiot-proof GUI systems that Heroku and similar services provide to wire up our applications, and we need a good way to handle persistence, which many modern tools ignore.
Kubernetes' uptake is nowhere close to AWS by any measure. And AWS is the #1 public cloud that hosts Kubernetes workloads.
It's every IT vendor in the world vs. AWS, and so far, they're winning. Kubernetes and open platforms have a shot, but they're not moving fast enough yet in the direction that matters - up the stack.
Rolled it back into KVM/QEMU in colo with a glue layer REST interface over virsh and will never look back.
Of course we don't use containers... they don't offer an overt benefit in HPC... and I don't think they ever will.
There is a much bigger market for infrastructure in the cloud than people realize, and I will be very surprised if kubernetes is not replaced in 5 years.
What will very likely not be going away in the next decade is networks.
Unless we can build tooling for setting up the network infrastructure below all these cloud tools, you will still be stuck with people doing admin work like 30 years ago. From personal experience: k8s clusters are easy. Building the hardware and especially networking below so that everything actually can be automated in k8s... whole different story.
But actually shouldering the burden of creating and maintaining the entire runtime -- and orchestrating it -- is a bit much and stretches people thin.
Writing native UI code is no more low level than using electron. It's just less familiar to web developers and less portable. These are completely different considerations than the greater degree of division of labour I'm predicting for the server side.
Plenty of other low level APIs and technologies that you either can't use or are wrapped in higher level sandbox constructs for JS for portability, security, ease of use, whatever.
Writing native UI code often forces you to deal with low level platform specifics but you get the benefits, HTML UI is just a cheap way to do it (not meant in a derogatory way - it's easy to find devs/designers and target multiple platforms from same code base)
Without generalists with overview and situational awareness, you're going to need a lot of documentation, paperwork and meetings to preempt communication problems.
It’s more or less a logical following of infrastructure as code, because it’s the developers who write and push those files.
Devops is/was a professional movement to get developers and operations folks to communicate with one another, create a shared value system and culture, drive continuous learning, enhance automation, visibility/metrics, so we can all build software faster safely and sustainably.
Infrastructure as code was one small aspect of this. Continuous delivery, microservices, cloud native platforms, a sharing/collaborative culture, lean product development, modern alerting/monitoring, all were part of this.
But it seems tech pop culture clichés win out over best intentions again and again, as they did with 'agile', 'cloud', 'structured programming', 'object oriented programming', 'reactive', and 'REST'.
a) The term is misappropriated to refer to something rather different from what it was originally meant to refer to. I may be guilty of doing that.
b) The concept was too fluffy to begin with and sparked a cottage industry for consultants without helping anyone else.
c) The original ideas were flawed and have so many unintended consequences in practice that people rightly start associating the term with those negative outcomes instead of the well meaning goals.
Perhaps a bit of all the above is what's causing the dissonance here.
b) Consultants (good and bad) always sprout up in any area, and have throughout history (see: Sophists vs. Charlatans).
But, what is "fluffy" (ill-defined, nebulous, uncertain, difficult) to one person is a deep area of research for another.
Put another way, people tend to dismiss what they don't understand or can't see as unimportant. This is the "looking for your lost keys under the street lamp" syndrome. Most often people can only understand the future in terms of the past, and if they don't have past exposure to a topic, they're not going to see its relevance unless they put extraordinary effort in to pay attention.
For example (not necessarily directed at you), Devops folks really put a lot of emphasis on Lean concepts that come out of the Toyota production system. But if I think "wtf do I care about making cars?", I might think ideas like "Continuous Flow" or "Cost of Delay" as being fuzzy and irrelevant to my work as a developer. But they have huge impact over how work is organized and made productive, and literally billions of dollars have been spent developing and honing these ideas in the product manufacturing industries... that just might have broader relevance to the dysfunction of how traditional enterprises run their IT shops vs. how Amazon does.
As for c), all ideas are flawed and have unintended consequences :) , focusing too deeply on the negative is a cynical reaction that often gets back to (b).
And I don't see DevOps becoming obsolete at all; just the other day I spent a day deploying apps with an ops guy, where neither of us could have done it all on his own: the ops guy lacked understanding of the code bases, the endpoints to configure, and the integration test procedures, and I didn't have permissions to redeploy or view logs.
Yes, that means infrastructure as code, including code reviews, frequent deployments, etc.
No, it doesn't require that the developers working on the devops code are the same ones who work on the app code. At a small company (e.g., three devs) they will be, but at a large company, you have a dedicated "devops team" who exclusively works on infrastructure (as code).
DevOps guy formerly known as SysEngineer formerly known as SysAdmin for 15+ years.
Platform Engineer, former DevOps guy, former SysEngineer, former SysAdmin for the past 24 years.
At the beginning the author poses the following questions:
* Do you use Mac, Windows, or Linux? Have you ever faced an issue related to \ versus / as the file path separator?
* What version of JDK do you use? Do you use Java 10 in development, but production uses JRE 8? Have you faced any bugs introduced by JVM differences?
* What version of the application server do you use? Is the production environment using the same configuration, security patches, and library versions?
* During production deployment, have you encountered a JDBC driver issue that you didn’t face in your development environment due to different versions of the driver or database server?
* Have you ever asked the application server admin to create a datasource or a JMS queue and it had a typo?
I've experienced problems whose root causes are some form of all of those. Much of it could be chalked up to growing pains etcetera, but, for example, there are concrete differences between docker versions running on mac and linux that have shown up for me.
This doesn't reduce the author's argument, but they do seem like strange examples.
Our choice to move to Docker and kubernetes came from developers, and specifically spoke to the need for consistent, reproducible test environments. We had dockerized most applications many months before the notion of using them in production was put on the table. What remains to be seen is if the switches in production reduce complexity and maintenance on the devops end of things, as well. I'm also curious how many other organizations had containers introduced 'from the bottom up' like us.
But for various reasons (political, personal, technical, etcetera), consistent, accessible test environments were not available. It was a huge bottleneck for developers.
This is why docker was so appealing: it allowed devs to circumvent the political and technical issues. We didn’t have to justify provisioning more instances, since we could just use docker compose to run stacks on local machines.
So in that sense, the choice and its benefits arose from non-technical hurdles.
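A sketch of the kind of local stack that sidestepped the provisioning bottleneck (service names and images are hypothetical): one `docker-compose up` per developer laptop.

```yaml
version: "3"
services:
  app:
    build: .                      # the application under development
    ports:
      - "8080:8080"
    environment:
      DATABASE_URL: postgres://app:app@db:5432/app
    depends_on:
      - db
  db:
    image: postgres:11            # backing service, no ticket to ops required
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: app
      POSTGRES_DB: app
```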
That said, if you can, I'd be ridiculously interested in a retrospective when you have a chance on your transition. Good luck with it!
Observing similar patterns creeping in the Kubernetes culture and ecosystem does not inspire optimism in a side observer like myself.
Windows can, surprisingly, handle it just fine.
My team originally pitched how docker solved much of the dependency upgrade management by having layers for each major set of dependencies. That ignored the fact that upgrading a layer is not really something you do.
So, then you can go around the path of coordinating many containers communicating with each other. That works, but in that world, things really don't seem any easier than the earlier alternatives. Harder, in many ways.
Don't get me wrong, the momentum and raw money being put into containers certainly paints it as the future. It just feels like lying to say that they have even come close to parity with what we were capable of not that long ago.
The economics are interesting though. Computing resources are super cheap and getting cheaper every day. (Well, except for RAM!) The waste produced by containers doesn't matter, so the thinking goes: yeah, give each application its own app server. Its own web server, its own JDBC driver, even its own JDK. We'll throw it all on the cloud and run it for a few cents each hour.
The problem is that testing isn't cheap. This is where microservices and containers and all this craziness will fall down. Now you've got dozens and dozens of applications all running with their own application servers, JDBC drivers, and their own JDK. This is a combinatoric explosion in your testing surface.
The true value of the application server approach is that it forced applications to conform to a clear contract. Once your application conformed to this contract it could be thrown over the wall into a full-time ops organization that could transparently deploy, monitor and manage the app.
The container madness will come to an end. People will realize that giving developers the keys to the kingdom and total freedom over their application is a terrible idea. (And smart developers don't want total freedom.) What will emerge will be something much more interesting: a hybrid approach where applications can be bundled into artifacts that fully express their dependencies, and containers can be annotated in a way that fully expresses their capabilities.
Containers provide isolation and immutable infrastructure and this is good. Appservers provide standardization, specialization and separation of responsibilities and this is also good. There's no reason why we can't enjoy both.
The devs just have a single base container to work off from and we use CI/CD to further enforce whatever we need to.
It also makes it easier for devs to propose changes, as we know we can consistently apply them.
Sadly, I don't think it's actually developers that were pushing for this "freedom" (with which comes responsibility, hence it makes sense that it would be unwanted), as such.
Rather, I place the blame with managers and with (fellow) sysadmins for not embracing, empowering, or forcing (whatever it takes) the aspects of the original DevOps cultural movement to have Ops bring the equivalent  of a production environment to every developer.
> Containers provide isolation and immutable infrastructure
I thought they weren't even (necessarily) immutable. If not, then maybe they're merely immutable-through-practice, but that's as unsatisfying as configuration management systems creating reproducible deployments that lack traceability.
 In terms of tooling for things like builds, deployment, and management (especially of dependencies), not necessarily full hardware capacity, although at one place I did provide each developer with a copy of yesterday's production database on their own, personal bare-metal database server, less "beefy" than the production hardware.
 is that the right word? something like chain-of-custody for code and/or configuration.
Containers are one way to force an application to explicitly declare all of its dependencies. The problem is the application then also bundles those dependencies. There's not much room for control or intermediation. Developers have total freedom to do all sorts of wackiness. (And oh the wackiness I've seen in containers. When the cat is away...) What's needed is a richer (and much more compact) mechanism for applications to express all their dependencies.
Another way to understand the issue is to understand the underlying economics at work here. Container economics proposes that computing resources are so cheap that, sure, let the developers go crazy and do whatever they want. But those same developers must be on the hook for all operational concerns, hence "DevOps." And this will scale until one day you look around and you realize that anarchy never works, not even a little bit.
Now there's a lot of room to fall here. RedHat is in the business of selling compute and storage resources. It's not clear that even if they could make their products more efficient they would. But as long as compute resources are cheap and getting cheaper you can let people go wild and RedHat will make a lot of money and everybody will be happy. For an organization the true costs will emerge in a very subtle manner: the productivity of the developers (already hard to measure) will ultimately fall. That's because instead of actually solving actual business problems their devs are deploying dozens and dozens of polyglot microservices and trying to figure out why on earth microservice #28 that Bob, who left the company a year ago, decided to write in a version of lisp that he invented crashes every day at noon.
But hey, at least Kubernetes makes it easy to deploy everything!
FWIW there are container-based app platforms that do allow you to swap out filesystem layers to update dependencies and to remove control from developers by having a standardized containerizer that has extension hooks but can't be mucked with at the lowest levels. This is how Cloud Foundry works for example, or Heroku.
A) focusing on having standard base images controlled by ops
B) encouraging combining source code / built artifacts as a layer on those base images
C) giving controls to ops so that the only images users could build or run must be built with A/B above.
In that mode containers are less wasteful because you can share the base image across every host (or rebuild everything centrally), and all that gets downloaded to a host is the source code top layer. Which is roughly indistinguishable from the lambda runtime and how it accesses the code to execute.
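In Dockerfile terms, the handoff in B amounts to something like this (the ops-owned base image name is hypothetical):

```dockerfile
# Sketch: the only team-specific layer is the built artifact copied on top
# of an ops-controlled base image.
FROM registry.example.com/ops/java-base:11
# The application artifact is the top (and only team-controlled) layer
COPY target/app.jar /opt/app/app.jar
CMD ["java", "-jar", "/opt/app/app.jar"]
```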
What's more, you have to worry about the container host anyway, so you haven't removed the need to maintain a host. You've just added to it the need to maintain the rest of the container infrastructure as well as your application stuff.
For places that have not solved the "virtual hosts should be virtually free" problem, containers are quite welcome. You can get going with them quite quickly. If you have already solved that problem, they can look an awful lot like just more work.
This is a somewhat pessimistic viewpoint, but lowest common denominator solutions tend to acquire the most network effects. A VM requires more touch points to manage for the person who has to set up a machine - despite ten years of solid progress, they still tend to be pretty annoying to configure and build and manage. The platform as a service approach (whether lambda, nodejs on cloudflare, various functions as a service approaches, heroku, cloud foundry, or dokku) on the other hand take away a lot more hassle by abstracting pain points out, but get accused of being too rigid. Both extremes benefit specific use cases, but have disadvantages in general purpose use.
Containers sit in the ugly, dirty, practical middle. They can do both (VMs are just processes). So the network effects they accrue just like Linux did of being “good for everything, not great” help mitigate some of the disadvantages.
The public cloud providers change this calculus a bit by offering these things as a service, but internally they are just managing the container runtimes for you.
I’m obviously biased, but I tend to see containers as “good enough” to build other abstractions on top, with specific areas where VMs and heavy PaaS abstractions clearly win.
CF built container technology before Docker or Kubernetes -- two generations of it -- because it was seen as the right primitive by people with experience of Borg. But containers were not touted as ends in themselves.
So the contract boundary given was: sourcecode. Buildpacks.
Docker comes along, then Kubernetes, and the container goes from being a hidden detail to a central concept around which a lot of other stuff orbits. And containers are a step forward on a lot of axes. Developers begin to want to use containers as their shipping unit.
So the contract boundary became: images.
Later ops realises that while opaque running containers are awesome for reducing their management complexity, it doesn't reduce all categories of risk. After all: what's in the damn containers? And so various tools have emerged from the container-oriented ecosystem to take sourcecode and turn it into a container image, so that developers and operators have a consistent handoff point.
So the contract boundary becomes: sourcecode.
It sounds like a nice story, and it might seem like we'll go in circles hereafter. But we're not doomed to do poetic laps: what's happened in the middle has been the rise of CI/CD tools, sitting between the container boundary and the sourcecode boundary. Good fences make good neighbours and it turns out that fences made of helpful robots make even better neighbours.
As a Pivot with a long association with Cloud Foundry, I have enjoyed in the past few months getting to compare notes with Red Hatters and others in the k8s community.
They focus at far too low level a problem.
> The problem is that testing isn't cheap. This is where microservices and containers and all this craziness will fall down. Now you've got dozens and dozens of applications all running with their own application servers, JDBC drivers, and their own JDK. This is a combinatoric explosion in your testing surface.
I fail to see how this is more difficult than 95% of existing enterprise systems that run various JDK versions, various WebSphere versions, JDBC drivers, etc.
The difference with containers is that vendoring your dependencies makes all that explicit. Whether you choose to stay on old versions is a mistake you can still make (and will make if you leave container creation and patching in the hands of dev teams).
That said, since when does any organization build horizontal testing libraries, like, "test X version of JDBC", so they can standardize on a single driver across all projects? It's never done. Projects are all over the place with their dependencies unless there is institutionalized forcing through Maven repos or what not to block the download of old/insecure versions.
There is no combinatorial problem here, each test suite needs to unit test the individual service and then test the API contract of the service. If you let your dependencies decay you're accepting major security and maintenance risk, but you were in the WAR file world too.
> The true value of the application server approach is that it forced applications to conform to a clear contract. Once your application conformed to this contract it could be thrown over the wall into a full-time ops organization that could transparently deploy, monitor and manage the app.
LOL, I really think you're exaggerating. I've worked with WebLogic, WebSphere, IIS, JBoss, Tomcat, Django, Rails, Node, you name it over the years, and while some orgs got close in the Java world, this was mostly a pipe dream that never happened in any cost effective manner as a standard practice.
> The container madness will come to an end. People will realize that giving developers the keys to the kingdom and total freedom over their application is a terrible idea. (And smart developers don't want total freedom.)
This I agree with. It's all far too low level.
This really was Docker, Inc.'s fault IMO, their marketing message was to empower developers to get ops out of the way by building these containers that would magically just be run as opaque lego blocks. That was the hype everyone on HN was drooling over in 2013. Turns out it's not quite that simple.
I honestly don't think this is necessarily a terrible thing. But, the idea that your common layers are stable is a dangerously bad assumption.
The fact that you can login to a VM and update a package does not make the system stable either (and of course you can actually do this with running containers as well).
Add to that you still need to restart running applications to take advantage of the package update (assuming the package is a shared lib).
Meanwhile you can push new base images, automatically trigger rebuilds and roll out the update when ready.
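A sketch of that flow with assumed names, using plain docker and kubectl rather than any particular CI system:

```bash
# Rebuild against the refreshed base image and roll the new tag out;
# Kubernetes replaces pods gradually during the rollout.
docker build --pull -t registry.example.com/my-app:1.0.1 .
docker push registry.example.com/my-app:1.0.1
kubectl set image deployment/my-app my-app=registry.example.com/my-app:1.0.1
kubectl rollout status deployment/my-app
```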
Specifically, we had devs talking about how we wouldn't have to worry about system patching anymore, because the containers would take care of that. With no answer for how we trace versions and patches through our systems.
If you are already tooled enough that you can completely redeploy a full stack easily without worrying about in-place modifications, the difference between a VM and a container is relatively minimal, all told. Especially since you have to be ready to pull down the host of the containers anyway.
Generally the patching problem can be solved with image scanning, for which there are tools out there, both FLOSS and paid-for.
FreeBSD jails date from 2000
Solaris zones date from 2004
In the Linux distribution that I used (PLD), the first vserver support appeared in January 2004.
Maybe a little earlier, because util-vserver showed up in November 2003.
That said, it is not limited to any one set of our industry. It is likely not even limited to our industry.
We like building new things, from different angles, but in the end it seems like everything cycles through the same ideas.
I’m only vaguely familiar with FreeBSD, and not at all with Solaris.
We were deploying into HP Vaults in 2000.
Being in Ops, and having first been paid to do sysadmin work 30 years ago, and having been raised by semiconductor industry veterans, it's pretty safe to say I know tech history.
Having experienced plenty of that history first-hand, including learning to "know better" the hard way, has resulted in a certain amount of conservatism and skepticism toward any new technology, especially one that gains popularity with particular speed.
However, "skepticism" isn't a synonym for "summary dismissal", but, rather, actual skepticism in the sense of asking uncomfortable questions and paying attention to the answers, even they end up being uncomfortable to hear. That's how we grow and, ultimately, know even better.
I've also learned that I enjoy working in smaller companies and/or startup environments far more than the alternative and this has, inevitably, required adapting to some amount of the "move fast and break things" method. Even if its genesis is with leads that can legitimately be labeled as "immature", it merely increases risk, and risk is inherent in startups anyway.
As it applies to my evaluation of new tech, it usually means I get to evaluate it already in operation, instead of merely asking questions. Often, that empirical data is far more valuable, even in forming an argument to stop using that tech.
All that said, so far, I can't help but agree with your assessment of "boondoggle" when all the layers of abstraction, complexity, and cost are included: languages, CI/CD, virtualization/containers, orchestration, and cloud.
Disclaimer: Infra/DevOps engineer previously, 20 years total in tech
I’m imagining a thought experiment... the requirements to understand and operate in this industry... for example this article that was posted. Imagine giving it out at a family reunion and asking each relative if it makes any sense at all. Even the ones who are developers might only get a small gist of it if they program Java code. For humans to digest decades of information written by thousands of other humans in terse code... it’s a daunting task, which is married to the side effect of accidentally repeating past mistakes. Maybe if we had an Oracle to help us... oh wait, that’s a terrible joke.
What really makes me afraid are the thousands of complex addons that are being pushed by the community (for example, Istio, networking addons, etc.). Those should be kept outside, and it MUST be made clear they are definitely not needed for a normal installation of Kubernetes.
Istio for example, is such a political brainwash power-move by some bigger companies that benefit from it. I believe less than 5% of the use-cases really require Istio, still it is being pushed as something you should always install in your cluster. This is bad for everyone.
Seriously, this is so embarrassing at the current point (with me having absolutely 0 understanding of the concept)
If you're still getting enough work as a sysadmin and enjoying it, I think "VMs" as a concept will still be around for a long time.
It's all about saving money on DR. When these companies realize that developers can't handle doing ops AND complex business logic, maybe they'll rethink it. Until then I expect this trend to spread rapidly as companies look for ways to abstract away DR costs.
Cloud services is pretty much a business of huge marketing budgets and old-school lock-in strategies.
Assuming you have a similar definition how does Kubernetes solve that problem?
So k8s basically runs a replica of your whole system behind the scenes, so if a physical location goes down you still have your system running.
There's a lot more to replicate than just the bits in the apps.
Next up - a lightweight k8s server, stripped of all the crap, that can easily run and deploy a single container.
Beats are pretty lightweight and for what I used them they work.
Without getting too far into semantics, I think the author is using well-overloaded terminology. When I hear Application Server, I immediately think Tomcat or some other JVM.
The only part I object to is when people get mystical and pie eyed about the idea that putting apps, code, what-have-you in the cloud will mystically make it work well and if it fails you just restart the container or pod and all is well. Which is of course absurd. Bad code is bad code, bad queries run poorly no matter the context. You can buy your way out of some performance issues by scaling up, but that only buys you time at best. IMHO there will always be a need for someone to help people understand why things aren't working well and fix it.
I resisted vm's at first.
I seriously resisted the idea of the cloud.
I resisted Devops and Agile.
I looked curiously at containers as possibly not just a fad having used jails for years.
Now I'm managing a large and growing exponentially K8s platform and having a great time.
Resistance is futile.