Why Kubernetes Is the New Application Server — 280 points by rterzi 8 days ago | 199 comments

 I would say Kubernetes is becoming the new Application Server, but not for anything listed in the article. Kubernetes is more and more being sold to "enterprise" as a solution for running services much like Application servers were, and much like application servers the standard Kubernetes deployment is becoming a tangled mess of buzzwords and dreams, which work great in a demo, but won't work at all without an army of consultants.
I'm a former OpenShift (Red Hat's distribution of Kubernetes) consultant, and I currently work at Red Hat in a different position, and I have to disagree on the "won't work at all without an army of consultants".

While it's true that some of our customers have an army of consultants, the vast majority of our customers don't use consulting at all, or use it only very infrequently. If you don't want a lot of customization, have the right number of people (and the right people), and have realistic expectations, you don't need consulting at all.

When people want to get a highly customized experience (often, for the wrong reasons), or want to get in production after 2 months, when their people have no experience in Kubernetes and didn't do adequate testing (load, fault tolerance, etc.), it will give a lot of problems. But that's the case for everything, not only for Kubernetes.

In my opinion (mine, not Red Hat's), getting someone to deploy it with you the first time, showing how things are done and why, and after that doing a workshop explaining the basic concepts, has great value, saves a lot of time and isn't expensive.
This is the same process that happened with J2EE and Application Servers: it starts out with a _simple_ standard to have a Java application served with a standard API, and then it grows out of control. You are already seeing it with Istio and Helm being layered on top. Today you might be able to get stock Kubernetes up without much assistance, but give it a few years as "features" get added on.
I'm not seeing that, and I'm pretty sure we won't. It's extremely hard to get new features into the core of Kubernetes, and in fact it's moving to a microservices architecture. Neither Helm nor Istio is part of the core.
I think you're making a distinction that many others won't. Specifically, a distinction that ordinary users of Kubernetes won't make.

Kubernetes is more than what's in the Kubernetes core GitHub repo; it's the collection of tools that go along with Kubernetes that "average users" expect to just be there. For example, an ingress controller, Helm, and likely soon cert-manager and external-dns too. The split-out into multiple projects actually makes the landscape harder, whereby people and distributions have to make choices about the set of expected vs. the set of provided features.

A great example of this I saw was an internal thread here at SUSE (I work on their Kubernetes product) where ingress controllers were being compared. Despite the Ingress data model actually being in the core, features like nginx-ingress-controller's ability to do arbitrary TCP and UDP ports on the same external IP were being seen as a reason to choose that controller over some of the others. I can't argue the feature is useless, because it's not (it's very useful), but it's a great example of how things can very easily become conflated, even for people who should know better! A feature being part of the core just means it's part of a minimal Kubernetes deployment, while a complete Kubernetes deployment includes much more than the core.
> want to get in production after 2 months [...]
> it will give a lot of problems. But that's the case for everything, not only for Kubernetes

Are you asserting that any modern production environment that takes less than 2 months to set up is bound to cause problems? Or the whole company/startup/project?

If the former, I'd say there are many tech startups who would disagree, or at the very least point out that it doesn't matter, because they'd cease to exist during those 2 months without an MVP in production.
I think the two-month phase is pretty accurate. Deployment itself is not that time-intensive, and changing your applications to be container-friendly is not either.

The time-intensive phase during the first Kubernetes deployment is changing the mindset of the engineering team and everyone involved in IT. Kubernetes, IMHO, is pretty much like moving from college into work life and adapting to the fact that the requirements for a decent adult life are very different from those of college life.
 That's as may be for Kubernetes, but the GP specifically stated "that's the case for everything, not only for kubernetes", which struck me as an extraordinarily broad claim.One might reasonably expect "everything" to include traditional deployment methods that require no changing of mindsets (not that that's likely a factor in my nascent startup example).
 Transition phases actually make sense even if you apply "everything" instead of just "Kubernetes" considering how every company transitions through stages in terms of tooling.Think being a startup where deployments most likely will be manual and undocumented, upgrading to an initial automation using some scripting, transitioning to CI/CD, etc.So from various experiences at companies of all sizes I would actually support such a broad claim, since companies are basically transitioning all the time through new phases to greener pastures. That of course assumes that the company values constant improvements and uses an iterative process for growth and change.
I should have worded that differently. I meant migrating an existing production environment.

If you have an existing production environment that works well after years of improvement and stabilization, with people who have a lot of expertise in it, you won't get that in 2 months: not the stability and quality, nor the ability to troubleshoot it.

Startups start from scratch and don't have big workloads; there also isn't as much pressure to have a high-quality service from day one, because you don't have that many people consuming your services.
 Thanks. Where "everything" is in the context of a transition or migration, rather than any kind of deployment, including an initial one, 2 months seems like a very reasonable minimum.It's certainly in line with technical new hire productivity "ramp up" time estimates (typically 3 months?), which I would guess are a similar (if individual) case of change-of-mindset.
Very similar abstractions are being created over and over. If you are too close, it is hard to see that the abstractions are largely the same, that the grand benefits being touted are actually quite small when comparing one to the other, and that the differences are just attitudes and a tiny bit of tooling that could be recreated for any of the abstractions.
 It does get frustrating to lose tooling you previously had. Even more frustrating when some of the instrumentation that you take for granted in the old world is "TBD" in the new one.
Instrumentation on Kubernetes is pretty incredible, albeit complex. You have host-, container-, application- and ingress-level metrics, and via mesh networking (e.g. Linkerd) you get detailed metrics of how containers are talking to each other. All of which roll into a single metrics server (Prometheus) and dashboard (Grafana).
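As a concrete (and deliberately minimal) illustration of how those layers roll up, Prometheus can discover its scrape targets straight from the Kubernetes API; a fragment like this is the heart of it (relabelling, auth and TLS omitted, so treat it as a sketch rather than a production config):

```yaml
# Minimal fragment of a prometheus.yml: discover pods to scrape
# directly from the Kubernetes API (relabelling, auth, TLS omitted).
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod
```

In practice you'd add relabel rules so only pods that opt in (e.g. via annotations) get scraped.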
Most of the instrumentation I have found myself losing has been at the application level. And to be clear, most of it is still somewhere, but not nearly as well polished as in the services we don't have on containers. (I'm also primarily talking about a bunch of internal practices at my current job.)
I can't speak to how you build your apps. But application-level metrics, for example on the JVM, behave exactly the same whether it's in a container or not. You can still pull JMX metrics.
Agreed, most of this is in the "how you build your apps." Setting up all of the special networking tricks to get to the bloody JVM in a container can be annoying, especially because I guarantee those weren't set up until they were needed.

But yeah, most of my complaint is with folks who dropped the old way of making applications because they wanted to use the new tooling, completely missing all of the "hidden" use cases the existing applications were built around.
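For reference, the "networking tricks" are mostly about pinning the JMX and RMI ports so they can be exposed on the container. A sketch of the usual JVM flags (port and hostname are placeholders, and auth/SSL are disabled here, so this is only suitable for a trusted network):

```
# Placeholders throughout; auth/SSL disabled, so trusted networks only.
-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=9010
-Dcom.sun.management.jmxremote.rmi.port=9010
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
-Djava.rmi.server.hostname=<pod-or-service-address>
```

Pinning the RMI port to the same value as the JMX port matters in containers: otherwise RMI picks a random port that nothing has exposed. The chosen port also has to appear in the pod spec's containerPorts.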
I'm helping build an OpenShift platform with 2-3 Red Hat consultants. I do not work for Red Hat; I'm a Linux/DevOps admin. It's been very simple to install and fully automate everything. It also allows developers to focus on writing code and not infrastructure, as AWS does. Most of it is bare metal for now, but we will be able to spin up AWS and Azure in a few hours with the same code.

The instructions are accurate, it's been easy to learn, and it does not require an army of consultants. We will be moving thousands of apps to multiple data centers globally with a team of around 6 people. It took hundreds of employees years to build and maintain infrastructure for those apps.

Note: consultants are not needed at all; they simply help speed up the process with expert knowledge.
I have to disagree here. From personal experience I can say: you need exactly one person willing to read, try, and properly observe the installation and population of a Kubernetes cluster with applications and tools. The actual cluster deployment is not a mess at all. Even if done with a mere Bash script, it can be done in like 20 lines of code.

I can only recommend actually trying Kubernetes, because what you wrote reads more like you are still in the state of "wtf is this and why should I touch this crap?" but have not yet tried it.
I can spin up a Kubernetes cluster in seconds using AWS, Azure or GCP. And with Helm, installing new applications is a pretty trivial affair.

I am not disagreeing that it's complex and a lot of buzzwords, but every day the situation is getting better and better.
You can also spin up Tomcat with everything needed to run an application with your local package manager, and you can deploy your WAR file by putting it in the right location, but that's not what IBM, Oracle, and Red Hat have sold as Application Servers. This is effectively what is being sold now, but using Kubernetes and containers instead of Application Servers and J2EE.
 The difference is that Tomcat can’t run arbitrary badly-behaved third-party x86 binaries under itself. (I mean, it can, but it doesn’t have the facilities to protect itself from them.)K8s lets you build “containers” out of whatever existing software happens to be laying around, rather than having to write your services from the ground up for being run in a container.
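That "wrap whatever happens to be laying around" workflow is about as small as packaging gets. A sketch, where legacy-service is a hypothetical pre-existing binary that was never written with containers in mind:

```dockerfile
# Hypothetical: containerize an existing, unmodified binary.
FROM debian:stable-slim
COPY legacy-service /usr/local/bin/legacy-service
EXPOSE 8080
ENTRYPOINT ["/usr/local/bin/legacy-service"]
```

The isolation Tomcat lacks (namespaces, cgroups, resource limits) is then supplied by the container runtime and enforced per pod by Kubernetes.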
Easy peasy. Now try troubleshooting it.
On the cloud providers it is fully managed, so there is nothing really to troubleshoot from the core platform perspective. The application layer, e.g. ingress, likely will need it, but that's the same as if you were running on bare metal without Kubernetes.

And again, the monitoring and auditing on Kubernetes are significantly better than trying to roll your own. There are detailed logs and metrics up and down the stack.
 "On the cloud providers it is fully managed so nothing really to troubleshoot"Clearly, you haven't run much at scale or experienced many problems.Troubleshoot a network issue through an ingress, kube-proxy and then down to the pod.Troubleshoot an issue with kube-api or etcd when you have no access to either the master or the etcd cluster.Troubleshoot a load issue on a node related to IO.
I feel a bit alone here. The most obvious benefit of Kubernetes is that it's an API. There is definitely a tooling angle, but the core disruption that Kubernetes brings to the picture is that it allows you to model infrastructure as an API.

Take the standard cloud application stack: every app needs an artifact (disk image), a running service, an instance template, a group of VMs running using those templates, networking, and a load balancer. Before Kubernetes, you had to automate by writing Terraform scripts which mutate physical infrastructure as you apply. With Kubernetes, you POST a bunch of well-defined resources (container spec for the service, pod spec for the instance, pod as a VM, deployment as the group of VMs, services and ingress for traffic). Every physical cloud resource has a clear API mapping. Basically, you save in Kubernetes what you need as a first-class API resource, and then Kubernetes responds to what you saved by changing infrastructure to match.

You can do the same for practically any kind of infrastructure resource. Want an S3 bucket? Duh, POST an S3 bucket resource, and you can write a controller to react and fulfil that bucket. Kubernetes comes with a built-in set of functionality that fulfils a certain core set of infrastructure using containers, but you are not restricted to that. In theory, you could POST a VM as a resource, or an instance group as a resource, and write a Kubernetes controller to fulfil those resources.

Why? Because APIs are more powerful than tools. APIs allow a different axis of infrastructure evolution, even if you distribute the control to everyone. By modeling these as APIs, you can bake a huge amount of infrastructure intelligence into the API. Want to enforce different, code-driven resource-based policies, linting, sane defaults, organizational context? Yeah, make the controller do just that.
If you just provide a tool to every engineering team, you basically lose any form of cross-cutting orchestration, and you lose the ability to evolve infrastructure on a separate axis, independent of the tool each team uses. Basically, Kubernetes is "Infrastructure as an API". Thinking about it as a "new application server" kinda hides the whole point of it.
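The "POST a well-defined resource" model is easy to see without a cluster at hand. This sketch just builds the JSON body a client would send to the Deployments endpoint; the apiVersion/kind/spec shape is the standard apps/v1 Deployment, but the name, image, and endpoint path are illustrative:

```python
import json

# The resource body a client would POST to the Kubernetes API
# (e.g. /apis/apps/v1/namespaces/default/deployments) to declare
# "run 3 replicas of this container". Name and image are made up.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "web"},
    "spec": {
        "replicas": 3,
        "selector": {"matchLabels": {"app": "web"}},
        "template": {
            "metadata": {"labels": {"app": "web"}},
            "spec": {
                "containers": [
                    {"name": "web", "image": "example.com/web:1.0"}
                ]
            },
        },
    },
}

body = json.dumps(deployment)  # what actually goes over the wire
print(body[:60])
```

Everything after the POST — scheduling, health checks, replacing dead replicas — is the controllers' job, not the client's.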
Before: write HCL, then run terraform apply.

After: write YAML, then run kubectl apply.

;-)
 terraform has a kubernetes provider so you can still terraform apply ;-)
 That’s not a benefit to me. Is it reproducible? Keeping desired state under version control is valuable.
Yep. Super reproducible, even more than the old ways. You can store the Docker configs and store the Docker containers in a registry. Store the YAML files, or auto-generate them during the build process from some other central DSL. We could probably bootstrap our entire product from scratch in a couple of hours if it disappeared off the face of the earth.

"Infrastructure as a config" is a very powerful concept. I'd say that's what the big power of k8s is.
The state is defined in YAML manifest files, which are trivial to store in Git and template with Ansible.
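For instance, a manifest checked into Git might be a Jinja2 template that Ansible's template module fills in per environment (the variable names here are hypothetical):

```yaml
# deployment.yaml.j2 -- rendered by Ansible per environment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "{{ app_name }}"
spec:
  replicas: {{ replica_count }}   # e.g. 2 in staging, 10 in prod
```

The rendered output is plain YAML that kubectl apply (or a CI job) can push to the cluster.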
 Infrastructure as software vs infrastructure as code.
Bingo, you've hit the nail on the head :) This is what I love about K8S. In addition, you've got the K8S reconciliation loop running constantly to make sure the system is in the state you described via the API.
Here's my prediction: DevOps is dead. In 5 to 10 years, developers will no longer be mentioned in articles like this.

There will be specialisation once again, because it makes absolutely no sense to make people whose main job is to think about use cases, business logic and user interfaces also deal with stuff like this.
 K8s establishes a pattern for implementing many ops tasks as controllers rather than scripts or configs. The level of abstraction will be raised, and more and more of the custom hacks specific to individual companies will become modules you deploy on your cluster.In this way, devs will be doing ops, but not as ops for any given prod environment, rather encoding operational tasks in reusable software.Ops themselves will bifurcate, one towards configuring prebuilt controllers, others towards troubleshooting problems in production. The middle ground will hollow out.
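The controller pattern being described is, at heart, a reconciliation loop. A toy sketch in plain Python (not the real client-go/controller-runtime machinery, just the core idea):

```python
def reconcile(desired, observed):
    """Diff two name -> spec maps and return the actions that converge
    observed state onto desired state, the way a controller would."""
    to_create = {n: s for n, s in desired.items() if n not in observed}
    to_delete = [n for n in observed if n not in desired]
    to_update = {n: s for n, s in desired.items()
                 if n in observed and observed[n] != s}
    return to_create, to_delete, to_update

# A real controller runs this in a loop, re-reading both sides each pass,
# so the system converges no matter who or what drifted.
desired = {"web": {"replicas": 3}, "db": {"replicas": 1}}
observed = {"web": {"replicas": 2}, "cache": {"replicas": 1}}
create, delete, update = reconcile(desired, observed)
```

Encoding an ops task as a reconcile function like this is what makes it reusable across clusters, instead of a one-off script for one prod environment.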
Your broader point is correct but I think you're mischaracterizing Devops (which isn't about developers doing this stuff, it's about a continuously improving culture + technology to support end-to-end delivery and operations).

But more broadly, Kubernetes in the hands of most developers doing non-platform-level work is a step backwards and will be disastrous. It's far too low-level. It's like: hey, this Python/Ruby/Java thing was great, but have you seen this new bytecode linker and assembler?

New platforms and runtimes built ON TOP OF Kubernetes are probably going to be fine. But that's going to take a while.
 >Your broader point is correct but I think you're mischaracterizing Devops (which isn't about developers doing this stuff, it's about a continuously improving culture + technology to support end-to-end delivery and operations)Everyone who comments about DevOps should have read "The Phoenix Project" first, so that they understand what you just said. It's not about consolidating workloads onto generalists, it's supposed to be about breaking down the silos in an organization, nothing more!
 Which they barely did in Phoenix Project toward the end. The first part was building silos to slow the input and reduce work in progress.
>Your broader point is correct but I think you're mischaracterizing Devops

You are right. What I should have said is that using DevOps as an excuse to make application developers do complex infrastructure design, automation and maintenance as a side job is an unsustainable practice.
 Indeed. That's why Google has SRE and Facebook has Production Engineering, and so on.
aka "ops".

The idea that this will kill "devops" is ludicrous, because devops doesn't actually replace job titles. We have a "DevOps engineer". He's an ops engineer, with "dev" in front of his title because he codes in Perl and Python (...like an op). Visual Studio is a "DevOps Environment"; I don't know what this means. It likely means that it can provide end-to-end integration in your development environment, which was previously an IDE.

The idea behind devops being a raised layer of abstraction is fine, but we don't really need a new word for it, and as history has shown, raised abstractions do not lower. We're going to end up with "devops" until we find a new buzzword.
No disagreement here. There are, however, a number of people in the field that are all ops and have no development abilities. There is less and less of a need for those kinds of people.

This has been hashed about at length, but the inverse is also true. Developers must learn to operate their software in production.
> There are, however, a number of people in the field that are all ops and have no development abilities. There is less and less of a need for those kinds of people.

I very strongly disagree, unless what you mean by "no development abilities" is merely an inability (or refusal) to write or modify (maybe debug) code. There is here, I believe, a very important distinction, one that is routinely lost and that has all but totally devalued Ops skills in the minds of many tech hiring managers who have exclusively programming backgrounds.

I was on a phone interview today, in fact, lamenting along with my interviewer that there exist sysadmins who don't even write scripts or automation. He suggested that they've started to call themselves "IT", and, IMO, that may well be a more apt term even than "Ops". My own experience is that, back in the dark ages, a traditional sysadmin not only had to be proficient at scripting but also at some amount of C, just to port open source tools between various proprietary Unix versions, especially with new releases.

That does not, however, bestow upon me anything that I could call, with a straight face, "development abilities". I hold in my head none of the best practices having to do with programming that have accumulated over the 3 decades of my career, for example. Instead, I hold the Ops best practices. Being able to implement FizzBuzz in bash is neither necessary nor sufficient for software development.

> Developers must learn to operate their software in production.

I disagree here as well, on behalf of developers. I think you'll find that having to learn what is, essentially, a completely separate, new engineering discipline, in addition to software development, severely cuts into their productivity. Unless I bring production to the developers, almost like bringing the upstairs downstairs in the cartoon pushbutton house[1].
I do actually feel it's a responsibility of Ops as part of Devops culture to create as frictionless (for their needs) a simulation of production as possible for developers. The practice has all sorts of benefits, including eliminating "works on my machine" debugging issues, that has been one of the rallying cries for containers.
 >I very strongly disagree, unless what you mean by "no development abilities" is merely an inability (or refusal) to write or modify (maybe debug) code. There is here, what I believe, a very important distinction, that is routinely lost, and has all but totally devalued Ops skills in the minds of many tech hiring managers who have exclusively programming backgrounds.Not everyone does the same job you do, even if they might share the same job title. I'm proficient in bash and python, have been learning rust, and know enough c to get around, but there have been many years in my career where my actual sysadmin duties required little more than being able to do 'while true; do x; done'That's very much not been the case over the past half decade, but there were times in the 2000s where nothing was necessary beyond some basic bash scripts. No writing, modifying, or debugging of anything I would actually call code was needed.I've also done hundreds of interviews for sysadmin positions. I'd say the minority of them were proficient beyond the very basics of bash. Most of them seemed to be doing well enough in their existing roles to not have been fired.shrugs
 > Not everyone does the same job you do, even if they might share the same job title.That's true about any title across the entire computer industry (and maybe all technology), but it doesn't say anything about the categories those jobs fall into and skills/experience associated with those categories.> my actual sysadmin duties required little more than being able to do 'while true; do x; done'If you mean that was the level of the complexity of the scripting/automation required, there's nothing wrong with that. Consider, however, trying to do even that, without any scripting at all.Presumably something like 'do x', up-arrow, return, up-arrow, return, ad infinitum. Apparently, that's the kind of person that's moved into "IT" that was being described to me, though this was pure hearsay.> That's very much not been the case over the past half decade, but there were times in the 2000s where nothing was necessary beyond some basic bash scripts. No writing, modifying, or debugging of anything I would actually call code was needed.That would be comfortably (up to a decade) past the dark ages to which I referred. Please don't misinterpret my mentioning having had to work with C as being in contrast to basic bash scripts.My point was actually quite the opposite, that it's all code, but that it's a form of working with code that is (at least subjectively) different from software development.
 Development abilities would entail having the ability to take a task, or feature, write code around that task or feature including tests, and then submit a PR.As someone doing the hiring for such things, people looking to get hired seem to have a different view of things. I don't need someone with an advanced degree in CS and mastery of algorithms to produce API endpoints and orchestration bits.As for having software developers handing off their software for someone else to operate, well, that's an anti-pattern that inhibits throughput and software quality. There's quite a bit of data around this topic. I would encourage you to read this book:https://itrevolution.com/book/accelerate/I'll grant you that it's difficult to get there in a traditional IT org, but once you do get there, the results are glorious.
 > Development abilities would entail having the ability to take a task, or feature, write code around that task or feature including tests, and then submit a PR.No dissonance from my end, and I'd say that other than the very narrow overlap of being able to "write code", that's quite distinct from what, for example, my Ops skills are.To be fair, I did think of some other potential relatively narrow overlaps, such as revision control systems, strace/dtrace, and debuggers, but the latter is a bit tenuous, since it's a niche/vintage skill. Even with VCSes, I don't need, know, or use most features (and miss features like "svn export").> As for having software developers handing off their software for someone else to operate, well, that's an anti-pattern that inhibits throughput and software quality.You may be implying a false dichotomy (or maybe I'm reading one in that you didn't intend). I don't think anyone, myself included, is advocating in favor of the "throw it over the wall" anti-pattern that the original Devops cultural movement advocated so vehemently against.I absolutely agree that developers should remain responsible for the operation of their software in production and, to whatever extent necessary, be involved in that operation. I'm also saying that if "whatever extent necessary" isn't remarkably minimal, it indicates something seriously wrong with the software, with the production-like development environment Ops is providing (i.e. implementation of Devops culture), or both.
 I was precisely thinking the "throw it over the wall" anti-pattern. Now that I understand your sentiment, I concur with it.
I've been doing Ops for 20 years, and even back in the 90's I barely had to understand what a Makefile was doing. I've certainly never had to do much actual dev work.

My main worry is that specialization won't return soon and my Ops skills won't be enough to keep me employed.
 I consider this an important perspective, because it illustrates something I have heard criticized (and which I criticize in its narrow form, as well): overspecialization/overfocus.> I've been doing Ops for 20 years and even back in the 90's I barely had to understand what a Make file was doing. I've certainly never had to do much actual dev work.Although I ended up being pretty comfortable with Make, I wouldn't characterize what I did with it as actual dev work, either. The vast majority was tweaks of existing makefiles to get them to work on new platforms.I'm not sure even the devs I worked with back then had as much exposure to Make as did the release engineers (part of QA).> My main worry is that specialization won't return soon and my Ops skills won't be enough to keep me employed.The latter part may end up being true regardless, in which case Ops skills will eventually be lost, to the detriment of the industry.I doubt it, however, because even in the vast majority of Devops-as-a-title job postings, no matter how much emphasis is placed on automation, building internal tools, CI/CD, "dev", or "infrastructure as code", Ops skills (some of which those arguably were or overlap with) are still key. Otherwise, they'd just be hiring developers and having them perform these functions [1].All that said, I don't know what the right answer is for staying employable.I know it's not merely waiting for (a subset of) skills from the 90s to become valuable enough again. That would be doubling-down on what might be too narrow a specialization. 
However, even making sure one has at least some expertise in as broad range of Ops skills as possible, including networking, databases, and especially enough coding to enable routine (if basic) automation, may not be enough.Maybe the answer is acquire just enough dev skills to get hired into those hybrid roles and then either wait until the Ops part becomes important enough to be full time or deliver so much more value with Ops than dev that the latter responsibilities get transferred to someone else. I have, so far, not done this, both because it feels slightly dishonest and because, like in the footnoted situation, I fear it leads to lower productivity/quality on the Ops side.[1] Judging by some comments I've seen here, this also happens but has not yielded a large set of examples of successful outcomes (including keeping those developers happy and productively developing)
 I've only seen devs doing that stuff at very small startups.
 I think devops was defunct once the asphalt hit the hardpan. Approaching 'generic' secure operational environments as programmable, iterable and contained is one of the great IT lies of the last 20 years.
 Yes it makes no sense for developers to understand the underlying systems that run their applications. /sarcasm
 Understanding the characteristics of the underlying systems is one thing, but actually building and maintaining them is something completely different.
 You definitely have to have a good knowledge about the underlying system, no doubt.But actually shouldering the burden of creating and maintaining the entire runtime -- and orchestrate it -- is a bit much and stretches people thin.
Maybe that's true, but it also makes no sense for me to be interrupted every 15 minutes by someone asking me "I need you to restart your $deployment for something I'm doing" while I'm trying to develop.
 Surely if they need to ask, they should be able to do it themselves?
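In Kubernetes terms, "do it themselves" can be a small RBAC grant. A sketch (names and namespace are hypothetical; kubectl rollout restart works by patching the Deployment's pod template, hence the patch verb):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-restarter
  namespace: my-team          # hypothetical
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "patch"]   # rollout restart patches the pod template
```

Bind it to the asker with a RoleBinding and the interruptions become their own `kubectl rollout restart deployment/<name>`.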
 This is how we ended up with Electron apps.
 VSCode is an electron app and it's pretty amazing.
No, that's unfair. User interfaces have always been and will always be a key responsibility of developers.

Writing native UI code is no more low level than using electron. It's just less familiar to web developers and less portable. These are completely different considerations than the greater degree of division of labour I'm predicting for the server side.
>Writing native UI code is no more low level than using electron.

For one, JavaScript doesn't have shared-memory threads like lower-level desktop languages do; it's all neatly hidden in the JS event loop, or you're forced to go the IPC route. Shared-memory multithreading introduces low-level complexity but brings performance benefits if you can leverage it. Plenty of other low-level APIs and technologies either can't be used from JS or are wrapped in higher-level sandboxed constructs, for portability, security, ease of use, whatever.

Writing native UI code often forces you to deal with low-level platform specifics, but you get the benefits. HTML UI is just a cheap way to do it (not meant in a derogatory way: it's easy to find devs/designers and to target multiple platforms from the same code base).
Another prediction: in 5 to 10 years, infrastructure will be so streamlined that developers won't need to think about it at all. I see the progress when we came from manually maintained machines, to clumsy devops tools, to more streamlined dev tools like ansible, to kubernetes, and with each new step, maintaining a system requires less and less expertise.
 "I see the progress when we cam from manually maintained machines, to clumsy devops tools, to more streamlined dev tools like ansible, to kubernetes, and with each new step, maintaining a system requires less and less expertise."As someone trying to be primarily a dev who keeps getting roped into doing ops stuff for deploying my stuff, my experience is the exact opposite, and in particular putting Kubernetes on the end as the least complicated technology seems outright crazy to me, like, seriously, what on Earth are you talking about? You literally can't bring up a hello world web app without basically knowing everything you needed to know to bring it up on bare metal, and everything you needed to know to bring it up on a VM (whose crazy virtual networking stuff now seems like simplicity itself compared to Docker/Kubernetes), and everything you need to know to bring it up in Docker, and everything you need to know to bring it up in Kubernetes... and you need some sort of solution around building Docker images, too, so you're still going to need to know that, too.There's all kinds of reasons this is necessary and there's all kinds of reasons why provising a new hardware server, blasting a Linux distro on to it, and just running your server is a bad idea. By no means am I proposing we need to go back to some glorious past, where losing a single hard drive sector means "oh well, guess the database is gone! recovery plan? Our recovery plan is to not need the database anymore!" I'm just saying, it's crazy to think this is all getting simpler. Try looking at this through the eyes of a fresh grad sometime.Granted, I'm a bit crabby that literally every time I go to deploy something new, on a roughly 18 month time frame (the intervening time being maintenance and feature development), I have to go learn something else for my one-off task. (I don't think devops manages to beat the Node/JS world for churn, but it's probably in second place.) 
But still, it's getting more and more complicated, not less, and I don't even sense that we've passed an inflection point on the complexity reducing yet.
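To be fair to the parent's point, the app itself really is the trivial part. A stdlib-only Python sketch of the hello world in question (names here are illustrative, not from any real deployment):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class Hello(BaseHTTPRequestHandler):
    # The "application" is ~10 lines; everything hard lives outside
    # this file: the Dockerfile, the registry push, the Deployment
    # and Service manifests, the cluster networking.
    def do_GET(self):
        body = b"hello world\n"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the demo quiet

def make_server(port=0):
    # port=0 asks the OS for any free port
    return HTTPServer(("127.0.0.1", port), Hello)
```

Wrapping this in an image, a registry, and a cluster is where the layers of required knowledge pile up, which is the complaint above.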
 > There are all kinds of reasons this is necessary, and all kinds of reasons why provisioning a new hardware server, blasting a Linux distro onto it, and just running your server is a bad idea. By no means am I proposing we go back to some glorious past where losing a single hard drive sector means "oh well, guess the database is gone! Recovery plan? Our recovery plan is to not need the database anymore!" I'm just saying, it's crazy to think this is all getting simpler. Try looking at it through the eyes of a fresh grad sometime.

Those "all kinds of reasons" aren't necessarily valid, though, because it's not actually possible to go back to the caricature of the past you describe. The reason it's not possible is that some things actually have gotten simpler, at least from a user's perspective.

Automatic bad-block relocation has been standard for effectively forever. Even line-speed hardware RAID has been affordable for so long that it's not even a question. Hardware in general is routinely vilified as a nightmare, when the reality is that it's boringly reliable and, more importantly, has predictable enough failure rates that there are simple, standard engineering solutions around the failures (often already baked in).

That "inflection point" may well just end up being peeling away all the abstraction layers to discover that the simplicity underneath works just fine, since we don't live in the 20th century any more.
 No, the future is serverless applications on the major cloud providers, where every server is abstracted away and things scale automatically. The reason is the enormous cost benefit (it's super cheap: you only pay when your code actually runs, no upfront costs, no passive server costs, no over- or under-provisioning of compute resources, etc.).
 If you’ve ever used a serverless framework, you’ll know this argument falls apart the second you need to run something for a long period of time, manage complex state, or glue services together. There are numerous other details being glossed over. The serverless approach has its place just like everything else; it’s not a panacea, though.
 Eventually, it should be abstracted even further into something that looks like Heroku, but deploys your code onto serverless functions or containers depending on how your service needs to run. A nice, modular, microservice backend automatically with zero config.
 IMO, serverless in the future will be just a kind of resource on k8s. k8s is not serverless; k8s is an open source AWS.
 Why wait for the future? You can deploy OpenFaaS on Kubernetes, and I believe there are more serverless projects you can deploy on a k8s cluster.
 I think serverless will be included in k8s, or in a competing product that replaces it, out of the box.
 We’ve had Heroku for over ten years now...
 Have you used the AWS interface lately? It's getting further and further away from being streamlined.
 AWS is legacy. That's why Kubernetes has uptake. They are in danger of becoming a rack provider if this continues. Their managed applications still have a place, but with third parties, they might become a glorified cloud app store, which they might desire anyway.
 I haven't worked with Kubernetes but I have worked a lot with Docker. Building your containers is difficult, using existing containers has a bunch of gotchas, and getting your containers up and running in anything resembling a production environment is a giant hassle.

I think containers are the future, but we need the kind of idiot-proof GUI systems that Heroku and similar services provide to wire up our applications, and we need a good way to handle persistence, which many modern tools ignore.
 This reminds me of the OpenStack hubris, wherein they would dictate requirements to AWS due to the sheer momentum of the community. Uh, nope.

Kubernetes' uptake is nowhere close to AWS by any measure. And AWS is the #1 public cloud that hosts Kubernetes workloads.

It's every IT vendor in the world vs. AWS, and so far AWS is winning. Kubernetes and open platforms have a shot, but they're not moving fast enough yet in the direction that matters - up the stack.
 AWS is a feature factory and they are breaking their own back. Today I had an issue where an AMI created for an application feature set as a golden image (with a very modest price tag at t1.supersmall or whatever) does not scale onto high-end compute instances due to lack of ENA support. That was never the case before.

Rolled it back into KVM/QEMU in colo with a glue-layer REST interface over virsh, and I will never look back.

Of course we don't use containers... they don't offer an overt benefit in HPC, and I don't think they ever will.
 Legacy? They are exploding with new stuff weekly, and growing like crazy.There is a much bigger market for infrastructure in the cloud than people realize, and I will be very surprised if kubernetes is not replaced in 5 years.
 Infrastructure already is streamlined as hell these days, even bare metal installations. Manual installations from 1996 versus a kickstarted USB installation, iPXE, etc. already feels like things are on pre-school level.What will very likely not be going away in the next decade is networks.Unless we can build tooling for setting up the network infrastructure below all these cloud tools, you will still be stuck with people doing admin work like 30 years ago. From personal experience: k8s clusters are easy. Building the hardware and especially networking below so that everything actually can be automated in k8s... whole different story.
 Exactly. It will be as simple as working with one machine (long overdue), back when developers never needed help.
 There are a lot of benefits to having the same people do a lot of things (see also: full-stack devs). It means that someone has an overview and situational awareness. Without generalists who have that overview, you're going to need a lot of documentation, paperwork, and meetings to preempt communication problems.
 You are quite right. I already have to juggle .NET, Java/Android, C++, SQL, and Web in my head; I don't need this as well.
 What does "DevOps is dead" mean?
 That devs will no longer do ops.
 DevOps is more like System Administrators doing the work of Software Engineers for half of the money.
 wait, is that what devops means to you? That devs are doing ops?
 That is the predominant definition, so yes, I would assume just that.

It's more or less a logical consequence of infrastructure as code, because it's the developers who write and push those files.
 If that's the predominant definition, then pretty much everyone who coined the term, wrote books about it, or runs devopsdays events has failed, and we might as well pack up the use of the term. Because not one of them would agree with this.

Devops is/was a professional movement to get developers and operations folks to communicate with one another, create a shared value system and culture, drive continuous learning, and enhance automation and visibility/metrics, so we can all build software faster, safely and sustainably.

Infrastructure as code was one small aspect of this. Continuous delivery, microservices, cloud native platforms, a sharing/collaborative culture, lean product development, modern alerting/monitoring - all were part of this.

But it seems tech pop culture clichés win out over best intentions again and again, as they did with 'agile', 'cloud', 'structured programming', 'object oriented programming', 'reactive', and 'REST'.
 There are three (non-exclusive) variants of this theme.

a) The term is misappropriated to refer to something rather different from what it was originally meant to refer to. I may be guilty of doing that.

b) The concept was too fluffy to begin with and sparked a cottage industry for consultants without helping anyone else.

c) The original ideas were flawed and have so many unintended consequences in practice that people rightly start associating the term with those negative outcomes instead of the well-meaning goals.

Perhaps a bit of all of the above is what's causing the dissonance here.
 I'd posit that, for b), consultants (good and bad) always sprout up in any area, and have throughout history (see: Sophists vs. Charlatans). But what is "fluffy" (ill-defined, nebulous, uncertain, difficult) to one person is a deep area of research for another.

Put another way, people tend to dismiss what they don't understand or can't see as unimportant. This is the "looking for your lost keys under the street lamp" syndrome. Most often people can only understand the future in terms of the past, and if they don't have past exposure to a topic, they're not going to see its relevance unless they put extraordinary effort into paying attention.

For example (not necessarily directed at you), devops folks put a lot of emphasis on Lean concepts that come out of the Toyota production system. But if I think "wtf do I care about making cars?", I might see ideas like "continuous flow" or "cost of delay" as fuzzy and irrelevant to my work as a developer. Yet they have huge impact on how work is organized and made productive, and literally billions of dollars have been spent developing and honing these ideas in the product manufacturing industries... ideas that just might have broader relevance to the dysfunction of how traditional enterprises run their IT shops vs. how Amazon does.

As for c), all ideas are flawed and have unintended consequences :) - focusing too deeply on the negative is a cynical reaction that often gets back to (b).
 That's very much not the predominant definition, most companies that hire DevOps people just use it as a different name for a more modern infrastructure engineer. Shitty cheap companies use it to hire developers & infra in one role. My personal opinion is that a DevOps engineer is someone who probably once was a developer so he's very well versed in coding, but switched over to infrastructure.
 That's not my understanding of the term DevOps. Rather, DevOps means using version-controlled artifacts for deployment rather than manual deployment or one-off scripts, and considering those artifacts a "product" in their own right, or part of the main development artifacts - so that qualified devs can create/edit them along with other developer artifacts, though frequently these artifacts are created by admin-like people.

And I don't see DevOps becoming obsolete at all; just the other day I spent a day deploying apps with an ops guy, where neither of us could do it all on his own: the ops guy lacked understanding of the code bases, the endpoints to configure, and the integration test procedures, while I didn't have permissions to redeploy and view logs.
 The buzzwords are ultimately meaningless but that sounds more specific to ”infrastructure as code” than the meaning of “devops”. With iaac basically being one of the facets of devops. Does that sound right to you?
 I don't know. IAAC is a much newer term than DevOps, and used primarily in combination with IAAS, so I think its meaning can't be used as an argument to give or take away meaning to the term DevOps.
 It generally means "ops work that is done with the best practices of dev work".

Yes, that means infrastructure as code, including code reviews, frequent deployments, etc.

No, it doesn't require that the developers working on the devops code are the same ones who work on the app code. At a small company (e.g., three devs) they will be, but at a large company, you have a dedicated "devops team" who exclusively works on infrastructure (as code).
 I don't take Wikipedia as sacrilege, but I don't see anywhere where it says developers do ops: https://en.wikipedia.org/wiki/DevOps
 Was "sacrilege" the word you intended? Or perhaps "sacred"?
 "Devops does not mean devs doing ops" is like "story points are not time estimates." It gets repeated so often because it's such an easy mistake to fall into.
 Just like communism, if in real life 99% of practitioners are doing it wrong, maybe we should revise the theory? Some dreams have a strong attraction, but they’re just like the mermaids of old.
 It's more like experienced devs doing ops in support of devs. As with most terms like this ymmv.
 From some of the folks I was talking to, it seemed more like companies firing their SysAdmins, Release Management, and DBAs and making the developers do the production operations / support.
 Yep, I think it means you can hire one "DevOps Engineer" to do the work of multiple sysadmins/dbas by using these cloud-based services.
 Seems he's essentially just predicting an increase in the division of labor between product development and infrastructure work going forward.
 That's interesting; it sounds like the world of 5 years ago and before - outside of the Netflix/Google/Facebook giants. Sounds like the landscape made systems so complicated, via ease of automation, that people don't know what's happening under the hood anymore. That's good news for consultants like myself.

Sincerely, DevOps guy formerly known as SysEngineer formerly known as SysAdmin for 15+ years.
 If you know Ansible/Puppet/Chef, a K8s platform doesn't require a complicated mess of scripts to manage.

Sincerely, Platform Engineer, former DevOps guy, former SysEngineer, former SysAdmin for the past 24 years.
 In Devops ops write infrastructure as code. I'm an old school bash perl sysadmin. Devops with Agile is a cultural shift and has transformed the last 4 companies I've worked at. Two of them having ~100,000 employees. We now do ops with 6-10 people for systems that used to require 100+ people to build and maintain.
 We're in the process of migrating our (primarily) Java services from straight AWS to Kubernetes. At the beginning the author poses the following questions:

* Do you use Mac, Windows, or Linux? Have you ever faced an issue related to \ versus / as the file path separator? What version of JDK do you use? Do you use Java 10 in development, but production uses JRE 8? Have you faced any bugs introduced by JVM differences?

* What version of the application server do you use? Is the production environment using the same configuration, security patches, and library versions?

* During production deployment, have you encountered a JDBC driver issue that you didn’t face in your development environment due to different versions of the driver or database server?

* Have you ever asked the application server admin to create a datasource or a JMS queue and it had a typo?

I've experienced problems whose root cause was some form of all of those. Much of it could be chalked up to growing pains et cetera, but, for example, there are concrete differences between Docker on Mac and Docker on Linux that have shown up for me.

This doesn't reduce the author's argument, but they do seem like strange examples.

Our choice to move to Docker and Kubernetes came from developers, and specifically spoke to the need for consistent, reproducible test environments. We had dockerized most applications many months before the notion of using them in production was put on the table. What remains to be seen is whether the switch in production reduces complexity and maintenance on the devops end of things as well. I'm also curious how many other organizations had containers introduced 'from the bottom up' like us.
 Hm. My team is running and developing a somewhat complex system based on jvm for years, and never encountered any of the issues in the bullet points. Having consistent test, staging and production environments produced by ansible have been working out for us (and is not hard). Are we just being lucky?
 I don’t have enough experience to say how common my experience (or yours) is, but I don’t know how much luck is involved. I’m sure many institutions have solved these problems without containers and Docker.

But for various reasons - political, personal, technical, et cetera - consistent, accessible test environments were not available to us. It was a huge bottleneck for developers. This is why Docker was so appealing: it allowed devs to circumvent the political and technical issues. We didn’t have to justify provisioning more instances, since we could just use docker-compose to run stacks on local machines.

So in that sense, the choice and its benefits arose from non-technical hurdles.
 Honestly, most of the adoption I've seen has been bottom-up or from new hires. I think there is a major point in having happy development environments. Happiness with tools is very important and hard to quantify. https://www.johndcook.com/blog/2011/07/31/enjoyment-of-ones-... has a fun exploration of two great quotes in this vein.

That said, if you can, I'd be ridiculously interested in a retrospective on your transition when you have a chance. Good luck with it!
 Sorry, but this article implies that containers can solve issues like the difference between path separators on Windows and Linux. They can't even solve issues arising from differences in kernel versions, not to mention operating systems.
 Nor does Docker protect against such obvious things as differences in versions of base images. Check out the Slack dialogue at the end of this:
 Did you consider http://kubevirt.io/ ? It's supposed to solve some of these hard limitations.
 This can work, but stacking framework on top of framework is what made the classic application servers undesirable and a horrible mess to begin with. Observing similar patterns creeping into the Kubernetes culture and ecosystem does not inspire optimism in an outside observer like myself.
 Always use /. Windows can, surprisingly, handle it just fine.
 I spent two hours scratching my head today because I had put "\path\to\thing\**" in a nuspec file instead of "/path/to/thing/**". The backslash was escaping the wildcard characters :(
 On interactive uses, sure. But it's not clear cut, some code paths are lower level and bypass that.
 Yeah, I guess most of my uses have been in things like Python (e.g. open() and os.*). Got any examples of code paths that bypass that handling and fail if you use /?
 I think they are wrong and actually it's the opposite problem. Some applications and libraries that do their own path mangling on Windows will choke if you give them a /. The win32 file I/O API handles them correctly.
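For what it's worth, Python's own Windows path handling behaves the same way - a quick stdlib check, runnable on any OS:

```python
from pathlib import PureWindowsPath

# PureWindowsPath treats / and \ as the same separator, so these
# two spellings name the same path on Windows.
p1 = PureWindowsPath("path/to/thing")
p2 = PureWindowsPath(r"path\to\thing")
print(p1 == p2)   # True
print(p1.parts)   # ('path', 'to', 'thing')
```

Libraries that do their own string-splitting on "\\" are exactly the ones that choke, as the parent says.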
 Never use windows, why should you for development and deployment?
 Instead of the new Application Server, Kubernetes is the new OpenStack. A conglomeration of projects that support a platform for deploying and operating software in containers or virtual machines. Just like OpenStack, some of the components deal with storage, others with networking, and others still with service discovery or proxying.Without getting too far into semantics, I think the author is using a well overloaded terminology. When I hear Application Server, I immediately think tomcat or some other jvm.
 It is hard not to build this impression of containers. Worse, it seems that the docker image format is massive compared to what most war files were like. This is annoying not just in terms of raw bytes to move around, but in taking stock of what is getting moved around.My team originally pitched how docker solved much of the dependency upgrade management by having layers for each major set of dependencies. That ignored the fact that upgrading a layer is not really something you do.So, then you can go around the path of coordinating many containers communicating with each other. That works, but in that world, things really don't seem any easier than the earlier alternatives. Harder, in many ways.Don't get me wrong, the momentum and raw money being put into containers certainly paints it as the future. It just feels like lying to say that they have even come close to parity with what we were capable of not that long ago.
 Exactly. Docker files are worse than WAR files in pretty much every way.

The economics are interesting, though. Computing resources are super cheap and getting cheaper every day (well, except for RAM!). The waste produced by containers says: sure, give each application its own app server, its own web server, its own JDBC driver, even its own JDK. We'll throw it all on the cloud and run it for a few cents an hour.

The problem is that testing isn't cheap. This is where microservices and containers and all this craziness will fall down. Now you've got dozens and dozens of applications all running with their own application servers, JDBC drivers, and their own JDK. This is a combinatoric explosion in your testing surface.

The true value of the application server approach is that it forced applications to conform to a clear contract. Once your application conformed to this contract it could be thrown over the wall to a full-time ops organization that could transparently deploy, monitor, and manage the app.

The container madness will come to an end. People will realize that giving developers the keys to the kingdom and total freedom over their application is a terrible idea. (And smart developers don't want total freedom.) What will emerge will be something much more interesting: a hybrid approach where applications can be bundled into artifacts that fully express their dependencies, and containers can be annotated in a way that fully expresses their capabilities.

Containers provide isolation and immutable infrastructure, and this is good. App servers provide standardization, specialization, and separation of responsibilities, and this is also good. There's no reason why we can't enjoy both.
 We use containers to reduce the testing surface and enforce standards - and of course to ease testing, as we now move the same build through dev, staging, and prod. The devs just have a single base container to work from, and we use CI/CD to further enforce whatever we need to. It also makes it easier for devs to propose changes, as we know we can apply them consistently.
 You can use them to do that but is it working? That was the point, I also agree that no, it is not working and has rather introduced several layers with painful and hard to debug failure modes.
 A problem comes down to answering questions such as "exactly what versions of X are installed in our beta stack?" Usually trivial if X is part of your application. Much more complicated if X is some lib that is on the system/image.
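For the Python slice of an image, the stdlib can at least answer that question at build time. A minimal sketch (the idea of baking the output into the image, or shipping it to an inventory service, is my assumption, not something from the thread):

```python
import importlib.metadata as md

def dependency_manifest():
    # Enumerate every installed Python distribution and its version,
    # so "exactly what version of X is in this image?" has a
    # recorded answer instead of requiring archaeology later.
    return {dist.metadata["Name"]: dist.version
            for dist in md.distributions()}

manifest = dependency_manifest()
```

The equivalent question for system-level libraries inside a container is exactly the hard part the parent is pointing at.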
 > People will realize that giving developers the keys to the kingdom and total freedom over their application is a terrible idea. (And smart developers don't want total freedom.)

Sadly, I don't think it's actually developers who were pushing for this "freedom" (with which comes responsibility, hence it makes sense that it would be unwanted), as such. Rather, I place the blame with managers and with (fellow) sysadmins for not embracing, empowering, or forcing (whatever it takes) the aspects of the original DevOps cultural movement that have Ops bring the equivalent [1] of a production environment to every developer.

> Containers provide isolation and immutable infrastructure

I thought they weren't even (necessarily) immutable. If not, then maybe they're merely immutable-through-practice, but that's as unsatisfying as configuration management systems creating reproducible deployments that lack traceability [2].

[1] In terms of tooling for things like builds, deployment, and management (especially of dependencies), not necessarily full hardware capacity; although at one place I did provide each developer with a copy of yesterday's production database on their own, personal bare-metal database server, less "beefy" than the production hardware.

[2] Is that the right word? Something like chain-of-custody for code and/or configuration.
 Nothing prevents one from having a container with many WAR files, so that there's only a single JVM for multiple applications.
 The entire exercise is about expressing dependencies. (The real third hard problem in computer science. Only idiots fall for off-by-1 errors.)

Containers are one way to force an application to explicitly declare all of its dependencies. The problem is the application then also bundles those dependencies. There's not much room for control or intermediation. Developers have total freedom to do all sorts of wackiness. (And oh, the wackiness I've seen in containers. When the cat is away...) What's needed is a richer (and much more compact) mechanism for applications to express all their dependencies.

Another way to understand the issue is to understand the underlying economics at work here. Container economics proposes that computing resources are so cheap that, sure, let the developers go crazy and do whatever they want. But those same developers must be on the hook for all operational concerns, hence "DevOps". And this will scale until one day you look around and realize that anarchy never works, not even a little bit.

Now there's a lot of room to fall here. Red Hat is in the business of selling compute and storage resources. It's not clear that even if they could make their products more efficient they would. But as long as compute resources are cheap and getting cheaper, you can let people go wild, Red Hat will make a lot of money, and everybody will be happy. For an organization, the true costs will emerge in a very subtle manner: the productivity of the developers (already hard to measure) will ultimately fall. That's because instead of solving actual business problems, their devs are deploying dozens and dozens of polyglot microservices and trying to figure out why on earth microservice #28 - which Bob, who left the company a year ago, decided to write in a version of Lisp that he invented - crashes every day at noon.

But hey, at least Kubernetes makes it easy to deploy everything!
 > There's not much room for control or intermediation.FWIW there are container-based app platforms that do allow you to swap out filesystem layers to update dependencies and to remove control from developers by having a standardized containerizer that has extension hooks but can't be mucked with at the lowest levels. This is how Cloud Foundry works for example, or Heroku.
 For example, OpenShift (Kubernetes + stuff, also from Red Hat) exposes this pattern like both Heroku and Cloud Foundry by:

A) focusing on having standard base images controlled by ops

B) encouraging combining source code / built artifacts as a layer on those base images

C) giving controls to ops so that the only images users can build or run must be built via A/B above.

In that mode containers are less wasteful, because you can share the base image across every host (or rebuild everything centrally), and all that gets downloaded to a host is the source code top layer. Which is roughly indistinguishable from the Lambda runtime and how it accesses the code to execute.
 Don't most virtualized hosts already solve the "less wasteful" point? I get that hypervisors and friends aren't bulletproof either, but there are fewer ways to shoot yourself with those than with containers.

More, you have to worry about the container host anyway, so you haven't removed the need to maintain a host. You've just added the need to maintain the rest of the container infrastructure as well as your application stuff.

For places that have not solved the "virtual hosts should be virtually free" problem, containers are quite welcome. You can get going with them quite quickly. If you have already solved that problem, they can look an awful lot like just more work.
 Agree, virtual hosts are ubiquitous. But I don’t know that anyone loves their virtual machines the same way they love the smaller, faster, and simpler alternatives (unikernels included).This is a somewhat pessimistic viewpoint, but lowest common denominator solutions tend to acquire the most network effects. A VM requires more touch points to manage for the person who has to set up a machine - despite ten years of solid progress, they still tend to be pretty annoying to configure and build and manage. The platform as a service approach (whether lambda, nodejs on cloudflare, various functions as a service approaches, heroku, cloud foundry, or dokku) on the other hand take away a lot more hassle by abstracting pain points out, but get accused of being too rigid. Both extremes benefit specific use cases, but have disadvantages in general purpose use.Containers sit in the ugly, dirty, practical middle. They can do both (VMs are just processes). So the network effects they accrue just like Linux did of being “good for everything, not great” help mitigate some of the disadvantages.The public cloud providers change this calculus a bit by offering these things as a service, but internally they are just managing the container runtimes for you.I’m obviously biased, but I tend to see containers as “good enough” to build other abstractions on top, with specific areas where VMs and heavy PaaS abstractions clearly win.
 What's been interesting to watch from the Cloud Foundry POV is the circular migration of the boundary between development and operations.

CF built container technology before Docker or Kubernetes - two generations of it - because it was seen as the right primitive by people with experience of Borg. But containers were not touted as ends in themselves. So the contract boundary given was: sourcecode. Buildpacks.

Docker comes along, then Kubernetes, and the container goes from being a hidden detail to a central concept around which a lot of other stuff orbits. And containers are a step forward on a lot of axes. Developers begin to want to use containers as their shipping unit. So the contract boundary became: images.

Later, ops realises that while opaque running containers are awesome for reducing management complexity, they don't reduce all categories of risk. After all: what's in the damn containers? And so various tools have emerged from the container-oriented ecosystem to take sourcecode and turn it into a container image, so that developers and operators have a consistent handoff point. So the contract boundary becomes: sourcecode again.

It sounds like a nice story, and it might seem like we'll go in circles hereafter. But we're not doomed to do poetic laps: what's happened in the middle has been the rise of CI/CD tools, sitting between the container boundary and the sourcecode boundary. Good fences make good neighbours, and it turns out that fences made of helpful robots make even better neighbours.

As a Pivot with a long association with Cloud Foundry, I have enjoyed in the past few months getting to compare notes with Red Hatters and others in the k8s community.
 To be fair, it is more likely that random microservice #28 is running fine, but folks don't like it and are uncomfortable modifying it to add a variation to one of its rules. Thus giving birth to microservice #28'. :) (Please read this tongue in cheek.)
 > Exactly. Docker files are worse than war files in pretty much every way.

They focus on far too low-level a problem.

> The problem is that testing isn't cheap. This is where microservices and containers and all this craziness will fall down. Now you've got dozens and dozens of applications all running with their own application servers, JDBC drivers, and their own JDK. This is a combinatoric explosion in your testing surface.

I fail to see how this is more difficult than the 95% of existing enterprise systems that run various JDK versions, various WebSphere versions, JDBC drivers, etc. The difference with containers is that vendoring your dependencies makes all that explicit. Whether you choose to stay on old versions is a mistake you can still make (and will make if you leave container creation and patching in the hands of dev teams).

That said, since when does any organization build horizontal testing libraries, like "test X version of JDBC", so they can standardize on a single driver across all projects? It's never done. Projects are all over the place with their dependencies unless there is institutionalized forcing through Maven repos or whatnot to block the download of old/insecure versions.

There is no combinatorial problem here: each test suite needs to unit test the individual service and then test the API contract of the service. If you let your dependencies decay you're accepting major security and maintenance risk, but you were in the WAR file world too.

> The true value of the application server approach is that it forced applications to conform to a clear contract. Once your application conformed to this contract it could be thrown over the wall into a full-time ops organization that could transparently deploy, monitor and manage the app.

LOL, I really think you're exaggerating.
I've worked with WebLogic, WebSphere, IIS, JBoss, Tomcat, Django, Rails, Node, you name it over the years, and while some orgs got close in the Java world, this was mostly a pipe dream that never happened in any cost effective manner as a standard practice.> The container madness will come to an end. People will realize that giving developers the keys to the kingdom and total freedom over their application is a terrible idea. (And smart developers don't want total freedom.)This I agree with. It's all far too low level.This really was Docker, Inc.'s fault IMO, their marketing message was to empower developers to get ops out of the way by building these containers that would magically just be run as opaque lego blocks. That was the hype everyone on HN was drooling over in 2013. Turns out it's not quite that simple.
So just reuse the common layers and have only your WAR-file layer change.
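In Dockerfile terms, that pattern just means ordering layers from least to most frequently changing. A sketch (the base image name and paths are illustrative, not from the thread):

```dockerfile
# Base JRE layer: shared by every service, changes only when the JDK is patched
FROM eclipse-temurin:17-jre

# Application server layer: shared, changes occasionally
COPY tomcat/ /opt/tomcat/

# WAR layer: the only layer rebuilt and re-pushed on a typical release
COPY target/app.war /opt/tomcat/webapps/ROOT.war

CMD ["/opt/tomcat/bin/catalina.sh", "run"]
```

Because every layer above an unchanged layer is a cache hit, the registry stores and transfers only the small WAR layer per release.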
The problem is when there is a vulnerability in some part of an earlier layer. You have to rebuild the entire stack.

I honestly don't think this is necessarily a terrible thing. But the idea that your common layers are stable is a dangerously bad assumption.
What are you defining as stable? Never changing? That's not something that should even be hinted at.

The fact that you can log in to a VM and update a package does not make the system stable either (and of course you can actually do this with running containers as well). Add to that, you still need to restart running applications to take advantage of the package update (assuming the package is a shared lib).

Meanwhile, you can push new base images, automatically trigger rebuilds, and roll out the update when ready.
I suspect we mostly agree with each other. My experience with folks using containers so far, though, is that this is often what's hoped for.

Specifically, we had devs talking about how we wouldn't have to worry about system patching anymore, because the containers would take care of that. With no answer for how we trace versions and patches through our systems.

If you are already tooled up enough that you can completely redeploy a full stack easily, without relying on in-place modifications, the difference between a VM and a container is relatively minimal, all told. Especially since you have to be ready to pull down the host of the containers anyway.
The main difference is that a container is focused on an application while a VM is focused on a machine. Generally the patching problem can be solved with image scanning; there are tools out there for this, both FLOSS and paid.
The Linux world is slowly (and badly) recreating technology that has existed in FreeBSD and Solaris/Illumos for decades.
You've probably never heard of or used Linux VServer, available from later versions of the 2.4 kernel.
> 2008

FreeBSD jails date from 2000. Solaris zones date from 2004.

https://us-east.manta.joyent.com/bcantrill/public/ppwl-cantr...
2003: http://linux-vserver.org/Overview#History

In the Linux distribution that I used (PLD), the first vserver support landed in January 2004: https://github.com/pld-linux/kernel/commit/5be58c1bcc5568676...

Maybe a little earlier, because util-vserver showed up in November 2003: https://github.com/pld-linux/util-vserver/commit/c4036d6e748...
Be fair here. The software world is slowly recreating technology, and to be even more fair, it does often add to it: usually UX capabilities, but sometimes full-on features. Yes, often the new features require more resources than were available before, such that previous versions couldn't have offered them.

That said, this is not limited to any one corner of our industry. It is likely not even limited to our industry.
Software gets recreated, sometimes without the security promises the previous solution had (Linux containers as they started vs. Solaris zones). To be fair, Google didn't really need that security in Linux containers; they didn't run multi-tenant workloads, so it wasn't their top concern when submitting code to the Linux kernel...

We like building new things, from different angles, but in the end it seems like everything cycles through the same ideas.
 Completely agreed. It is hilarious because we all preach "don't reinvent the wheel" at the same time that our interviews are basically "can you implement this ridiculously low level wheel" on top of "come here where we are simply using the latest rims!" :)
 Which technologies were those?I’m only vaguely familiar with FreeBSD, and not at all with Solaris.
 FreeBSD has had jails and Solaris has had zones (with a Linux compatible zone as well aka branded zone). Given Illumos takes a lot of its roots from Solaris, it also has LX branded zones.
 And HP-UX/Tru64 before them.We were deploying into HP Vaults in 2000.
Yeah, except even jails have gaps; for example, shared memory isn't jailed.
 [flagged]
 I disagree with the tone of your message, but +1 for bringing up the fact that the software world's lack of knowledge of history dooms them to endlessly repeat it (I witness this on a daily basis).
It's not that. The tech is not a solution to everyone's problems to begin with. It's a purely commercial tech, made in an attempt to compete with Amazon and sell you stuff. It's very lock-in-y, expensive to leave, and crippled on purpose to make you buy services from Google. The barrier to entry for competition is high too, so it will be pricey.
Kubernetes is the absolute opposite of everything you said. Open source, cloud-agnostic, cloud-optional... it literally abstracts the hardware so you are not locked in. You can run it anywhere.
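For what it's worth, the portability claim is concrete: a plain Deployment manifest like the one below (image name and labels are illustrative) applies unchanged to GKE, EKS, OpenShift, or a bare-metal cluster.

```yaml
# deployment.yaml -- nothing here is specific to any one cloud provider
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: nginxdemos/hello:latest
        ports:
        - containerPort: 80
```

`kubectl apply -f deployment.yaml` works against any conformant cluster; provider lock-in only creeps in through extras like managed load balancers and storage classes.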
 What features of Kubernetes are only available for a price? Since Kubernetes is fully open source and self hosted/available from any cloud provider now, how is it crippled?
 What I meant is you can't get GKE/GCP level experience on bare metal.
Be polite, drop the harsh language and maybe the "stupid fucks", and people will be open to a discussion.
You'd have a point about hype, tech cycles, and solutions being pushed out of ignorance rather than technical superiority, if you changed the language to be more civil.
 You're right.
 Although it's a bad point without any concrete examples. Without that, it just seems like a baseless emotional reaction to not liking something new. I might even be inclined to agree if there was a bit more detail given.
Containers are a solution looking for a problem, used more for organizational reasons than technical ones.

Disclaimer: Infra/DevOps engineer previously, 20 years total in tech.
 I hear things like this a lot but I never hear how you could accomplish the same things as Kubernetes (or any other orchestrator) without containers. Even if there were good solutions available for this, and the only advantage of containers was an organizational advantage rather than a technical one, I think that would still be a win.
 The same way we orchestrated before containers: VMs, config management, and APIs at the control plane
 There is a certain level of saltiness that is well deserved, and I'm not sure that the GP came close to strong enough. We're swirling around in circles, rediscovering the same thing every five years, leaving a blasted wasteland of half-baked abandoned technologies behind, endlessly chasing the shiny - and overwhelmingly, it's the extreme ignorance and illiteracy of the profession that is to blame.
> it's the extreme ignorance and illiteracy of the profession that is to blame.

I'm imagining a thought experiment about the requirements to understand and operate in this industry... take this article that was posted, for example. Imagine handing it out at a family reunion and asking each relative if it makes any sense at all. Even the ones who are developers might only get a small gist of it, and only if they write Java. For humans to digest decades of information written by thousands of other humans in terse code... it's a daunting task, married to the side effect of accidentally repeating past mistakes. Maybe if we had an Oracle to help us... oh wait, that's a terrible joke.
I hope that the Kubernetes Steering Committee will continue its good work to keep the core tidy and simple.

What really worries me are the thousands of complex addons being pushed by the community (for example Istio, networking addons, etc.). Those should be kept outside, and it MUST be made clear they are definitely not needed for a normal installation of Kubernetes.

Istio, for example, is such a political brainwash power-move by some bigger companies that benefit from it. I believe less than 5% of use-cases really require Istio, yet it is being pushed as something you should always install in your cluster. This is bad for everyone.
Guys who have worked with Kubernetes and virtualization, I have a genuine personal question for you. I have been working as a sysadmin and then moved into virtualization (VMware / Hyper-V sort of things), and while the natural progression there says go for advanced VMware courses (virtual machines and their concepts), do you think it's better to switch lanes to container tech now rather than some years down the line? I know absolutely nothing about Docker / Kubernetes / OpenStack (if that's even related to containers).
 At this point it is clear that this is a thing that will happen for a whole lot of orgs. You would be well advised to dabble at this point and see if it connects with you. At the least, you should understand what problems are and are not solved so you don't look foolish in a conversation. That said, VMs and scripted/config-managed deployment will continue to be with us for the foreseeable future.
"At the least, you should understand what problems are and are not solved so you don't look foolish in a conversation"

Seriously, this is so embarrassing at this point (with me having absolutely zero understanding of the concept).
 FWIW coming from a Windows dev side of things, I'd have a look at Docker for Hyper-V (https://docs.docker.com/machine/drivers/hyper-v/) and just have a play with things.If you're still getting enough work as a sysadmin and enjoying it, I think "VMs" as a concept will still be around for a long time.
Well, until someone develops and releases (and strangles all naysayers) a language-independent thing remotely like JEE JNDI... it's just not. Sorry, JEE was horrible but also so beautifully ahead of its time.
 Unfortunately, JEE was horribly behind the times. A lot of the ideas came from the COBOL middleware world. Others were a remix of CORBA.
I think what a lot of people forget is that this is being driven by costs. Companies don't want to pay for disaster recovery, and it's cheaper to set up a k8s platform. The platform itself performs disaster recovery, so that's one less cost for the company.

It's all about saving money on DR. When these companies realize that developers can't handle doing ops AND complex business logic, maybe they'll rethink it. Until then I expect this trend to spread rapidly as companies look for ways to abstract away DR costs.
In my experience, people are using k8s et al and microservices because "everyone is doing it", "nobody got fired for choosing microservices", "I need more cloud stuff on my resume", and a vague (and unsubstantiated, unless you're in SF maybe) hope to hire ops staff on the cheap. Frequently, it ends up in a mix of a partial k8s setup for services plus dedicated DB, logging, backup, and other proprietary persistence infrastructure services. In other words, the union of problems/risks and the intersection of capabilities; and in particular such that moving to another cloud provider is impossible :)

Cloud services are pretty much a business of huge marketing budgets and old-school lock-in strategies.
 Can you define what you mean by disaster recovery? To me it means the ability to recover business applications from a site failure.Assuming you have a similar definition how does Kubernetes solve that problem?
What I mean by DR is that if a data center gets nuked, you still have a replica of your k8s platform running in your other data center(s).

So k8s basically runs a replica of your whole system behind the scenes, so if a physical location goes down your system is still running.
 What about all the data in DBMS, file systems, object stores? Also what about BGP routes, firewall settings, SNAT/DNAT rules and the like?There's a lot more to replicate than just the bits in the apps.
 Those are the types of things developers simply don't have experience using. And if Kubernetes tries to replace all of those it will become the new OpenStack.
Yes to all. The most difficult thing is persistent volumes, but leveraging things like Heptio's Ark allows you to send them to an alternative storage class. It's awesome stuff.
Well, that depends on where those resources are running, and on whether you're leveraging persistent storage in k8s. All the routes/firewall/DNS etc. will be preserved in the event a data center is nuked. Like I said, k8s is replicating your entire system behind the scenes.
 Isn't that problem solved equally with VMs?
No, not unless you replicate the VMs across different data centers manually. Which actually isn't that difficult using Docker.
The software world keeps going in circles :).

Next up: a lightweight k8s server, stripped of all the crap, that can easily run and deploy a single container.
 yes, this. I'm just skipping the whole Docker thing this time around, and waiting until the circle turns and the herd decides that containers create too much complexity ;)
As mentioned in the article, the EFK stack (Elasticsearch, Fluentd, Kibana) is great for unified logging. And it's not just for OpenShift; it's working wonders for our current project with Kubernetes on AWS.
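For anyone curious what the Fluentd piece looks like, here is a minimal sketch of a config that tails container logs and forwards them to Elasticsearch. The paths and service hostname are illustrative; real Kubernetes/OpenShift deployments layer Kubernetes metadata filters on top.

```text
# Tail JSON-formatted container logs from the node
<source>
  @type tail
  path /var/log/containers/*.log
  pos_file /var/log/fluentd-containers.log.pos
  tag kubernetes.*
  <parse>
    @type json
  </parse>
</source>

# Ship everything to Elasticsearch (fluent-plugin-elasticsearch)
<match kubernetes.**>
  @type elasticsearch
  host elasticsearch.logging.svc
  port 9200
  logstash_format true
</match>
```

With `logstash_format true`, Fluentd writes daily `logstash-YYYY.MM.DD` indices, so Kibana's default index patterns pick the logs up directly.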
Could you give more details about why Fluentd and not Beats + Logstash on the other end? I don't like the idea of running Ruby on every machine just to ship logs somewhere.

Beats are pretty lightweight, and for what I used them for, they worked.
Many people use the ELK stack, which uses Logstash instead. I imagine you can swap Fluentd out for Logstash.
 I find it a little heavy for log aggregation. LogDNA has been great for my K8s clusters, and is relatively cheap.
Is this a response to the Jib announcement and the inevitable comments that followed? Sounds a lot like it is...
 [flagged]
 You have some of those floating around who want to retain their little fiefdoms. I'm at the other end of the spectrum because supporting tens of thousands of snowflakes just makes you want to get a blowtorch. As far as I'm concerned for the right workloads kube can't come fast enough.The only part I object to is when people get mystical and pie eyed about the idea that putting apps, code, what-have-you in the cloud will mystically make it work well and if it fails you just restart the container or pod and all is well. Which is of course absurd. Bad code is bad code, bad queries run poorly no matter the context. You can buy your way out of some performance issues by scaling up, but that only buys you time at best. IMHO there will always be a need for someone to help people understand why things aren't working well and fix it.
Agreed. When people say k8s/Docker is complicated: yes, it is, but it's simpler and more reliable than the shedload of custom build, config & deployment scripts I've seen in past systems.
That's funny. The same "neckbeards" that are "resisting" k8s have seen this scenario play out over and over and over again. It's very rare that someone actually comes up with something completely new. Usually what's new and exciting is a refinement of ideas that were tried in the past (or changes in the "ecosystem" that enable technologies that were previously not feasible). My prediction is that k8s is NOT going to solve world hunger and will slowly find its way into being a tool in the toolbelt. I am open to being proven wrong.
 It wasn't so much "this will solve world hunger" as much as it was "this thing that we built internally at Google ended up working out pretty well, here's a free and improved reimplementation of it that you can use".
 I'm a neckbeard, literally, and it's even gray :). I'm not sure the implied age gap proves out... maybe. I mean there are always early, mainstream and late adopters of a thing. I remember teaching C++ back in the 90's and there was always at least one person in the back of the room who at some point would say "I can do all this with structs and pointers!" or something to that effect. But yeah, ultimately if you get to be my age and are still doing engineering in this business then it's either because you're still doing it at the same place or you're not afraid of change.
Fellow grey neckbeard here.

I resisted VMs at first. I seriously resisted the idea of the cloud. I resisted DevOps and Agile. I looked curiously at containers as possibly not just a fad, having used jails for years.

Now I'm managing a large and exponentially growing K8s platform and having a great time. Resistance is futile.
Eventually those die anyway, as we raise a new generation that primarily uses containers, and all that stuff dips into legacy-rewrite-it-all territory.
 could not agree more with you.
Although I disagree, your post did make me laugh.
