After a couple of years of massive frustration with the entire direction of the "devops" segment, I think I'm resolved to get out of it and either move back up to ordinary application development or further down into the actual kernel, preferably working on FreeBSD or some other OS that's more sane and focused than Linux.
Kubernetes represents the complete "operationalization" of the devops space. As companies have built out "devops" teams, they've mostly re-used their existing ops people, plus some stragglers from the dev side. These are the people you hear talking about how great Kubernetes is, because for them it's "run a Helm chart and all done!". Which makes sense, since they were, not too long ago, the same guys fired up about all the super-neato buttons to click in the Control Panel. 90% of "devops" people at non-FAANG companies are operations people who just think of it as a new name for their old job.
Among this set, there's no recognition of the massive needless complexity that permeates all the way through Kubernetes, no recognition of the tried-and-tested toolkit thrown away and left behind, no recognition of the fact that we're working so hard to get things that we've had as built-in pieces of any decent server OS for decades. No recognition that Kubernetes exists so it can serve as Google's wedge in the Rent-Your-Server-From-Me Wars, and no awareness that just in general, there's no reason it should be this hard.
Of course, to them, it's not hard. They have an interface with buttons, they can run `helm install`, they get pretty graphs via their "service mesh". That's what I mean by "operationalized"; Kubernetes is meant to be consumed, not configured. You don't ask how or why. You run the Minikube VM image locally and you rent GKE or EKS and go on your merry way. The intricacies are for the geniuses who've blursed us with this death trap to worry about! Worst-case, you use something like kops. Start asking questions or putting pieces together beyond this, and you're starting to sound like you don't have very much of that much-coveted "devops mindset" anymore.
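To make the "consumed, not configured" point concrete, the entire workflow being celebrated amounts to roughly this (the chart and cluster names here are my own illustration, not anyone's actual setup):

```
# The whole "operationalized" workflow, more or less. Chart and cluster
# names are illustrative placeholders.
minikube start                                      # local toy cluster
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-release bitnami/nginx               # "run a Helm chart and all done!"
gcloud container clusters create prod --num-nodes=3 # and for prod, just rent one from Google
```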
"What happens if there's a security issue?" Oh, silly, security issues are a thing of the past in the day and age where we blindly `FROM crazy-joes-used-mongodb-emporium:latest-built-2-years-ago`. Containers don't need updates, you goose. They're beautiful, blubber-powered magic, and the Great Googly Kubernetes in the sky is managing "all that" for us. Right on.
I'm picking on Kubernetes specifically here because it's the epitome of all this, but really everything in "devops" world has become this way, and combined with the head-over-heels "omg get us into the cloud right now" mentality that's overtaking virtually every company, it's a bad scene.
Systems have gotten so much more convoluted and so much dumber over the last 5 years. The industry has a lot to be embarrassed about right now.
> After a couple of years of massive frustration with the entire direction of the "devops" segment,...
> Kubernetes represents the complete "operationalization" of the devops space.
It's funny that I have all the exact same concerns and misgivings as you, but the exact opposite feeling (ie reactionary bias) of where it has come from :)
To me in my less charitable, curmudgeonly, cynical moments (usually when tracking down a Docker or Kubernetes problem), it feels like "ops" was overrun by talented but inexperienced devs adding too many layers of abstraction too quickly, CADT style, funded by large valley tech companies and VC money. And constantly changing them as fashions come and go. It's like the React ecosystem having a go at Go.
For all the faults of the culture of slower moving traditional systems software written in C etc, we didn't run into so many issues with the level of bugginess or with tooling that gets abandoned/replaced before we've even finished evaluating it. It's like Go has done for systems software quality what PHP did for web software.
To be fair, Docker is maturing now, and Kubernetes itself has too, but the gap between the now relatively stable low-level functionality of core Kubernetes and what app developers need at a higher level is a gulf of churning crap.
IMO... AWS has been eating everyone's lunch. K8s was a play by Google to sell GKE / GCP and it worked.
Problem being, not everyone has a team of Google SREs to manage it, so when k8s blows up, the skillset to figure out the issues simply doesn't exist in the team managing it.
Yes, there's certainly a landgrab occurring in this space, and Google is certainly pouring a massive heaping of G-Juice on k8s in order to get in front of the user. In software, controlling the user interface is controlling everything.
Ultimately, it's to their credit, because in some small measure, it counteracts an Amazon-controlled dystopia headed by Galactic Emperor Bezos, so perhaps we should be glad for it. It's just sad that so many systems have to be the collateral damage.
Windows is a 2nd-class citizen where automation is concerned, IMO.
I believe Azure is feeding off of enterprises who are 15 years behind everyone else.
Windows as a platform cannot possibly keep up with the fast moving changes required by containers and that ecosystem. The only way Azure is competing is offering Linux based systems.
> ..people at non-FAANG companies are operations people who just think of it as a new name for their old job. Among this set, there's no recognition of the massive needless complexity that permeates all the way through Kubernetes.
Kubernetes was born out of ideas from Google's own internal systems. I think this discounts the complexity of operational platforms at the large companies. Companies where they build operational APIs straight into their services. It may not be the right tool for every job, but being so overly dismissive of complex operations platforms comes off as extremely pretentious.
The implicit message is that there are about 5 companies in the world who need Kubernetes' (and friends) inherent complexity. If your company is not named "Amazon", "Google", "Facebook", or "Netflix", it's probably not one of them.
I am not convinced any of them need k8s or even containers.
I am convinced many people use these technologies because it is simple and abstracts away the required skill sets to do complex operational tasks but only covers about 85% of those, leaving operation teams with limited skill sets exposed when that other 15% happens.
Same here. I've been doing devops for 10 years (it used to be called systems engineering) and have implemented containers once and Kubernetes zero times. I successfully made companies avoid Kubernetes several times. Most of the time, tool-focused people think that the next thing is a miracle that makes every problem go away, which is certainly not true. If you define the problems first and try to find solutions, then you get much better results. Choosing a tool and trying to find problems it solves is idiotic. Just like using Kubernetes instead of something much simpler.
You are implying that we need container migration. I'll give you one example of how you do not need that. Let's set up an autoscaling group with a node count of 10. On node error, you terminate the node in trouble. Autoscaling detects that you have less capacity than required and creates a new instance. You are good. This is a much simpler solution to the problem than trying to think about where your container fits, avoiding cascading problems, and trying to track the state of your cluster.
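In AWS terms the whole arrangement is a couple of CLI calls; a minimal sketch, where every name and ID below is hypothetical:

```
# "Replace the node, not the container." All names/IDs are made up.
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name web \
  --launch-template LaunchTemplateName=web-template \
  --min-size 10 --max-size 10 --desired-capacity 10 \
  --health-check-type ELB --health-check-grace-period 300 \
  --vpc-zone-identifier "subnet-aaaa,subnet-bbbb"

# A node is in trouble: terminate it without lowering desired capacity,
# and the group spawns a replacement on its own.
aws autoscaling terminate-instance-in-auto-scaling-group \
  --instance-id i-0123456789abcdef0 \
  --no-should-decrement-desired-capacity
```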
And to be clear, "container migration" is nothing more advanced than this "take the failed system out of service and spawn it somewhere else" routine that we've been doing since time immemorial.
Things like VMware's vMotion or KVM's live migration can detect trouble and transparently move the running VM -- network connections, memory state, and all -- onto a different underlying host, leaving end users none the wiser.
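With KVM/libvirt, for instance, the whole operation is roughly one command (the guest name and destination host are placeholders):

```
# Live-migrate a running KVM guest to another host via libvirt.
# "web01" and the destination hostname are placeholders.
virsh migrate --live --verbose web01 qemu+ssh://host2.example.com/system
virsh list --all   # confirm the domain is no longer running on this host
```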
That's the kind of thing noobs throw out the window when they come in fired up about how k8s has health checks, because they've never heard of health checks before.
People are trying to recreate container-level mobility with things like CRIU, which is very cool, very cutting-edge stuff, but right now it's so unpredictable even the k8s crowd is afraid to touch it. ;)
Once these things eventually pan out and the functionality gets tangled up into some contrived k8s-ified YAML abomination (aka "blessed by the Great Googly Appendage"), and the k8s fanboys start hailing it as the savior of all computerdom, I'm sure I'll be making posts like "Uh, hi, this is awesome tech, but live migration anyone? We've had this for a long time..."
CRIU has the potential to open many doors, and while true live container migration may be one of them, it's not really the exciting one, primarily because we've already had fatter-but-functionally-similar support for server workloads through VM migration/snapshots. CRIU's promise is in all the things we can do with process-level "save states" that aren't just "moving 300MB of RAM at a time instead of 8G".
I’m sure the team I’m in (10 total, of which none of us are “devops”, and only half are devs) could probably do our stuff without containers and Kubernetes. But honestly: I can make a commit to a repo, and in a couple of minutes the latest version of my code and dependencies is up and running, with things like SSL certs, DNS records and domain names sorted. All logging is sorted (I don’t have to worry about logging libs and connections to logging services; I just write to the console and it’s all collected and indexed in ElasticSearch). If my service goes down, Kubernetes brings it right back up in a matter of moments. I don’t have to provision anything, underlying instances are automatically replaced if they fail, and resource requirements are automatically handled, as is horizontal scaling.
So, so, so much is done (near) out of the box that we don’t need to worry about, it’s amazing. I’m sure there’s a pre-Kubernetes way of doing it, but I don’t imagine it’s nearly as low friction.
When that 15% occurrence happens, community forums exist; failing that, we pay for great support from our cloud provider; and absolutely worst comes to worst, we can just blow away the whole thing and redeploy, because our whole environment is set up so that you can go from blank slate to apps running in production again in just under an hour.
Ideally your automation and everything else is done so that you have a reproducible environment. If that is the case and you recreate the environment when you run into that 15% problem, you should be recreating the problem along with it.
Yes, there were methods of doing the same thing pre-containers; smoother and simpler, IMO.
Simplest was to ship everything in a tarball with embedded deps, so everything was self-contained. Production pointed at the latest release via a symlink. A new release was rolled out by simply changing the link. Rollback was as simple as changing the link back to the previous version.
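Something along these lines, with illustrative paths, versions and unit name:

```
# Release: unpack the self-contained tarball into a versioned directory,
# then swap the "current" symlink.
mkdir -p /srv/app/releases/1.4.2
tar -xzf app-1.4.2.tar.gz -C /srv/app/releases/1.4.2
ln -sfn /srv/app/releases/1.4.2 /srv/app/current
systemctl restart app

# Rollback: point the link back at the previous release and restart.
ln -sfn /srv/app/releases/1.4.1 /srv/app/current
systemctl restart app
```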
> So, so, so much is done (near) out of the box that we don’t need to worry about, it’s amazing. I’m sure there’s a pre-Kubernetes way of doing it, but I don’t imagine it’s nearly as low friction.
My suggestion would be that you not rely on your imagination, but actually look into some non-k8s options. NixOps is a good place to start. You'll be pleasantly surprised how much simpler it can be, and may even get an improved emotional connection with the greybeards out of it.
I'm not sure I communicated that clearly -- I meant that people at NOT Google just upgraded the job title. There's no question that some extremely qualified people work on the Kubernetes platform itself.
The issue is that they've thrust it upon the rest of us lowly mortals as a general toolkit, when it's only potentially-appropriate for companies at Google scale, in terms of both traffic and talent.
I don't think Kubernetes is necessarily overly complex. I use it for a side project, and knowing the config primitives, it's been pretty easy to set up a web app with postgres, redis and a load balancer on a single node hosted on DigitalOcean. Since I'm already familiar with k8s from work, I find the maintenance of the mini cluster to be pretty hands-off.
> - Straightforward upgrades of the environment to incorporate security patches
How do you ensure that your exposed containers have all the relevant security patches, especially if the images aren't uniform? Are you using something like Watchtower to monitor for vulnerable packages and automatically rebuild and redeploy the containers when e.g. the underlying Ubuntu or Alpine image uses a vulnerable library?
Lots of people have the mistaken impression that containerization inherently protects their application from running vulnerable code. If you already have this built in to your pipeline, I'll be impressed!
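For reference, the un-magical version of that pipeline is just a rebuild against a freshly pulled base image plus a redeploy; a rough sketch with placeholder registry, image and deployment names:

```
# Rebuild against the latest patched base (--pull refuses the stale local
# copy of the FROM image), push, and roll the deployment over to it.
docker build --pull -t registry.example.com/myapp:20190301 .
docker push registry.example.com/myapp:20190301
kubectl set image deployment/myapp myapp=registry.example.com/myapp:20190301
kubectl rollout status deployment/myapp
```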
Most large orgs I'm looking at pretty much just have guys logging into tin, on a "secure" network, configuring things based on written instructions.
Is that better than k8s? Is it VMware? Is it Ansible, Puppet or Chef?
How are we dealing with node failures? Are you paging and waking people up or just running everything in super HA? How are you making good use of your compute, VMs?
The funny thing being that all of these things are applicable in a k8s-backed environment anyway. k8s is a container orchestrator and scheduler. You still need a platform (VMware, AWS, GCloud) and configuration management separately. So, I guess the answer is "yes"?
>How are we dealing with node failures
First, in an ordinary system, "node failures" are rare. This thing that k8s encourages, where the first response to a problem is "just kill the pod and hope it comes back OK on its own", is distasteful in many ways, and only further cements why MS-centric ops people are gung-ho. In my day, when there's a problem with a system, you divert traffic, take a snapshot, analyze it to figure out why it failed, and then prevent future failures. Even in k8s-world, failures cost something, and it's bad practice to allow them to occur regularly in production (note that making "just reboot^H^H^H^Hschedule it" a routine is separate from being able to tolerate failures transparently).
Second, you use the same systems that k8s ingresses use internally: load balancers with health checks, like haproxy, nginx, Envoy, etc.
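And "divert traffic" isn't exotic either; with haproxy, for example, draining a sick backend is one command against the runtime API (the socket path and backend/server names below are assumptions for illustration):

```
# Drain an unhealthy backend, inspect the backend state, then re-enable it.
echo "set server app_backend/web03 state drain" | socat stdio /var/run/haproxy.sock
echo "show servers state app_backend"           | socat stdio /var/run/haproxy.sock
echo "set server app_backend/web03 state ready" | socat stdio /var/run/haproxy.sock
```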
> How are you making good use of your compute, VMs?
First, not dedicating a bunch of it to redundant and unnecessary distributed systems mechanisms. k8s is not cheap or easy to run.
Second, there are robust toolkits and monitoring options for managing all kinds of workloads, including controlling scale. Depending on your platform and use case, there are a plethora of options. These problems are not new.
The crux of the issue is that k8s is misunderstood as a generalized toolkit for every common-but-not-trivial operational task, because that's how Google promotes it to maximize adoption and thus wedge themselves into a position of more control v. AWS. k8s is a complex multi-node container orchestrator and scheduler. If you didn't need one of those before you knew what k8s was, you probably don't need one now.
You didn't really answer the question and honestly I don't think you entirely understand the proposition, but it doesn't really matter...
Honestly, I disagree with you. Putting k8s on tin is a fairly nice system that gives you much of the power of IaaS providers, which you then potentially don't need anymore. It makes it easy to create logical environments, allows packing of compute, and gives you standard APIs to develop against, allowing you to move providers, share platforms, etc.
Now I do agree, you shouldn't generally have services that restart constantly, but some of the biggest headaches I've come across are sysadmins who assumed their systems would just work, and now they don't and it's a clusterfuck.
With regards to cheap, it doesn't cost any extra to get k8s as a managed service from Google. It's fairly cheap to use on DO, and if you need to run it on tin, it's a damn sight easier than bootstrapping the underlying IaaS, in my opinion, so frankly I don't think your rant represents reality.
There's a lot of hackers in the industry who are new to running systems. I'd rather they use a framework than build bespoke, but that's me. And thus I ask again: what's better?
I mean, if you're asking me to prescribe a one-size-fits-all infrastructure solution for any problem, that's not a thing. The answer is "it depends". What's better is to understand the mechanisms at work and use what you need to get a robust, reliable solution.
If I were to prescribe a general infrastructure platform for your generic web app, the high level would be (a rough sketch follows after the list):
a) use something for infrastructure-as-code to define the resources needed, so they can be rebuilt/spawned in new environments on demand;
b) use something for config management and image construction, ideally something like NixOS that has reproducibility and dependency encapsulation as a fundamental part of the OS;
c) use a production-grade load balancer like haproxy or Envoy and configure it properly for the infrastructure that it sits on top of;
d) use a production-grade web server to serve requests, which is probably either Apache or nginx.
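To sketch that end to end, assuming Terraform for (a), NixOS for (b), and haproxy and nginx for (c) and (d); every path and name here is illustrative, not a prescription:

```
# (a) infrastructure-as-code: declare and create the resources
terraform init && terraform plan -out=tfplan && terraform apply tfplan

# (b) config management / image construction: a reproducible host config
nixos-rebuild switch -I nixos-config=./hosts/web.nix

# (c) load balancer: validate the haproxy config, then reload it
haproxy -c -f /etc/haproxy/haproxy.cfg && systemctl reload haproxy

# (d) web server: same drill for nginx
nginx -t && systemctl reload nginx
```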
Note that just saying "boom k8s" doesn't really resolve these concerns. k8s schedules, routes, and executes arbitrary containers, which have usually been built by developers who don't know what they're doing, and which are likely to contain tons of random outdated junk as a result of cascading FROMs in the Dockerfile and stray files in the developer and/or CI pipeline's build context (which are frequently sensitive btw). Container images should be just like any other image running in prod: constructed by competent admins and controlled with an appropriate configuration and patch management mechanism. The fact that applying those changes executes an entirely new image instead of just restarting a service is really an implementation detail, it's not a solution to anything.
Your chosen k8s ingress is probably using nginx, haproxy, or Envoy under the covers, and you have to tune that either way, whether you call that a livenessProbe or a health check. There's nothing fundamentally better about the k8s YAML for this than the actual configuration file (and indeed, in the early days of k8s, I was hacking through the alpha-stage ingress plugin and editing haproxy configs by hand anyway), though I suppose if you have a use case where half of your applications need one load balancer and the other half need another, you may get some benefit here. That's pretty rare, though.
If you ever do hit appreciable load, you may find that your container's `npm start` server has some inadequacies for which "omg pay $CLOUDPROVIDER more money and spin more pods" may not be an adequate solution.
And lastly, k8s doesn't enter the picture until you have something to run it on, so it doesn't do anything for your infrastructure-as-code problem. Even if the answer to that is just "rent a Kube cluster from Google", something still has to actually go rent that, and it should be scripted.
So I mean, yeah, if you like Kubernetes because Helm charts are convenient, by all means go ahead and run Kubernetes. Just don't pretend like that magically solves the problems involved in a robust architecture, especially if you ever expect to tune or profile your systems.
You don't really have a solution to: Persistent Storage, Pod Security Policies, Admission Controllers (such as checking your deps have been security checked), Resource Utilization or Creating environments (you mentioned it but never specified it).
You don't actually understand how k8s does healthProbes.
Most of the problems you're leveling at k8s still exist with your technology choices, or any really. An incompetent admin is incompetent whatever the tool.
Containers lacking reproducible builds are a bit of a problem, but I doubt it's a problem solved by many/any tools in its entirety.
For the record, I've actually used k8s to run critical national infrastructure for major government services with considerable load, that categorically should not go down.
I've also used it as a shared platform for CI/CD in major orgs with lots of separate delivery teams. It's literally bliss compared to trying to create that with OpenStack, VMware or an IaaS provider.
You should really try the tool in anger before you dismiss it, because most of your points aren't legit pointing at what k8s gives you.
Also, as someone who is a competent admin, and who has also been a software developer, having a sysadmin who can barely code trying to debug someone else's code at 4am is also a broken model. Having developers involved in support and the building of things is so they can actually suffer the pain of code that isn't reliable, doesn't log or emit events, doesn't tell you it's started, doesn't have health checks, isn't easy to deploy, etc, etc.
> You don't really have a solution to: Persistent Storage, Pod Security Policies, Admission Controllers (such as checking your deps have been security checked), Resource Utilization or Creating environments (you mentioned it but never specified it).
Again, I'm not going to draft out a complete architecture for some hypothetical application. The point is that k8s leaves the fundamental questions unsolved, just like non-k8s. This is noteworthy because many people are apparently operating under the belief that k8s will "reduce complexity" by handling these core infrastructure problems transparently and intrinsically, and it doesn't.
> Most of the problems you're leveling at k8s still exist with your technology choices, or any really.
Right, that's exactly the point. Most people say k8s is worthwhile because they think it's a magic bullet that has self-contained and automatic remediations for core infrastructure concerns. And it has to be that belief, because there's no way the complexity is worthwhile if it doesn't. If you're left with the same basic set of problems regardless, what are you getting by putting k8s in there after all?
k8s is probably not the wrong choice 100% of the time. It's just that most people who are jumping on that bandwagon are simply jumping on a bandwagon, and flailing and screaming to everyone else that the bandwagon is a magical land of fairy tales and unicorns. If there is legitimate, real, well-vetted engineering rationale for selecting k8s for a particular use case, it should be selected, of course. Vague statements alluding to its mystic "literally bliss"-inducing powers do not comprise this, despite popular opinion to the contrary.
> You should really try the tool in anger before you dismiss it, because most of your points aren't legit pointing at what k8s gives you.
I have used it, repeatedly. Granted that the last cluster I ran in prod was a couple of years ago, and I'm sure things have improved in that time. But it doesn't change the fundamental equation of the cost/benefit tradeoff.
> having a sysadmin who can barely code trying to debug someone else's code at 4am is also a broken model.
Agreed, obviously. What does that have to do with Kubernetes?
You didn’t explain how node failures can be handled. In k8s a new pod is started (with a storage migration story).
Without that (and obviously people have been doing HA before k8s) you have to write your own fragile, buggy scripts or use software that's more VM-centric than container-centric. Or, and this was the question, what do you do?
"Node failures" have been handled much more elegantly than k8s's simple "kill and rebuild" approach via live migration for at least a dozen years. [0] Wikipedia lists 15 hypervisors that support it. [1]
If you're interested, Red Hat has a very thorough guide on how to achieve this with free and open-source software. While you have to run VMs rather than containers, it's much more robust. [2] There are proprietary options too.
And then, there's also the good old fallback method that k8s uses: just divert traffic to healthy nodes and fire up a replacement. There are many frameworks for that simple model, and there's no reason to pretend that it's done exclusively with handcrafted "buggy shell scripts", nor to pretend that "buggy shell scripts" are inherently worse than "incorrect YAML configs that confused k8s and killed everything in our cluster" (see OP for a compendium of such incidents).
I would really like to hear what your proposed alternative is for managing software. The world is digitising at a staggering pace and we have to deal with ever-increasing density and complexity in software deployments. How would this look in your ideal world?
In my ideal world, it'd look a lot like SmartOS + NixOS, but that's an ideal. There is a massive middle ground between k8s and some hypothetical ideal, and k8s is completely on the "horrifying monstrosity that you shouldn't touch with a ten-foot pole unless you really have no other options" side of things.
Most server-grade operating systems include facilities that are robust, mature, compact, performant, and reasonably well-integrated, and for the things that aren't part of the OS, there is a long and glorious lineage of applications that can lay claim to those same virtues. Kubernetes makes use of many of them to do its work.
Those of us who've configured a router, load balancer, or application server independently are just perplexed when someone acts like k8s is the only way to handle these very common concerns. We're left asking "Yeah, but... why all this, when I could've just configured [nginx/Apache/haproxy/iptables/fstab]?"
The naive admin will say "because then you just have to configure Kubernetes!", but unfortunately, stacking more moving parts on top of a complex system typically hurts more than it helps. You'll still need to learn the underlying systems to understand or troubleshoot what your cluster is doing -- but then, I think part of what Google et al are going for here is that instead of that, you'll just rent a newer and bigger cluster. And I guess there's no better way to ensure that happens than to keep the skillset out of the hoi polloi's hands.
I assume that many "devops" people are coming from non-*nix backgrounds, and therefore take k8s as a default because they're new to the space and it's a hot ticket that Google has lavishly sworn will make you blessed with G-Glory. But systems have been running high-traffic production workloads for a very long time. Load balancing, failover, and host colocation had all been occurring at most shops for +/- 20 years before k8s was released in 2014. These aren't new problems.
Alan Kay has called compsci "half a field" because we're just continually ignoring everything that's been done before and reinventing the wheel rather than studying and iterating upon our legacy and history. If anything is the embodiment of that, it's Kubernetes.
I appreciate the lengthy reply and I sympathise with your concern regarding the cargo-culting of technology trends. It's not the first time it's happened, nor will it be the last. I disagree in general with your view, and I think that the past few years have brought tremendous innovation in the space: software-defined networking, storage, compute; all available to the "hoi polloi" as open source, high-quality projects that are interoperable with each other and ready to deploy at the push of a button. And you know why this has happened? It's because Kubernetes, with all its complexity, has become the de facto standard in workload orchestration, and has brought all the large players to the same table, scrambling to compete on creating the best tools for the ecosystem. I am not naive enough to think Google didn't strategise on this outcome, but the result is a net positive for infrastructure management.
I also sense a very machine-centric view in your message, and there is a certain beauty in well-designed systems like SmartOS and NixOS. But you are missing the point. The container orchestration ecosystem, for all its faulty underpinnings (Linux, Docker, Kubernetes), is moving to an application-centric view that allows the application layer to more intelligently interact with and manipulate the infrastructure it is running on. Taking into consideration the Cambrian explosion in software and the exponential usage of this software (tell me an industry that is not digitising?), this transition is not surprising at all.
Regarding the complexity of Kubernetes, some of it is unavoidable, especially considering everything it does and the move from machine-centric management to cluster-centric management. There are other tools that are operationally simpler (Docker Swarm, Nomad), but they definitely don't offer the same set of features out of the box. By the time you customise Nomad or Swarm to feature parity with Kubernetes, you will end up with a similar-looking system, perhaps a bit better suited to your use case. The good part is that once an abstraction becomes a standard, the layers underneath can be simplified and improved. Just take a look at excellent projects like k3s, Cilium and Linuxkit and you will see that the operational complexity can be reduced while the platform interface is maintained.
To summarise, I am very happy that Kubernetes is becoming a standard, and I am convinced that 30-50 years from now we will look at it as we look now at the modernisation of the supply chain which was triggered by the creation of the shipping container.
First, thanks for your response, it's been a good discussion.
I agree that there's a great deal more technology being made publicly available and that said technology is beneficial in the public eye. I don't necessarily agree that that technology is needed by most people, despite the Cambrian explosion you reference.
On machine centrism, if anything, containers have increased the importance of systems concerns, because now every application ships an entire userland into deployment with its codebase. As long as containers come wrapped in their own little distribution, each code deployment now needs to be aware of its own OS-level concerns. This is anything but application centric. If you want application-centric deployment, just deploy the application! True application-centric deployments are something like CGI.
A better understanding of the hardware+software stack and a return to fundamentals, instead of piling frameworks sky-high on top of each other, making it very hard or even impossible to debug which layer is buggy when things go sideways.
> It is designed to solve a very real and very hard problem.
Completely agree. The thing is that the problem it's designed to solve has been badly misrepresented.
If your problem is "At Google, we have fleets of thousands of machines that need to cooperate to run the world's busiest web properties, and we need to allow our teams of thousands of world-class computer scientists and luminaries to deploy any payload to the system on-demand and have it run on the network", then something like Kubernetes might be a reasonable amount of complexity to introduce.
If your problem is "I need to expose my node.js app to the internet and serve our 500 customers", it's really, really not.
I actually agree with most of this. However, one must recognize that Google is not solely responsible for Kubernetes at this point. Under the banner of the CNCF, K8s is worked on by literally hundreds of companies and thousands of developers whose interests are not necessarily all aligned with the "Rent-Your-Server-From-Me" lock-in doctrine espoused by the large cloud providers. Many companies have an interest in making K8s easier to use from an operational perspective, which gives me hope for the platform's future.
I gotta say, I feel like I was having a much better time of it when Continuous Delivery was a logical outgrowth of Continuous Integration.
You just keep building farther and farther out from compilation through testing and packaging and installation and deployment. Everyone on the Dev team can keep up with the changes and understand where their code goes and when. That's what I thought DevOps was about.
Now it's a separate division that treats me like a mushroom: kept in the dark and fed bullshit.
In my experience I haven't had that much difficulty finding which problems k8s solves and which it doesn't, for me. I eased into k8s using Rancher and grokked stuff as I went along; a little Ansible and you can go a very long way if you want to use your own hardware.
Not everything has to be run like dedicated servers in a closet named after Lion King characters that you SSH into every day "just to check", and keeping software up to date is the same it's ever been: pay attention and apply the patch, lol.
NoSQL products like DynamoDB, Cassandra, Redis, MongoDB etc have all been growing massively over the last few years as companies move to an increasingly multi-technology mix.
So I also agree that Docker and Kubernetes are going to significantly grow as well.
There seems to be a lot of people who are building large scale, well behaved applications on NoSQL systems that are finding out "you know.. for a lot of these applications you don't need 'MVCC/ACID or nothing.'"
Like I said, there's a place for those systems. Redis in particular is close to my heart. But many devs hear "NoSQL" and think "Great, I love JSON! Who cares about SQL, SQL is for fogies!"
I have little doubt that the staff at places like Google and Facebook are qualified to weigh the concerns and choose the system that works best for their use case. The concern is about the rest of us, and the rapid proliferation of experimental systems developed for internal projects at MegaCos whose demands make any off-the-shelf system untenable.
The fact that Google or Facebook release a project essentially as an academic exercise doesn't mean that it should come anywhere near mainstream use. People see "Facebook" or "Google" on the front matter and want to be like the cool kids on the block, hardly realizing the quagmire they're plunging into.
MongoDB gained prominence by claiming massively improved write performance and encouraged devs to switch off of real databases for their system of record, and then, woops, it turns out that "write performance" is sans-`fsync`. That's sort of fixed now, many years after the controversy, but they still built the entire NoSQL story on the back of that deceit.
As an autodidact who detests formal schooling, I hate to say it, but sooner or later, there is going to need to be some type of vetting/licensing at play here, both to stop the mindless proliferation of "grab anything Googly" and to hold those who engage in predatory behavior accountable.
Yes because SQL databases never fall over and are the pinnacle of modern software engineering.
And there is nothing worse than people who think that when other developers are making technical decisions it is because of hype and not some reasoned judgement.
> NoSQL products like DynamoDB, Cassandra, Redis, MongoDB
As a DBA ...
Cassandra is despised. And I would say Postgres is growing a lot faster than MongoDB. Redis caching and zsets power the Internet now.
NoSQL, in general, has a much more limited set of mgmt. tools and has fallen out of favor recently for SoT.
> SQL databases never fall over and are the pinnacle of modern software engineering
Well, RDBMSs almost never fall over when you use SSD and indexes. And they really are the pinnacle of software engineering. Sounds like you're not a DBA.
Most of the time it is because it is cool and CV-building, with zero business value.
Oh, and one gets to write Medium blog posts, organise conferences, sell books and consulting services, which might be the actual business value after all.
Note that "fall over" in the context of a mature RDBMS means that the database got slow because it wasn't properly tuned or administered, whereas in the context of NoSQL datastores, it may well mean "didn't write any data for the last five minutes, try to rebuild transactions from customer support tickets and your card processor's logs I guess????" or "woops, I just lost about $20 million in bitcoin". I think the potential tradeoff there is clear.
There is something worse: it's resume-driven development, which is where I see the majority of NoSQL databases used (on a single server with a few GBs of data).
Not every application will need MVCC/ACID eventually. As another commenter pointed out, you can always assert that RDBMS-like features will always be needed and that NoSQL is considered "shiny, that's why people choose it", but some people actually know what they're getting into and have made a reasonable, informed decision on what kind of persistence their application will need and how to scale it.