It's upstream Kubernetes plus a PaaS framework built on top of it.
It takes care of role-based access control, has a secured Docker registry (so applications can't pull each other's images), integrates with Jenkins, and can automatically build, push and deploy your applications.
Our team started using it and it's great. The documentation is top-notch (it's probably the best docs I've ever seen in an open source project).
I've seen many teams re-invent the wheel over and over again, when OpenShift already does most of what they need.
Happy to answer questions!
https://www.openshift.org/ (`oc cluster up` and a running Docker daemon are all it takes for a first test)
`minishift` seems to be similar to `minikube`. On my Mac, running `minikube start` successfully starts a minikube instance in VirtualBox.
Unfortunately `minishift start` seems to sit there and fail after 120 seconds (with xhyve and vbox) because "the docker-machine didn't report an IP address", and it seems that the docker-machine is not even created.
This is a shame, I'd very much like to try out openshift. If anyone else has the same issue here please let me know!
Edit: Someone replied but deleted their comment. I should have run `oc cluster up --create-machine`!
That being said, I do run a few 2-3 node OpenShift clusters, and the additional complexity was well worth it.
However, I'm still confused by how the tools in the ecosystem interact with the capabilities of various cloud providers. We're using DigitalOcean and Docker, and I want to get our infra to a point where I can easily spin up a brand new staging environment (say, staging-2): an isolated Postgres node using an attached volume that also runs Redis, a proxy node, a couple of nodes for application servers, and a couple of nodes for background jobs, all with private networking, secure, with non-root access, plus a quick task to seed the DBs.
I just can't seem to find guides that put the whole thing together, only pieces, and I'm lost researching an overwhelming number of tools, from Ansible to Terraform to Kubernetes to Helm, etc.
It should go a bit like this:
- Use Terraform to provision VMs, networking resources and storage from DigitalOcean. Basically, write the scripts that make your whole infrastructure available with a single command.
- Then use Ansible for anything you might want installed on those machines - Kubernetes, security packages, SSH keys for your team.
- Use Kubernetes to then deploy your application on top of your secure and replicable infrastructure.
Each of the steps above should be roughly one shell command. If you are disciplined enough to always provision machines, install packages and deploy your app via config files, this should be very much achievable.
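For concreteness, here's a minimal sketch of what the Ansible step (the second bullet) might look like; the package list, user name and key path are hypothetical placeholders:

```yaml
# playbook.yml -- run with: ansible-playbook -i inventory playbook.yml
- hosts: all
  become: yes
  tasks:
    - name: Install Docker and baseline security packages
      apt:
        name: "{{ item }}"
        state: present
        update_cache: yes
      with_items:
        - docker.io
        - fail2ban
        - unattended-upgrades

    - name: Add a team member's SSH key for the deploy user
      authorized_key:
        user: deploy
        key: "{{ lookup('file', 'keys/alice.pub') }}"
```

Kept in version control next to the Terraform scripts, each step stays a single repeatable command.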
How does autoscaling work if terraform/ansible are involved before kubernetes can take over?
Firstly, Kubernetes is able to create multiple instances of your app as load increases and scale them out across all the nodes.
Secondly, the nodes your Kubernetes cluster is running on can also autoscale. With Terraform you can, for example, set up an AWS Auto Scaling group to automatically increase the size of your cluster as load increases.
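The first kind of scaling is a HorizontalPodAutoscaler in Kubernetes terms; a minimal sketch (the Deployment name and thresholds are made up):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: my-app          # the Deployment whose replica count gets adjusted
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80   # add pods when average CPU exceeds this
```

The second kind (adding nodes) is handled outside Kubernetes, e.g. by the Auto Scaling group that Terraform created.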
With a dockerfile is there any point to using Ansible/Puppet/Chef any more? I have used those tools to keep 'how machines are installed' in code in the past, but with dockerfiles these days I don't get the point anymore. Seems like the future will be having a few dockerfiles (staging, prod, dev, db, etc) then just finding a place to run them (digital ocean, docker swarm, aws container service - or your macbook).
Bash is perfectly fine if the machine can be installed with less than 100 lines of code and immutable infrastructure is being used (so no need for idempotent operations). Just make sure to add ShellCheck to the CI pipeline.
Nope, Ansible just needs SSH access.
It is always tough to predict the future; something < 100 lines can easily grow over time. It is nice to start with a tool like Chef/Ansible/Puppet which can more easily grow as the project grows vs Bash.
1. Build an AMI with Packer that contains Ubuntu + Docker + aws-cli.
2. Build and publish docker images for all the services that will run on the instances. (one image per service, not per environment like you suggested)
3. Configure the EC2 instance or AutoScaling Groups with userdata. The userdata usually just contains a `docker run ...` instruction.
I used to have Chef, run test kitchen, bake AMIs, cycle all the machines. Now I have a single AMI and do most of the testing on the docker side and then some on staging. In both cases I am using terraform to manage the overall infrastructure (and userdata scripts).
The only issue is how to manage secrets, since userdata is readable by anyone with the ec2:DescribeInstanceAttribute permission. The best solution I've found is to upload the secrets to S3 (encrypted with KMS) and use an IAM role on the machine to retrieve them. Or use HashiCorp Vault.
The point of using Docker containers is that you get to deploy to production your dev environment as is.
Cook a base VM image that has the docker engine and anything else you need. Do this once and spin up VMs you need after that. This is what images are for, and they're far easier, faster, more reliable and more secure than dealing with all this other junk.
If anything changes, update the startup script, then run this "make new base image" script and you're all set. Update VMs as needed. Really doesn't get much more complicated than that and using tools like ansible just to setup your VMs so you can run docker containers later is unnecessary.
Unnecessary = not needed to function. Replacing functionality is just preference, which is fine, but specifically not necessary.
My point is that's a loaded question. You could just as easily ask why do you need bash in every VM when you can just have Ansible in a single machine.
I'm curious why you recommend this? I'd argue the opposite. With the declarative nature of Terraform, you can know exactly which part of your infrastructure is going to change before you actually apply it. With Ansible, this information is a bit more opaque. You basically just run the playbook and pray it goes as planned and things haven't drifted too much.
But for a single-command fire-and-forget commercial solution, take a look at http://gravitational.com/telekube
It allows you to, basically, define your own "Kubernetes flavor" with pre-installed components like CI/CD pipeline, databases, redis, etc, call it "developer cluster foo" and have X replicas of it running on any infrastructure. Telekube runs a single process per box and manages dozens of parts (kube services, docker, systemd, terraform) automatically for you, does atomic in-place version upgrades of the entire stack, etc. It is basically a "kubernetes appliance" and that's how many of our customers use it: ship it in a box into on-premise environments.
Yes, yes. It was a shameless plug since I work at Gravitational. We believe that DevOps folks have better things to do than spend any time provisioning Kubernetes and keeping it (and the dozen components it now consists of) alive.
Do you document what security options you use with Telekube anywhere?
One of the things I've noticed across the wide variety of k8s distros and installers is that it's not always easy to establish what they've chosen to do in that regard, and since some of the options like RBAC are newer and carry some operational overhead, not every distro enables them.
Teleport maps your corporate groups/roles to both SSH and Kubernetes.
There are so many ways to split out application environments in k8s (namespaces being one, generated prefixes in a helm template being another) that you should really try to use your cluster as a single compute resource. Maybe segregate for prod if you're not confident in your ability to lock it down, but everything else should just be creating/scaling pods, then adding nodes to the cluster when it runs out of compute.
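As a sketch of the namespace approach (names and quota numbers are hypothetical), you can carve out an environment and cap its share of the cluster in a few lines:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: staging-2
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: staging-2-quota
  namespace: staging-2
spec:
  hard:
    requests.cpu: "4"       # total CPU this environment may request
    requests.memory: 8Gi    # total memory this environment may request
```

Everything deployed with `--namespace=staging-2` then draws from the shared pool without needing its own nodes.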
There are a lot of cool things about Kubernetes (e.g. I had an automated SSL cert fetcher for Let's Encrypt that applied to any SSL ingress I added), but it still does some weird things sometimes (like constantly trying to schedule pods on instances without enough spare memory, and then killing other pods because of that; fairly certain that's not supposed to happen).
I think I'll revisit it next year and hope that it's a bit easier to get into. I'm especially hopeful about using it with Spinnaker and some sort of CI, though I couldn't find anything lighter weight than Jenkins that was straightforward to get set up on it.
I assume you're explicitly declaring the memory requirements for the application?
EDIT: I ask because usually I read examples just throwing deployments at a cluster, ends up the more explicit you are with the app requirements (CPU/mem, etc.) the better a job the scheduler can do. I realize this sounds like advanced common sense.
That's good to know for next time!
Setting requests and limits to the same values puts it in "Guaranteed", whereas setting requests and limits to different values gives you "Burstable". The default QoS class is "BestEffort", which makes the pod expendable. (This isn't the best-documented part of Kubernetes.) (Edit: You can see the QoS class of a pod with "kubectl describe pod <name> | grep QoS".)
QoS classes are important for scheduling. When Kubernetes needs to evict pods, it will pick burstable and best-effort pods first.
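A minimal sketch of how the QoS class falls out of the resources block (pod and image names are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo
spec:
  containers:
  - name: app
    image: myorg/app:1.0
    resources:
      requests:        # requests == limits => QoS class "Guaranteed"
        cpu: 500m
        memory: 256Mi
      limits:
        cpu: 500m
        memory: 256Mi
```

Raise the limits above the requests and the pod becomes "Burstable"; drop the resources block entirely and it's "BestEffort", first in line for eviction.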
Note that setting memory limits sometimes doesn't work well with apps that use GC. I have had particularly disappointing experiences with Go apps. Go's GC is a bit odd in that it reserves a huge chunk of virtual memory (which is why Go apps, if you look at "vsz" only, often seem to take more RAM than expected) and is not very aggressive about releasing it. I have a Go app that uses only a few megs of actual memory, but because of the GC it will allocate half a gig and get OOM-killed by the kernel before the GC is able to collect.
Spinnaker + K8s is an incredible combo. We actually ran it inside Kubernetes, and it was able to upgrade itself. Pretty awesome stuff.
What I am looking for is:
- Ability to easily deploy containers
- Ability to route by url
- Ability to swap out containers without affecting others
Does Kubernetes solve this problem for me? Is there a better option?
I would also recommend you check out `now` for deploying containers; it adds the ability to route by path.
Also, can Kubernetes be used to deploy to a single VPS instance (e.g. a $10 DigitalOcean droplet), or is it only for a multi-node system like GKE?
The benefit I see in it is orchestration of large clusters of containers. The cost set against that benefit is that there's quite a lot of complexity involved.
For single node solutions, if you want to use containers at all, I'd just use docker.
We especially like the interface they built that ties everything together.
- A complete application, e.g. wordpress + mysql containers, can be represented as pods
- Pods can be "scheduled", e.g. auto-scaled, across "nodes" (i.e. servers), with load balancing etc.
Is that right?
As opposed to the old-school method of having to describe all that in words to an ops team (or not and just expecting them to figure it out): "This is a Java app. It needs JDK 1.6 and at least 1GB of RAM. It expects to write logs at /var/log/foo.log. It needs a MySQL database and Redis and Elasticsearch at configured hostnames and ports. We need to run at least 6 instances horizontally scaled behind a load balancer."
Wow, brittle. Can't Java use environment variables? And since when does development tell ops what hostnames a machine should have, not to mention how many instances to run?
Sorry, don't mean to jump all over you, but your words apparently jumped all over me. :)
Not to be cheeky, but since "devops"!
Docker, Kubernetes, etc... all of these things live at the point of intersection between developers and operations. Some of the config artifacts (e.g. Dockerfiles) typically live in source control, which is usually the domain of developers. But the values with which they're populated are typically set by operations.
It is indeed a dance that the two groups have to work out among themselves, and different organizations will handle it differently. Ultimately, I think you find that it really isn't FUNDAMENTALLY different from the dance that they already do in the old school. There will always have to be a handshake, where dynamic values are stored someplace and code knows to point to that place. How you handle that handshake is ultimately a human process thing.
The point is it all comes as part of the framework though, so hopefully building for kubernetes will be consistent.
Also, whilst you could put wordpress in a pod (php, mysql) it's probably more typical to put nginx + php-fpm inside containers next to each other (a pod) rather than mysql. You'd probably run mysql as another service, at least that's how I typically do it.
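A sketch of that shape (names and image tags are placeholders): the two web-tier containers share a pod, and thus localhost, while MySQL lives elsewhere behind its own Service:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: wordpress-web
spec:
  containers:
  - name: nginx
    image: nginx:1.13
    ports:
    - containerPort: 80
  - name: php-fpm            # nginx reaches this on localhost:9000
    image: php:7.1-fpm
  # MySQL is deliberately NOT in this pod: it runs as a separate
  # Deployment/Service so the web tier can scale independently.
```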
That, and loads of very verbose YAML to describe things.
That's probably not a good idea, since:
1. A pod runs on a single machine.
2. A pod is the basic scaling unit; you probably don't want to scale both WordPress and MySQL together.
> its basically supervisord but for containers, across a pool of servers?
That's just the ReplicationController part of Kubernetes.
- a WebApp pod for WordPress, which can be scheduled/load balanced across several nodes
- a MySQL Master pod, and possibly MySQL Slave pods, which would be scheduled/load balanced
I REALLY want to get into all this but this seems like SO MUCH MORE WORK than just running them traditionally...
Here's an analogy: let's say you've written a command line tool and you want to install it. The simple answer is to just copy the binary to /usr/bin. Does it need config files? Library dependencies? That's fine, copy them to the appropriate directories. But then you, as the sysadmin, have the responsibility of remembering exactly what the state of the system is. If you need to make changes later on, you need to spend time figuring out exactly what needs to be updated, and then manually apply those changes without making mistakes.
The cleaner solution is to use a package manager like rpm or apt-get. You have to spend some time up front to package your code, including explicitly defining things like package dependencies and clean-up scripts. But once that's done, you can let the package manager do the work of figuring out how to get the system into the desired overall state. You're doing more work in the short term, but it pays dividends in terms of maintainability.
Neat! Think I'm getting the appeal now
Kubernetes makes more sense if you're running big clusters of systems.
It is... Unless you have hundreds of servers - and then it pays off handsomely.
Still, I'll keep on plugging away at kubernetes; I actually quite like the design of the system, very tidy!
If you don't manage Kubernetes yourself this is actually not that much more work. It is sort of a shift in thinking, and you have to retrain, but once you do that it should actually be faster.
I am doing this right now and the paradigm shift is just really big; that's why it seems to take so much longer.
One of the problems with books about things like Kubernetes which move quickly is that they can be well out of date before they hit final release.
I think some setups will suit Swarm better, whilst others will benefit from the richness of what k8s provides.
A deployment is a supervisor for pods and replica sets
So what's the difference between these supervisors?
For example, the standard way to do a rolling update using RCs is to create a new RC that is responsible for the updated pods, and then gradually increase/decrease the replica counts to reach the desired state. This is conceptually simple, but the downside is that the Kubernetes API doesn't know anything about the relationship between the old and new controllers. All the responsibility is pushed to the client.
With Deployments, both the old and new configurations are first-class objects. So you can view the history of previous configurations, and you can query the progress of a rolling update. You also get better-defined behavior when multiple clients are trying to concurrently make changes, because Kubernetes can arbitrate between them at a higher level.
> A ReplicaSet ensures that a specified number of pod “replicas” are running at any given time. However, a Deployment is a higher-level concept that manages ReplicaSets and provides declarative updates to pods along with a lot of other useful features. Therefore, we recommend using Deployments instead of directly using ReplicaSets, unless you require custom update orchestration or don’t require updates at all.
A ReplicaSet (RCs are old and deprecated) manages a set of Pods. It has a template and a target replica count.
A Deployment manages ReplicaSets across versions/upgrades. When you change a Deployment, it creates a new ReplicaSet and does a rolling upgrade from the old version to the new one by tweaking the replica counts in the new and old ReplicaSets.
I don't think the differences are well-documented, though.
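To make that concrete, here's a minimal sketch of a Deployment (names and images are made up); changing the image tag and re-applying is what triggers the new ReplicaSet and the rolling upgrade:

```yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one pod down during the roll
      maxSurge: 1         # at most one extra pod during the roll
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app
        image: myorg/app:2.0   # bump this tag and `kubectl apply` to upgrade
```

`kubectl rollout history deployment/my-app` and `kubectl rollout undo` then work against the recorded ReplicaSets.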
(Helm aims to be a package manager for kubernetes, and its packages are called Charts)
Because Kubernetes is great about applying docker labels, we get the k8s container name, Pod name, Pod namespace, UID, and then the normal docker metadata provided by logspout-redis-logstash. Then use the normal, and essential IMHO, multi-line codec on the logstash side of things: https://www.elastic.co/guide/en/logstash/5.4/plugins-codecs-...
We have a few `if [docker][image] =~ "foo"` statements to special-case the different multiline split patterns, but all in all it just works.
The next level up the hierarchy of needs is to also grab the systemd journal content from the Node itself and send that along, too, but it has not yet become a priority. Not to mention the likely substantial increase in store size once the much, much chattier kubelet traffic arrives in ES.
Anyone have resources they'd like to share for me to pick up the basic requirements to get a docker stack up?
Up to now I've used docker compose
Second, please don't run a database in it, omg. See point one. If there were just one application that is not reasonable to run in a container, it would be a database.
Third, yes, this seems to be the new way things happen in the software world. I'm worried about what tools we'll need to develop to overcome the cacophony of noise and half-solutions that is incumbent in the GitHub era.
Yes engineers chase shiny new things and argue about it defensively. That's real, but it's also perennial—all us curmudgeons did the same thing in our day. This is cycle-of-life stuff. Such discussions quickly converge to the same old thing unless we're careful to avoid that, and the burden of avoiding it falls more on the senior members.
We detached this subthread from https://news.ycombinator.com/item?id=14453544 and marked it off-topic.
This is also good supporting evidence for the amount of SV heterodoxy that HN-the-institution is willing to stomach (that topic was broached last time we crossed paths).
Are perennial, cycle-of-life topics not expected to regularly appear on discussion platforms? I find it grating to see "k8s this, k8s that" all the time. Should I register a complaint as well, so that these threads can be killed next time they come up, on the theory that "many are bored" by the topic, based on the existence of my complaint?
Specifically, which things are not allowed to be mentioned in the future? Was it:
point 1, "Please consider whether you need k8s before you use it";
point 2, "Please don't run a database inside Kubernetes";
or point 3, "Yes, the software world is hardly manageable due to the unintended effects exposed by GitHub and friends"?
Which of these is too tedious and boring to express on HN?
I definitely would understand if only the flamebait comment were off-topic'd, as I know that was close to the edge. I'm more worried by the fact that counterpoints that are apparently "boring" or "tedious" justify censorship.
1. Containers are a better application-level abstraction than VMs. Why, as someone deploying an app, should I care about machine details? I want my application to be deployed as n replicas with x resources, and to be discoverable.
2. Containers make your application self documenting on its dependencies. Yes it is some work to get your stuff containerized but we haven't come across anything yet that was all that difficult.
3. Running a container on its own on bare Docker does not give you discovery, multiple nodes, restart policies or many other things k8s does.
4. Much of the roll-your-own devops going on is solving problems that nearly everyone has; why not do it in a proven environment that does nearly all of the work for you?
I don't think that putting apps in containers, shipping them with a resource request/service descriptor, and then exposing them via service discovery is going anywhere at all. Whether it is k8s and Docker or something else, once you are using it, it is clear that this is really the abstraction deployments should be at. I'd say the odds are that you do need k8s-style machinery for any enterprise/business deployment of services. I'd be surprised if, within a few years, people are doing it any other way.
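The "discoverable" part of point 1 is a few lines of config once you're there; a sketch, assuming a Deployment labeled `app: my-app` already exists:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app        # other pods can now reach it as http://my-app
spec:
  selector:
    app: my-app       # routes to every pod carrying this label
  ports:
  - port: 80
    targetPort: 8080  # the port the app container actually listens on
```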
2. This is called "static linking". I also sympathize with static linking, but again this comes with implications that are not well understood by many of its advocates. In 4 years, I expect that some fad will emerge to repopularize dynamic linking under some new name, like "securable containerized orchestrated hypothalmized applicationators". Everyone will say "Dude, I can't imagine nobody using securable containerized orchestrated hypothalmized applicatorness in 4 years". Rinse and repeat.
3. Yes, Docker sucks. We agree.
4. Sure, but you don't need to roll your own. Much of trendy devops is a reinvention of old things that we've already had and tried over the last several decades. For the most part, these come with the same old problems, and there are reasons they fell out of favor.
There are definitely times when the tradeoffs will fall such that Kubernetes makes sense. Those times are much rarer than most people who are running k8s want to admit. That's because they're subconsciously running k8s as a status symbol, and lying to themselves about its necessity and value in their use case.
I'm not saying the way we were doing things before k8s came around was exactly right. For example, going back to shared hosting is probably not a bad idea for the vast majority of things, and stuff like logging into your FTP server and copying your files has been gussied up as "cloud object storage" (and costs a lot more money now). So it's really just a general oscillation of trends, I guess.
Specifically, Kubernetes provides the tools to "pin down" the database to the extent you want/need: Stateful sets, node affinities, QoS classes, nodepools and so on. You want to avoid OOMkilling the database, you want to dedicate the appropriate amount of resources to it, you may want to dedicate a specific type of hardware to it, etc.
It's not for beginners, but nor is it anything like the disaster you portray it as.
However, in the real world of Docker and Kubernetes, it abstracts away components that actually matter to the function and administration of the database. Those can only be simulated by relying on experimental features like StatefulSets, or on flaky features like node affinities, which require each node to be registered with the correct labels and which, as far as I know, are treated as preferences, not demands (i.e., pods will be scheduled on other nodes if no node matching the affinity is available).
It is for beginners because non-beginners would realize that they're just taking a very complicated and winding route back to step one.
If you want to avoid all OOM kills triggered by exceeding the cgroup spec, avoid the complexity and potential risk involved in configuring sturdy backing volumes (and the questions around what happens when a pod gets rescheduled onto a node that doesn't have that volume), and ensure that the database is running reliably and stably on a specific type of hardware, the right solution is to go buy a server (or at the very least use a dedicated VM, not a container that will thrash it all over the place and expose the process to risks like a dockerd hang), install the database on it, and leave it alone. Don't try to shoehorn it into k8s and grab back the stuff that k8s exists to hide out of some weak abstract space.
The risks associated far outweigh the benefits of Kubernetes for an application like a database (and arguably many other types of applications). As far as I know, databases are supposed to be administered with a great deal of respect for their rock-solid stability. The very suggestion of running one on a two-year-old platform should make any business very scared, even if that platform was conventional. k8s is anything but, and it is still trying to figure out how to provide the basics. Running a production database in k8s is a shockingly distasteful affront to stability just in principle.
It sounds like the only reason people are able to give to justify database-in-k8s is "I want to be able to control it with `kubectl` like I can control everything else". My personal impulse would be to say that people who insist on using one administrative utility only should probably not be allowed to operate servers. A more realistic solution may be something like a kubectl proxy mode, where one can register external servers within the cluster namespace and use `kubectl exec <my-db-server> /bin/bash` to initiate an SSH connection.
If your database can deal with the failure of a single node and uses network storage, it's totally doable to run databases on Kubernetes.
Take a look at Patroni on how to run Postgres on IaaS or Kubernetes.
Database servers are also designed on the assumption that each node will be available on a consistent basis, at a consistent address, and that the data directory will be available to the server as soon as it starts up (except for rare node bootstrap operations).
This is the exact opposite of what Kubernetes/Docker seek to provide, and in fact, such things can only be provided within Kubernetes by extensive special configuration that leans heavily on experimental features.
There could not be a worse ideological match. Yes, you can try really hard to ram that square peg through that round hole, eventually pushing it through with significant damage to both the peg and the hole, but why would you?
Not true. You can have dedicated machines for your DB instances; check out (1) node affinity and (2) StatefulSets.
StatefulSets are designed to provide consistency (no "split brain"), stable network identity ("at a consistent address"), and stable storage identity ("and that the data directory will be available to the server as soon as it starts up"). Those features are beta, but it's a matter of time (hardening and the like) before we mark them stable.
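A sketch of both features together (names, labels and sizes are hypothetical); note that the `required...` affinity variant is a hard scheduling demand, not a preference:

```yaml
apiVersion: apps/v1beta1   # StatefulSets are beta here, as noted
kind: StatefulSet
metadata:
  name: pg
spec:
  serviceName: pg          # headless Service => stable DNS names pg-0, pg-1
  replicas: 2
  template:
    metadata:
      labels:
        app: pg
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: nodepool
                operator: In
                values: ["db"]   # only nodes labeled nodepool=db qualify
      containers:
      - name: postgres
        image: postgres:9.6
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:      # each replica gets its own persistent volume
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```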
> why would you?
For all the same reasons you would run any other workload on the same platform.
For k8s to work well for an application, that application has to be anonymous, without any masters or controllers. It has to be able to tolerate the sudden vaporization of any member. It has to be willing and able to share host hardware with any number of other services, including some which may be bad neighbors/CPU hogs, and it has to be content to be rescheduled in the event that a node goes down, that a pod is killed to ease the transition, etc. Databases fail on virtually every point of this.
If the database and Kubernetes start from opposite design paradigms, why does it make sense to run a database inside Kubernetes? I am still not getting it. `kubectl delete pod/my-postgres-pod` is not a smart thing; you don't want any of that scheduling magic that Kubernetes provides.
At most you would want Kubernetes to tell you that your database is failing health checks and execute the STONITH process to fail over to a replica, but you hardly need Kubernetes if all you care about is process monitoring.
So can you elaborate on the reasons? Databases are simply not designed for this kind of infrastructure and I see no value in trying to pretend they are. Isn't this the reason that CockroachDB exists, so that people can finally run their DBs in something like Kubernetes without endless headaches?
I think it would be very interesting to compare the stability and performance characteristics of a PgSQL k8s deployment, a PgSQL VM deployment, and a PgSQL bare-metal deployment. The only issue is that you can't expect those who hit failures to be open with their data.
I call those segments "virtual machines." It's rather harder to blog about an RDS instance (or even a self-hosted EC2 one), but you minimize moving parts and the way your bleeding-edge systems can screw you.
While you can achieve the same in various other ways, like good configuration management and virtual machines, I have found that the overall cost of creating, running and monitoring deployments on Kubernetes is smaller than on traditional VM-based solutions.
* "Nuh-uh! We love Kubernetes and run EVERYTHING in it, so there. And we DID NOT do that just so we could be cool by copying Google or just so I could pad my resume, I triple-double-swear".
* "But, StatefulSets! It doesn't matter that it's taken over two years of engineering effort to get to a place where fundamental functionality that you've gotten automatically for the last 3 decades exists, like individually-addressable servers or stable disk space. It doesn't matter that they're still beta. Run your database in it TODAY, or you are in serious danger of exhausting your GCool Points!"
* "Please take back your mean comments about modern-day software development culture. Node.js was my first language and I for one find left-pad a miraculous innovation, which never would've been possible without a culture of everyone scrambling to copy some crappy module up to npm and their github profile so that they can brag about how many stars they have. I am thoroughly enjoying this same ethos being carried over into sysadmin/devops."
So many developers insist that elegance is having nothing to take away--and then they pile all of this stuff that they don't fully understand on top of other stuff they don't understand at all just because "it's devops". It's maddening and strange.
(While we're at it: most of you don't need your million microservices, either, and you're probably not fault-tolerant across them anyway.)
I believe there are already substantial rumblings in this direction given the current state of cybersecurity, so this new age of recklessness only stands to hasten it.
I am really hoping that we can avoid such strenuous regulations, lest highly functional autodidacts and anti-authoritarians find themselves shut out of yet another high-paying, at-least-semi-merit-based career. The evisceration of the legal profession should stand as a warning to us. We do not want the actually good and useful people, who generally have low tolerance for the systemic brutality of rote learning, edged out by status seekers who have no real value to offer but can occupy space in a room for the requisite time to earn their certifications.
And on the note about microservices, you couldn't be more right. These architectures are such nightmares to debug that special transaction tracing tooling had to be invented to ease the process, and you have to either go through and update all 90 of your services to use these or go through a centralized service broker (now called a "microservice mesh" so that people don't catch on) to append tracing tags to each request.
Please get real here, people. There are times when this kind of stuff may be useful (if you're running Google or Twitter, for example). That does NOT apply to your small company of 100 employees. You are only making things harder on yourselves.
Most Puppet installations I've seen run by these small teams just degenerate into snowflake factories. Going to all that effort to run a huge Packer pipeline so your templates are the same across all your cloud providers - and then pulling whatever-the-current-random-version-of-a-bunch-of gems from rubygems.org onto hundreds of different machines. Arggggh!
The argument I keep hearing is that "if we don't use all the latest tools then how are we going to attract the best talent".
Every few years I dip my toes back into the devops space, just to see if it's improving. Nope. It's just getting worse.