But by the time they catch up to this technology I have a feeling it will become less important to administer k8s directly.
Across the board, it's partially fad and hype, but it's also because these workflows really are better. On the k8s side, a containerized architecture gives developers peace of mind in their local environments, makes you more provider-agnostic, and has the potential to scale more easily.
A sysadmin who can provide all of those is valuable, and deciding if that value is worth the learning curve is an individual decision. But to whatever extent the "resume-driven-development" aspect of container orchestration hype is distorting the market, it does seem to be in service of a better (if more complicated) toolset.
...unless you use cloud-specific services, which almost everybody should be doing because the management load of self-hosted alternatives is very high. Correctly care-and-feeding a message queue or a log store is so much more work than creating an SQS queue or a Kinesis shard, especially when you are not running a company centered around managing a message queue or a log store. Which you probably aren't. Localstack exists as an AWS stub for local development, but it sure isn't great and the impedance mismatch is pretty high--just trying to write Pulumi code that can target either it or AWS proper is an exercise in frustration.
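The usual way to paper over that mismatch is to pick the endpoint at runtime and keep everything else identical. A minimal sketch of the idea (the `LOCALSTACK_URL` variable name is my own convention, not anything standard):

```python
import os

def sqs_client_kwargs():
    """Build boto3 client kwargs that target either LocalStack or real AWS.

    Assumes LOCALSTACK_URL (e.g. http://localhost:4566) is set only in
    local development; otherwise boto3's normal defaults apply.
    """
    kwargs = {"region_name": os.environ.get("AWS_REGION", "us-east-1")}
    endpoint = os.environ.get("LOCALSTACK_URL")
    if endpoint:
        kwargs["endpoint_url"] = endpoint
        # LocalStack accepts any credentials, but boto3 still requires some.
        kwargs["aws_access_key_id"] = "test"
        kwargs["aws_secret_access_key"] = "test"
    return kwargs

# Usage (requires boto3 installed):
# sqs = boto3.client("sqs", **sqs_client_kwargs())
```

It works, but it's exactly the kind of glue you end up re-deriving for every service client, which is where the frustration comes from.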
Don't get me wrong: containers as a better way to ship "statically linked" applications are great. But none of the container-orchestration stuff seems out of the "hype" stage from anything I've seen as a consultant or as a line developer; I've never-not-once found a reason to reach for k8s on any major cloud provider. I guess it's OpenStack 2.0 for on-premises deployments, and in that light there's definitely some value, but your cloud provider is doing a lot of work that you're already effectively paying for by being in that ecosystem--for most users (who can't hire somebody like me) it's worth using it.
2) It's not just events in your application you have to worry about, but environmental stuff as well. SNS is the lingua franca of AWS and you've gotta put it somewhere, with reliability guarantees. That increases the amount of orchestration you have to do outside the safe-and-happy confines of k8s itself, and that makes my job way harder than having only-one-road-to-plow.
At my current company we use ECS/Fargate, but it's only really so we don't have to manage servers; each ECS service has one task and is independently managed. And it interacts directly with the baseline cloud provider so there's no weird jank at the edges of our compute vs. our datastores. The containers are a convenience here--I'd use EC2 just as readily. But I see no real value to using k8s instead and haven't at any of my consulting gigs to date, either. The k8s environment is both incomplete and at best kinda-incompatible with the things that actually matter, like your data, and TBH it casts real doubt on the reason to use it.
The value with k8s is that you can abstract the underlying hardware by using only k8s objects.
So your product might be fine, but everything that exists to feed it--be it monitoring and alerting or business analytics or security or even hands-on operational control surfaces--is going to be worse for it and create marginal drag every step of the way (either in terms of labor or money).
More and more (and your comments are reinforcing this position, TBH) I get the "it's a slick five-minute demo" argument out of k8s, much the same way that Docker brainwormed people long before it was good or useful. I have a cluster at home, and it's fine, but it's for play. I can't afford systemic drag from immature and possibly-wrong tech choices where I've gotta make money, though.
The problem with AWS is the price and the latency. Everything is fine until you get the bill. But by then you are completely locked into that architecture. The same applies to all the public cloud providers, not just AWS.
So as I see it, for new applications that are based on microservices and may some day want to become self-managed, K8s is the only good long-term bet.
I've moved nontrivial systems from AWS to GCP and in the reverse direction. It's a job done in Terraform/Pulumi and while a competently written application or set of services needs some work to do the move it's work you are likely to do once at most. (Emphasis on at most. The overwhelming, overwhelming majority of companies are way better off going multi-region in a single cloud provider than going multi-provider. Multi-cloud is for the rich and the silly.) The underlying cloud provider doesn't matter very much when you can pay somebody like me to come in for a month or two and help you make your application an actual citizen of the platform you want to use and leverage its efficiencies properly. The "good long term bet" is abstract interfaces in your code--the hype-driven cycle of the new-and-shiny means there's a nontrivial risk that k8s is no longer sexy enough to blog about by the time that "oh, we now need to move to a new provider!" even matters to you.
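The "abstract interfaces" point can be as small as owning the queue interface in your own code and keeping the provider-specific class behind it. A hypothetical sketch (the class names are illustrative, not from any real library):

```python
from abc import ABC, abstractmethod
from typing import Optional

class MessageQueue(ABC):
    """The only queue interface application code is allowed to see."""

    @abstractmethod
    def send(self, body: str) -> None: ...

    @abstractmethod
    def receive(self) -> Optional[str]: ...

class InMemoryQueue(MessageQueue):
    """Local/test implementation. When you migrate providers, you swap
    in an SQS- or Pub/Sub-backed class with the same interface and the
    application code never notices."""

    def __init__(self) -> None:
        self._items: list[str] = []

    def send(self, body: str) -> None:
        self._items.append(body)

    def receive(self) -> Optional[str]:
        return self._items.pop(0) if self._items else None
```

The migration work then lives in one adapter per provider instead of being smeared across the codebase.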
(I am contractually obligated not to step in the microservices pothole. It's a good way to waste development time and not ship, though.)
AWS VMs tend to cost 3 to 4 times more than equivalent VMs offered by smaller, no-frills service providers such as Hetzner and Scaleway.
You may argue that you don't mind paying a hefty premium for a service that has plenty of competing offers, or that some high-level service provided by AWS is nice to have, but it's hard to argue in favour of needlessly spending 3 to 4 times as much to provide the same service, or get somewhere around 30% of the cloud computing resources for the same price tag.
You know what you don't get with a Hetzner--which is fine for what it is; this is impugning their customers, not them as a provider? You don't get the for-free metrics and alerting of an AWS. You don't get their sizing. You don't get their effectively-infinite inventory (EDIT: I was just reminded of a former employer who managed to make AWS tap out of a particular instance type in a region--how do you think Hetzner's gonna fare if you hit that kind of scale? Are you sure you built an effectively multi-region, multi-master system on your magic k8s cluster?). You don't get software-driven infrastructure at every layer of your stack--sure, you get kubectl, now scale your cluster without waiting for hours for a human to rack a machine. Also without crossing your fingers every time.
(Scaleway is better about APIs and responsiveness, to their credit. But we're not out of the woods yet: you are now the barrier to reliability, and you're going to have to invent or duct-tape half of a cloud provider on your own to get something done. I too enjoy not doing things that help ship products, but not when I need money...)
You know what you do get, though? You get risk. You're worse at data integrity and backup than AWS is and Hetzner is worse at DR options than AWS; these are not exactly controversial statements, so I trust that you'll just go with it. But wait, there's more--you also get inflexibility. You get the risk of k8s itself--which is, let's be real, kind of a tire fire if you aren't Google, I've never seen a k8s shop of nontrivial size where something wasn't constantly alerting or broken. And you also get inefficiency. You get inefficiency of spend as you buy more hardware than you need and then pay the marginal cost of managing and alerting all of it, because you need overhead space--which is usually deadweight space--for when your hardware fails. (Because your hardware will fail, and you need a way to fail over.) And you get inefficiency of people; you get to waste the time of an expensive resource (or apply an incompetent one, which might be your thing but it sure isn't mine) reinventing the wheel over and over again. Sure, you can run a RabbitMQ instead of a SQS. I hope you know how to deal with its nonsense (I barely do, and I've run it at scale) and I hope you have an ironclad backup plan for it. And even when you do, you get the people-time inefficiency of building and maintaining and spending time and attention on things you get as part of your "hardware" spend with a full-featured cloud provider.
From all indications from over a hundred clients at sizes from five employees to fifty thousand, you probably are no exception to any of the above. And maybe I'm wrong about that, maybe you're the one exception who does. But I will always bet the other way, even when it's me and I know that I'm no slouch at this, and that's why I use AWS or GCP. (It doesn't hurt that, ultimately, it ends up being cheaper, both because I need fewer infrastructure/devops people to manage it and because I build systems that don't require one to run enormous servers--when you don't box yourself into the corner of "I need to run this big fat daemon all day long" you can spend remarkably little on AWS or GCP in the first place!)
Maybe doing things the hard, slow, scary, and risky way is your thing, and you're willing to pay in time, effort, and risk rather than in money. But, one, it really isn't much money if you're running something successful, and two, it is downright disingenuous to imply that the spend is like-for-like.
So "more expensive" is on a relative basis, taking into account the egress traffic from the cloud. My workload involves training machine learning models and serving them. Training in the cloud is 20x more expensive, and you cannot use commodity GPUs (banned by Nvidia).
Moreover, my platform offers AutoML, which basically trades data scientist time for compute time. However, since I need to train hundreds or thousands of models, this can become very expensive, very fast.
Since I am not sure what my customers' load will be like, I want to give them the option to move between clouds or on-prem.
As for the long-term prospects of Kubernetes: to me it is clear that Kubernetes has found PMF and is now in the first 1/4 or 1/3 of the S-curve. IBM is all in, VMware is all in, Azure is all in, GCP is all in.
Also, what are the alternatives? Do you agree that containers are better than JAR files or manual deployment options? Do you agree that having CI/CD pipelines with fully automatic unit/functional tests is better than throwing code over the wall to some QA department?
So if containers are a better packaging/deployment architecture, they need to be managed, monitored, etc.
As for microservices: in my case, the data science part is written in Python, while the control plane (Kubernetes operators) is written in Go. So microservices are actually a very neat solution for a polyglot product.
Containers are fine deployment tools, sure. And Fargate is a better management and monitoring framework than whatever you'll roll together, while also not making you pay overhead for hot-spare and failover inventory. (Their prices used to be really wacky; if you dig through HN you will find a post from the day Fargate was announced where I looked at the numbers and had some Questions. I do not anymore.) I get that you have GPU stuff to deal with, and maybe that doesn't work for you--but EC2 instances probably do, can be sized better, and can be dynamically scaled without breaking your back. Things you don't pay for are cheaper than things you do, you know?
On-premises--yeah, sure, use k8s, it's literally the only place where it makes any sense to do so. Of course, you could write good code with clear interfaces and separation of concerns so you can figure out an on-prem story after you have a product and after you have a business, but again, I too like rabbit-holing on stuff that doesn't help me ship. ¯\_(ツ)_/¯ Good luck, I guess.
Can you expand on this? What do you mean using commodity GPUs for training on the cloud is banned by Nvidia?
It's very curious. I'm reminded of about 6 years back when I sent an email to Randall Stephenson (AT&T CEO) asking why they feel they can charge extra for tethering after selling "unlimited data." I included a parable, naturally, describing a baker who sells you bread, with a license agreement stating you can only eat the bread by itself. If you want to make yourself a sandwich you need to pay an extra 16% fee when you buy the bread to be able to use it for any derivative product, such as sandwiches or bread pudding, or croutons, or anything beyond raw bread.
This is nonsense. I own the things I buy and even if the law tells me otherwise, I will never accept these insane premises.
We're looking at moving away from containerized local development (using Docker for Mac on high-end MacBook Pros) because the performance is abysmal. The Linux VM eats 100% CPU at idle (i.e., with no containers running at all), and our handful of pretty simple containers bring the machines to their knees. It's unclear if we're hitting some pathological case (maybe volumes of 100s of MBs?), but there are many, many other issues filed against Docker for Mac about its high CPU usage. We've tried all of the usual things--blowing away the VM, tinkering with different resource allocations for the VM, etc.--but we haven't had much luck (nor have the other users who have filed issues against Docker for Mac, judging by the comments).
Of course not.
Different orgs will approach this problem differently. At my company, someone on our infra/devops team made pretty good abstractions eight years ago, and now we're using those same abstractions on top of K8s. So our devs really don't need to know anything about the substrate. But when I go to conferences and meetups, I talk to folks who basically expose the entirety of K8s to devs and expect them to self-serve.
I see this too and it worries me. You should rarely have to do this. CRDs tend to get overused. I'd encourage everyone to think twice before adding a custom controller. Vanilla k8s (with cloud-provider integration) is powerful, useful, and complex as it is.
I think because I don't fully understand k8s, yet, helm seems too magical. It's abstracting away what I already don't understand and doesn't feel right.
For now, I've been focusing on writing my manifests and applying them with kubectl to get a feel for what's going on under the hood. Maybe a time will come where it's a tool that I'll need to reach for, though.
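That approach maps to something like this: a minimal hand-written Deployment manifest (names and image are arbitrary examples), applied directly so you can watch what the cluster does with it:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello            # arbitrary example name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: web
        image: nginx:1.25   # any image you want to run
        ports:
        - containerPort: 80
```

Then `kubectl apply -f deployment.yaml` and `kubectl get pods -w` to see the ReplicaSet and Pods materialize--which is exactly the under-the-hood visibility that templating tools like Helm abstract away.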
If the course is "101" they should, at the very least, mention this upfront and direct the viewer to suitable resources so the viewer can level set.
Please do not abuse radio buttons so that they behave as radio buttons (only one selection allowed in a group) in one place and as checkboxes (multiple selections allowed in a group) in another place right below. Your assessment page needs better UX on this.
It would also be good to state that the courses are all free in the content area visible in the viewport when the page loads. There's a "Sign up for free" button way below, but seeing that there's no "Pricing" link anywhere, I wasn't sure if all the courses are free or if courses in the future would become paid. Clarity on this would also help.
Feels like DevOps championed the lowest common denominator. I have to deal with containers flipping out all the time because the distros running inside these containers were never intended to be what we demand them to be with orchestration.
Except for 2 containers, I'm only using FROM distroless, FROM alpine, and FROM scratch. All using multi-stage builds and pretty much none of them running as root.
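For anyone unfamiliar with the pattern, a multi-stage distroless build looks roughly like this (a sketch assuming a Go service; the paths and module layout are illustrative):

```dockerfile
# Build stage: full toolchain, never shipped to production.
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
# Static binary so it can run on a base image with no libc.
RUN CGO_ENABLED=0 go build -o /app ./cmd/server

# Runtime stage: no shell, no package manager, non-root by default.
FROM gcr.io/distroless/static-debian12:nonroot
COPY --from=build /app /app
USER nonroot
ENTRYPOINT ["/app"]
```

The runtime image carries only the binary, which is most of why these containers don't "flip out" like a full distro under orchestration.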
Disclaimer: I worked for Instruqt and created some of these tutorials.
I also don't understand the "product-agnostic" part. Isn't k8s a product?
This looks like it's simply free k8s related course content.
Product-agnostic meaning that, despite kubernetes.academy being provided by VMware, it's not covering Project Pacific or any other specific Kubernetes offering or integration by them or anyone else.
In this regard, I just want to mention Dominik Tornow, who has done excellent work trying to formalise the Kubernetes internals.
In my opinion, if you are honestly looking to educate, and it is not just marketing, then look into delivering education that is better than anyone else's.
As someone who just started working at a company that is converting from VMware to Kubernetes but who has zero experience with it and only a little with Docker, this seems like a good start. Of course I have about 20 other tabs in my browser for learning.
I would prefer that this content be made available in a written format, as videos are not great for communicating this non-visual information (however, most people are more comfortable watching than they are reading). A transcript of these videos with the diagrams included would be perfect.
Firstly--because I mentioned this before--the audio has some serious quirks that need to be fixed. The videos by John Harris in particular are very quiet and require me to set my audio to a volume that would be deafening on most other video or audio files. I actually had to adjust my audio hardware's volume because maxing it out in the OS was still too quiet.
Secondly, there seems to be a disconnect between what the page claims Kubernetes Academy is and what content is covered in the course. None of the videos are meant to be used as code-along projects and none of them offer any instruction or direction to the viewer. The videos cover the theory of Kubernetes and nothing else, even the "operations" section has videos that follow a train-of-thought approach to some tips and tricks but assume that you already know how to use the tools being described. This is particularly strange in the case of the "Introduction to Kubectl" video because the content is in no way an introduction and the instructor begins by saying that he assumes you are already comfortable with kubectl. The description of the video even seems to contradict the title's claim that it is introductory.
Thirdly, the videos are not instructional. I did find value in this course and I understand Kubernetes better now than I did before but they are actually lectures and are not meant to get a viewer up and running with their own project. When I saw this yesterday I dedicated my secondary monitor to the page and set my terminal to full-screen on the primary monitor and was ready to dig into Kubernetes. Even the "operations" video on kubectl is impossible to follow without constant pausing and back-tracking. The instructor types out a command while explaining it and immediately executes and moves on to the next command. Again, it seems like written content being forced into a video format.
Although I did learn from the course I wish that I had just started with Kubernetes's own tutorial because I still had nothing Kubernetes-related accomplished to show for my time. I would recommend this content to someone who does not know what Kubernetes is and would like to know enough to decide whether or not to learn it.