Kubernetes Academy, a free product-agnostic education platform (kubernetes.academy)
285 points by frostmatthew 54 days ago | 66 comments



The hype around k8s is unreal... But should everyone even learn k8s? A lot of the core features of k8s and container orchestration are getting abstracted away at a rapid pace, with more things being built on top of k8s. I see this at my current company, where we have a ton of ops people who have only ever used VMware... With cloud migration and container-focused workflows now being the standard, they fear being left behind. So I see a lot of them learning to code, trying to learn k8s, trying to become more devops-oriented with automation. It's quite a steep learning curve.

But by the time they catch up to this technology I have a feeling it will become less important to administer k8s directly.


That push towards growth is real across both sides of the developer-administrator spectrum (I know many experienced developers struggling through React/Vue, ETL pipeline tools, or more devops than they'd prefer).

Across the board, it's partially fad and hype, but it's also because these workflows really are better. On the k8s side, a containerized architecture gives developers peace of mind in their local environments, makes you more provider-agnostic, and has the potential to scale more easily.

A sysadmin who can provide all of those is valuable, and deciding if that value is worth the learning curve is an individual decision. But to whatever extent the "resume-driven-development" aspect of container orchestration hype is distorting the market, it does seem to be in service of a better (if more complicated) toolset.


> a containerized architecture gives developers peace of mind in their local environments

...unless you use cloud-specific services, which almost everybody should be doing because the management load of self-hosted alternatives is very high. Correctly care-and-feeding a message queue or a log store is so much more work than creating an SQS queue or a Kinesis shard, especially when you are not running a company centered around managing a message queue or a log store. Which you probably aren't. Localstack exists as an AWS stub for local development, but it sure isn't great and the impedance mismatch is pretty high--just trying to write Pulumi code that can target either it or AWS proper is an exercise in frustration.
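To be concrete about how thin that insulation is: the Localstack workaround is basically just swapping the endpoint your SDK talks to. A minimal sketch with boto3 (the endpoint URL and queue name are illustrative placeholders, and real code still has to cope with everything that differs behind that URL):

    import os
    import boto3

    # Point the SQS client at Localstack when a local endpoint is configured,
    # otherwise fall back to real AWS. Localstack's default edge port is 4566.
    endpoint = os.environ.get("SQS_ENDPOINT_URL")  # e.g. "http://localhost:4566"
    sqs = boto3.client("sqs", region_name="us-east-1", endpoint_url=endpoint)

    queue_url = sqs.create_queue(QueueName="example-queue")["QueueUrl"]
    sqs.send_message(QueueUrl=queue_url, MessageBody="hello")
    print(sqs.receive_message(QueueUrl=queue_url).get("Messages", []))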

Don't get me wrong: containers as a better way to ship "statically linked" applications are great. But none of the container-orchestration stuff seems out of the "hype" stage from anything I've seen as a consultant or as a line developer; I've never-not-once found a reason to reach for k8s on any major cloud provider. I guess it's OpenStack 2.0 for on-premises deployments, and in that light there's definitely some value, but your cloud provider is doing a lot of work that you're already effectively paying for by being in that ecosystem--for most users (who can't hire somebody like me) it's worth using it.


I think that you might be able to replace SQS-specific queues with Knative Eventing. This should provide some insulation from AWS-specific services.
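The insulation mostly comes from the delivery contract: Knative Eventing hands events to your service as CloudEvents over plain HTTP, so the consuming code doesn't know which broker or channel sits behind it. A rough sketch of the receiving side (Flask and the cloudevents SDK are just illustrative choices here, not anything Knative requires):

    from flask import Flask, request
    from cloudevents.http import from_http

    app = Flask(__name__)

    # A Knative Trigger would POST matching events from the Broker to this endpoint.
    # The handler depends only on the CloudEvents envelope, not on SQS, Kafka, or
    # whatever backing channel the cluster happens to use.
    @app.route("/", methods=["POST"])
    def receive_event():
        event = from_http(request.headers, request.get_data())
        print(f"type={event['type']} source={event['source']} data={event.data}")
        return "", 204

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8080)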


1) I don't use "work-in-progress" things in production (unless I own it and wrote it) and I have a hard time recommending others do so.

2) It's not just events in your application you have to worry about, but environmental stuff as well. SNS is the lingua franca of AWS and you've gotta put it somewhere, with reliability guarantees. That increases the amount of orchestration you have to do outside the safe-and-happy confines of k8s itself, and that makes my job way harder than having only-one-road-to-plow.

At my current company we use ECS/Fargate, but it's only really so we don't have to manage servers; each ECS service has one task and is independently managed. And it interacts directly with the baseline cloud provider so there's no weird jank at the edges of our compute vs. our datastores. The containers are a convenience here--I'd use EC2 just as readily. But I see no real value to using k8s instead and haven't at any of my consulting gigs to date, either. The k8s environment is both incomplete and at best kinda-incompatible with the things that actually matter, like your data, and TBH it casts real doubt on the reason to use it.


So I have no doubt that SQS is more mature than Knative. In my case I am developing an actual product IN k8s (with around 40 new CRDs), so my goal is to use as much of the platform as possible (including Istio and Knative) and avoid cloud-specific services.

The value with k8s is that you can abstract the underlying hardware by using only k8s objects.


"The underlying hardware" has side effects. You can't exactly ignore them. And your cloud provider has non-substitutable stuff. Just as an example from my day today, by hewing to k8s you lose stuff like Athena and Glue for data analytics and ETL. I'm sure somebody out there would like to charge me 10x what AWS does to do a worse job of it inside of k8s, but no. So instead you get a Frankenstein of Terraform/Pulumi/CloudFormation and (poor and poorly expressive) k8s configuration and you've geometrically increased the complexity of your system, your failure cases, and the challenge of solving a problem when you're under the gun.

So your product might be fine, but everything that exists to feed it--be it monitoring and alerting or business analytics or security or even hands-on operational control surfaces--is going to be worse for it and create marginal drag every step of the way (either in terms of labor or money).

More and more (and your comments are reinforcing this position, TBH) I get the "it's a slick five-minute demo" argument out of k8s, much the same way that Docker brainwormed people long before it was good or useful. I have a cluster at home, and it's fine, but it's for play. I can't afford systemic drag from immature and possibly-wrong tech choices where I've gotta make money, though.


Thanks for the feedback. I do not think that k8s is as bad a bet long term as you paint it to be. Of course it is less mature than AWS. But the point is 5 years from now.

The problem with AWS is the price and the latency. Everything is fine until you get the bill. But by then you are completely locked into that architecture. The same applies to all the public cloud providers, not just AWS.

So as I see it, for new applications which are based on microservices, and want some day to become self-managed, k8s is the only good long-term bet.


I gotta ask, because now you're firmly in my backyard--what's the biggest cloud spend you've ever been in charge of? I ask because "Cloud costs" keeps coming up as this bugbear reason to use k8s and it isn't a real concern for the 99th percentile of applications. An application that's expensive when running directly against a cloud provider's APIs will remain expensive when running in k8s, if not more so because of k8s's steadfast refusal to pay attention to the bin packing problem. The galaxy-brain thought on HN is that cloud providers are so much more expensive than OVH or Hetzner or whatever--it's literally meaningless. People cost a lot. Even an inefficient use of AWS doesn't cost very much. By the time you're at the point where your cloud spend exceeds one FTE, you should probably have forty and the wins it gives you should be self-evident or you've screwed up somewhere else (and that "somewhere else" is probably your business plan).

I've moved nontrivial systems from AWS to GCP and in the reverse direction. It's a job done in Terraform/Pulumi and while a competently written application or set of services needs some work to do the move it's work you are likely to do once at most. (Emphasis on at most. The overwhelming, overwhelming majority of companies are way better off going multi-region in a single cloud provider than going multi-provider. Multi-cloud is for the rich and the silly.) The underlying cloud provider doesn't matter very much when you can pay somebody like me to come in for a month or two and help you make your application an actual citizen of the platform you want to use and leverage its efficiencies properly. The "good long term bet" is abstract interfaces in your code--the hype-driven cycle of the new-and-shiny means there's a nontrivial risk that k8s is no longer sexy enough to blog about by the time that "oh, we now need to move to a new provider!" even matters to you.

(I am contractually obligated not to step in the microservices pothole. It's a good way to waste development time and not ship, though.)


> I ask because "Cloud costs" keeps coming up as this bugbear reason to use k8s and it isn't a real concern for the 99th percentile of applications.

AWS VMs tend to cost between 3 to 4 times more than equivalent VMs offered by smaller no frills service providers such as Hetzner and Scaleway.

You may argue that you don't mind paying a hefty premium for a service that has plenty of competing offers, or that some high-level service provided by AWS is nice to have, but it's hard to argue in favour of needlessly spending 3 to 4 times as much to provide the same service, or get somewhere around 30% of the cloud computing resources for the same price tag.


This post is exactly the sort of thing I see a lot. It betrays inexperience with systems when they go bad. And systems basically always go bad. (I made a very good living off of that!)

You know what you don't get with a Hetzner--which is fine for what it is; this is impugning their customers, not them as a provider? You don't get the for-free metrics and alerting of an AWS. You don't get their sizing. You don't get their effectively-infinite inventory (EDIT: I was just reminded of a former employer who managed to make AWS tap out of a particular instance type in a region--how do you think Hetzner's gonna fare if you hit that kind of scale? Are you sure you built an effectively multi-region, multi-master system on your magic k8s cluster?). You don't get software-driven infrastructure at every layer of your stack--sure, you get kubectl, now scale your cluster without waiting for hours for a human to rack a machine. Also without crossing your fingers every time.

(Scaleway is better about APIs and responsiveness, to their credit. But we're not out of the woods yet: you are now the barrier to reliability and you're going to have to invent or duct-tape half of a cloud provider on your own to get something done. I too enjoy not doing things that help ship products, but not when I need money...)

You know what you do get, though? You get risk. You're worse at data integrity and backup than AWS is and Hetzner is worse at DR options than AWS; these are not exactly controversial statements, so I trust that you'll just go with it. But wait, there's more--you also get inflexibility. You get the risk of k8s itself--which is, let's be real, kind of a tire fire if you aren't Google, I've never seen a k8s shop of nontrivial size where something wasn't constantly alerting or broken. And you also get inefficiency. You get inefficiency of spend as you buy more hardware than you need and then pay the marginal cost of managing and alerting all of it, because you need overhead space--which is usually deadweight space--for when your hardware fails. (Because your hardware will fail, and you need a way to fail over.) And you get inefficiency of people; you get to waste the time of an expensive resource (or apply an incompetent one, which might be your thing but it sure isn't mine) reinventing the wheel over and over again. Sure, you can run a RabbitMQ instead of a SQS. I hope you know how to deal with its nonsense (I barely do, and I've run it at scale) and I hope you have an ironclad backup plan for it. And even when you do, you get the people-time inefficiency of building and maintaining and spending time and attention on things you get as part of your "hardware" spend with a full-featured cloud provider.

From all indications from over a hundred clients at sizes from five employees to fifty thousand, you probably are no exception to any of the above. And maybe I'm wrong about that, maybe you're the one exception who does. But I will always bet the other way, even when it's me and I know that I'm no slouch at this, and that's why I use AWS or GCP. (It doesn't hurt that, ultimately, it ends up being cheaper, both because I need fewer infrastructure/devops people to manage it and because I build systems that don't require one to run enormous servers--when you don't box yourself into the corner of "I need to run this big fat daemon all day long" you can spend remarkably little on AWS or GCP in the first place!)

Maybe doing things the hard, slow, scary, and risky way is your thing, and you're willing to pay in time, effort, and risk rather than in money. But, one, it really isn't much money if you're running something successful, and two, it is downright disingenuous to imply that the spend is like-for-like.


Thanks again.

So "more expensive" is on a relative basis, taking into account the egress traffic from the cloud. My workload involves training machine learning models and serving them. Training on the cloud is 20X more expensive, and you cannot use commodity GPUs (banned by Nvidia).

Moreover, my platform offers AutoML, which basically trades data scientist time for compute time. However, since I need to train 100s or 1000s of models, this can become very expensive, very fast.

Since I am not sure what my customers' load will be like, I want to give them the option to move between clouds or on-prem.

As for the long-term prospects of Kubernetes: for me it is clear that Kubernetes has found PMF and is now in the first 1/4 or 1/3 of the S curve. IBM is all in, VMware is all in, Azure is all in, GCP is all in.

Also, what are the alternatives? Do you agree that containers are better than jar files or manual deployment options? Do you agree that having a CI/CD pipeline with fully automatic unit/func tests is better than throwing code over the wall to some QA department?

So if containers are a better packaging/deployment architecture, they need to be managed, monitored, etc.

As for microservices: in my case, the data science part is written in Python, while the control plane (Kubernetes operators) is written in Go. So microservices are actually a very neat solution to a polyglot product.


I used to work at IBM and I have first-hand experience with exactly how wonderful (that smell is not the dog, that is sarcasm) their Kubernetes implementation is; without mincing words, I would consider IBM's enthusiastic adoption of k8s to be a warning sign rather than a positive indicator. VMware, ditto, they're a trailing-edge company flailing for modern relevance. Azure is a cloud provider so persistently awful that they and IBM are the only ones I'd refuse a client on because it's not worth the frustration; not sure I'd bet on their thinking, either. And GCP--sure, the people who made k8s like k8s, that stands to reason.

Containers are fine deployment tools, sure. And Fargate is a better management and monitoring framework than whatever you'll roll together, while also not making you pay overhead for hot-spare and failover inventory. (Their prices used to be really wacky; if you dig through HN you will find a post from the day Fargate was announced where I looked at the numbers and had some Questions. I do not anymore.) I get that you have GPU stuff to deal with, and maybe that doesn't work for you--but EC2 instances probably do, can be sized better, and can be dynamically scaled without breaking your back. Things you don't pay for are cheaper than things you do, you know?

On-premises--yeah, sure, use k8s, it's literally the only place where it makes any sense to do so. Of course, you could write good code with clear interfaces and separation of concerns so you can figure out an on-prem story after you have a product and after you have a business, but again, I too like rabbit-holing on stuff that doesn't help me ship. ¯\_(ツ)_/¯ Good luck, I guess.


> Training on the cloud is 20X more expensive, and you cannot use commodity GPUs (banned by Nvidia).

Can you expand on this? What do you mean using commodity GPUs for training on the cloud is banned by Nvidia?



Can any person selling a product essentially make unilateral decisions with respect to usage of the product in question?

It's very curious. I'm reminded of about 6 years back when I sent an email to Randall Stephenson (AT&T CEO) asking why they feel they can charge extra for tethering after selling "unlimited data." I included a parable, naturally, describing a baker who sells you bread, with a license agreement stating you can only eat the bread by itself. If you want to make yourself a sandwich you need to pay an extra 16% fee when you buy the bread to be able to use it for any derivative product, such as sandwiches or bread pudding, or croutons, or anything beyond raw bread.

This is nonsense. I own the things I buy and even if the law tells me otherwise, I will never accept these insane premises.


> On the k8s side, a containerized architecture gives developers peace of mind in their local environments

We're looking at moving away from containerized local development (using Docker for Mac on high-end MacBook Pros) because the performance is abysmal. The Linux VM eats 100% CPU at idle (i.e., with no containers running at all) and our handful of pretty simple containers bring the machines to their knees. It's unclear if we're hitting some pathological case (maybe volumes of 100s of MBs?) but there are many, many other issues filed against Docker for Mac about its high CPU usage. We've tried all of the usual things--blowing away the VM, tinkering with different resource allocations for the VM, etc, but we haven't had much luck (nor have the other users who have filed issues against Docker for Mac, judging by the comments).


~everyone should learn it, though not everyone should use it. k8s allows new deployment patterns and app structures. A senior dev should know enough about it that, when the right opportunity comes, they can identify it.


> But should everyone even learn k8s?

Of course not.

Different orgs will approach this problem differently. At my company, someone on our infra/devops team made pretty good abstractions eight years ago, and now we're using those same abstractions on top of K8s. So our devs really don't need to know anything about the substrate. But when I go to conferences and meetups, I talk to folks who basically expose the entirety of K8s to devs and expect them to self-serve.


> A lot of the core features of k8s and container orchestration are getting abstracted away

I see this too and it worries me. You should rarely have to do this. CRDs tend to get overused. I'd encourage everyone to think twice before adding a custom controller. Vanilla k8s (with cloud-provider integration) is powerful, useful, and complex as it is.


Let me ask you a question if I may: I have a server app I’m running on k8s, it’s stateful and external users connect to it (over grpc). Users connect to a specific server which is identified by ID, this is important and can’t be easily rearchitected. I also expose an API to spin up servers for new IDs which users can then connect to. At present I’m doing this by having my “controller” API (which is just an app running in a pod which exposes a REST API then makes calls to the k8s API) create a new deployment+service per server and then add an ingress for that service. I’ve been considering taking the k8s functionality from my controller and putting it in a new CRD + controller. Is there a better way to do this? I feel like I’m fighting the system somewhat with my current approach and yet I don’t see a simple answer using vanilla k8s primitives. I feel like what I want is scalable statefulsets that automatically create a service+ingress for each pod created, but nothing seems to offer that. My use-case seems simple and common, so how is everyone else solving this problem?


You may want to look into operators. That pattern seems to address your requirements nicely.
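To make that concrete: an operator just watches a custom resource and reconciles child objects from it, which is roughly what your REST "controller" is doing by hand today. A very rough sketch using the kopf framework and the official Python client--the GameServer CRD, group name, image, and port are hypothetical placeholders, and the Ingress is omitted for brevity (you'd create it the same way via NetworkingV1Api):

    import kopf
    import kubernetes

    # Hypothetical CRD: gameservers.example.com, one stateful server per resource.
    # When a GameServer object is created, reconcile a Deployment and a Service
    # for that specific server ID.
    @kopf.on.create('example.com', 'v1', 'gameservers')
    def create_server(spec, name, namespace, **kwargs):
        kubernetes.config.load_incluster_config()  # assumes the operator runs in-cluster
        labels = {'app': 'game-server', 'server-id': name}

        deployment = {
            'apiVersion': 'apps/v1', 'kind': 'Deployment',
            'metadata': {'name': f'server-{name}', 'labels': labels},
            'spec': {
                'replicas': 1,
                'selector': {'matchLabels': labels},
                'template': {
                    'metadata': {'labels': labels},
                    'spec': {'containers': [{
                        'name': 'server',
                        'image': spec.get('image', 'example/server:latest'),
                        'ports': [{'containerPort': 50051}],
                    }]},
                },
            },
        }
        service = {
            'apiVersion': 'v1', 'kind': 'Service',
            'metadata': {'name': f'server-{name}', 'labels': labels},
            'spec': {'selector': labels,
                     'ports': [{'port': 50051, 'targetPort': 50051}]},
        }

        # Owner references: deleting the GameServer garbage-collects its children.
        kopf.adopt(deployment)
        kopf.adopt(service)

        kubernetes.client.AppsV1Api().create_namespaced_deployment(namespace, deployment)
        kubernetes.client.CoreV1Api().create_namespaced_service(namespace, service)
        return {'service': f'server-{name}'}

Whether that's actually better than your current controller-in-a-pod is debatable--it's the same logic, just driven off a CRD instead of a REST call--but it does let servers be created and garbage-collected with plain k8s objects.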


Totally agree :) I have been running Kubernetes in production for several years now on multiple projects. So far I have had no need to use any CRDs or even Helm. Just well-maintained deployments, services, secrets and ingress manifests.


I've started trying to learn more about kubernetes by setting up some personal services and it seems like everything I find online points to some helm chart to install.

I think because I don't fully understand k8s, yet, helm seems too magical. It's abstracting away what I already don't understand and doesn't feel right.

For now, I've been focusing on writing my manifests and applying them with kubectl to get a feel for what's going on under the hood. Maybe a time will come where it's a tool that I'll need to reach for, though.


So how do you run stateful workloads? E.g. postgres or mysql?


Generally speaking, you probably shouldn't run databases in k8s unless you need to scale them dynamically, or cloud disk performance is acceptable to you. Having said that, k8s is getting a lot of support for running databases on local disks. A lot of newsql databases (CockroachDB, TiDB, Yugabyte, Vitess, etc) are shipping with strong k8s support/integration.


As a matter of fact, we recently released the 1.0 GA version of TiDB Operator: https://github.com/pingcap/tidb-operator


VMware announced Project Pacific two days ago at VMworld, which targets this use case you are mentioning - https://blogs.vmware.com/vsphere/2019/08/introducing-project...


Not everyone should learn k8s. It is concerning to see people become desperate to learn to code so as to not be "left behind"; this will likely lead to some very disenfranchised individuals in the future.


Because once you know how k8s works, you gain so much time and have to deal with far fewer low-value problems... I couldn't go back to the old way.


Maybe like Databases. Not all companies need a DBA today.


Short answer: Yes you should (If you're in DevOps).


Right out of the chute the Containers 101 video assumes a huge amount of contextual knowledge by the viewer, an existing in-depth knowledge of docker.

If the course is "101" they should, at the very least, mention this upfront and direct the viewer to suitable resources so the viewer can level set.


Yeah, that is the feedback I received from members of my team on the Kubernetes videos as well.


To the site designer:

Please do not abuse radio buttons so that they act as radio buttons (only one selection allowed in a group) in one place and as checkboxes (multiple selections allowed in a group) in another place right below. Your assessment page [1] needs better UX on this.

It would also be good to state that the courses are all free in the content area visible in the viewport when the page loads. There's a "Sign up for free" button way below, but seeing that there's no "Pricing" link anywhere, I wasn't sure if all the courses are free or if courses in the future would become paid. Clarity on this would also help.

[1]: https://kubernetes.academy/assessment


I wish more orgs would move over to orchestrating fleets of NixOS machines with NixOps. I can't overstate the benefits of truly immutable deployments. "Be this, machine, and be nothing else."

Feels like DevOps championed the lowest common denominator. I have to deal with containers flipping out all the time because the distros running inside these containers were never intended to be what we demand them to be with orchestration.


Luckily, most of the time you should be using well-designed containers, or even building your own.

Except for 2 containers, I’m only using FROM distroless, FROM alpine and FROM scratch. All using multi-stage builds and pretty much none of them running as root.



Or just use distribution packages.


RancherOS/k3os and OpenShift/okd 4 are the more mainstream versions of this philosophy.


I like the hands-on tutorials of Instruqt: https://instruqt.com/public/topics/getting-started-with-kube.... They also got quite a few on Knative.

Disclaimer: I worked for Instruqt and created some of these tutorials.


The title implies that this is an education _platform_ - I was expecting something like Edx.

I also don't understand the "product-agnostic" part. Isn't k8s a product?

This looks like it's simply free k8s related course content.


> I also don't understand the "product-agnostic" part. Isn't k8s a product?

Product-agnostic meaning despite kubernetes.academy being provided by VMware it's not covering Project Pacific[1] or any other specific Kubernetes offering or integration by them or anyone else.

[1] https://blogs.vmware.com/vsphere/2019/08/introducing-project...


I think they mean cloud-provider-agnostic.


Is it bad that the best part of the tutorial for me was to discover fzf? https://www.freecodecamp.org/news/fzf-a-command-line-fuzzy-f...


That doesn't mean very much; fzf is great enough that it could be the best part of a tutorial even if the tutorial was already good :-)


Happy to see vmware going all in on kubernetes.

In this regard, I just want to mention Dominik Tornow, who did excellent work trying to formalize the Kubernetes internals.

https://medium.com/@dominik.tornow


https://www.katacoda.com/ is also an awesome resource - I've often recommended it for people wanting to learn and play around with Kubernetes with a low barrier to entry.


Looks like this is all set up to get you to ultimately take some exam called the CKA.


Certified Kubernetes Administrator.

https://www.cncf.io/certification/cka/


Hi! I manage Kubernetes Academy. We are not affiliated with the CKA Exam, and have no expectation for people to take it. Our team has received questions on how to best prepare for the exam, so we created a prep course. No strings attached.


I don't like to downplay such efforts (education in particular) but, as a really big organization putting its first foot out of the K8s door, this is a really lackluster approach. You basically have video lectures at this "academy". Not sure if you have noticed the scene out there, but one can learn a whole lot more by reading and practicing on a site like Katacoda than they would spending the time watching lectures on this site. In any case, VMware is a behemoth when it comes to VMs, and K8s is a match made in heaven for you (no big deal if you came late to the party, you'll be able to catch up better than anyone else - so goes the logic of course).

In my opinion, if you are honestly looking to educate, and it is not just marketing, then look into delivering education that is better than anyone else's.


I agree, but this seems like the initial offering.

As someone who just started working at a company that is converting from VMware to Kubernetes but who has zero experience with it and only a little with Docker, this seems like a good start. Of course I have about 20 other tabs in my browser for learning.


There's really no conversion from VMware to Kubernetes; you need VMs for K8s. Unless you're going to stop using VMware for your VMs entirely, you will still need virtual machines to run your K8s cluster.


So the Certified Kubernetes Admin?


My only complaints are that the audio is not normalized across the videos and the viewing experience is negatively affected by sudden highs and lows in the volume. The cookie banner is also tickling my OCD as it pops up every time I change pages.

I would prefer that this content be made available in a written format as videos are not great for communicating this non-visual information (however most people are more comfortable watching than they are reading). A written version of the content of these videos with the diagrams included would be perfect.


I have now finished the videos and have a few thoughts.

Firstly, because I mentioned this before, the audio has some serious quirks that need to be fixed. The videos by John Harris in particular are very quiet and require me to set my audio to a volume that would be deafening on most other video or audio files. I actually had to adjust my audio hardware's volume because maxing it out on the OS was still too quiet.

Secondly, there seems to be a disconnect between what the page claims Kubernetes Academy is and what content is covered in the course. None of the videos are meant to be used as code-along projects and none of them offer any instruction or direction to the viewer. The videos cover the theory of Kubernetes and nothing else, even the "operations" section has videos that follow a train-of-thought approach to some tips and tricks but assume that you already know how to use the tools being described. This is particularly strange in the case of the "Introduction to Kubectl" video because the content is in no way an introduction and the instructor begins by saying that he assumes you are already comfortable with kubectl. The description of the video even seems to contradict the title's claim that it is introductory.

Thirdly, the videos are not instructional. I did find value in this course and I understand Kubernetes better now than I did before but they are actually lectures and are not meant to get a viewer up and running with their own project. When I saw this yesterday I dedicated my secondary monitor to the page and set my terminal to full-screen on the primary monitor and was ready to dig into Kubernetes. Even the "operations" video on kubectl is impossible to follow without constant pausing and back-tracking. The instructor types out a command while explaining it and immediately executes and moves on to the next command. Again, it seems like written content being forced into a video format.

Although I did learn from the course, I wish that I had just started with Kubernetes's own tutorial because I still had nothing Kubernetes-related to show for my time. I would recommend this content to someone who does not know what Kubernetes is and would like to know enough to decide whether or not to learn it.


Kubernetes Is a Surprisingly Affordable Platform for Personal Projects when your personal project is to learn how to use Kubernetes. Otherwise it's a waste of time.


I wish this was a literal cybernetics education platform.


Heads up - CSS is messed up on https://kubernetes.academy/assessment/results. There were 4 results but I could only see 2 of them.


I am eager to learn about containers and the progress around it. Are there any other resources that cover this well?


Product agnostic? K8s is a product tho.


Cloud providers offer it as a managed service/product. K8s itself is not a product tho.


I can't play the videos: player.vimeo.com's server IP address could not be found.


free training materials, and no guild structure.. who is the winner here?


Does it cover Helm?



