The Cult of Kubernetes (christine.website)
418 points by tannhaeuser 12 days ago | 327 comments





This is another case of "I have never encountered and don't deeply understand the problems this tool was built to solve, thus the tool is totally unnecessary and the people who use it are part of a cult".

This is the same kind of flawed reasoning you see in the front-end world where a bunch of people complain that they do all their work in jQuery so React must be a cult.

Pasting what I wrote in another comment:

The goal isn't "ease of deployment", the goal is "infrastructure as code" so that application infrastructure can be managed in a way similar to application source code (e.g. PRs, blame, code reviews, CI, rollbacks etc). This helps ops people because it allows them to think about infrastructure as abstract resources rather than as a collection of individual machines with specific designations. With k8s, individual machines become a homogenized resource that do not need specialized provisioning depending on the application they will host.
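
To make that concrete, here is a minimal sketch (all names hypothetical) of what that looks like: the entire description of a service lives in a file in git, and changes to it go through the same PR/review/revert flow as application code.

    # deployment.yaml -- lives in the repo, reviewed and rolled back like code
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app
    spec:
      replicas: 3                     # capacity change = one-line diff in a PR
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
          - name: my-app
            image: registry.example.com/my-app:1.2.3   # deploy = bump this tag
            ports:
            - containerPort: 8080

CI applies it with "kubectl apply -f deployment.yaml", and any node in the cluster can end up running it; nothing about any particular machine is specific to this app.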


...and the crux of the problem is where people who have never encountered the problem Kubernetes solves, still start using Kubernetes.

I think the crux of the problem is that everyone encounters the problem Kubernetes solves. As the GP states, Kubernetes gives you

"infrastructure as code" so that application infrastructure can be managed in a way similar to application source code (e.g. PRs, blame, code reviews, CI, rollbacks etc). This helps ops people because it allows them to think about infrastructure as abstract resources rather than as a collection of individual machines with specific designations

Who doesn't want that? Of course you want that.

But will the investment of time and effort pay off for your organization, and if so, how quickly? That's the hard question to answer. It depends on scale, personnel, the types of workloads involved, how easily your tools and practices can be updated, and presumably many other considerations. From my personal experience, in practice the answer to this question is so murky that the deciding factors turn out to be social, including the personal risk aversion of the people making the decision, people's loyalty to the company versus their own resume, and whether leadership cultivates a hyperoptimistic growth mentality of making 10x or even 100x decisions (i.e., making decisions assuming the company will be 10x or 100x bigger in a year).

The problem, then, is helping people compare the cost/benefit of Kubernetes compared to their current practices, for their own organization.


> Who doesn't want that? Of course you want that.

If you only have a couple of servers you probably want to think of them as individual machines rather than abstract resources. A lot of equations simplify when you set x to 1.


There are lots of tools that give you "infrastructure as code". Why is Kubernetes special?

It's also directly supported by multiple first-tier cloud providers.

I mean, so is Terraform (kinda).

Terraform supports first-tier cloud providers, but the other way around?

Perfect way to put it.

What is the point of this question? Why are the other tools special?

My point is that if "infrastructure as code" is your sole requirement, Kubernetes doesn't seem like the first choice. Adopting Kubernetes is not a small task, but it seems to be the go-to answer for a lot of the HN crowd.

Don't get me wrong: it's a great tool for some things. But IMO, for 80% of projects it's completely overkill.


+1. Infrastructure as code is exactly that: code. For AWS it's CloudFormation template code, the CloudFormation service, and some CI/CD on top, like Jenkins or Ansible. Or Terraform, for the unlucky ones. K8s is container orchestration, like AWS ECS: a totally different beast.

CloudFormation is for the lucky ones and Terraform the unlucky ones? I'm thinking you've never used one of the two...

On AWS, CloudFormation is far more reliable and lean (fewer LOC) than TF: no corrupted-state issues, all resource properties are supported, and parallel (fast) resource creation, just for starters. And TF's "multi-cloud" sales pitch is nonsense; resources are too different between clouds.
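
For a sense of scale, a minimal CloudFormation template (YAML flavor; the bucket name is hypothetical) is just a declaration of resources, and the service tracks state server-side, so there's no local state file to corrupt:

    # template.yaml
    Resources:
      AppBucket:
        Type: AWS::S3::Bucket
        Properties:
          BucketName: my-app-artifacts
    # deployed with:
    #   aws cloudformation deploy --template-file template.yaml --stack-name my-app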

K8s is the only IaC platform with heavy buy in by all the major public clouds.

I'm not particularly interested in limiting myself to using only the tools that large companies have deemed worthy.

There are plenty of tools out there that can get the job done at the scale that the vast majority of businesses operate in with lower operational and cognitive overhead than Kubernetes.


I'd argue the opposite. Only a relative handful of companies, startups, etc. encounter the problems IaC solves. Most do fine without it.

Or maybe they use Puppet or Chef. Most won't even need that.


That’s how you get every other startup thinking Heroku is the solution and then two years later they realize they might need to invest in building out their own self-documenting, automated architecture. More than happy to upvote you as it means more work opportunities for me down the line.

Lots of people have some of the problems k8s solves.

Infrastructure as Code - If you're not doing infrastructure as code, how do you know who is taking what actions on your infrastructure? How do you know your tests are running on an environment that represents production? How do you know a tester or dev hasn't fixed something by hand and not committed it?

CI/CD - Do you have a quicker way to create test environments than just running kubectl create ns?
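
A rough sketch of what I mean (the namespace name is just an example):

    kubectl create namespace pr-1234      # fresh, isolated test environment
    kubectl apply -n pr-1234 -f k8s/      # the same manifests you ship to prod
    # ...run the tests against it, then throw the whole thing away:
    kubectl delete namespace pr-1234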

Resource Utilisation - Sharing servers to save money. Obviously you can use VMs, but do you want to do nested VMs in the cloud?

I'm not sure most people should run k8s, but in a world where you can use GKE, I can't really see why not. What offers a better solution?


I'm in the situation that Kubernetes does solve the problem I have... deploying application stacks across varied configurations for purposes of testing at a PR level. Working on software that is customized and delivered to multiple downstream clients is hard to test against. Yes, there are other options, but K8s is a very good fit, and the best option we have.

YMMV... I've also been in a scenario where I just did separate pushes to dokku boxes behind a load balancer. There's plenty of room for in between.

Right now, I feel kind of like I'm treading water though.


Often it is that they think they have never encountered the problem Kubernetes solves or have "gotten lucky" and never encountered it.

Well how else do you build webscale software? What if later you want to deploy microservices. Premature optimization seems like a good idea with infrastructure because you have room to grow.

> This helps ops people because it allows them to think about infrastructure as abstract resources rather than as a collection of individual machines with specific designations. With k8s, individual machines become a homogenized resource that do not need specialized provisioning depending on the application they will host.

This is a very true statement. I'm looking at it from a bit of a different view as well: we're currently set up very classically. Ops maintains Terraform, VMs, config management. Some developers have taken over some high-level cluster management via the configuration management. This works very well.

As long as you're only pushing 3 applications around, that is. Onboarding an application into a config-management solution can easily take 2 - 4 engineering weeks on the ops side. And the work will always be on the ops side, because that's where the required configuration-management expertise lies, which is a hefty chunk of specialized knowledge. If you're looking at a ramp-up from 3 applications to 5, 8, 20, like in our case, that's ... ugly. In our case, that's actually planned, because we've been bought due to our experience in this context. Yay.

That's a huge investment if you look at time alone: up to 80 engineering weeks at worst. That's a year of nothing else, all of a sudden. A situation like that makes the operations team an even worse bottleneck.

And that's IMO where the orchestration solutions and containers come in. Ops should provide build chains, the orchestration system and the internal consulting to (responsibly) hand off 80% of that work to developers who know their applications.

Orchestration systems like K8s, Nomad, or Mesos make this much, much easier than classical configuration management solutions. They come with a host of other issues, especially if you have to self-host like we do, no question. Persistence is a bitch, and security the devil. Sure.

But I have an entire engineering year already available to setup the right 20% for my 20 applications, and that will easily scale to another 40 - 100 applications as well with some management and care.

That's why we, as the ops team, are actually pushing container orchestration and possibly self-hosted FaaS at my current place.

Hm, guess that got a bit longer. You hit a nerve somewhat.


So you think application delivery was less "as code" before Kubernetes?

From my standpoint, Kubernetes is a lot more manipulating state and a lot less code than what pretty much anything it replaces. In fact, almost nothing is "as code" until you introduce third party products such as Helm into the mix.

But that matters little, since the point of using it is not technology but standardization. It has the potential to commodify cloud infrastructure. A bit tongue in cheek perhaps and a far from accurate technical statement, but Kubernetes looks more and more like what Openstack should have been.


To be fair to the author, a lot of folks I know are jumping on K8s, and their use case mostly looks like that of the blog author. In those specific cases you are indeed joining a cult.

K8s is meant to reduce devops work and complexity. Most businesses do not reach that level of complexity and will never need K8s.


If it's a cult, then it's the cult of developer fashion where participants complain about free tools they don't understand and don't have to use.

Tbf many developers have to use whatever someone up the chain considers fancy.

If technical leadership is making poor engineering decisions then that is where they should place the blame.

"My office is a cult" instead of "this tool I get paid to abuse is a cult"


It's a cult as much as using a source control system, a build system, and an editor (IDE) is.

E.g., you can always code, zip up files, and build directly on the command line (we can still do that, right?), but when you need to deliver that automation bit, and most importantly hand off what you've done to the next person (i.e. codify it for real), then a system like this becomes a must.


"infrastructure as code" can be, and has been, solved with a combination of .git and bash scripts.

Kubernetes solves problems some groups will run into with shell scripts[†]. But not the problem of using git or GitHub, which is nonexistent.

[†] let us take as given that these problems are numerous and painful!


> the goal is "infrastructure as code"

You can have infrastructure as code without running a second containment layer on top of the containment layer your cloud provider runs (which describes many k8s deployments).


Infrastructure as code is a much older practice, not something that Kubernetes enables. Kubernetes can be criticised for being too complex for most things that benefit from IaC.

I mean, good on the author, but this isn’t what Kubernetes is really for.

Kubernetes is basically a way to run a Java-like application server that can run things other than Java. If that sounds like an appealing prospect to you, the complexity of Kubernetes may be a good fit.

Kubernetes is complex because sometimes you need to be able to do complex things. Sometimes you operate at a scale where spending 12 hours writing a deployment script is ok, because it will save you hundreds of hours in the near term. Kubernetes expects you to write a bunch of custom integrations to tie your k8s clusters into whatever ITSM / ITIL process you use.

But complaining that running a blog on Kubernetes is too complex is like complaining that a semi is a terrible vehicle because it’s hard to park at the grocery store.


Kubernetes is the new Java Application Server for people who didn't realize that Java Application Servers were a terrible idea.

Despite a long track record of failure, individuals are trying to introduce the complexity of J2EE into Kubernetes. It doesn't need to be that way. Kubernetes can be very simple, and it was, up until recently. Once the Enterprise Architects got their hands on it and decided everything needs to be a plugin and nothing should work out of the box, the complexity started to creep up.

You should be able to run your small blog on kubernetes without requiring a team of consultants to set it up or manage it. Just waving your hands and saying well it needs to be complex to scale is a total lie.


This is the cycle of life. Zawinski's Law is a powerful force.

The same thing happens with ticketing systems: $old_ticketing_system is way too complicated and bloated, so let's jump to $new_ticketing_system because it's small and easy to understand. Oh, but we miss $feature_1, so let's ask for that. And $feature_2, and $feature_3. Continue until $new_ticketing_system becomes way too complicated and bloated, at which point you find $newer_ticketing_system, which is great except that it's missing $feature_3. Oh, and $feature_4. And 1 and 2, come to think of it. Oh geez, now $newer_ticketing_system is also coming apart at the seams, time to migrate to $even_newer_ticketing_system. . .


This strikes me as the exact opposite of reality. Java app servers were specifically built for vertical scale. You just paid $80,000 to rack 30 CPUs and now you need a way to optimally utilize all of them, so we got a deployment model for sticking multiple applications in a single multi-threaded runtime. That was a pretty decent concept for 2005 and was pretty successful. The concept of packing code into archives (jar/war) has proven to be pretty durable.

Kubernetes is explicitly about managing horizontal scale where the hardware is abstracted away.


That's not the point of the analogy. The point is that k8s and Java app servers are both designed for "large scale" deployment, whatever that means.

(But yes, you're correct about what "large scale" meant in the early 00's)


You don't think Kubernetes is used for bin packing?

Odds are someone / some people will create some kind of simpler solution with an easy default setup within the next ~5 years. Maybe as a wrapper over Kubernetes, or maybe as something new and interoperable with it.

Maybe it'll involve a bunch of "serverless" buzzwords or some newly invented buzzwords. That's how things usually go historically. A lot of value can still be extracted if you're careful to ignore the cult-y bits. Containers, serverless/FaaS, and Kubernetes can be pretty great if you're the plane pilot dropping the cargo on the island rather than the cult living on it, and future stuff will probably be even better.


Oh, they have and are; but what is the business case for open-sourcing such a wrapper? Wouldn’t you just host it on your own hardware and basically be another Heroku? This kind of software is complex enough that it would take corporate backing (either an industry group or VC backing), so it’s probably not getting built unless there’s a business case.

Even Google only released K8s because they thought it would push people towards GCP — which it didn’t because it was too easy for AWS to implement a similar service using Google’s open source tech. The open source freemium model for infrastructure tech is pretty much dead as a result of this kind of activity.


The wrapper can still be something self-hosted. There will always be SaaS/PaaS/IaaS abstractions out there for just about everything; even MySQL. The idea would be for someone to be able to very easily self-host something that's as simple to use and configure and interface with as AWS EKS or Google GKE.

I have no idea what business case there'd be to open source it. Maybe some open source devs with former experience at a big company will create one just for fun after they leave the company? Who knows. It'll probably happen eventually, by someone, either way.


Would you do your day job “just for fun” after you just quit?

The reality is that most of this technology — especially around infrastructure — has become so complex at scale that the tech strategy and the business strategy are the same thing. So infrastructure software has to match your business architecture which is largely dictated to a technology org. Which is why “tech ops” these days is largely just “ops” — the technological complexity is a reaction to increased business sophistication, not the other way around.


Me? No. But some people seem to like to do that. And some people do seem to be genuinely very passionate about infrastructure and such. Kubernetes can be applied to a lot of different business and technological architectures, I think, and a simpler alternative could be similarly general.

> it was too easy for AWS to implement a similar service using Google’s open source tech

AWS had the most K8S deployments even before they released EKS, according to the CNCF:

https://www.cncf.io/wp-content/uploads/2018/08/cncf_survey_g...


> Odds are someone / some people will create some kind of simpler solution with an easy default setup within the next ~5 years.

I know it's not the same but Docker Swarm is pretty great if you just want to deploy some container images on a single host or a cluster - this guide covers setup + traefik + swarmpit ui https://dockerswarm.rocks
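
For a single host it's about as minimal as orchestration gets; a sketch, assuming a compose file with a "web" service (the stack name is arbitrary):

    docker swarm init                               # make this host a one-node swarm
    docker stack deploy -c docker-compose.yml blog  # deploy the compose file as a stack
    docker service scale blog_web=3                 # and scaling later is one line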


Or perhaps somebody will evolve Kubernetes itself into becoming simpler?

I know that's a pipe dream, but really, why does it have to be? What would have to happen for people to actually work on making existing things simpler and better factored rather than reinventing the wheel?

My personal theory is that it's largely because that kind of work simply isn't being valued highly enough. Reinventing the wheel is a much lower friction path to take and has a higher chance of being rewarded highly. It shouldn't be like that, though.


Do we even need k8s for a personal blog? What problem does it solve for someone coming from docker or a VM?

k8s is a building block one can use to provide a simpler service, and if you want to convince anyone it needs a refactoring, maybe provide some specifics?


How about Hashicorp nomad? (https://www.nomadproject.io/)

The problem with making things simpler is compatibility. If you make something simpler, but don't stay compatible, it might be better to just find a new name for your simple version.

I think k3s is sort of trying to do that?

Have you tried it? Is it any good? I've been looking for a simpler Kubernetes, although I don't know how much simpler it can be in practice and still do the same things.

I've tried k3s, and while I was impressed, it leaves out several important features.

RKE is a great alternative, and kubespray is quite stable as well.


That would be knative.

I think you’re getting it backwards here. Kubernetes was explicitly built because the existing solutions were not robust enough to enable containerization at Google’s enterprise clients. Docker existed, and there were plenty of quick and easy ways to deploy your blog from a docker container and get it working. Those still work today.

The “running my blog” use case is a Docker use case. Kubernetes was designed from the ground up to enable transparent integration between containerized apps and ITSM platforms. I have always viewed it as more of a scaled application framework than a hosting platform.


What about Nomad? It looks simpler, although it doesn’t seem to have batteries included for things like ingress. I’ve never used it, but I’m curious.

But then the question arises, why would I need kubernetes to run a small blog... why not just run the blog. I mean.

The main problem is that smaller shops are adopting the fads of very large tech companies, but the large tech companies usually adopted those tools to deal with the kinds of scale that the smaller shops just don't have.

I’m not sure I agree. K8s’ primary benefit is that it strings together the IaaS abstractions into a single API.

It allows you to deploy multiple replicas, automatically sets up a load balancer, and maintains the link between the LB and the backend, while also replacing any failed replicas.


Completely valid points, and I agree with all of them. Alas, as other commenters have pointed out, the issue is devs at smaller companies deploying smaller products buying into the idea that they need k8s. I believe that it is the community's duty to educate these devs on what k8s is and when it is needed.

They are probably scared that when they need to change jobs, the next company will require "5 years of Kubernetes experience". So they convince everyone at their current company to jump into a complexity clusterfuck to see how it works "in production" and can put it on their resume. This is how the entire IT industry works today.

I am fucking appalled that writing config files is a noteworthy skill in 2019. So should you be.


K8S is hardly just "writing config files" is it? You've still got to understand a ton of moving parts underneath before you're able to.

K8s is writing config files just like Python is writing Python Syntax.

If you don't understand the underlying mechanism, either with Python or K8s yaml files, you're going to have a very bad time.

Somewhat ironic side note - Asking folks to write K8s config files is exposing too much complexity for some developers I work with. And I kind of get it. Properly setting up a service with changing environment variables, secrets, ingress, API Roles, AWS IAM roles, and horizontal autoscaling can get a bit nuts.
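
To illustrate the sprawl (everything below is a hypothetical service named "api", and this is the abbreviated version):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: api
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: api
      template:
        metadata:
          labels:
            app: api
        spec:
          serviceAccountName: api       # annotated elsewhere to map to an AWS IAM role
          containers:
          - name: api
            image: example.com/api:1.0
            envFrom:
            - configMapRef:
                name: api-config        # plain env vars, a separate object
            - secretRef:
                name: api-secrets       # secrets, managed separately again
    # ...and that's before the Service, the Ingress, the RBAC Role/RoleBinding,
    # and the HorizontalPodAutoscaler, each of which is its own YAML document.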


Yeah, fully integrated “DevOps” at scale is a pipe dream. You will always have some segregation of dev and ops because the scope of knowledge is so different, especially today where “Ops” often means “expert in XXX cloud vendor’s product portfolio and how our operating model uses the features”.

What we call “DevOps” is really a delicate balance of giving the dev teams enough rope to hang themselves while child-proofing the gallows.


> What we call “DevOps” is really a delicate balance of giving the dev teams enough rope to hang themselves while child-proofing the gallows.

Oh, dear, @wayoutthere, my dev team is about to hate you because I'm going to use that quote extensively over the next few weeks... (giggle...)


I don’t think DevOps leaders are claiming DevOps should be fully integrated so much as there should be a culture of collaboration and empathy, shared metrics and incentives, and preference for end to end automation... rather than antagonistic “throw it over the wall”, “I’m a dev and am too important to be paged” behaviour, etc., which has nothing to do with skill specialization.

Good contracts lead to good collaboration. Kubernetes provides the foundation for a solid end to end contract for managing complex systems automatically. It’s incomplete, but extendable.


> “I’m a dev and am too important to be paged”

Also sounds too important to be dev.


You realize that you're commenting in a thread for a blog post that's 100% about configuration, right? There are articles like this popping up on the front page here every few days.

I've never said that K8S is just about writing config files. I've said that it is appalling that writing config files is still a technical "skill" that warrants articles and discussions in 2019.

What's even more ridiculous is that most people here probably don't even see any alternatives. I've had this conversation several times and it inevitably reveals the unwavering (and irrational) belief that manually entering cryptic text somewhere is the only way to make reusable configurations.

Truly, we have become the tools of our tools.


I really hate the term "Configuration as code" because it's not code. For most of us, code means something that can be stepped through. For many, that means stepped through in a debugger. Descriptive languages have almost never provided that facility and we don't have it with Kubernetes or Docker Swarm.

Yes it's great to capture your configuration in version control. But if at the end of the day I'm staring at a config file in one window and a log file in another and waiting for enlightenment to grab me, that's not scalable and it's rigid. It also pisses me off to no end.

It's essentially the Frameworks vs Libraries debate all over again. I'd much rather have something imperative.

Declarative systems create a perverse incentive to keep things the way they are because it's difficult to reason about how changes affect the system, and it's virtually impossible to explore those effects. There are no guideposts you can use to apply Local Reasoning, and so there is no pressure to organize this 'code' in a manner that supports it. So as the system matures, everyone is working off of memorization. There are too few bite-sized chunks that can be learned a bit at a time. You are locked into your current way of thinking and you've locked out anyone who can bring fresh perspective.

It doesn't take a genius to see this will end badly. Again. It just takes anyone with enough distance to have perspective.


There is a use case where it's really the best solution I've seen so far: say you need to cluster a long-running stateful service. It's written in C, so making it stateless is absolutely non-trivial, and simple load balancing won't work. Docker Swarm could work, but compared to Kubernetes StatefulSets, who is actually using Docker Swarm or other clustering technology for scaling stateful services?
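
For anyone who hasn't seen it, a hedged sketch of why StatefulSets fit this case (the service name is invented): each replica gets a stable identity and its own volume, which is exactly what a stateful C daemon tends to assume:

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: legacyd
    spec:
      serviceName: legacyd      # headless Service gives stable DNS: legacyd-0, legacyd-1, ...
      replicas: 3
      selector:
        matchLabels:
          app: legacyd
      template:
        metadata:
          labels:
            app: legacyd
        spec:
          containers:
          - name: legacyd
            image: example.com/legacyd:2.4
            volumeMounts:
            - name: data
              mountPath: /var/lib/legacyd
      volumeClaimTemplates:     # one persistent volume per replica, kept across reschedules
      - metadata:
          name: data
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 10Gi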

But they do need k8s ... on their resume.

I wish we were more willing to offer on-the-job training.

But I've worked with too many people who claim to be Senior or Lead developers but can't actually explain what they do.

I've been tempted a lot lately to try to think of a software team like a sports team. Coach, assistant coach, trainers, and physical therapists all about making you think about your abilities at a different, sometimes philosophical level.

The Surgical Unit idea of Brooks has always bugged the hell out of me. I've known enough nurses to know that you don't want to put surgeons in charge of more than one life at a time and then only for a couple of hours, and letting them interact verbally with those people is a fucking disaster half the time. Not unlike some highly decorated software developers I know. They're brilliant as long as they don't actually have to help people.

The head game in software has been overlooked for far too long and to everyone's detriment. Users as well as producers.

If we had the training part right, this FOMO anxiety would be classed as a disorder.


Kubernetes is WebSphere reinvented then, along with all the management complexity and the expensive consultants?

Great analogy.


Context is always key with these kinds of reactions. Few people feel that hex editors are unnecessary; most probably don't even know they exist. The reason you see this sort of thing with Kubernetes is that, for whatever reason, its hype over-extends its problem domain, and many times people who do not need it receive the suggestion (or insistence) to use it.

If someone were to tell you to use a hex editor to edit your JavaScript, you might very well reach the conclusion that hex editors are a cult, and useless. Someone might then point out that there are actual, completely justified use cases for them.

That's what I see happening here: whether it be indirect (tons and tons of blog posts and articles about moving to Kubernetes), or direct (an employee insisting that the company's infrastructure be moved to Kubernetes), or a mix (starting to see Kubernetes experience as a requirement for jobs that probably don't need it), all of a sudden you have a backlash against the perceived "Kubernetes for everything" culture (which in turn looks like a weird straw man to people who actually know what it's for).

> this isn’t what Kubernetes is really for.

I'm guessing that writing dyson in Nim is a tacit acknowledgement of that: if this were something geared toward production ecosystems, it would be in Go, like Kubernetes? Although there is the Helm Lua-ification, so perhaps dyson is part of a fringe of non-Go k8s auxiliaries.

Another way to implement this is with a 'static CMS', where there are still static pages, except built as part of a situated deploy. The 'cultish' (cultic? anyway) aspect of k8s appears to be phrasing all the things in terms of k8s constructs rather than using k8s constructs as a foundation and abstracting out.

I learned about 'rollout' from the CI portion of this post, although my initial attempts to search for a comprehensible description of it have failed.


Honestly, dyson is just something I wrote for myself to see how difficult it would be to write. I don't expect anyone else to use it. The name is also a pun: you'd need to terraform a Dyson sphere before you(r apps) can live in it.

I just wanted something with easy templating syntax like this: https://github.com/Xe/within-terraform/blob/master/dyson/src...

The fact that cligen (https://github.com/c-blake/cligen) exists too makes it super easy for me to define subcommands of the thing: https://github.com/Xe/within-terraform/blob/master/dyson/src...


Well, maybe it isn't _only_ what it's for, but I have thought of doing largely the same and "bundle" a bunch of assorted projects onto a single 3-node cluster, including my blog.

The only thing that's really prevented me from doing so is that I have my own micro-PaaS (https://github.com/piku) that makes it trivial to run a bunch of different apps/services on the same VPS, and the added complexity isn't really necessary.

But since I deal with k8s practically every day at customers, attrition might be compensated by not having to switch tooling.

YMMV within this sort of scope.


I mean, that’s kind of my point. It’s just not designed for that use case.

Someone could build a simplified fork / derivative of Kubernetes designed for this purpose. That would be pretty rad actually, but it would cease to be Kubernetes because the complexity is the point.



k3s.io is great, and I use that as well; it's what I will eventually run in "production" for my personal stuff.

> The only thing that's really prevented me from doing so is that I have my own micro-PaaS (https://github.com/piku) that makes it trivial to run a bunch of different apps/services on the same VPS, and the added complexity isn't really necessary.

How does it fare in production? I've got a tiny app with two containers (a frontend and a batch job); it seems like a decent use case.


My website (https://taoofmac.com) is exactly that (web and batch workers) and has been running on it for almost 3 years now, but bear in mind that it does not use containers - it merely deploys relatively isolated services in virtualenvs (or equivalent).

Since I use CloudFlare (hi jgrahamc!), it's been peachy.


> Kubernetes is basically a way to run a Java-like application server that can run things other than Java. If that sounds like an appealing prospect to you, the complexity of Kubernetes may be a good fit.

I do not agree with your assessment of Kubernetes. It is not equivalent to something like WildFly or TomEE. Kubernetes runs/manages application servers (and not just Java ones), along with a whole host of other devops things, at scale... Kubernetes is great for setting up a blog, and the 11 other applications the author is trying to run.


> complaining that running a blog on Kubernetes is too complex

I think you're really missing the point here. Why do you think this was included in the post? https://twitter.com/dexhorthy/status/856639005462417409


Oh I get the author’s point, but her use case was “basically a Heroku replacement for easy deployment”. It’s just the wrong use case for Kubernetes and it is well known that deploying to Kubernetes is a bit of a nightmare.

What is the right use case for K8s?

And why is "ease of deployment" not something that someone should expect from K8s?

I've been sitting on the K8s sidelines for a bit as things iron out, and I've been deploying it on bare metal on a test bed over the last few days with the intention of using it as IaaS for some of my own apps.

It seems to do what it's meant to do: keep my app running on infra, following the rules I set.


> What is the right use case for K8s?

It is for making things easy to deploy ONTO the K8s cluster, not for making the cluster itself easy to deploy.

The key is only to use it for cases where the deployment of the containers without K8s is more difficult than the deployment of K8s.
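
That's the distinction in practice: standing up the cluster is the hard part, but once it exists, day-to-day deployment is a few commands (the deployment name here is hypothetical):

    kubectl apply -f app.yaml                   # ship the new version
    kubectl rollout status deployment/my-app    # wait for it to go healthy
    kubectl rollout undo deployment/my-app      # one-line rollback if it doesn't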


> The goal isn't "ease of deployment", the goal is "infrastructure as code" so that application infrastructure can be managed in a way similar to application source code (e.g. PRs, blame, code reviews, CI, rollbacks etc). This helps ops people because it allows them to think about infrastructure as abstract resources rather than as a collection of individual machines with specific designations. With k8s, individual machines become a homogenized resource that do not need specialized provisioning depending on the application they will host.

Except Kubernetes has sucked all the oxygen up in the industry and has had a subset of adherents that trash the alternatives such as Heroku, Cloud Foundry, etc.

As such, people do expect it to replace Heroku.


Anyone who suggests k8s as an alternative to Heroku is wrong. Heroku is a product that manages infrastructure so that the programmer doesn't have to, k8s is a solution for operational engineers that want a code-driven approach to managing their own infrastructure. Suggesting k8s as a replacement for Heroku is like suggesting docker as a replacement for EC2.

And yet, they both fight for the same budget. They're absolutely alternatives in that sense. Not necessarily the right tool for the job, but alas, that's not stopped the wave. And eventually, even Heroku will be based on K8s, as they see the writing on the wall.

Also note that most adherents view K8s as a replacement for (more accurately, a new API over) EC2: the fundamental unit of computing becomes the Pod, not the VM.


Isn’t the right answer there Knative or Cloud Foundry? (On top of k8s)

I run more than just my blog. I'm also moving Discord and IRC bots there too.

Where would you draw the dividing line between where you think it does make sense to use something like Kubernetes and where it doesn't?

I'm starting to feel that whether or not you need Kubernetes is close to the old conversations about whether or not you need a dedicated DBA. Can your organization survive and recover with hourly/nightly backup restores after an incident? Is your replication so complex that you really need a guy to ensure it never gets into a bad state? Worst case scenario, can most people working on the project restore the database to a valid state if something does go wrong? I feel like these questions have similar counterparts in deciding whether or not Kubernetes is right for an organization.

Kubernetes also makes more sense if you look at it as a common way for an organization to run applications among disparate teams with a shared operations infrastructure. It provides a standardized model so that different things work similarly enough to be operated the same way. If you are only delivering one kind of thing for the whole organization, and it all looks and quacks like a duck, maybe you should just deliver a duck instead of putting a duck hat on Kubernetes and asking it to quack.

I don't think there is anything wrong with designing an application that would transition easily into Kubernetes but I feel like many of the proposals/PoC I have seen in the last few years are either fresh systems that get consumed by Kubernetes complexity or are poor replacements to systems that already exist and only seem to serve as resume padders for the team architecting the replacement which gets a viking funeral as soon as they leave. Often the latter case is because the underlying architecture and goal of the system is pre-Kubernetes and doesn't fit the model well of having mostly stateless/replaceable pieces.


There isn’t a dividing line per se; every operating model is going to have different breakpoints. In general though, I would consider Kubernetes an “enterprise technology”. If you’re a startup you’re going to be better off paying AWS/Heroku for one of their more managed services than hiring someone to build / manage a Kubernetes cluster.

A note on this - AWS (or any other host's) managed k8s will not reduce the need to understand what's going on under the hood. It's still k8s, you're just not running the daemons to make it work (which is, arguably, the easy part).

Kind of implicit in the other responses here are use cases that call for a full cluster and all the complexity that goes along with it.

But running a single node cluster is totally valid. I use one at home to run arbitrary containers (a DNS-over-TLS proxy, VPN server, radius server, network AP / switch manager, etc.). I don't use load balancers and just use host-based persistent storage, which removes the vast majority of the complexity.

I've seen a lot of people get wrapped up in the complexity of wanting a persistent-storage-backed process to be able to go down on one node and come back up on another, and that's not unreasonable, but that's a lot of stuff to figure out early on.

If I weren't using this as a single node k8s instance, I might use it to manage VMs, which might be easier to understand but much heavier and more work to maintain. With what I have now, I've got a folder of YAMLs that defines everything to run on my node, and I'm able to easily put all their persistent data in the same top-level dir for easy backup.
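
The host-based storage shortcut is one stanza per workload; a hedged fragment (paths and names are mine, and this is only sane on a single-node cluster):

    apiVersion: v1
    kind: Pod
    metadata:
      name: dns-proxy
    spec:
      containers:
      - name: dns-proxy
        image: example.com/dns-over-tls-proxy:1.0
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        hostPath:
          path: /srv/k8s-data/dns-proxy   # everything lands under one dir for backup
          type: DirectoryOrCreate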

I think the perception of k8s might change over time once people realize that it provides a lot of value even if you completely rule out the tougher stuff to do on bare metal (like load balancers and shared persistent volumes).


You've signed an > 99.9% availability SLA with a customer and a service like Elastic Beanstalk or Heroku isn't sufficient for your needs.

Technically, the cloud in general probably isn't for you, since none of them guarantee more than 3x9's on any of their services.

Yup, meant to say >= 99.9%. Of course most businesses operate more like what Cloudflare makes explicit with its "100%" SLA: architect for 3 nines and then just pay the penalty when the 3-nines architecture doesn't hit 4 nines.

If you need your workload to span a (possibly growing) number of servers, you may want to use k8s.

There are other options that are good contenders in case of such a workload.


The best way I can describe K8s is: language-agnostic J2EE.

I think devs often make bad decision makers because in some sense tech is often an addiction rather than a pragmatic choice.

The cycle of picking a tech, jumping ship to it, religiously evangelising it, riding the wave and then jumping ship to the next related tech is typical in my opinion.

I try hard to correct for this bias but sometimes struggle with exactly the same thing. There's just something about wanting to have a uniform "world-view" with fewer explanatory variables that never stops being motivating.


Part of the problem is the hiring process (plus attitudes seen on here).

Your resume needs to have lots of fashionable buzzwords rather than pragmatic, good-enough, keep-it-simple choices. You must keep on learning (lots of things, rather than mastering any one thing). I can write a really nice site in standard Django with some jQuery, and it will take me half the time that adding React to it would. But adding React will make me much more employable and get me a better wage.


You've touched on some very key problems here.

It seems like at some point around five years ago the three-tier architecture, with its division of labor, vanished overnight. I'm not saying things were perfect back then, but I've never seen any demonstrably objective reasons why it was replaced.

I went from having to be mindful a few configuration items which arose from deploying my war to different environments to slogging through configuration hell in the Terraform and AWS world. I've been learning way more about Ops than I ever cared to know while at the same time becoming a -10x developer in terms of shipping business value.


The real trick is to make your site load so fast people swear it's magic. I use a combination of serving things from RAM and https://instant.page to do this, with a fairly boring, plain old HTML-rendered-on-the-server app. I even have a Progressive Web App out of it too.

Honestly, very true. After doing some brief work for a financial services company, the one thing they were consistently surprised at was how fast the application ran!

Yeah, duh. I render simple HTML templates on the server and serve them as browsers expect them; not with a thousand lines of JS for topping.


Are there any side effects from preloading pages when hovering over the link?

Probably? I don't have side effects on hyperlinks though.

The "Pages not preloaded" page notes that it excludes addresses with query strings just in case they run some action that you might not want to trigger on hover. You can override the default behavior if you know it's not an issue.

I'm reminded of a post from a few years ago where someone's website had a table of items with [delete] links and would take database actions based on GET requests to those URLs. Who cares? It looks the same to a human browsing it.

And then it got crawled by a search engine which followed all the links to see where they went.

But if you're not doing anything unusual like that, I don't see how prefetching HTML would cause any problems.


What do you use to serve from RAM?

Rails with Turbolinks or Django/Laravel + pjax is good enough for most purposes. When Kubernetes first appeared, it was laughable if you used it for anything less than provisioning a massive fleet of servers. Now it's something you sprinkle on your corn flakes.

Yep. We just started implementing it at my place. I had only just started and wanted to say that it seemed like overkill, but it was already under way, and bringing that up in my first week didn't seem like a good way to start.

To be fair, it has reduced our server costs a bit (after maybe 6 months of developer time). I am unconvinced it will be worth the hassle.


FTE dev, fully loaded, is what? $250,000 per year? More?

Are the improvements worth $125,000?


We are in Spain, not San Francisco, so a fair bit lower than that. If the startup goes well and we need to scale, maybe it will be worth it. And it does give us the advantages of high availability.

Though one comment I saw about Kubernetes on here a few weeks back concerned an old-schooler like me. The guy suggested that if something goes wrong, you just kill the pod and let Kubernetes bring up another. Apparently that's the way you are supposed to do things. Something seems really wrong with that approach to me: just throwing resources at the problem with very little understanding of why things went wrong.


Docker + Kubernetes = the death of YAGNI

It seems to me that the requirements for personal infrastructure and professional service-grade infrastructure have drifted so far apart that essentially, if you know one world you don't (automatically) know the other at all.

Tbh, I have no real world experience in this, so it might just be my own delusion. However, I've recently started getting into self-hosting some of the services I use. I'm using a simpler infrastructure than what OP described and while it is the right choice for me and a useful skill to have, I feel like it absolutely won't get me anything in the sysadmin/ops/etc. job space. I've actually considered adding more "enterprisey" tech to it (like Ansible or comparable stuff) just to make it more sexy for recruiters.


> The cycle of picking a tech, jumping ship to it, religiously evangelising it, riding the wave and then jumping ship to the next related tech is typical in my opinion.

It is typical for devs.

Meanwhile ops have to support every half-arsed tyre-fire technology until the end of time, because a dev wanted to try it once, and now it’s in prod with users relying on it.

Kubernetes is in a sense the pushback against that “do what you want, as long as k8s is up, what you run in your pods is your problem, not ours”.


It's typical for web application devs. There is a huge ecosystem of software developers outside of web services who are much less fad-happy and much more focused on using established tools to produce useful, reliable systems.

Webdev is where the money is. It's where people with a CS degree or programming experience are most likely to find a way to put food on the table. Everything else requires more expertise and, aside from the most specialized of applications, pays less money. So as it is, webdev is the center of the universe, and RDD is table stakes for being considered a professional in the field.

That's not quite true; most of my colleagues with CS degrees work for non-tech companies in factories (doing really boring stuff, but still).

Webdev does seem to pay better than most other stuff, though.


DBEs as well. We're constantly getting new database back ends for apps to the point that the Ops DBAs support some 9 backend database solutions. Granted, that probably falls under WebAppDevs for the most part.

Not all of them though.

Still happily using JEE/Spring/ASP.NET + VanillaJS in what concerns webdev projects.


Being old doesn't mean good. From my experience, using J2EE or Spring to make a web app is grossly overcomplicated (I have heard of, but not yet used, Spring Boot). ASP.NET is fine, but anyone who is paying $$ for that is probably a dumbass.

From my experience it still offers more performance than anything based on JavaScript or the scripting language du jour.

And while other AOT compiled languages might offer a little bit more performance, they lack in tooling and libraries.


> “do what you want, as long as k8s is up, what you run in your pods is your problem, not ours”

Until somebody cyberattacks those pods and steals all personal data of your users because the devs didn't bother to apply security patches. But hey, it's not your problem. You are not responsible for the pods. k8s is still up.


True. I own all 24 clusters from a management perspective plus own the core OS container they use. I rebuild the OS container, patch, and upgrade the clusters quarterly. I currently have to manually check to make sure they're not using some third party OS container and reject it if they do. I'm working on a PodSecurityPolicy that enforces that so I don't have to manually do it any more. They are fully aware of this because I'm part of their process, attending their scrums and adding lifecycle bits to their Jira backlog. It was initially a shock to them and pushback happened but since I "own" the environments, and could provide good reasons for it, and showed them it didn't adversely impact their workflow, they seem good with it. I can't say they aren't complaining about it among themselves though :)

> But hey, it's not your problem. You are not responsible for the pods. k8s is still up.

But that has always been true. If a dev leaves a SQL injection in the code, for example, and it gets penetrated, absolutely no one would blame the sysadmin for that.


In the case of SQL injection, the responsibility indeed weighs more on devs. But often it's a grey area. What about upgrading the openssl lib, for example, or patching the Struts framework (see the Equifax hack)?

My interpretation of DevOps is that it's one team with shared responsibility and not "shove your stuff in that pod and don't bother me."


I think one root cause is that the two demands Dev usually have for Ops (keep the system protected and up-to-date and keep the developed software working in a well-defined environment) are sometimes directly conflicting - and developers don't always seem to realise this can be the case.

E.g. you could imagine some extreme case in which dependency X, version N has a critical vulnerability - but at the same time, the developed software relies on exactly version N being present and will break horribly on any other version.

You'd need Dev and Ops to actively work together to solve this problem and no amount of layering or containerization would get you around that.


I worked for someone who had the mindset that whatever technology developers wanted was always good, and ops should just shut up and put up with it because devs are the ones that make the money for the business.

Very infuriating mindset to deal with.


And I've worked at companies where the devs were expected to know their place and not question ops, because ops were seen as the serious adults in the room keeping things running and devs were seen as easily distracted children chasing after the shiniest thing that most recently caught their attention. Made perfect sense when I was on the ops side, and was super annoying when I was on the dev side :)

Imagine if people could all just get along.


Let's make a movement to bridge the fundamental divide between Dev and Ops.. we can call it OpsDev.

You are forgetting Cybersecurity team. Now, that's a fun party. SecOpsDev.

You jest, but a dev chucking an insecurable thing over the fence to ops is very common. I will bet that's how there are so many open MongoDBs out there.

Having worked in both DevOps (or Ops as we called it in 2002), there really was a belief that the developers were stupid, and they'd burn the whole place down if we gave them any leeway. As a developer, I've seen DevOps as a frustrating gate at times. The only things I think can fix this divide are communication and built trust. (and probably less assumed malice)

> Having worked in both DevOps (or Ops as we called it in 2002), there really was a belief that the developers were stupid, and they'd burn the whole place down if we gave them any leeway.

I've been on both sides of this divide myself, but have spent the last fifteen years or so as a developer. In my experience, developers will burn the whole place down if we're given the chance.

We're focused on writing code, and it's boring to write the same code over and over: we want to write new code, in exciting ways, and we are surprised when it fails in exciting ways.

We're focused on delivering features; our incentives are all about getting it done, not about getting it done well (our industry doesn't even have a consistent view of what's good or bad: note that C/C++ are still used in 2019) or supportably. Some organisations really try hard to properly incentivise developers, but I've not seen it really work yet. DevOps is an attempt to incentivise developers by getting us to buy into ops. I've read a lot of success stories, but not seen a lot of success with my own eyes.

I do my best to be diligent, I do my best to wear my Ops hat — yet I still fall down. I don't think that it's unavoidable, but so far I've not avoided it, and I've not seen others avoid it either.


Smaller companies I've worked for don't really seem to suffer from this problem, although once companies are larger and have separate teams (and, perhaps more importantly, managers who are incentivized in different ways) this problem always seems to arise.

> once companies are larger and have separate teams

The real problem always starts when they become separate cost centers with separate budgets and have to independently show a 'profit'.


I've seen this in a 50-person volunteer group. The devs turned up every year with a proposal to throw away and completely rewrite what they'd done the previous year. No incremental upgrades -- a complete rewrite every time.

This worked great when several other business systems relied on their vanity toy, and invariably the API would change with every release.

There's a balance to be struck between 'never change anything because it's always worked' and 'new shiny every week'. In my experience it's an absolute nightmare getting people to agree where the line is, and on top of that, get management to buy-in and push-back when either side oversteps.


The company I work for went even further. Ops doesn't support anything in public cloud past the basic connectivity to the corporate network. Everything created in public cloud is the product/dev team's responsibility. Kubernetes? Not their problem.

This is why developers should own their ops.

> The cycle of picking a tech, jumping ship to it, religiously evangelising it, riding the wave and then jumping ship to the next related tech is typical in my opinion.

To balance this with a counter example from the quieter group of people not "hot for the latest tech":

I'm a "dev" and i've never had this problem, however I work for a small company, where everything I make and deploy I also have to maintain in some form or other. This gives me a strong bias towards operational simplicity and trying to essentially eliminate dev ops... New tech which is both complex and opaque in solution without clear cut advantages is basically repulsive to me, because trust and reliability without constant attention and tweaking is important.


You're not eliminating "dev ops", you're doing it right.

Eliminating dev ops is doing it right! The whole entire point of devops, as it was originally formed, was that making developers bear the load of operations would encourage them to simplify and automate it.

I should have added scare quotes around "eliminating", too, it seems :-)

Well, even in a small company you might need:

- non-downtime deployments (yes, you can have downtime, but every time you deploy an app?!)

- schedule more than one thing (no company has a single product that only has a single binary, or at least nearly no company; there are some unicorns though)

- some kind of automation (this is complex, no matter what you use)


> non downtime deployment

Oddly, a lot of small companies really don't need this. If your customers are mostly businesses in a limited set of time zones, having a maintenance window outside of their business hours is probably easier.


You are in an excellent position, but beyond a certain size things become difficult to manage in this way.

I’m curious which technologies you’ve ended up working with.

> I think devs often make bad decision makers because in some sense tech is often an addiction rather than a pragmatic choice.

I also think this has a lot to do with how devs spend our time: with the tech itself. Whether your application is running on Kubernetes or a box in your garage matters to precisely zero customers as long as it performs well, but as developers we spend our whole day dealing with various APIs and technologies, so we develop an outsized sense of the importance of those things.


Which is why I think it helps a lot to work in domains where shipping software is not the core business, just a cost center that keeps the real business going.

One quickly learns that the business has a completely different set of priorities, and treating software as little bonsai trees is not one of them.


I think jumping on new tech and marketing yourself is a good decision for a developer, as it's a good way to increase your compensation and market value. If you're a developer stuck maintaining a Java Spring app at some unknown company, the best way to make a shift is to pick up Go or something and move to a startup. Else your career will stagnate.

The best way to get promoted at many companies is to write a framework. The best way to get noticed is to write an open source framework. And so on.


Is this really true? There are plenty of job openings for people who are good at maintaining Java Spring apps!

They also pay a fraction of what the "hip" companies pay, and often come along with developers being treated as second-class citizens.

Is that true, or are you inadvertently comparing the cost of living between SF and other major cities?

Hip technologies are being used in SV, and they have to pay tons of money just to keep the talent pool large and circulating.

Older technologies are used in other cities, and there the market forces aren't so crazy.

But a good Java dev can make plenty of money in SV, and a Go developer will make a competitive salary by Dallas standards but not by SV standards (and probably have a harder time finding a new job).


I am currently working in NYC (living in NJ), with a total comp that is more than 3X what I was making when I left Dallas. Based on the market there, I would still probably be making 40% of what I do now had I stayed, and the company would not have been as good.

For the record, I have been using a JVM language as my primary work language since early 2012.


Lots of hip companies use the JVM, and sometimes, Java.

Not according to itjobswatch.co.uk, where Java/Spring roles fetch top rates. Same with Indeed.co.uk. So which job market are you referring to?

It's not so simple. For example, if your skill set is in demand you can easily trade up to a better company than if it weren't. This has been true in my own career. The bar to entry would also be lower. So if you're breaking in right now, learning React is better than learning Spring.

Yes, exactly my point. Go look up salaries.

Java will be around forever, and becoming an excellent Java developer will absolutely remain highly lucrative for a very long time. That's its reputation at the companies I've worked for over the last 10 years (all startups). Go is more of an anti-language, imho. Among myself and similarly minded colleagues, I would say its main attraction is its lack of features, and it seems a haven for people who are grumpy like me. I always get a chuckle when I see it framed as "hip" because it's just never felt like that to me. Elixir is hip. Rust is somehow hip. Go, just not in my experience.

Stagnate by what standard though? There is a certain joy in just maintaining the status quo.

> Stagnate by what standard though?

Range of employment options. Possibly salary, though that's more variable. There are some jobs keeping the lights on with legacy tech long after it has stopped being the hot thing, but typically with any particular stack it's a shrinking number of jobs, often with shrinking average real pay, unless it hits a phase where the decline in people able to do the work exceeds the decline in the work itself.

If you are riding out the last few years of your tech-focused career (whether that's before retirement or before moving out of hands-on tech into, e.g., management), that's maybe not so bad; but if you're planning to be in tech for a longer period, it's potentially extremely career-limiting not to adapt to current market focus.


> Range of employment options. Possibly salary, though that's more variable.

I'm not sure this is true. Most of the shops I've been in don't care about whether you know this or that language or library. You're expected to learn that as you need to. Most of what I've seen cut people from interview loops is missing fundamentals.


From a programming perspective, possibly. As an Ops Engineer, I'm having a hard time shifting jobs. Where I work now, it's heavily siloed so I can't shift into a CI/CD team because it's a different team or the Product Engineering team because they don't do Unix administration, automation, or Kubernetes (other than the deployment aspect). I focus on automation with shell scripts and Ansible plus Tower to get Infrastructure as Code going. I took on the Kubernetes role, and am the single point of failure for the 24 clusters I manage. And now management is asking what support contracts we have for Kubernetes (me, it's just me and asking questions in various places on the 'net). Add in that I'm taking courses for the CI/CD toolset and implementing them on my homelab. But I still can't get a bite on shifting jobs.

> Range of employment options.

Then you'd be wise to stick with stuff like Java or .NET, because there are probably millions of jobs requiring them.


By financial standards :-)

It helps working for big corps.

The job might not be as interesting as riding every tech wave, but on the plus side there are plenty of tech waves you save yourself from riding.

Plus one gets to rescue projects that ended up betting on the wrong waves, getting back to boring old tech.


You mean, Kubernetes is the COBOL of 2050?

I mean that Kubernetes is the NoSQL, CoffeeScript, BigData, Grails, SOAP... of 2019.

It is a bit unfair to COBOL, given that its latest revision is from 2014, and, verbose as it may be, it supports most of the nice features of any modern multi-paradigm language.


I think this is a bit unfair to SOAP.

SOAP was/is a pretty stable technology that did exactly what it promised to do, without too many releases or breaking changes, for about 10 years.

Even today it has a good utility for the situations it is designed for...

RPC over a well-known standard format, for tightly coupled endpoints that require metadata, an enforced schema, security, and perhaps transactions.

The big problem for SOAP is that it was the default for web services for a long time, when in reality a big shift happened around 2008 after which web services were most likely NOT going to fit those constraints. Just my 2 cents.


CORBA and DCOM certainly felt easier to use than all the headaches I had with SOAP interoperability.

And neither of them was selling "magic" stuff like BPEL and BizTalk.


Additional factors I'd want to add:

Confirmation bias (where you've spent some time on k8s or whatever, and now you just want to recoup the time you've sunk, objective criteria be damned)

Generational churn (where you find yourself in a field where everything has been said and done, and you just need a new buzzword on your resume to start over; this goes hand in hand with corporate IT longing for fresh, cheap staff and a stack that needs to look sexy)

Big media (where extremely large infrastructure runs on k8s or whatever and gets disproportionate airtime, because cloud providers want to sell you lots of pods, and people don't check whether the proposed architecture is a good fit)


Decision fatigue and opportunity-cost aversion are big factors too, I think.

When comparing consumer products where there are lots of choices, I find myself picking an OK option and 'falling in love' with it. When I reflect, it's basically a way of cutting through all the reviews: deciding that this one is the best, that I have no reason to regret buying it, and that I don't need to do any more trawling through reviews and comparisons. I'll just buy this one and be done with it.

This strategy often works, to be fair!


Not just devs, it's really a management problem all round.

Management (top bosses) often seems to want the latest, e.g. Big Data. It doesn't matter that it'll cost a fortune and you'd get better results on a single server.

And if the devs are out of control, pushing for %tech% and getting it, that's management's fault. To be a good manager you need to understand what your employees are doing. I've met too many who don't.


I've been around long enough to see a couple of iterations of this. Being able to spot when something is about to fade away and something else is about to come into focus is a valuable skill for consultants. I suppose it's necessary for tech progress but, man, a lot of money gets spent chasing the new thing.

I think the larger reason is that devs who don't act like that, or at least don't pretend to, are considered less capable by many. Somehow pragmatic decision-making is seen as not being passionate.

There is also such a thing as being sleepy old.

And there is reasonable in between.


The other side of the coin is that there are very real improvements in newer tech and companies, in my experience, are only willing to support continuing education that is directly related to the tech stack that they are using.

So a developer that doesn't want to deal with already solved problems and who wants to advance their knowledge is incentivized to push for jumping to the newest tech.


I've suspected this to be the case almost everywhere I've worked. Another reason it happens is that anyone questioning the adoption of a new tech risks looking like they don't understand it.

However going the opposite way (sticking to one reliable tech stack and refusing to change even when something better comes along) could be just as damaging to a business.

How then, do you build a culture where people are open-minded to new tech without feeling obliged to jump on every bandwagon? I don't think I've ever seen an organisation get the balance quite right.


Actually trying all promising new technologies is another full-time job, or at least takes 20 hours a week.

Most developers just don't want to be left behind, so they pick up whatever is trendy at the moment. It's completely rational, because knowing what is trendy gets you hired.

However, implementing what's trendy without carefully weighing the pros and cons is what's dangerous.


You could say that about pretty much any technical abstraction, though.

I entirely agree... in fact it applies to me more with maths than it does with tech.

Which maths are you doing? Unless you’re talking about already well defined formalizations.

Bayesian stats.

I think maybe it's easier to blame an external framework in hindsight than to take the blame for some smaller solution that you personally created in-house.

Kubernetes is our one shot at having a universal, vendor-neutral cluster interface. The fact that it's time-consuming to do simple things directly against it doesn't surprise me, in the same way I'm not surprised that writing a todo app directly against the POSIX abstraction would be time-consuming. It's a great way to learn how these interfaces work, though.

"todo app directly against POSIX abstraction"

After trying a few cloud-based ToDo applications with mobile apps etc., I gave up and started using org-mode files from a text editor.


Question: why hasn't someone come up with a simple interface that abstracts away the tricky bits if you just want to deploy a blog?

I think MS is trying to make tools like this for the C# world, but I haven't seen them yet.


Knative[0] pushes in that direction from the side of "complicated" Kubernetes. It's still far from easy, but I expect the solution will look like this: software that uses the Kubernetes base to provide high-level primitives. A helpful cloud provider will give you a cluster with such a thing already installed, as Google already does for Knative with its Cloud Run offering.

Microsoft allows you to publish a web application from a Visual Studio project to Azure.[1] It's very simple, but much more opinionated. It's a great trade-off for an individual developer who needs to focus on functionality. In the context of this discussion, there's an important distinction: it's not an interface, it's just a feature. It's tightly coupled to Azure on one side and to the Microsoft dev stack on the other.

[0] https://knative.dev/ [1] https://tutorials.visualstudio.com/aspnet-azure/publish


Oh yeah, as a C# developer I definitely am familiar with app services.

But many organizations would rather not pay the costs of app services directly, and instead pay them indirectly by making their developers tool around with Kubernetes.


"Deploying a blog" is already trivial with existing technology. You don't need complex infrastructure tooling for a blog.

The simplest technique for a blog, IMO, is using a static site generator. Deploying static assets is simple, and you have your pick of generators/languages.
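
As a hedged illustration of how small that deploy can be, assuming Hugo as the generator and a plain VPS as the target (the host and path are placeholders):

  # Build the site into ./public (Hugo's default output directory)
  hugo

  # Push the static files to the web root; host and path are placeholders
  rsync -avz --delete public/ deploy@example.com:/var/www/blog/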

> writing todo app directly against POSIX abstraction would be time consuming.

That actually wouldn't be bad, and would probably be faster than getting it running on Kubernetes.


> todo app directly against POSIX abstraction

So, a text editor?


But it isn't vendor neutral, because you have to set up the k8s cluster differently in each cloud.


A quick read didn't show Azure or GCP cluster creates. Did I miss that?

They're not included in the base binary, and are instead provided as plugins. You can find a list of them here [1]; they've got AWS, Azure, GCP, DigitalOcean, vSphere, Docker, OpenStack, maybe a couple others.

[1] https://github.com/kubernetes-sigs?utf8=%E2%9C%93&q=cluster-...


At what point down the rabbit hole of abstraction do we pull back and look at the complexity required to do simple things?

It's an interesting mix of comments on this post, where half are the typical "new tech switcher" trope and the other half are "I use it at X / it solves my problem X". The first is expected; the latter shows smoke and a bit of fire around Kubernetes. I use Kubernetes and I usually don't like new tech, but Kubernetes solves so many real/hard problems with dev and ops (enough that I'm willing to live with the problems it creates for me). In my experience, it's the real thing.

These posts about Kubernetes are so ridiculous. Installing Kubernetes on multiple servers is not difficult, and understanding the components is pretty straightforward if you've ever worked on a distributed system.

If you don't want to be in the "cult", don't use it. Meanwhile, I'll be writing service and deployment yamls and avoiding all the proprietary, expensive AWS bs.
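
For readers who haven't seen them, a minimal sketch of the service half of those yamls; the name, labels, and ports are placeholders:

  apiVersion: v1
  kind: Service
  metadata:
    name: my-app          # placeholder; pairs with a Deployment's pod labels
  spec:
    selector:
      app: my-app         # routes to pods carrying this label
    ports:
    - port: 80            # port the Service exposes inside the cluster
      targetPort: 8080    # port the container actually listens on
    type: ClusterIP       # the default: an internal-only virtual IP

Applied with something like kubectl apply -f service.yaml, alongside the matching deployment manifest.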


It's not difficult if it's your full-time job. The learning curve is punishing, though, if you're trying to learn it from 9 to 12 on a Saturday morning.

So what? It wasn't designed to make it easier for hobbyists to deploy their weekend projects; it's meant to provide ops engineers with an infrastructure-as-code abstraction for distributed applications.

Exactly. The point of the first meme in the blog post isn't "kubernetes is so overcomplicated why would anyone deploy their blog on it", it's "your blog is nowhere near complex enough to merit running on kubernetes".

Everyone here is blaming the truck company instead of the person who bought a flatbed to transport a 2-lb box.


The fundamental flaw with Kubernetes is that the UI is so bad. The abstraction is leaky and the naming is confusing. It certainly didn't stop git adoption.

kubectl has the ideal UI for managing a k8s cluster.

I wouldn't go so far as to call it "ideal", but it certainly beats the web UI.

This post and the discussion of it on Hacker News could well be performance art. As an FPGA engineer I'm not entirely sure.

Now imagine how hard it is for executives that don't know anything about technology to aid in making long-term strategic decisions that depends on this!

> Now imagine how hard it is for executives that don't know anything about technology to aid in making long-term strategic decisions that depends on this!

Executives who don't know anything about technology shouldn't be aiding in making strategic decisions that depend on this. They might make decisions on the advice of others (including, hopefully, executives who do know something about technology; CEOs may make the decisions in some cases, but with CTOs and/or CIOs in the loop, and if those know nothing about technology, that's as bad as your CFO knowing nothing about corporate finance).


I don't know what it actually means to be a CTO or CIO, but I think a lot of them have spent the last 15 years working out strategic visions and reading articles about tech trends.

It's really hard to know this stuff unless you are down in the weeds every day.


People are just responding to the title. The author was successful in migrating! It's mostly an article about how immature GitHub Actions is now that it's just out of beta.

Edit: As noted below, GitHub Actions seems to still be in beta. The original point stands.


AFAIK GitHub Actions is still in beta. I had some time set aside today to set up some build actions for a project I'm working on, but was roadblocked by a "request access to the beta" page. I do agree with your main point re: the article, though.

step 1) keeping a service up and running is hard. we have all these issues and it seems like we are struggling to do simple things

step 2) if only there were some magic tech that could solve all these issues. and have a cool name. and we could put it on our resumes... drum roll: K8Sssssss

step 3) bro. it’s working. i don’t really understand what it’s doing but look at all the containers we are running. and the config... super configurable. we’re devops we can figure this shit out right?

step 4) what do you mean we have to update the k8s version we’re running on? we barely got this one working. ahh... the beta tools we were using got a bit more polish... makes sense....

step 5) sob silently when you realize that the work k8s supposedly saved you now goes into maintaining the k8s cluster. reminisce about the good old days when you could just xcopy-deploy your app.

epilogue) in the age of the cloud, k8s makes zero sense to me. use the abstractions provided by your cloud and focus on writing your crappy app. you're not google or amazon. you don't have to solve the problems they do and you'll probably never have their scale. oh? you have thousands of bare-metal servers and are looking for a solution to help manage them? you can also afford a dedicated ops TEAM to manage them? (dave jumping on the latest tech trend does not count as a team). go ahead!!!


Yesterday I caught up with an old friend from where I grew up, for the first time in 12 years. He said many good things about Kubernetes.

He's working for a Swiss bank, and managing an ops team. In his case, I think k8s makes sense.

Then there's me: doing remote work, struggling to sort out visas, wishing I could have his kind of life and stability, wondering how it all went wrong. Getting a job like his requires experience in Kubernetes, Oracle, etc. I can't get that kind of experience with side projects.

"dave jumping on the latest tech trend" is probably trying to build up his résumé, rather than actually help the company. As someone who needs to build up his own résumé, what do you suggest to gain experience in this kind of technology? I don't want to intentionally deceive the company to let me try it when I know they don't need it. But I know that I need to learn it somehow.


there is more than one type of dave. there is the "jump on it, run it in production, and make it the next guy's problem" dave, and there is the "play with it to learn what it is in a safe context/env" dave. you can definitely stay up to date with tech without betting the farm on it.

2nd thought: as an employer you don't want one-trick ponies. You want people who understand the fundamentals and can learn and adapt. I will take a person who has good fundamentals and is curious and constantly learning over a person who knows a technology, every day of the week.


You do realize that one of those cloud services you can use is k8s, right?

GKE is pretty amazing. They manage the k8s control plane for you, offer worker node scalability and you can use a decent, intent based, automatable API for declarative deployments. No need to mess around with VMs or proprietary lambda/serverless stacks.
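
For flavor, a hedged sketch of that workflow with the gcloud CLI; the cluster name and zone are placeholders:

  # Create a managed cluster; GKE runs the control plane for you
  gcloud container clusters create demo-cluster \
      --zone us-central1-a \
      --num-nodes 3

  # Fetch credentials so kubectl talks to the new cluster
  gcloud container clusters get-credentials demo-cluster --zone us-central1-a

  # From here, deployments are declarative
  kubectl apply -f deployment.yaml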


yes, i realize, but having a 3rd party manage the control plane for you != you managing the control plane. again, this comes down to delegating the work to someone who does this for a living. you could say you're using k8s at that point, but you're definitely not operating a k8s cluster.

But is anyone arguing that using k8s => you must be running your own control plane? This seems like a straw man. You're not saying 'running your own k8s makes zero sense', you're saying 'k8s makes zero sense'.

maybe. my experience has been that most people that say they use k8s manage the whole thing. people that use k8s on gcp usually mention gcp.

I'm partially convinced that the "we're google scale" thing is cargo-culting.

but but... we handle billions of CPU operations per day. that’s webscale right?

That's CLOUD SCALE to be honest

I am one of those guys using K8S at home. The reason is a unified platform across my work and home environments.

What's the cheapest way to run a k8s cluster in the cloud? I've been looking to spin one up in AWS, but it looks remarkably expensive for running personal projects.

If you go with a provider that offers the control plane for free, which is how Google Cloud and DigitalOcean do it (and probably many others), a single-node cluster is actually a valid cluster. It won't have the redundancy/high availability that is Kubernetes' raison d'être, but it works well. In that case, Kubernetes is no more expensive than non-Kubernetes. On DigitalOcean, $10/month.

I'm running a three-node Kubernetes cluster for less than $10 a month using this guide: https://github.com/hobby-kube/guide

Disclaimer: I'm the author.


Damn, I wish I had found this before I spent days stumbling through my first Kubernetes setup.

Ignoring HA requirements, will this work fine with a single beefy node on a home network, or would I be better off running multiple VMs on that single server, each running a separate Kubernetes node?

You can get a single-node k8s cluster running super easily with Minikube (https://github.com/kubernetes/minikube). The more recent Docker for Windows/Mac actually comes with a Kubernetes distro that piggybacks off the Docker VM.
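
A minimal sketch of that flow, using the sample image and port from the minikube quickstart (the deployment name "hello" is a placeholder):

  minikube start                      # boot a local single-node cluster
  kubectl create deployment hello --image=k8s.gcr.io/echoserver:1.4
  kubectl expose deployment hello --type=NodePort --port=8080
  minikube service hello              # open the exposed service in a browser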

If we're talking managed, DigitalOcean maybe? (The master server part is free, you pay for compute etc of the nodes)

You could possibly do it cheaper on AWS with RI discounts, but you'd have to set up the cluster yourself to avoid the ~$140/mo fee.


Consider just running Minikube on your laptop. It's pretty realistic and won't cost you a penny. Except maybe in electricity; it seems to consume an enormous wattage just to exist...

minikube --vm-driver=none consumes far fewer resources, and k3s even fewer.

K3s is fantastic. Lately I've been using k3d (which is the same thing, just in a Docker container, much like Kind). It's super easy to spin up a cluster and spin it back down, with nothing really left to clean up.
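
For reference, k3s's documented one-line install, plus a sanity check (assumes a systemd-based Linux box):

  # k3s's install script sets up a single-node cluster as a systemd service
  curl -sfL https://get.k3s.io | sh -

  # k3s ships its own kubectl; verify the node came up
  sudo k3s kubectl get nodes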

I’m using Hyper-V at the moment, I’ll check out your suggestions - thanks


I'm building a startup that provides hosted, shared Kubernetes clusters starting at $0/month: https://kubesail.com. I agree with everyone in this thread that using k8s for a blog is like building your own house from scratch, but the analogy breaks down when everything underneath the Kube API is managed and set up for you; at that point it just becomes a standard, open cloud API :)

A 3-node k3s cluster on Hetzner Cloud for about €10/month.

And it's easily provisioned with https://github.com/xetys/hetzner-kube (not affiliated, but I tried it out once and found the deployment to work smoothly).

My cluster is at $30 a month, which is slightly more than my dokku server; but I get so much more out of it that I think it's worth it.

I'm not very familiar with kubernetes, what do you get out of it that you don't with dokku? I really like dokku and use it on my personal server for all my half-baked personal projects.

Mostly automatic DNS management, and it makes it easier to use more than one machine for things.

If you just/first want to practice actual, multi-node k8s on your local Mac (or Windows), I've just completed this: https://github.com/youurayy/hyperctl

Cheapest would probably be somewhere around $10: three VPSes with 4GB of RAM somewhere.

I'm running mine with three VPSes with 8GB of RAM each, just for personal services and learning/fucking-around purposes.


I run a very small GKE cluster with two 1vCPU preemptible nodes. Costs me a few bucks per month.

I was thinking of doing the same thing. How much on average do you pay per month?

I thought the GKE control nodes are free, but on EKS you pay heaps for them.

Same here; I run K8s both locally and on GKE for a project. My GKE cluster is just 2 nodes with 2 vCPUs & 3.75GB of RAM each. Performance is great and it has saved me an insane amount of time. I have also created an open source project that does one thing: it updates your deployments :) https://keel.sh. Previously I tried several different hosting options, but nothing is easier/more convenient than k8s for me.

Thanks a lot! Keel looks exactly like something I would need!

How do you expose your services via an ingress when the cluster sits behind NAT (via your home router/gateway)? Thanks!

Hi, my shameless plug (I am the creator of webhookrelay): https://webhookrelay.com/v1/guide/ingress-controller. I'm using it both for services running in GKE and on minikube locally. It's cheaper than allocating an LB IP for backing services that don't get much traffic, like Grafana and similar things.

One possibility (especially for the "home Kubernetes" case) is not exposing the services to the outside world at all and using ZeroTier to access them: https://www.zerotier.com/. It's an L2 mesh VPN, and I believe you can even use MetalLB with it with some minor trickery. You can, of course, set up WireGuard or OpenVPN for yourself too, but in my experience zt is the simplest for accessing boxes behind NAT, since you don't even need to set up any servers with real IPs.

DNAT. You map one or more ports exposed on the internet side of your router to the ip:port of the local app.

However, the http/https ports are often already used on routers to serve an admin web GUI. It's technically possible to work around this with some ad-hoc firewall rules, but it depends on whether the router admin UI lets you do that.
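
A hedged sketch of such a rule with iptables, for routers that expose it; the interface name and addresses are placeholders, and real router firmwares vary widely:

  # Forward inbound port 443 on the WAN interface to an internal host
  iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 \
      -j DNAT --to-destination 192.168.1.50:443

  # Typically forwarding must also be allowed for the rewritten traffic
  iptables -A FORWARD -p tcp -d 192.168.1.50 --dport 443 -j ACCEPT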


> However, http/https ports are already used on routers to offer an admin web GUI.

not on the WAN side, I'd hope


Exactly, they aren't exposed outside. That's why you can "potentially" add rules to route requests from the outside to an internal host:port, even 80/443. On the LAN you would still be able to connect to the router admin.

Using NodePort and Traefik ingress controller.

thanks!

You don't necessarily need an ingress; NodePort may be sufficient to expose a service.
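
A minimal sketch of what that looks like; the name, labels, and ports are placeholders:

  apiVersion: v1
  kind: Service
  metadata:
    name: blog            # placeholder
  spec:
    type: NodePort
    selector:
      app: blog
    ports:
    - port: 80            # in-cluster port
      targetPort: 8080    # container port
      nodePort: 30080     # must fall in the default 30000-32767 range

The service then answers on port 30080 of every node's IP, so a single port-forward on the router is enough.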

Assuming you are running on physical hardware, how are you managing storage? I've tried Rook, but it seems somewhat buggy and overkill for my requirements (rsync would do).

I'm using K8S (with Rancher's k3s) at home too! My main reason is portability. When I need to unplug one of the Raspberry Pis or move all the services somewhere else, I only need to change the storage layer.

So you're running bare-metal k8s at home? What do you use for storage? That's my biggest question in figuring out how to move from minikube at home to a true cluster.

RAID 1, 4TB NAS with NFS :)
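
For anyone wiring a NAS like that into a cluster, a hedged sketch of an NFS-backed PersistentVolume; the server address, export path, and size are placeholders:

  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: nas-pv               # placeholder
  spec:
    capacity:
      storage: 100Gi           # placeholder size
    accessModes:
    - ReadWriteMany            # NFS allows mounts from many nodes at once
    nfs:
      server: 192.168.1.10     # placeholder NAS address
      path: /exports/k8s       # placeholder export path

Pods then claim it through a PersistentVolumeClaim, so the workloads stay portable while the storage stays on the NAS.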

Thanks! That should be enough to get me going in the right direction, hopefully to a working setup :)

Do you use minikube at home? If it's a real cluster, I'd like to ask what you do for storage. I currently run minikube but would love to move to a real cluster setup.

Docker for Mac can run a k8s instance for you if you want.
