
I have a question. At what point does k8s make sense?

I have a feeling that a microservice architecture is overkill for 99% of businesses. You can serve a lot of customers on a single node with the hardware available today, and oftentimes sharding by customer is rather trivial as well.

Monolith for the win! Opinions?




K8s is nice even without microservices. Yeah, you don't get nearly the benefits you would in a microservice architecture, but I consider it a control plane for the infrastructure, with an active ecosystem and a focus on ergonomics. Even if you have a really simple infrastructure, you still need to script spinning up the VMs, setting up the load balancing, etc.; K8s gives you a homogeneous layer on which to put your containers. It's not that much overkill, especially with a hosted K8s from e.g. Google, AWS, and soon Digital Ocean and Scaleway.

Things like throwing another node into the cluster or doing rolling updates come for free, where you would otherwise need to develop that yourself. All of that is totally doable, of course, but I like being able to lean on tooling that is not custom, when possible.
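To give a concrete flavour of 'free': here's a minimal sketch using the official Python kubernetes client, assuming a Deployment called "web" already exists in the "default" namespace and you have a working kubeconfig (the names are made up for illustration):

    # Hypothetical sketch: scale an existing Deployment out. The scheduler
    # places the extra pods on whatever nodes are currently in the cluster,
    # and the Deployment's built-in RollingUpdate strategy handles upgrades.
    from kubernetes import client, config

    config.load_kube_config()   # uses your local kubeconfig
    apps = client.AppsV1Api()

    # Strategic-merge patch: only touch the field we care about.
    apps.patch_namespaced_deployment(
        name="web", namespace="default",
        body={"spec": {"replicas": 5}},
    )

None of that is custom tooling; the rollout and scheduling logic is k8s's, not yours.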

When your infrastructure does need to become more complicated, you're already ready for it. Even if I were only serving a single language, starting with a K8s stack makes a lot of sense to me from a tooling perspective. Yeah, plain VMs might be simpler, conceptually, but I don't consider K8s terribly complicated from a user perspective when you stay within the lanes they intend you to stay in. Part of this may also be my having worked with pretty poor ops teams in the past, but I think K8s gives you a really good baseline, with pretty good defaults for a lot of your infrastructure, without a lot of investment on your part.

That said, if you're managing it on a bare-metal server, then VMs may be much easier for you. K8s The Hard Way and similar guides go into how that would work, but managing high-availability etcd servers and the like is a bit outside my comfort zone. YMMV.


There's a huge range between the monolith and microservice approaches, and even a monolith will have dependent services. A simple web stack these days might include nginx, a database, a caching layer, some sort of task broker, and then the 'monolith' web app itself. All of that can be sanely managed in k8s.
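For a sense of what "managed in k8s" looks like, here's a minimal sketch of declaring just one piece of such a stack (a redis cache) with the official Python kubernetes client; the names, labels, and image tag are made up for illustration:

    # Hypothetical sketch: one dependent service of the "monolith" stack,
    # declared as a Deployment. nginx, the task broker, and the web app
    # itself would each follow the same shape.
    from kubernetes import client, config

    config.load_kube_config()
    apps = client.AppsV1Api()

    cache = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="cache"),
        spec=client.V1DeploymentSpec(
            replicas=1,
            selector=client.V1LabelSelector(match_labels={"app": "cache"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "cache"}),
                spec=client.V1PodSpec(
                    containers=[client.V1Container(name="redis", image="redis:6")],
                ),
            ),
        ),
    )
    apps.create_namespaced_deployment(namespace="default", body=cache)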


Right... IMO 'monolith' is better understood as a reference to the data model than to the deployment topology. If you only have a single source of truth, then your application is naturally going to trend toward doing most of its business logic in one place. That still doesn't displace the need for the other services you identify, like caching, async tasks, etc.


I definitely wouldn’t be managing my own database or caching layer without a very good reason. I would use a managed service if I were using a cloud provider.


I hate the word Microservice, so I'm just going to use the word Service.

Most monoliths software companies build aren't actually monoliths, conceptually. Let's say you integrate with the Facebook API to pull some user data. Facebook is, within the conceptual model of your application, a service. Hell, you even have to worry "a little bit" about maintaining it: provisioning and rotating API keys, possibly paying for it, keeping up to date on deprecations, writing code to wire it up, worrying about network faults and uptime... That sounds like a service to me; we're a few steps short of a true in-house service, since you don't have to worry about writing its code and actually running it, but conceptually it's strikingly similar.

Facebook is a bad example here. Let's talk authentication. It's a natural "first de-monolithized service" that many companies will reach to build. Auth0, Okta, etc. will sell you a SaaS product, or you can build your own with many freely available libraries. Conceptually they fill the same role in your application.

Let's say you use Postgres. That's pretty much a service in your application. A-ha: that's a cool monolith you've got there; already communicating over a network, ain't it? Got a redis cache? Elasticsearch? An nginx proxy? A load balancer? Central logging and monitoring? Uh oh, this isn't really looking like a monolith anymore, is it? You wanted it to be a monolith, but you've already got a few networked services. Whoops.

"Service-oriented" isn't first-and-foremost a way of building your application. It's a way of thinking about your architecture. It means things like decoupling, gracefully handling network failures, scaling out instead of up, etc. All of these concepts apply whether you're building a dozen services or you're buying a dozen services.

Monolithic architectures are old news because of this recognition; no one builds monoliths anymore. It's arguable whether anyone ever truly did. We all depend on networked services, many of them provided by other people. The sooner you think in terms of networked services, the sooner your application will be more reliable and offer a superior experience to customers.

And then it's a natural step to building some in-house. I am staunchly in the camp of "monolith first, with the intention of moving toward services," because it forces you to start thinking about these big networking problems early. You can't avoid it.


This outage really doesn’t have much to do with K8s.


Maybe so, but you wouldn't be affected by this outage if you had never decided to deploy k8s in the first place.

Even if you deploy k8s privately, or over at Amazon, I think there are enough horror stories to make you think twice about the technology.

Then, if it isn't going to be k8s for microservices, what's a more reliable alternative?


As someone whose daily work happens on k8s, I'd say you'd better be feeling a lot of pain before you move to k8s. I take great care to avoid this, but if you aren't careful, you can end up "feeling" productive on k8s without actually being productive. K8s gives you a lot of room to tweak workflows, discuss deployment strategies, security, "best practices", etc., and you can get things done reasonably fast. But that's like a developer spending all day fine-tuning their editor, comparing and writing plugins, and claiming that they're being productive.

The key issue here is that k8s was written with very large goals in mind. That a small business can quickly spin it up and run a few microservices, or even a monolith plus some workers, is just incidental. It is NOT the design goal. The result is that a lot of the tooling and writing around k8s reflects that. A lot of the advice around practices like observability and service meshes comes from people who've worked in the top 1% (or less) of companies in terms of computing complexity. What I'm personally seeing is that this advice is starting to trickle down into the mainstream as gospel. Which, strangely, makes sense: no one else has the ability to preach with such assurance, because not many people in small companies have actually been in the scenarios the big guns have. The only problem is that it's gospel without considering context.

So at what point does k8s make sense? Only when you have answers to the following:

* Getting started is easy, but maintaining it and keeping up with the goings-on is a full-time job - Do you have at least one engineer you can spare to work on maintaining k8s as their primary job? It doesn't have to mean full time, but if they have to drop everything else to go work on k8s and investigate strange I/O performance issues, are you ready to allow that?

* The k8s ecosystem is like the JS framework ecosystem right now - There are no set ways of doing anything. You want to do CI/CD? Should you use Helm charts? Helm charts inherited from a chart folder? Or are you fine using the PATCH API / kubectl patch commands to upgrade deployments (a bare-bones version of that is sketched after this list)? Who's going to maintain the pipeline? Who's going to write the custom code for your GitHub deployments, your Brigade scripts, or your custom in-house tool? Who's going to think about securing this stuff and the UX around it? That's just CI/CD, mind you. We aren't anywhere close to the weeds of deciding whether you want to use Ingresses vs. load balancers, and how you are going to run into service-provider limits on certain resources. Are you ready to have at minimum one developer working on this stuff and taking time to talk to the team about it?

* Speaking of the team, k8s and Docker in general are a shift in thinking - This might sound surprising, but the fact that Jessie Frazelle (y'all should follow her, btw) is occasionally seen reiterating the point that containers are NOT VMs is a decent indicator that people don't understand k8s or Docker at a conceptual level. When you adopt k8s, you are going to pass that complexity on to your developers at some point. Either that or your devops team takes on the full complexity, and that's a fair amount to abstract away from the developers, which will likely increase the workload of devops and/or their team size. Are you prepared for either path?

* Oh, also: what do your development environments start to look like? This is partly related to microservices, but are you dockerizing your applications to work in the local dev environment? Who's responsible for that transition? As much as one tries to resist it, once you are on k8s you'll want to take advantage of it. Someone will build a small thing as a microservice or a worker that the monolith or other services depend on. How are you going to set that up locally? And again, who's going to help the devs accumulate that knowledge while they are busy trying to build the product? (Please don't pin your hopes on devs wanting to learn that after hours. That's just cruel.)
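(Referring back to the CI/CD bullet above: the bare-bones patch-based deploy step really is tiny; here's a hypothetical sketch with the official Python kubernetes client, where the Deployment name, namespace, and image tag are all made up. The hard part is everything around it: who owns it, how it's secured, and how failures surface.)

    # Hypothetical minimal "CD step": roll a new image onto an existing
    # Deployment named "web" via a strategic-merge patch; the built-in
    # rollout then replaces pods according to the Deployment's strategy.
    from kubernetes import client, config

    config.load_kube_config()   # a real pipeline would load credentials differently
    client.AppsV1Api().patch_namespaced_deployment(
        name="web", namespace="default",
        body={"spec": {"template": {"spec": {"containers": [
            {"name": "web", "image": "registry.example.com/web:v2"},  # made-up image
        ]}}}},
    )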

I can't write everything else I have in mind on this topic; it'd go on for a long, long time. But the common theme here is that the choice to adopt k8s is generally put on a table of technical pros and cons. I'd argue there's a significant hidden cost in human impact as well. Not all of these decisions are upfront, but they are pain that you will take on and have to deal with at some point.

Again, at what point does k8s make sense? Like I said, you ideally should be feeling pain before you start to consider k8s, because for nearly every feature of k8s there is a well-documented, well-established, well-secured parallel that already exists among the myriad service providers. It's a matter of taking careful stock of how much upfront pain you are trading away for pain that you WILL accumulate later.

PS - If anyone claims that adopting a newer technology is going to make things outright less painful, that's a good sign of immaturity. I've been there, and I picture myself smashing my head into a table every now and then when I think of how immature I used to be. Apologies to the people I've worked with at past jobs.

PPS - From the k8s site: "Designed on the same principles that allows Google to run billions of containers a week, Kubernetes can scale without increasing your ops team." <-- this is the kind of claim we need to take flamethrowers to. On paper, one dev with the kubectl+kops CLIs can scale services to run on thousands of nodes and millions of containers. But realistically, you don't get there without having taken on significantly more complex use cases. So no, nothing scales independently.


I fully agree with you, and have personally taken the path of using Docker Swarm as a stepping stone toward k8s, as it was so much easier to get along with. I would certainly recommend this to smaller businesses.


> The k8s ecosystem is like the JS framework ecosystem right now - There are no set ways of doing anything.

Given how both the JS and devops worlds seem to be progressing, is there any reason to believe this will change before the next thing comes along and k8s becomes a ghost town?


Very nicely written. While not a direct response to the OP, you articulated some great points on k8s. k8s will naturally succeed as the future of data-center orchestration as VMs give way to containers, but it is questionable whether everyone needs it.


I agree with you on the major points.

Also, migrating existing services to microservices might not be worth it, especially if you don't operate at massive scale.

"Keep it simple, stupid" is still a solid design principle, despite all the microservice/container hype.

Most businesses only need a couple of servers providing the service, spread redundantly with HA capability.



