Why is Kubernetes getting so popular? (stackoverflow.blog)
755 points by a7b3fa 36 days ago | 661 comments



Main benefits of Kubernetes:

• Lets companies brag about having # many production services at any given time

• Company saves money by not having to hire Linux sysadmins

• Company saves money by not having to pay for managed cloud products if they don't want to

• Declarative, version controlled, git-blameable deployments (see the sketch below this list)

• Treating cloud providers like cattle not pets
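To illustrate the declarative bullet, here is a minimal sketch of such a manifest (the name, image, and port are invented for illustration): the deploy becomes a one-line, git-blameable diff.

    # Hypothetical minimal Deployment manifest; names and image are placeholders.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
            - name: web
              image: registry.example.com/web:1.4.2  # bumping this tag is the deploy
              ports:
                - containerPort: 8080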

It's going to eat the world (already has?).

I was skeptical about Kubernetes but I now understand why it's popular. The alternatives are all based on kludgy shell/Python scripts or proprietary cloud products.

It's easy to get frustrated with it because it's ridiculously complex and introduces a whole glossary of jargon and a whole new mental model. This isn't Linux anymore. This is, for all intents and purposes, a new operating system. But the interface to this OS is a bunch of <strike>punchcards</strike> YAML files that you send off to a black box and hope it works.

You're using a text editor but it's not programming. It's only YAML because it's not cool to use GUIs for system administration anymore (e.g. Windows Server, cPanel). It feels like configuring a build system or filling out taxes--absolute drudgery that hopefully gets automated one day.

The alternative to K8s isn't your personal collection of fragile shell scripts. The real alternative is not doing the whole microservices thing and just deploying a single statically linked, optimized C++ server that can serve 10k requests per second from a toaster--but we're not ready to have that discussion.


I am ready! NetBSD is running on the toaster. I think haproxy can do 10K req/s. tcpserver on the backends. I only write robust shell scripts, short and portable.

As a spectator, not a tech worker who uses these popular solutions, I would say there seems to be a great affinity in the tech industry for anything that is (relatively) complex. Either that, or the only solutions people today can come up with are complex ones. The more features and complexity, the more something is constantly changing, the more a new solution gains "traction". If anyone reading has examples that counter this idea, please feel free to share them.

I think if a hobbyist were to "[deploy] a single statically linked, optimized [C++] server that can serve 10k requests per second from a toaster" it would be like a tree falling in the forest. For one, it is too simple; it lacks the complexity that attracts the tech worker crowd. And second, because it is not being used by a well-known tech company and not being worked on by large numbers of people, it would not be newsworthy.


I can see your point for small hobby projects. But enterprise web development in C++ is no fun at all. For example: "Google says that about 70% of all serious security bugs in the Chrome codebase are related to memory management and safety." https://www.zdnet.com/article/chrome-70-of-all-security-bugs...

Developer time for fixing these bugs is in most cases more expensive than throwing more hardware at your software written in a garbage-collected language.


That's why C++ is in brackets. :) I think the reason a hobbyist might be able to pull off something extraordinary is because he is not bound to the same ambition as a tech company. He can focus on performance, the 10K req/s part, and ignore the "requirement" of serving something large and complex that is likely full of bugs. "Developer time" is "hobby time". Done for the pleasure of it, not the money. Some of the most impressive software from a performance standpoint has been written by more or less "one man teams". I am glad I am not a tech worker. Even for someone who enjoys programming, it does not sound fun at all. No wonder there is so much cynicism.


True, but the alternative to C++ with that reasoning is Rust or Go (depending on your liking), not Ruby. And with both of these you can step around a lot of deployment issues, because a single server can be sufficient for quite high loads. Avoid distributed systems as much as you can: https://thume.ca/2020/05/17/pipes-kill-productivity/


I find that the big problems k8s solves are the usual change management issues in production systems.

What do we want to deploy, okay, stop monitoring/alerts, okay, flip the load balancer, install/copy/replace the image/binary, restart it, flip LB, do the other node(s), keep flipping monitoring/alerts, okay, do we need to do something else? Run DB schema change scripts? Oh fuck we forgot to do the backup before that!

Also now we haven't started that dependent service, and so we have to rollback, fast, okay, screw the alerts, and the LB, just rollback all at once.

And sure, all this can be scripted, run from a laptop. But k8s is basically that.
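To make that concrete, here is a rough sketch of how that checklist collapses into declared intent (the image name and probe path are invented); the fast-rollback case is a single "kubectl rollout undo".

    # Hypothetical sketch: the manual LB/restart dance expressed as desired state.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: api
    spec:
      replicas: 4
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 1  # take at most one replica out of rotation at a time
          maxSurge: 1        # bring up the replacement before the old one goes away
      selector:
        matchLabels:
          app: api
      template:
        metadata:
          labels:
            app: api
        spec:
          containers:
            - name: api
              image: registry.example.com/api:2.0.0
              readinessProbe:      # "flip the LB" happens automatically per pod
                httpGet:
                  path: /healthz   # placeholder health endpoint
                  port: 8080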

...

And we get distributedness very fast: as soon as you have 2+ components that manage state, you need to think about consistency. Even a simple cache is always problematic (as we all know from the cache invalidation joke).

Sure, going all in on microservices just because is a bad idea. Similarly k8s is not for everyone, and running DBs on k8s isn't either.

But the state of the art is getting there (e.g. the Crunchy Data PostgreSQL operator for k8s).


> I find that the big problems k8s solves are the usual change management issues in production systems.

This.

I was in a company where devops was just a fancy marketing term, developers would shit out a new release and then it was our problem (we the system engineers / operations people) to make it work on customers' installations.

I now work as a devops engineer in a company that does devops very well. I provide all the automation that developers need to run their services.

They built it, they run it.

I am of course available for consultation and support with that automation and kubernetes and very willing to help in general, but the people running the software are now the people most right for the job: those who built it.

As I said in my other comment: it's really about fixing the abstractions and establishing a common lingo between developers and operations.


It depends.

If you want to be a successful indie company, avoid cloud and distributed like the plague.

If you want to advance in the big corp career ladder, use Kubernetes with as many tiny instances and micro-services as you can.

"Oversaw deployment of 200 services on 1000 virtual servers" sounds way better than "started 1 monolithic high-performance server". But the resulting SaaS product might very well be the same.


I just tell people I use a mono repo to house all my single file microservices.

Php under apache.


Great description!

I run a monolithic ensemble that abstracts away the concept of multiple processes to deliver a unified API.

In short, it's multithreaded.


I recently talked to a non-SV software developer friend: yeah, I killed two days making a window show up on top of all others on Ubuntu with Qt (under KDE, so a supported business configuration...). After telling him that this is a standard feature of most X window managers and asking whether he used Wayland (he didn't know), we went into functionality. "Yeah, you can enter a command and execute it on a remote machine" - "isn't that what ssh is for, and something you can do with a 20-line shell script?" - "Yeah maybe, but some management type developed it while he was still a grunt, so we have to keep it...". I bet said management type is still bragging about how he introduced convenient remote command execution by writing a Qt app and his own server (or maybe he's using telnet...).


Or .NET and Java.

All this docker and k8s stuff just feels like reinventing application servers, only 10x more complex, as a means to sell consulting services.


Rust and Go could help you there, and their deployment story is as excellent as C/C++: just compile and ship.

However, their learning curve is pretty steep (particularly Rust) and most developers don’t enjoy having to worry about low-level issues, which makes recruitment and retention a problem. Whereas one can be reasonably proficient with Python/Ruby in a week, Java/C# is taught in school, and everyone has to know JS anyway (thanks for nothing, tweenager Eich), so it’s easy to pick up manpower for those.


Do those highly available developers have the same quality as the Rust developers you could get, though? (On average.) I'd wager there is a correlation between interest in deeper topics as required by Rust and good long term design and quality. I've seen how marvelously efficient a small team of very talented people can be. To replace them with the average highly available developers you would need far more than double the people and some extra manager, just because communication scales so badly. The 10x developer is a myth, but I think the 5x development team is realistic in the right circumstances.

Disclaimer: Neither do I claim to be a very good developer, nor do I think you, the reader, are only average. The mere fact that you are reading Hacker News is a strong indicator of your interest in reflection and self-improvement, regardless of your favorite language.


The problem might be that those people interested in learning new things, solving complicated issues and building good long-term architectures are usually not too interested in sitting in a cubicle without a window.

So big companies might be forced to settle for less skilled developers, simply because the top tier is doing their own thing. I assume that's also why acqui-hiring is a thing.


Cubicles? You're too kind! Open floor plans are so much more 'creative' and 'communicative' and storage efficient. Also, they look so nice when you have visitors. (No joke: Our CEO told us he was so impressed with the office of this other company he visited – after being there once and not talking to the employees specifically about how they liked it! I'm shocked again and again how many decisions top managers like to make without listening to those it affects.)


Except Java and .NET also offer similar capabilities, yet many of those students seem happier to reinvent the wheel and be cooler.

For every time I have to deal with k8s I deeply miss application servers.


Additionally, for larger enterprises, the operational overhead quickly grows and slows down new app development if one is tied to a traditional server approach.

Within our teams, we’ve found we can do with an (even) higher level of abstraction by running apps directly on PaaS setups. We found this sufficient for most of our use-cases in data products.


> because it is not being used by a well-known tech company and not being worked on by large numbers of people, it would not be newsworthy.

And at this point the hobbyist might wonder, "why isn't my toaster software being used by well-known companies? where are the pull requests to add compatibility for newer toaster models?"

> As a spectator, not a tech worker who uses these popular solutions, I would say there seems to be a great affinity in the tech industry for anything that is (relatively) complex.

I think you have it backwards. General/abstract solutions (like running arbitrary software with a high tolerance for failure) have broad appeal because they address a broad problem. Finding general solutions to broad problems yields complexity, but also great value.


"And at this point the hobbyist might wonder..."

Not this one. :)


> great affinity [...] for anything that is (relatively) complex.

That, and a mixture of sunk cost fallacy and a lack of the ability to step back and review whether the chosen solution is really better/simpler rather than a hell of accidental complexity. If you've spent countless months to grok k8s and sell it to your customer/boss, it just has to be good, doesn't it?

Plus, there's a great desire to go for a utopian future cleaning up all that's wrong with current tech. This was the case with Java in the 2000s, and is the case with Rust (and to a lesser degree with WASM) today, and k8s. Starting over is easier and more fun than fixing your shit.

And another factor is the deep pockets of cloud providers who bombard us with k8s stories, plus devs with an investment in k8s and/or Stockholm syndrome. Same story with webdevs longing for nuclear weapons a la React for relatively simple sites to make themselves attractive on the job market, until the bubble collapses.

But like with all generational phenomena, the next wave of devs will tear down daddy-o's shit and rediscover simple tools and deploys without gobs of yaml.


> Plus, there's a great desire to go for a utopian future cleaning up all that's wrong with current tech. This [...] is the case with Rust [...]. Starting over is easier and more fun than fixing your shit.

The domain I'm working in might be non-representative, but for me fixing my shit systematically means switching from C++ to Rust. The problems the borrow checker addresses come up all the time, either in the form of security bugs (because humans are not good enough for manual memory management without a lot of help) or in the form of bad performance (because of reference counting or superfluous copies to avoid manual memory management).

But otherwise I agree with you that if we never put in the effort to polish our current tools, we'll only ever get the next 80%-ready solution out of the hype train.


Have you tried modern C++ before switching to Rust? Or were you already using it and it didn't provide all the necessary features?


I am all for modern C++, and although I like Rust, C++ is more useful for my line of work.

However, it doesn't matter how much modern we make C++, if you don't fully control your codebase, there is always going to exist that code snippet written in C style.


Does it have a borrow checker?


In the form of static analysis: the lifetime checker.


So no?


Depends, not on the language, that is correct.

However I always have static analysers enabled on my builds, so it is almost as if they were part of the language. Regardless of whether we are talking about Java, C# or C++.

Just like most people that are serious about Rust have clippy always enabled, yet it does stuff that isn't part of the Rust language spec.


I think the popularity of Kubernetes comes from the fact that it can run all kinds of workloads on a single platform, not really from its performance. I've been in the Ops business for over a decade now, and in my case every single customer has a completely different kind of application architecture, sometimes even for similar use cases (e.g. an ecom app or chatbots). With Kubernetes the differences are largely irrelevant, encapsulated nicely within the container. This means in my universe I can run all of our customers on a single architecture and the differences are nicely abstracted.


Same feeling here but I came to understand that it's the enterprise and the design by committee that brings in the complexity.

K8S is developed by a multitude of very large companies. Each with their own agenda/needs. All of them have to be addressed. Thus the complexity. If you think about it they probably manage to keep the complexity to relatively low levels. Maybe because it is pushed to the rest of the ecosystem (see service meshes for example).

Being pushed by the behemoths also explains the popularity. Smaller companies and workers feel that this is a safe investment in terms of money and time familiarizing with the tech stack so they jump on. And the loop goes on.

The main business reason for all that, though, I think is the need of Google et al. to compete with AWS by creating a cloud platform that becomes a standard and really belongs to no one. In this sense it is a much better, more versatile and open-ended OpenStack attempt.


That's kind of why people use Google Go. No memory issues, statically linked, easy cross-compile, and it can handle 10k requests on a toaster.

And yes, there are less fancy companies, like the one where I work, where we don't use Kubernetes because it's kind of overkill if all of your production workload fits onto 2 beefy bare metal servers.

I can see a point in using Docker to unify development and production environments into one immutable image. But I have yet to see a normal-sized company that gets a benefit from spreading out to hundreds of micro instances on a cloud and then coordinating that mess with Kubernetes. Of course, it'll be great if you're a unicorn, but most people using it are planning for way more scaling than what they'll realistically need and are, thus, overselling themselves on cloud costs.


Running on only 2 servers is kinda risky, no? You're one hardware failure during a maintenance on the other away from an outage.


Yes, but with the cloud I'm also only one platform issue away from not having my virtual instances start correctly. Like the one on May 22nd this year.

Last year, my bare metal website had 99.995% uptime. Heroku only managed 99.98%.

Of course, I could further reduce risk by having a hot standby server. But I'm not sure the costs for that are warranted, given the extremely low risk of that happening.


I didn't bring up the cloud but ok. Can you link the outage? A googling couldn't find it.


> I would say there seems to be a great affinity in the tech industry for anything that is (relatively) complex.

hah, i've noticed that too - specifically around k8s/deployments/system architecture too. i've taken to calling it complexity fetishisation.

i think it stems from the belief/hope that, whilst they don't have "google sized" data today, they need to allow for it.

i'll take two toasters, please.


The reason for complexity fetishisation (great term!) is to show everybody else how amazingly smart you are. It's the infrastructure version of "clever" code.


Except to some of us it shows just the opposite.


It's starting to get depressing at my age. After nearly 20 years in the industry I think I have a knack for keeping things as "simple as necessary and no more" and managing complexity. Unfortunately interviews are full of bullshit looking at pedantic nitpicking, or stuff I learned 20 years ago in university and have never needed to use since.


Someone else above in the thread believes there is "great value" in being able to accommodate complexity, what the commenter refers to as a "general" solution, as well as this being worth any cost in performance. God help you if these are the type of people who are doing the interviews. Despite pedantic technical questions, I doubt they are actually screening for crucial skills like reducing complexity. Rather, the expectation is that the tech worker will tolerate complexity, including the "solutions" to managing it that are themselves adding more complexity, e.g., abstraction.


Just focus on staying relevant with new languages and ignore the hype train that is k8s. It's something you can learn on the job and can easily talk your way around in an interview.

The older I get, the more I realise the less I want in my stacks.

This article listed as benefits: frequent major updates each year, new features, and no sign of it slowing down. I just cringed and wondered who the fuck is asking for this headache?

I've been working a lot with WordPress lately and the stability of the framework is spoiling me rotten.


I predict that you'll get down-voted into oblivion, but I agree with you :)


Deploy that toaster on k8s and now you have a vc funded startup.


This, and more but it's all in the same vein.

I was pretty skeptical too but then handed over a project which was a pretty typical mixed bag: Ansible, Terraform, Docker, Python and shell scripts, etc... Then I realized relying on Kubernetes for most projects has the huge benefit of bringing homogeneity to the provisioning/orchestration which improves things a lot both for me and the customer or company I work for.

Let's be honest here, in many cases it does not make a difference whether Kubernetes is huge, inefficient, complicated, bloated, etc... or not. It certainly is. But just the added benefit of pointing at a folder and stating : "this is how it is configured and how it runs" is huge.

I was also pretty skeptical of Kustomize but it turned out to be just enough.

So, like many here. I kind of hate it but it serves me well.


As a fellow Kubernetes-hater, this is the best explanation I've read for its virality. Thanks!


I am still not convinced by Kubernetes, but you make a good point.


I don't agree with the last paragraph.

Your C++ example is orthogonal to the deployment aspect because it discusses the application. Kubernetes and the fragile shell scripts are about the deployment of said application.

How are you going to deploy your C++ application? Both options are available, and I would wager that in most cases, Kubernetes makes more sense, unless you have strict requirements.


Kubernetes is for orchestrating a distributed system. What I was suggesting is to (1) make a monolith and (2) make it fast, light and high-throughput. The goal of your service is to reliably serve users at scale; this is just another way of doing it, just much more esoteric.

A "C++ monolith" allows me to potentially bypass a lot of this deployment stuff because it could serve lots (millions) of users from a single box.


> A "C++ monolith" allows me to potentially bypass a lot of this deployment stuff

No it doesn't. Let's assume you write that application as a C++ monolith. Congratulations, you now have source code that could potentially serve 10k users on a toaster... If only you could get it onto that toaster. How are you going to start the databases it needs? How are you going to restart it when it crashes, or worse: When it still runs but is unresponsive. How are you going to upgrade it to a new version without downtime? How are you going to do canary releases to catch bugs early in production without affecting all users? How do you roll back your infrastructure when there is an issue in production? How do you notice when your toaster server diverges from its desired state? How do you handle authorization to be compliant with privacy regulations? I'd love to see that simple and safe shell script of yours which handles all those use cases. I'm sure you could sell it for quite a bit of money.
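To pick one of those questions, "restart it when it still runs but is unresponsive" comes down to a few declarative lines; a rough sketch, with the image and health endpoint as placeholders:

    # Hypothetical liveness probe: the kubelet polls the app and restarts the
    # container once the probe keeps failing.
    apiVersion: v1
    kind: Pod
    metadata:
      name: monolith
    spec:
      containers:
        - name: monolith
          image: registry.example.com/monolith:1.0.0
          livenessProbe:
            httpGet:
              path: /healthz       # endpoint the app is assumed to expose
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 5
            failureThreshold: 3    # roughly 15s unresponsive -> restart

Zero-downtime upgrades and canary releases are expressed in the same declarative style, via Deployment strategies and label selectors, rather than scripts.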

What you fail to understand is that k8s never was about efficiency. Your monolith may work at 10k users with a higher efficiency but it can never scale to a million. At some point you can't buy any bigger toasters and have no choice but to make a distributed system.

Besides, microservice vs monolith is orthogonal to using k8s.


>Kubernetes is for orchestrating a distributed system.

No it's not. You can use it to run a bunch of monoliths too. K8s provides a common API layer that all of your organisation can adhere to. Just like containers are a generic encapsulation of any runnable code.


Sure, but you could just use plain Docker for the monolith. Containerization isn't the issue. If you're not orchestrating and connecting an array of services, what's the value add of K8s?


Standardization of deployment logic, configuration management, networking, rbac policies, ACL rules, etc etc. across on-prem or any cloud provider.

I can leave my current job, jump into a new one and start providing value within less than a couple of days. Compare that to spending weeks if not months trying to understand their special snowflake of an infrastructure solving the same problems already solved a million times before.


You're totally confusing performance/throughput with reliability. No, "a single box" is NOT reliable.


I run a Dockerized monolithic application in ECS, and I'll be switching to Kubernetes soon. I am 100% sold on this approach and will never go back to any deployment methods that I've used in the past (Capistrano, Ansible, Chef, Puppet, Saltstack, etc.)

I use Convox [1] which makes everything extremely simple and easy to set up on any Cloud provider. They have some paid options, but their convox/rack [2] project is completely free and open source. I manage everything from the command-line and don't use their web UI. It's just as easy as Heroku:

    convox rack install aws production

    convox apps create my_app

    convox env set FOO=bar

    convox deploy

You can also run a single command to set up a new RDS database, Redis instance, S3 bucket, etc. Convox manages absolutely everything: secure VPC, application load balancer, SSL certificates, logs sent to CloudWatch, etc. You can also set up a private rack where none of your instances have a public IP address, and all traffic is sent through a NAT gateway:

    convox rack params set Private=true

This single command sets up HIPAA and PCI compliant server infrastructure out of the box. Convox automatically creates all the required infrastructure and migrates your containers onto new EC2 instances. All with zero downtime. Now, all you need to do is sign a BAA with AWS and make sure your application and company complies with regulations (access control, encryption, audit logs, company policies, etc.)

I run a simple monolithic application where I build a single Docker image, and I run this in multiple Docker containers across 3+ EC2 instances. This has made it incredibly easy to maintain 100% uptime for over 2 years. There were a few times where I've had to fix some things in CloudFormation or roll back a failed deploy, but I've never had any downtime.

My Docker images would be much smaller and faster if I built my backend server with C++ or Rust instead of Ruby on Rails. But I would absolutely still package a C++ application in a Docker image and use ECS / Kubernetes to manage my infrastructure. I think the main benefit of Docker is that you can build and re-use consistent images across CI, development, staging, and production. So all of my Debian packages are exactly the same version, and now I spend almost zero time trying to debug strange issues that only happen on CI, etc.

So now I already know I want to use Docker because of all these benefits, and the next question is just "How can I run my Docker containers in production?". Kubernetes just happens to be the best option. The next question is "What's the easiest way to set up Docker and Kubernetes?" Convox is the holy grail.

The application language or framework isn't really relevant to the discussion.

[1] https://convox.com

[2] https://github.com/convox/rack

P.S. Things move really fast in this ecosystem, so I wouldn't be surprised if there are some other really good options. But Convox has worked really well for me over the last few years.


So you build your docker images all by yourself using just a Dockerfile and your statically linked app based on RHEL - for all that HIPAA and PCI compliance?? IIRC the current hottest shit of this shitshow (the dockerhub-using "ecosystem") was to use ansible in your docker builds because it's oh so declarative.


No, not just for HIPAA/PCI compliance, that's just one of the many benefits. Here's some more reasons why I love Convox, Kubernetes/ECS, Docker:

* Effortlessly achieve 100% uptime with rolling deploys

* Running a single command to spin up a new staging environment that is completely identical to production

* Easily spinning up identical infrastructure in a different AWS region (Europe, Asia, etc.)

* Easily spinning up infrastructure inside a customer's own AWS or Google Cloud account for on-premise installations

* Automatic SSL certificates for all services. Just define a domain name in your Convox configuration, and it will automatically create a new SSL certificate in ACM and attach it to your load balancer.

* Automatic log management for all services

* Very easily being able to set up scheduled tasks with a few lines of configuration

* Being able to run some or all of my service on AWS Fargate instead of EC2 with a single command

* Ease of deploying almost any open source application in a few minutes (GitLab, Sentry, Zulip Chat, etc.)


Well, I am not really interested in your Convox ads but more in your claim that it somehow makes the typical docker workflow of running random software from the net HIPAA and PCI compliant? That's an interesting claim, especially with your description of it as zero-effort.


No, Convox doesn't automatically make any application compliant. Convox makes it far easier to achieve HIPAA/PCI compliance by easily setting up compliant server infrastructure:

https://docsv2.convox.com/reference/hipaa-compliance

Note that dedicated instances are no longer required for HIPAA compliance [1]. Also note that the private Convox console is completely optional. You can achieve all of this with the free and open source convox/rack project: https://github.com/convox/rack

As I mentioned in my original comment, you still need to do a lot of work to set up company policies and make sure your application complies with all regulations.

You should also be aware that I'm comparing Convox with some other popular options for HIPAA-compliant hosting:

* Aptible: https://www.aptible.com (Starts at $999 per month)

* Datica: https://datica.com (I think it starts around $2,000 per month, but not 100% sure)

These companies do provide some additional security and auditing features, but I think there's no reason to spend thousands of dollars per month when Convox can get you 95% of the way in your own AWS account. PLUS: If you have any free AWS credits from a startup program, you might not need to pay any hosting bills for years.

[1] https://aws.amazon.com/blogs/security/aws-hipaa-program-upda...


In a world where your architecture is that simple I don't think kubernetes would be the choice for long.

I think for the average application there's still something to be said for manual cross-layer optimization between infrastructure, application, and how both are deployed.

What I mean is we can't yet draw too clear a line between the application and how it's deployed because there are real tradeoffs between keeping future options open and getting the product out the door. A strength of kubernetes is that if you get good at it it works for a variety of projects, but a lot of effort is needed to get to that point and that effort could have gone into something else.


Even then there's the question of "where are the logs?" "how is the application deployed?" etc.


> Company saves money by not having to hire Linux sysadmins

Citation? In my experience companies hire more sysadmins when adopting k8s. It's trivial to point at the job reqs for it.

> Company saves money by not having to pay for managed cloud products if they don't want to

Save money?! Again citation. What are you replacing in the cloud with k8s? In my experience most companies using k8s (as you already admitted) don't have a ton of ops experience and thus use more cloud resources.

> Treating cloud providers like cattle not pets

Again. Citation? Companies go multi-cloud not because they want to but because they have different teams (sometimes from acquisition) that have pre-existing products that are hard to move. No one is using k8s to get multi-cloud as a strategy.

> It's going to eat the world (already has?).

No it won't. It's actually on the downtrend now. Do you work for the CNCF? Can you put in a disclaimer if so?

> just deploying a single statically linked, optimized C++ server that can serve 10k requests per second from a toaster

completely unnecessary; most of the HN audience is not creating a c++ webserver from scratch; most of the HN audience can trivially serve way more than 10k reqs/sec from a single vm (node, rust, go, etc. are all easily capable of doing this from 1 vcpu)


You said everything I wanted to say.


Disagree with the GUI part. Text based configuration is stable, version-controllable, and intuitive to use. Look at Xcode for a nightmare of GUI configuration.


> stable, version-controllable, and intuitive to use

None of those features are inherently impossible to do with GUIs. Alternatively, you can have a GUI editing your text based configuration.

Although, it doesn't look as cool, so here we are.


There are quite a few GUI tools for k8s.

Currently mainstream k8s is text based, because it's still too fast moving and new. Creating a great GUI would be a serious overhead and there's not enough interest/demand for it. It'll come eventually.


Rancher does a lot of the orchestrating of k8s clusters and works well in my experience.


Rancher is even more approachable and streamlined than OpenShift. Yeah, we need k8s to be more streamlined. It's a fragmented cluster-something right now. There is no pride in making systems more complex, nor is there pride in knowing how to operate them.


OpenShift's web interface lets you do a lot (other alternatives are of course available). And it's quite nice being able to edit the YAML of everything via the web interface as well!


> • Company saves money by not having to pay for managed cloud products if they don't want to

In some cases, the cost of a managed cloud product may be cheaper than the cost of training your engineers to work with K8s. It just depends on what your needs are, and the level of organizational commitment you have to making K8s part of your stack. Engineers like to mess around with new tech (I'm certainly guilty of this), but their time investment is often a hidden cost.

> The alternatives are all based on kludgy shell/Python scripts or proprietary cloud products.

The fact that PaaS products are proprietary is often listed as a detriment. But, how detrimental is it really? There are plenty of companies whose PaaS costs are insignificant compared to their ARR, and they can run the business for years without ever thinking about migrating to a new provider.

The managed approach offered by PaaS can be a sensible alternative to K8s, again it just depends on what your organizational needs are.


"The alternative to K8s isn't your personal collection of fragile shell scripts. The real alternative is not doing the whole microservices thing and just deploying a single statically linked, optimized C++ server that can serve 10k requests per second from a toaster--but we're not ready to have that discussion."

You are writing this while just yesterday I was thinking about how to extend my current home k8s setup even further.

I would even manage that little c++ tool through k8s.

K8s brings plenty of other things out of the box:

- Rolling update

- HA

- Storage provisioning (which makes backup simpler)

- Infrastructure as code (whatever your shellscript is doing)
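For instance, the storage provisioning point is itself just another manifest; a minimal sketch, with the claim name and size chosen arbitrarily:

    # Hypothetical PersistentVolumeClaim; the cluster's storage class
    # provisions the actual volume behind it.
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: app-data
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi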

I think that the overhead k8s requires right now will become smaller over the years; it will be simpler to use, and it will become more and more stable.

It is already a really simple and nice control plane.

I like to use a few docker containers with compose. But if I already use docker compose for 2 projects, why not just use k8s instead?


You still need quite a lot of stuff for that one, statically linked, heavily optimized C++ server. In a way, that's actually what k8s comes from ...

How do you manage deployments for that C++ monolith? How is the logging? Logrotate, log gathering and analysis? Metrics, their analysis and display? What happens when you have software developed by others that you might also want to deploy? (If you can run a company with only one program ever deployed, I envy you).

All of that is simplified by kubernetes by simply making everything follow a single way - "classical" approaches tend to make Perl blush with the amount of "There is more than one way to do it" that goes on.


> • Company saves money by not having to hire Linux sysadmins

.. but hire others to manage k8s? Or existing software engineers have to spend time doing so?

Many of the other points don't seem unique to k8s either.

I do like the alternative you've suggested though.


Are there any decent GUI services for creating the YAML files?

Most of it can be managed by text boxes on the front-end with selections and then it can just generate or edit the required files at the end of a wizard?


Your static app won't scale. Once your kernel is saturated with connections, once your buffers are full, you will get packets dropped. Your app may crash unexpectedly so you need to run it in an infinite loop. And of course your static example is a simple echo app or hello world. That works fine from a toaster. In the real world we use databases where every store takes a significant time to process and persist. You quickly outgrow a single server. Then you need distributed systems and a way to do versioning, so you use containers, and then you need an orchestrator, so you pick K8S because it's the most mature and there are many resources around. And then you can even do rolling updates and rollbacks. Finally you use Helm, Terraform and Terragrunt and never look back. It works surprisingly well. I lost several years by learning all this stuff; it was difficult but it was worth it. I have more visibility into everything now thanks to metrics-server, Prometheus Operator, Grafana, Loki, and I have two environments so I can update deployments in one environment and test; once tested OK, I can apply to live. No surprises. No need to run 5 year old versions of software and fear updating it...


AFAIK, Yandex Taxi uses a "single statically linked, optimized C++ server".


Can't agree more. To this, add the shifting-sands landscape of the ecosystem there.

But then again, it's actually the first public/popular attempt at a cloud OS. There might be a next one with better ergonomics than YAML.


Well, aren't OpenShift and Rancher offering those ergonomics?


I've never coded in C++, so I'm curious - what is the feedback loop like? My understanding is that since C++ is a compiled language, you can't "hit refresh" the same way you can in JavaScript/Ruby/etc.

Is that an incorrect understanding? I know C++ is supposed to be great for performance, but in truth I've never needed anything to be that fast. And if I can get the job done just as well with something I already know, I won't bother learning something like C++ which has a reputation for not being approachable.

But maybe I don't have full context?


Technically you can have some sort of command in your C++ program to make it fork itself, that is, restart itself from disk, maybe after freeing non-shareable resources like listening sockets. One could even imagine that this command includes self-recompilation by invoking make (or whatever your build system is).

An alternative is to have your program expose its pid somewhere, and your make file could send a signal to that pid when a new version is ready. The advantage is that if your program crashes on startup, you don't have to do something different to restart it.

If your application has to include an "auto-update" feature (like browsers), use that instead - it's certainly better to eat your own dog food. Maybe just hack it a little bit so that you can force a check programmatically (e.g. by sending it a signal) and so that it connects to a local updates server.

It is true that C++ is an overly complicated language; if you don't need maximal performance, you have a lot of AoT languages that are a bit slower (something around 0.5x C) but more "user-friendly". In particular, if you are into servers and want fast edit-compile-run loops, Go might be a good choice.


> I've never needed anything to be that fast

In a world where you are billed by resources used, wouldn’t it be a good idea to have light and fast services that don’t consume those resources?

I’ve been wondering this for a time now.


The part of the equation you are missing is how little traffic the majority of business applications end up seeing.

A B2B app that has at most 10 concurrent requests can run on the smallest EC2 instance whether it's written in PHP or written in C++.

Bandwidth doesn't change with language choice and neither does the storage requirements of your app so those billable items don't come into the equation either.

So the CPU cost can effectively be dropped off of 95% of the apps out there today. At that point your main variable cost between C++ and something like PHP/Javascript is going to be the cost of development. All I can say to that is that it's a lot harder to find developers who can write C++ web apps at the same pace as developers slinging PHP for web apps. There is a reason Facebook uses a PHP derivative for huge portions of its web backend.


I agree with you.

So maybe the question we should ask ourselves is: why isn’t there a smaller, cheaper EC2 instance (or one from any provider other than AWS)?

This industry is tailoring the levels. Of course it’s understandable because, well, they live on it. And they count on small instances to share hardware resources to overbook said hardware.

And I don’t blame them for that, I’m doing the same on my own bare metal servers, hosting multiple websites for clients and making money on it.

But I have the feeling that there is a lot of resource loss somewhere in it, just for the sake of losing it because it’s easier. Maybe I’m wrong.


Scaleway, Hetzner, OVH, DigitalOcean, etc.

AWS LightSail.

... also the t1.micro is small and cheap. Could you give some concrete numbers?


I only use AWS when consulting customers prefer it.

I get a lot of mileage serving a lot of content for several domains on a free Google Cloud Platform f1 micro instance. I also prefer GCP when I need a lot of compute for a short time.

Hetzner has always been my choice when I need more compute for a month or two. For saving money for VPSs OVH and DO have also been useful but I don’t use them very often.


Back when Joyent still operated a public cloud you could get machines with 100mb of ram.

The problem as you go cheaper is the cost of ipv4 addresses.


Go will get you 90% of the way there in performance. I recommend it instead of C++ because it compiles fast for that edit-build-refresh cycle. Go also has the benefit of producing static binaries by default, which solves a lot of the problems Docker was created for.

C++ is more for the extreme control over memory. An optimized C++ server can max out the NIC even on a single core and even with some text generation/parsing along the way.


If you're a ruby user looking for extra performance, try Crystal[1]. Everything you love + types compiled to native binaries. You can set up sentry[2] to autocompile and run every time you save, so the feedback loop is just as tight.

[1]: https://crystal-lang.org/

[2]: https://github.com/samueleaton/sentry


It doesn't have to be C++ of course. The main point of OP lies in C++ being fast and easy to deploy (at least it can be). Go, Rust, and to some extent C# and Java also fall in that category. This feature set becomes interesting because it has the potential to simplify everything around it. If you don't need high fault tolerance you can go a very long way with just a single server, maybe sprinkled with some CDN caching in front if your users are international.

If you do need higher availability you can go the route StackOverflow was famous for for quite some time: having a single ReadWrite master server, a backup master to take over, and a few ReadOnly slave servers, IIRC. With such setups you can ignore all the extra complexities cloud deployments bring with them. And just because such simple setups make it possible to treat servers like pets, doesn't mean they have to be irreproducible undocumented messes.


The other day (actually years ago) I did a gamejam in Go, and I bound Cmd+R to essentially the following:

    system("go build")
    exec("./main")

Not a literal hot code reload that some advanced stuff gets to enjoy, but nice enough to shorten the feedback loop.


Working in strongly statically typed languages is quite different, because the types (and the IDE/compiler) guides you. You don't have to hit refresh that often.

Even just working in TypeScript with TSed and a few basic strongly typed concepts (Rust's Result equivalent in TS, Option or Maybe, and typed http request/response, and Promise and JSON.parse) makes a big difference.

A lot less of "okay, just echo/print/log this object (or look up documentation, eww), see what it looks like and figure out how to transform it into what I need." Instead you do that in the IDE.


I like that alternative. Especially now that we have Rust and Actix Web.


I would speculate that for companies these two statements will be mutually exclusive:

• Company saves money by not having to hire Linux sysadmins

• Company saves money by not having to pay for managed cloud products if they don't want to

As a developer I want to write code, not manage a Kubernetes installation. If my employer wants the most value from my expertise they will either pay for a hosted environment to minimize my time managing it or hire dedicated staff to maintain an environment.


Absolutely agree.

A lot of people are just really interested in having something complex instead of understanding their actual needs.


And this is why I only give Linux like one decade more to still be relevant on the server.

With hypervisors and managed environments taking over distributed computing, whether there is a kernel derived from Linux or something completely different is a detail that only the cloud provider cares about.


That doesn't really make sense in the context of docker.

It's still Linux inside the container. Even if it's some abstract non-Linux service thing running the container, what happens in the container is still the concern of the developer.


Yes it does; my application written in Go, Java, or .NET doesn't care if the runtime is bare metal, running on a type 2 hypervisor, a type 1, or some other OS.

I run Docker on Windows Containers, no Linux required.

There are also the ugly named serverless deployments, where the kernel is meaningless.


or a Free Pascal server (powered by mORMot (https://github.com/synopse/mORMot))? Natively compiled, high performance, and supports almost all OS's and CPU's


404 not found


Try removing the trailing parenthesis.


What is your estimate of when we'll be ready to have that discussion?


So now it's kubernetes + kludgy shell/Python scripts.


If it makes things more deterministic, then yeah, why not?


I don't think that kubernetes is getting popular at all.


If you really think that deploying a single binary is an alternative to microservices then you don't understand microservices.


> The real alternative is not doing the whole microservices thing and just deploying a single statically linked, optimized C++ server that can serve 10k requests per second from a toaster--but we're not ready to have that discussion.

The alternative is to have an old and boring cluster of X identical Java nodes which host the entire backend in a single process... The deployment is done by a pedestrian bash script from a Jenkins. It used to work fine for too long I guess, and folks couldn't resist "inventing" microservices to "disrupt" it.


I’ll take a shot.

k8s is popular because Docker solved a real problem and Compose didn’t move fast enough to solve the orchestration problem. It’s a second order effect; the important thing is Docker’s popularity.

Before Docker there were a lot of different solutions for software developers to package up their web applications to run on a server. Docker kind of solved that problem: ops teams could theoretically take anything and run it on a sever if it was packaged up inside of a Docker image.

When you give a mouse a cookie, it asks for a glass of milk.

Fast forward a bit and the people using Docker wanted a way to orchestrate several containers across a bunch of different machines. The big appeal of Docker is that everything could be described in a simple text file. k8s tried to continue that trend with a yml file, but it turns out managing dependencies, software defined networking, and how a cluster should behave at various states isn’t the greatest fit for that format.

Fast forward even more into a world where everybody thinks they need k8s and simply cargo cult it for a simple Wordpress blog and you’ve got the perfect storm for resenting the complexity of k8s.

I do miss the days of ‘cap deploy’ for Rails apps.


> k8s is popular because Docker solved a real problem and Compose didn’t move fast enough to solve the orchestration problem. It’s a second order effect; the important thing is Docker’s popularity.

I introduced K8s to our company back in 2016 for this exact reason. All I cared about was managing the applications in our data engineering servers, and Docker solved a real pain point. I chose K8s after looking at Docker Compose and Mesos because it was the best option at the time for what we needed.

K8s has grown more complex since then, and unfortunately, the overhead in managing it has gone up.

K8s can still be used in a limited way to provide simple container hosting, but it's easy to get lost and shoot yourself in the foot.


>Before Docker there were a lot of different solutions for software developers to package up their web applications to run on a server.

There are basically two relevant package managers. And say what you will about systemd, service units are easy to write.

It's weird to me that the tooling for building .deb packages and hosting them in a private Apt repository is so crusty and esoteric. Structurally these things "should" be trivial compared to docker registries, k8s, etc. but they aren't.


.rpm and .deb are geared more towards distributions' needs. Distributions want to avoid multiplying the number of components for maintenance and security reasons. Bundling dependencies with apps is forbidden in most distribution policies for these reasons, and the tooling (debhelpers, rpm macros) actively discourages it.

It's great for distributions, but not so great for custom developments where dependencies can either be out of date or bleeding edge or a mix of the two. For these, a bundling approach is often preferable, and docker provides a simple to understand and universal way to achieve that.

That's for the packaging part.

Then you have the 2 other parts: publishing and deployment.

For publishing, Docker was created from the get-go with a registry, which makes things relatively easy to use and well integrated. By contrast, for rpm and deb, even if something analogous exists (aptly, pulp, artifactory...), it is much more a set of tools created over time which work on top of one another, giving a less smooth experience.

And then you have the deployment part, and here, with traditional package managers, it is difficult to delegate some installs (typically, the custom apps developed in-house) to the developers without opening up control over the rest of the system. With Kubernetes, developers gained this autonomy of deployment for the pieces of software under their responsibility whilst still maintaining separation of concerns.

Docker and Kubernetes enabled cleaner boundaries, more in line with the realities of how things are operated for most mid to large scale services.


Right, the bias towards distro needs is why packaging is so hard to do internally; I'm just surprised at how little effort has gone into adapting it.

You need some system mediating between people doing deployments and actual root access in both cases. The "docker" command is just as privileged as "apt-get install." I have always been behind some kind of API or web UI even in docker environments.


Which two package managers do you mean?

dpkg, rpm, nix, snap, dnf, and I'm sure someone is going to respond with package managers I forgot.


You can always simplify your IT and require everyone to use only a small subset of Linux images which were preapproved by your security team. And you can make those to be only deb or rpm based Linux distributions.

The only problem with these Linux based packaging for deployments are Mac users and their dev environment. Linux users are usually fine, but there always had to be some Docker like setup for Mac users.

If we could say that our servers run on Linux and all users run on some Linux (WSL for Windows users), then deployments could have been simple and reproducible: rpm packages for code and rpm packages containing systemd configuration.

Complete breeze and no need for Docker or K8s.


Late in replying, but my company has drop in build rules for Go binaries that automatically publish both deb and brew packages.


I'm guessing they meant to say package formats, in which case they'd be deb and rpm. Those were the only two that are really common in server deployments running linux I'd guess.

dnf is a frontend to rpm, snap is not common for server use-cases, nix is interesting but not common, dpkg is a tool for installing .deb.


dpkg and rpm cover the vast majority of production linux servers, which makes them the "two relevant" package managers.


> Docker solved a real problem

> everybody thinks they need k8s and simply cargo cult it for a simple Wordpress blog

docker _also_ has this problem though. there are probably 6 people in the world that need to run one program built with gcc 4.7.1 linked against libc 2.18 and another built with clang 7 and libstdc++ at the same time on the same machine.

and yes, docker "provides benefits" other than package/binary/library isolation, but it's _really_ not doing anything other than wrapping cgroups and namespacing from the kernel - something for which you don't need docker (see https://github.com/p8952/bocker).

docker solved the wrong problem, and poorly, imo: the packaging of dependencies required to run an app.

and now we live in a world where there are a trillion instances of musl libc (of varying versions) deployed :)

sorry, this doesn't have much to do with k8s, i just really dislike docker, it seems.


Of course the correct approach is the NixOS one. What docker really did is solve the packaging, distribution, and updating problem.

The dependency thing is just the fallout of the (bad) default provided by distributions.


I am a big fan of using namespaces via docker, in particular for development. If I want to test my backend component I can expose a single port and then hook it up to the database, redis, nginx etc. via docker networks. You don't need to worry about port clashes and it's easy to "factory reset".
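A minimal docker-compose sketch of that kind of dev setup (the service names and images are arbitrary): only the backend publishes a port, everything else stays on an internal network, and "docker-compose down -v" is the factory reset.

    # Hypothetical docker-compose.yml for local development.
    version: "3.8"
    services:
      backend:
        build: .
        ports:
          - "8080:8080"        # the single exposed port
        networks: [internal]
      db:
        image: postgres:12
        networks: [internal]
      redis:
        image: redis:5
        networks: [internal]
    networks:
      internal: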

In production this model is quite a good way to guarantee your internal components aren't directly exposed too.


that's sort of my point though - namespacing is a great feature that allows for more independent & isolated testing and execution, there is no doubt. docker provides none of it.

i would argue that relying on docker hiding public visibility of your internal components is akin to using a mobile phone as a door-stop - it'll probably work but there are more appropriate (and auditable) tools for the job.


> docker _also_ has this problem though. there are probably 6 people in the world that need to run one program built with gcc 4.7.1 linked against libc 2.18 and another built with clang 7 and libstdc++ at the same time on the same machine.

You are supposed to keep only a single process inside one docker container. If you want two processes to be tightly coupled then use multi-container pods.
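A minimal sketch of such a pod (names and images are made up): the two containers are scheduled together and can talk over localhost or a shared volume.

    # Hypothetical multi-container pod: an app plus a tightly coupled sidecar.
    apiVersion: v1
    kind: Pod
    metadata:
      name: app-with-sidecar
    spec:
      containers:
        - name: app
          image: registry.example.com/app:1.0.0
        - name: log-shipper
          image: registry.example.com/log-shipper:0.3.0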


Hit the nail on the head. How else could you, at the push of a button, get not just a running application but an entire coordinated system of services like you get with Helm? And deploying a kubernetes cluster with kops is easy. I don't know why people hate on k8s so much. For the space I work in it's a godsend.


Good points but I think it would be accurate to say that Docker solved a developer problem. But developers are only part of the story. Does Kubernetes solve the business' problem? The user's problem? The problems of sys admins, testers, and security people? In my experience it doesn't (though I wouldn't count my experience as definitive).

At my company we have had better success with micro-services on AWS Lambda. It has vastly less overhead than Kubernetes and it has made the tasks of the developers and non-developers easier. "Lock-in" is unavoidable in software. In our risk calculation, being locked into AWS is preferable than being locked into Kubernetes. YMMV.


> I do miss the days of ‘cap deploy’ for Rails apps.

Oh boy I do not miss them. Actually I'm still living them and I hope we can finally migrate away from Capistrano ASAP. Dynamic provisioning with autoscaling is a royal PITA with cap as it was never meant to be used on moving targets like dynamic instances.


>I do miss the days of ‘cap deploy’ for Rails apps.

Add operators, complicated deployment orchestration and more sophisticated infrastructure... It is hard to know if things are failing from a change I made or just because there are so many things changing all the time.


What happens if you give the mouse a hammer?


For me, and many others: infrastructure as code.

Kubernetes is very complex and took a long time to learn properly. And there have been fires along the way. I plan to write extensively on my blog about it.

But at the end of the day: having my entire application stack as YAML files, fully reproducible [1] is invaluable. Even cron jobs.
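
For example, a nightly job is just a small manifest like this (image and command are placeholders; older clusters use batch/v1beta1):

    apiVersion: batch/v1
    kind: CronJob
    metadata:
      name: send-digest
    spec:
      schedule: "0 6 * * *"              # every day at 06:00
      jobTemplate:
        spec:
          template:
            spec:
              restartPolicy: OnFailure
              containers:
                - name: job
                  image: registry.example.com/myapp:latest        # placeholder
                  command: ["python", "manage.py", "send_digest"] # hypothetical management command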

Note: I don't use micro services, service meshes, or any fancy stuff. Just a plain ol' Django monolith.

Maybe there's room for a simpler IaC solution out there. Swarm looked promising, then fizzled. But right now the leader is k8s[2] and for that alone it's worth it.

[1] Combined with Terraform

[2] There are other proprietary solutions. But k8s is vendor agnostic. I can and have repointed my entire infrastructure with minimal fuss.


I'm not sure "a plain ol' Django monolith" with none of the "fancy stuff" is either what people are referring to when they say "kubernetes", or a great choice for that. I could run hello world on a Cray but that doesn't mean I can say I do supercomputing. Our team does use it for all the fancy stuff, and has spent all day every day for years now yamling, terraforming, salting, etc, so theoretically our setup is "entire application stack as YAML files, fully reproducible", but if it fell apart tomorrow, I'd run for the hills. Basically, I think you're selling it from a position that doesn't involve using it to a degree that gives the experience required for an in-depth assessment. You're selling me a Cray based on your hello world.


Reading this charitably: I guess I agree. k8s is definitely overpowered for my needs. And I'm almost certain my blog or my business will never need that full power. Fully aware of that.

But I'm not sure one can find something of "the right power" that has the same support from cloud providers, the open source community, the critical mass, etc. [1]

Eventually, a standard "simplified" abstraction over k8s will emerge. Many already exist, but they're all over the place. And some are vendor specific (Google Cloud Run is basically just running k8s for you). Then if you need the power, you can eject. Something like Create React App, but by Kubernetes. Create Kubernetes App.

[1] Though Nomad looks promising.


Curious why you run it at all? The cost must be 10 times more this way. Is it mostly for the fun of learning?

I come from the opposite approach. I have 4 servers: two DigitalOcean $5 and two Vultr $2.50 instances. One holds the db. One serves as the frontend/code. One does the heavy work, and another serves a heavy site and holds backups. For $15 I'm hosting hundreds of sites and running many background processes. I couldn't imagine hitting the point where k8s would make sense just for myself, unless for fun.


Sounds like your setup lacks high availability. If you don't believe you need that, then yeah, kubernetes is overkill.


Few people actually need high availability.

If you do, the recipe is to reduce the number of components, get the most reliable components you can find, and make the single points of failure redundant.

Saying you can use Kubernetes to make whatever stupid crap people tend to deploy with it highly available is like saying you can make an airliner reliable by installing some sort of super fancy electronic box inside. You don't get more reliability by adding more components.


> Saying you can use Kubernetes to make whatever stupid crap people tend to deploy with it highly available is like saying you can make an airliner reliable by installing some sort of super fancy electronic box inside. You don't get more reliability by adding more components.

This is a bit funny, considering Airbus jets use triple-redundancy and a voting system for some of their critical components. [1]

[1] https://criticaluncertainties.com/2009/06/20/airbus-voting-l...


What about application upgrades?

Are you ok with your application going down for each upgrade? With Kubernetes, it's very simple to configure a deployment so that downtime doesn't happen.
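
Roughly, the relevant knobs look like this (image, port and probe path are placeholders):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp
    spec:
      replicas: 3
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 0        # never take a serving pod down early
          maxSurge: 1              # bring a new pod up first
      selector:
        matchLabels:
          app: myapp
      template:
        metadata:
          labels:
            app: myapp
        spec:
          containers:
            - name: myapp
              image: registry.example.com/myapp:v2    # placeholder
              readinessProbe:                         # only send traffic once this passes
                httpGet:
                  path: /healthz
                  port: 8080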


If and only if your application supports it. Database schema upgrades can be tricky for instance, if you care about correctness.

On the other hand, atomic upgrades by stopping the old service and then starting the new service on a Linux command line (/Gitlab runner) can be done in 10 seconds (depending on the service of course – dynamic languages/frameworks sometimes are disadvantaged here). I doubt many customers will notice 10 second downtimes.


And that downtime can even be avoided without resorting to k8s. A simple blue-green deployment (supported by DNS or load balancer) is often all that's needed.
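
A minimal sketch with nginx, assuming the app can run on two local ports (all values illustrative):

    # /etc/nginx/conf.d/upstream.conf - "app" points at whichever color is live
    upstream app {
        server 127.0.0.1:8001;    # blue is live; green idles on 8002
    }

    # cut over once the green instance passes its health check
    sed -i 's/127.0.0.1:8001/127.0.0.1:8002/' /etc/nginx/conf.d/upstream.conf
    nginx -t && nginx -s reload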

K8s only makes sense at near Google-scale, where you have a team dedicated to managing that infrastructure layer (on top of the folks managing the rest of the infrastructure). For almost everyone else, it's damaging to use it and introduces so much risk. Either your team learns k8s inside out (so a big chunk of their work becomes about managing k8s) or they cross their fingers and trust the black box (and when it fails, panic).

The most effective teams I've worked on have been the ones where the software engineers understand each layer of the stack (even if they have specialist areas of focus). That's not possible at FAANG scale, which is why the k8s abstraction makes sense there.


Takes a couple of minutes at most for an average application upgrade/deployment; a lot of places can deal with that. Reddit is less reliable than what I used to manage as a one-man team.


So if a deployment is 2 minutes of downtime, you are limited to 2 per month if you still want to hit 4 9s of availability with no unexpected outages.


You can get a k8s cluster on DO for around $15 p/m. And that itself can host all your apps.


How do you do automated deployments, though? I don't like using K8s for small stuff, but I am also extremely allergic to having to log on to a server to do anything. Dokku hits the sweet spot for me, but at work I would probably use Nomad instead.


Set your pod to always pull the image and use an entrypoint shell script that clones the repo; then kill the pod so that on restart you get your new code deployed.
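
A rough sketch of that (image and entrypoint script are made up):

    apiVersion: v1
    kind: Pod
    metadata:
      name: myapp
    spec:
      containers:
        - name: app
          image: registry.example.com/myapp:latest   # placeholder
          imagePullPolicy: Always                    # re-pull :latest every time the pod starts
          command: ["/entrypoint.sh"]                # hypothetical script: clone repo, start app

    # then "deploy" by recreating the pod:
    #   kubectl delete pod myapp && kubectl apply -f pod.yaml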

You could run an init container with Kaniko that pushes the image to a repo, and then a main container that pulls it back, but for that you need to run kubectl rollout restart deploy <name>

If you are looking for pure CI/CD, GitLab has awesome support, or you could use Tekton or Argo. They can run on the same cluster.


What's wrong with logging in to a server? I love logging in to a server and tinkering with it. Sure, for those who operate fleets of hundreds it's not scalable, but for a few servers that's a pleasure.


The problem is when you need to duplicate that server or restore it due to some error, you have no idea what all the changes you made are.

Besides, it's additional hassle and a chance for things to go wrong. The way I have it set up now, production gets a new deployment whenever something gets pushed to master, and I don't have to do anything else.


But this is a solved problem since... well, at least since the beginning of the internet. I've managed 1000s of Linux & BSD systems over the past 25 years and I have had scripts that automate all of that since the mid 90s. I never install anything manually; if I have to do something new, I first write + test a script to do it remotely. Also, all this 'containerization' is not new; I have been using debootstrap/chroot since around that time as well. I run cleanly separated multiple versions of legacy (it is a bit scary how much time it takes to move something written in the early 2000s to a modern Linux version) + modern applications without any (hosting/reproducibility) issues since forever (in internet years anyway).
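
For anyone who hasn't used it, the gist is (suite and paths are just an example):

    # build a minimal Debian userland for the legacy app
    sudo debootstrap --variant=minbase buster /srv/legacy-app http://deb.debian.org/debian

    # run the old binary against the old libraries, isolated from the host's
    sudo chroot /srv/legacy-app /opt/app/bin/server   # hypothetical binary path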


That's great, but then you're not doing what the commenter above said and "logging in to a server and tinkering with it".


True; I learned many years ago that that is not a good plan. Although, I too, love it. But I use my X220 and Openpandora at home to satisfy that need. Those setups I could not reproduce if you paid me.


> The problem is when you need to duplicate that server or restore it due to some error, you have no idea what all the changes you made are.

A text file with some setup notes is enough for simple needs, or something like Ansible if it's more complex. A lot of web apps aren't much more than some files, a database, and maybe a config file or three (all of which should be versioned and backed up).


I would be a lot more confident trying to back up my old-school apps than the monstrosity we have on kubernetes at present.


Make a backup of /etc and the package list. Usually that's enough to quickly replicate a configuration. It's not like servers are crashing every day. I've been managing a few servers for the last 5 years and I don't remember a single crash; they just work.
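
i.e. something along these lines on a Debian-ish box (file names are illustrative):

    # snapshot the config and the package selection
    tar czf etc-backup.tar.gz /etc
    dpkg --get-selections > packages.list

    # on a fresh box: reinstall the same packages, then put /etc back (review before restoring blindly)
    dpkg --set-selections < packages.list && apt-get dselect-upgrade
    tar xzf etc-backup.tar.gz -C /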


I'm logging into a server because I need to, not because it's a 'pleasure'.

I don't hate it, but if you need to log in to a server regularly because you need to do an apt upgrade, you should have enabled automatic security updates instead of logging in every few days.

If your server fills up because of some logfiles or whatever, you should fix the underlying issue rather than needing to log in to a server.

You should trust your machines, independently of whether it is only one machine, 2, 3 or 100. You want to be able to go on holiday and know your systems are stable, secure and doing their job.

And logging in also implies a snowflake. That doesn't matter as long as the machine runs and you don't have that many changes, but k8s actually makes it very simple to finally have an abstraction layer for infrastructure.


Devs went on holiday before k8s.


Yes true, not sure what point you are trying to make.


Flux or Argo can help with this. The operator lives on your cluster, and ensures your cluster state matches a Git repo with all your configuration in it.

Flux - https://github.com/fluxcd/flux

ArgoCD - https://argoproj.github.io/argo-cd/


Write a script which runs remotely over SSH and trigger on the appropriate event in your CI/CD host.


This is what I like to do. In my case, even the CI/CD host is just a systemd service I wrote.

The service just runs a script that uses netcat to listen on a special port that I also configured GitHub to send webhooks to, and processes the hook/deploys if appropriate.

Then when it's done, systemd restarts the script (it is set to always restart) and we're locked and loaded again. It's about 15 lines of shell script in total.
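
Roughly the shape of it (port, paths and the naive ref check here are illustrative, not the exact script; a real one should at least verify the webhook signature):

    #!/bin/sh
    # systemd runs this with Restart=always, so each invocation handles
    # exactly one webhook delivery and then exits.
    set -eu

    # wait for a single HTTP request on the webhook port (traditional netcat flags)
    REQUEST=$(nc -l -p 9999 -q 1 || true)

    # deploy only on pushes to master
    echo "$REQUEST" | grep -q '"ref":"refs/heads/master"' || exit 0

    cd /srv/myapp
    git pull --ff-only
    systemctl restart myapp.service

    # /etc/systemd/system/deploy-hook.service (sketch)
    # [Service]
    # ExecStart=/usr/local/bin/deploy-hook.sh
    # Restart=always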


That's quite elegant.


Do you manage to aggregate all logs in a single place? Do you have the same environment as staging? How do you upgrade your servers? Do you have multiple teams deploying their own components? Do you have a monitoring/metrics service? How do you query the results of a cron job? Can you roll back to the correct version when you detect an error in production?


remote syslog has been a thing for how many years?! As has using a standard distribution for your app with a non-root user for each app, easily wiped and set up on every deploy (hint: that's good for security too!). Monitoring was also a solved problem, and I guess cron logs to syslog. Rollback works just like a regular deploy? (I wonder how well k8s helps you with db-schema rollbacks?)


Setting all of that up from scratch is not really that easy, and I wouldn't consider "monitoring" to have been a "solved problem". syslog over TCP/UDP had many issues, which is why log file shippers happened, and you still need to aggregate and analyze it. Getting an application to reliably log remotely is IMO easier with k8s than remote-writing syslog, as I can just whack the developer again and again till it logs to stdout/stderr, then easily wrap it however I want.

Deploying as a distribution package tends to not work well when you want to deploy it more than once on a specific server (which quickly leads us to the classic end result of that approach, which is a VM per deployment at minimum - been there, done that, still have scars).

Management of cron jobs was a shitshow, is a shitshow, and probably will be a shitshow except for those that run their crons using non-cron tools (which includes k8s).


Yes, k8s makes it easier and more consistent. But it's not like all the stuff from the past suddenly stopped working or was not possible like GP made it sound ;)


Are you using a framework (cPanel etc) for this or just individual servers talking to each other? Need to move my hosting to something more reliable and cheaper...


I'm learning Elixir now, and it's quite confusing to me how one would go about deploying Elixir with K8s. How much you should just let the runtime handle.

How much of K8s is just an ad hoc, informally-specified, bug-ridden, slow implementation of half of Erlang.


> But I'm not sure one can find something of "the right power" that has the same support from cloud providers, the open source community, the critical mass, etc.

I totally agree. I would dearly like something simpler than Kubernetes. But there isn't a managed Nomad service, and apparently nothing in between Dokku and managed Kubernetes either.


I've been very pleased with Nomad. It strikes a good balance between complexity and the feature set. We use it in production for a medium sized cluster and the migration has been relatively painless. The nomad agent itself is a single binary that bootstraps a cluster using Raft consensus.


I was about to, but seems like you answered the question yourself through that footnote.


>but if it fell apart tomorrow, I'd run for the hills

The test is going from zero to production traffic in a new cloud region.


Swarm is still supported and works. I have it running on my home server and love it.

Kubernetes is fine, but setting it up kind of feels like I'm trying to earn a PhD thesis. Swarm is dog-simple to get working and I've really had no issues in the three years that I've been running it.

The configs aren't as elaborate or as modular as Kubernetes, and that's a blessing as well as a curse; it's easy to set up and administer, but you have less control. Still, for small-to-mid-sized systems, I would still recommend Swarm.


> setting it up kind of feels like I'm trying to earn a PhD thesis.

The kind of person who has to both set up the cluster and keep it up, and also develop the application, deploy it, and keep it up, etc., is not the target audience.

K8s shines when the roles of managing the cluster and running workloads on it are separated. It defines a good contract between infrastructure and workload. It lets different people focus on different aspects.

Yes, it still has rough edges, things that are either not there yet, or the vestigial complexity of wrong turns that happened through its history. But if you look at it through the lens of this corporate scenario it starts making more sense than when you just think of what a full-stack dev in a two-person startup would rather use and fully own/understand.


One of the things nobody liked to talk about in public when test automation was slowly "replacing" testers is that if you had the testers write automation, they brought none of the engineering discipline we tend to take as a given to the problem.

It's hard to make tests maintainable. Doubly so if you aren't already versed in techniques to make code maintainable.

I wonder sometimes if we aren't repeating the same experiment with ops right now.


There are elements of our company that want to move to Kubernetes for no real reason other than it's Kubernetes. I can't wait to see the look on their faces when they realise we'll have to employ someone full-time to manage our stack.


I'm not sure you have to, that seems like the whole point of managed services like GKE.


Do you have a recommended tutorial for an engineer with a backend background to set up a simple k8s infra on EC2? I am interested in understanding the devops role better.



What are Kubernetes' rough edges?


> setting it up kind of feels like I'm trying to earn a PhD thesis.

Are you following "k8s the hard way"? I've never had this problem; either:

`gcloud container clusters create`

Or

`install docker-for-mac`

And you have a k8s cluster up and running. Maybe it's more work on AWS?


eksctl (if you want managed) and kops (if you want to manage it yourself) are just as straightforward.
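
e.g. (name, region and node count are placeholders):

    # managed control plane on AWS - roughly the EKS equivalent of the gcloud one-liner
    eksctl create cluster --name demo --region eu-west-1 --nodes 3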


So are you saying that, no matter what, if you want your whole infrastructure as code (networks, DMZ, hosts, services, apps, backups, etc.), you are going to have to reproduce that somehow (whatever the combo of AWS services is), or just learn K8s?

Effectively, "every infrastructure as code project will reimplement Kubernetes in Bash"


Not necessarily. You can have all of the above with Terraform, Ansible, Puppet, Chef, etc.


Instead of a varied set of tools and interfaces, I can have one with consistent interfaces and experiences. Kubernetes is where everything is going, hence why TF and Ansible have both recently released Kubernetes-related products / features. It's their last attempt at remaining relevant (which is more than likely wasted effort in the long run). They have too much baggage (existing users in another paradigm) to make a successful pivot.


Ironically for me, those two tools are part of the blessed triad that we use for all of our infrastructure as code and end-user virtual machine initial setup.

If we only got to keep two tools it would be kubernetes and terraform.


Last attempt at remaining relevant - lol.

This isn’t a competition, they are tools. Ansible is widely used and will continue to be so for a long long time. Its foundations - ssh, python and yaml are also in for the long run to manage infrastructure...


Yaml is on the way out, Cuelang will replace it where it's used for infra. It's quite easy to start by validating yaml and then you quickly realize how awesome having your config in a well thought-out language is!


I thought you were trolling with something called Cuelang, but it is actually a thing.

Yaml will still be used in 100 years, k8s is yaml based...


Yes but YOU won't be writing or seeing yaml, Cue will output the yaml and run kubectl (I'm already doing this to great relief)

Which is step 2 of my Enterprise adoption strategy. Step 1 is starting with validation, step 3 and beyond is where the real fun starts!
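
Concretely, the two steps look roughly like this (file and package names are made up):

    # step 1: validate the yaml you already have against a CUE schema
    cue vet k8s-schema.cue existing-manifests.yaml

    # step 2: stop writing yaml - generate it from CUE and apply
    cue export ./deploy --out yaml | kubectl apply -f -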


Hope that works for you! Your statement above has nothing to do with yaml being on the way out...


well, then perhaps the growing frustration with a configuration language where meaning depends on invisible chars is an argument. And then there's what helm and others are doing: interpolating text and adding helpers for managing indentation.

There are many experiments into alternatives happening right now, so I do believe yaml's days are numbered. I'm actively replacing it wherever I encounter it with a far superior alternative. Cue is far more than a configuration language, however; worth the time to learn and adopt at this point.


There are so many configuration file formats and languages already. The problem you describe seems to be fixed by XML, for example.


Exactly. That's a great way to put it. A bunch of Bash glue reimplementing what Kubernetes already does. Poorly.


Kubernetes is outstanding because, after proving ol' bash scripts were bad, it rewrote all the Bash glue as Go glue, slapped 150 APIs on top (60 "new" and about a hundred "old versions" https://kubernetes.io/docs/reference/generated/kubernetes-ap...), and added a few dozen open-source must-have projects - so it can finally be called "cloud native" - boom, a new cloud-native bash for the cloud is born!


Actually it's worse than that - k8s was ported from Java, and the codebase is an incoherent mess, as k8s devs say themselves


Bash glue has rarely failed me. Never used k8s, but the horror stories I hear ensure I won’t feel dirty for writing a little bash to solve problems anytime soon.


There's an offshoot of this that I see from developers, especially at old, stodgy companies.

Once everything is "infrastructure as code", the app team becomes less dependent on other teams in the org.

People like to own their own destiny. Of course, that also removes a lot of potential scapegoats, so you now mostly own all outages, tech debt, etc.


I think that's been coming since ... well ever.

I worked in networking for the longest time. When I started, there were network guys and server guys (at least where I was). They were different people who did different things and who kinda worked together.

Then there were storage area networks and similar, networks really FOR the server and storage guys.... that kind of extended the server world over some of the network.

Then comes VMware and such things and now there was a network in a box somewhere that was entirely the server guy's deal (well except when we had to help them... always).

Then we also had load balancers, which in their own way were a sort of code for networks... depending on how you looked at it (open ticket #11111 of 'please stop hard coding ip addresses').

You also had a lot of software defined networking type things and so forth brewing up in dozens of different ways.

Granted these descriptions are not exact; there were ebbs and flows, and some tech that sort of did this (or tried to) all along. It all starts to evolve slowly into one entity.


Conversely I think the trend of writing infrastructure as yaml may be the worst part of modern ops. It’s really hard to think of a worse language for this.


Fortunately YAML is not actually necessary in k8s, and only provided as a convenience because writing JSON by hand (or Proto3, lol) is annoying verging on insane.

We can build higher level abstractions easily having a schema to target and we can build them in whatever we want. That's a big boon for me :)
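
For example, you can stay in JSON end to end (names are illustrative):

    # emit a deployment as JSON instead of YAML, then apply it
    kubectl create deployment web --image=nginx --dry-run=client -o json > web.json
    kubectl apply -f web.json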


For kubernetes that's absolutely true, and I think more people should do that (disclaimer: I work with this https://github.com/stripe/skycfg daily). I think it actually would be easier for people to understand the k8s system if they did.

But yaml is now everywhere in the ops space. Config management systems use it, metrics systems use it; it's the de facto configuration format right now, and that is unfortunate because it's bad.


Why?

We have plenty of yaml 'code' which is simple and does exactly what it needs to do.

For all other use cases, there are plenty of alternatives, including libs for your preferred language.

Most people use the yaml way because it's easy and does exactly what it needs to do.

Everyone else has plenty of well-supported and well-working alternatives.


Its typing is too weak. Its block scoping is dangerous. References, classes and attachments are all pretty bad for reuse. Schemas are bolted on and there is no standard query language for it.


> I can and have repointed my entire infrastructure with minimal fuss.

When you get to that blog post please consider going in depth on this. Would love to see actual battletested information vs. the usual handwavy "it works everywhere".


I sure will. 99% of the work was ingress handling and SSL cert generation. Everything else was fairly seamless.

Even ingress is trivial if you use a cloud load balancer per ingress. But I wanted to save money, so I use a single cloud load balancer for multiple ingresses. So you need something like ingress-nginx, which has a few vendor-specific subtleties.
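
For reference, the per-app piece then looks something like this (hosts and service names are placeholders, and the apiVersion/fields vary a bit by cluster version); every such Ingress shares the one nginx load balancer:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: blog
    spec:
      ingressClassName: nginx          # handled by ingress-nginx, not a per-app cloud LB
      rules:
        - host: blog.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: blog
                    port:
                      number: 80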


Have you tried or considered Nomad (from the makers of Terraform)?


I haven't, and since I've sunk the cost into Kubernetes and know it very well now, likely won't end up there.

In retrospect though, maybe it's exactly what I needed. Great suggestion.


I've been using Nomad for my "toy" network, and I like it. It runs many services, and a few periodic jobs. Lightweight, easy to set up, and has enough depth to handle some of the weirder stuff.


Nomad, in its free offering, cannot compete with k8s for organization-wide usage:

- no RBAC

- no quotas

- no preemption

- no namespacing

This means: everyone is root on the cluster, including any CI/CD system that wants to test/update code. And there's no way to contain runaway processes with quotas/preemption.


can it handle networking (including load balancing and reverse proxies with automatic TLS) or virtualized persistent storage? Make it easy to integrate a common logging system?

Because those are the parts I probably miss the most when dealing with non-k8s deployments, and I haven't had the occasion to use Nomad.


For load balancing you can just run one of the common LB solutions (nginx, haproxy, Traefik) and pick up the services from the Consul service catalog. Traefik makes it quite nice since it integrates with LetsEncrypt and you can setup the routing with tags in your Nomad jobs: https://learn.hashicorp.com/nomad/load-balancing/traefik

What Nomad doesn't do is set up a cloud provider load balancer for you.

For persistent storage, Nomad uses CSI which is the same technology K8s does: https://learn.hashicorp.com/nomad/stateful-workloads/csi-vol...

Logging should be very similar to K8S. Both Nomad and K8S log to a file and a logging agent tails and ships the logs.

Disclosure, I am a HashiCorp employee.


Thanks, definitely widened my understanding of Nomad in pretty short time :)

Kinda feels bad that I don't have anything to use it on right now.


Does the Nomad WebUI support any kind of auth or just the Nomad-Bearer thing?

Thinking about completing my Hashicorp Bingo card.


It is on the roadmap to support JWT/OIDC!


Those are both the advantage and the problem of Nomad. We're using it a lot by now.

Nomad, or rather a Nomad/Consul/Vault stack, doesn't have these things included. You need to go and pick a Consul-aware load balancer like traefik, figure out a CSI volume provider or Consul-aware database clustering like postgres with patroni, and think about logging sidecars or logging instances on container hosts. Lots of fiddly, fiddly things to figure out from an operations perspective until you have a platform your developers can just use. Certainly less of an out-of-the-box experience than K8s.

However, I would like to mention that K8s can be an evil half-truth. "Just self-hosting a K8s cluster" basically means doing all of the shit above, except it's "just self-hosting k8s". Nomad allows you to delay certain choices and implementations, or glue together existing infrastructure.

K8s requires you to redo everything, pretty much.


I count "glue it together with existing infrastructure" as a higher cost than doing it from scratch. It was one Nomad feature I definitely knew about, as one or two people who used it did chime in years ago in discussion, but for various reasons that might not be applicable to everyone, I consider it an unnecessary complication :)


Depending on how big the infrastructure is and how long you want the migration to take... usually there aren't enough resources to "redo it all from scratch": millions of LoC are already in production, the people who owned key services are no longer in the company, and the business has priorities other than getting what you already have working in k8s.


The context was rather different (a home setup), but everything you mention can be used as an argument both for and against a redo, depending on the situation in the company, future needs, etc.

I have actually done a "lift and shift" where we moved code that had no support, or outright antagonistic support, to k8s, because various problems reached the point where the CEO said "replace the old vendor completely" - we ended up using k8s to wrestle with the sheer amount of code to redeploy.


- Yes (Traefik, Fabio, consul connect/envoy)

- Yes, just added CSI plugin support. Previously had ephemeral_disk and host_volume configuration options, as well as the ability to use docker storage plugins (portworx)

- I haven’t personally played with it, but apparently nomad does export some metrics, and they’re working on making it better


nomad is strictly a job scheduler. If you want networking, you add consul to it, and they integrate nicely. Logging is handled similarly to Kubernetes. The cool thing about Nomad is that it's less prescriptive.


With "Swarm", do you mean Docker Swarm? Why has it "fizzled"?

The way I learned it in Bret Fisher's Udemy course, Swarm is very much relevant, and will be supported indefinitely. It seems to be a much simpler version of Kubernetes. It has both composition in YAML files (i.e. all your containers together) and the distribution over nodes. What else do you need before you hit corporation-scale requirements?


I use Swarm in production and am learning k8s as fast as possible because of how bad Swarm is:

1. Swarm is dead in the water. No big releases/development afaik recently

2. Swarm for me has been a disaster because after a couple of days some of my nodes slowly start failing (although they’re perfectly normal) and I have to manually remove each node from the swarm, join them, and start everything up again. I think this might be because of some WireGuard incompatibility, but the strange thing is that it works for a week sometimes and other times just a few hours

3. Lack of GPU support


To add another side, I use Swarm in production and continue to do so because of how good it is.

I've had clusters running for years without issue. I've even used it for packaging B2B software, where customers use it both in cloud and on-prem - no issues whatsoever.

I've looked at k8s a few times, but it's vastly more complex than Swarm (which is basically Docker Compose with cluster support), and would add nothing for my use case.

I'm sure a lot of people need the functionality that k8s brings, but I'm also sure that many would be better suited to Swarm.


Yeah, I guess for smaller projects, and with the option of using Docker Compose files, Swarm would be worth it.

If K8s supported compose files out of the box (not Kompose), that'd basically make Swarm unnecessary (at least for me).


This happened to me too when I was using swarm in 2017. Had to debug swarm networks where nodes could send packets one way but not the other. Similar problems as #2 where stuff just breaks and resetting the node is the quickest way I found to fix it.

Switched to k8s in late 2017 and it’s been much more solid. And that’s where the world has moved, so I’m not sure why you’d choose swarm anymore.


> What else do you need before you hit corporation-scale requirements?

Cronjobs, configmaps, and dynamically allocated persistent volumes have been big ones for our small corporation. Access control also, but I'm less aware of the details here, other than that our ops is happier to hand out credentials with limited access, which was somehow much more difficult with swarm

Swarm has frankly also been buggy. "Dead" but still-running containers - sometimes visible to swarm, sometimes only to the local Docker daemon - happen every 1-2 months, and it takes forever to figure out what's going on each time.


A little off topic, but why do these orchestration tools always prefer YAML? I really feel it's a harder format to understand than JSON, or better yet TOML, and it's the only thing that I don't like about Ansible.

I see Ruby kind of uses YAML often, but are people comfortable editing YAML files? I always have to look up how to do arrays and such when I edit them once in a while.


> when I edit them once in a while

Anything new needs a certain amount of sustained practice before you get the hang of it. I think I had to learn regex like four times before it stuck. I haven’t hit that point with TOML yet so I avoid it.

I’d suggest using the deeper indentation style where hyphens for arrays are also indented two spaces under the parent element. Like anything use a linter that enforces unambiguous indentation.

I prefer YAML for human-writeable config because JSON is just more typing and more finicky. The auto-typing of numbers and booleans in YAML is a pretty damn sharp edge though and I wish they’d solved that some other way.
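
The classic example of that sharp edge with a YAML 1.1 parser (which is most of them in this space):

    country: NO          # parsed as boolean false, not the string "NO"
    version: 1.20        # parsed as the float 1.2 - the trailing zero is gone
    country_ok: "NO"     # quoting forces a string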


Yup, even monoliths can benefit from certain k8s tooling (HPAs, batch jobs, etc).


"What cron jobs does this app run?"

Open cron.yaml and see. With schedule. Self documented.

Amazing. Every time. Even as some of my k8s battle wounds are still healing (or have permanently scarred). See other replies for more info.


Just like my Ansible repo.


Do you have a recommended tutorial for an engineer with a backend background to set up a simple k8s infra on EC2?


Take a look at https://github.com/kelseyhightower/kubernetes-the-hard-way.

That’d be a great first step if the purpose is to learn Kubernetes. If, however, you want to set up a cluster for real use then you will need much more than bare bones Kubernetes (something that solves networking, monitoring, logging, security, backups and more) so consider using a distribution or a managed cloud service instead.


Setting up your own k8s from scratch is kind of like writing your own string class in C++: it’s a good exercise (if it’s valuable for your learning path) but you probably don’t want to use it for actual work.

Maintaining a cluster set up like that is a ton of work. And if you don’t perform an upgrade perfectly, you’ll have downtime. Tools like kops help a lot but you’ll still spend far more time than the $70/month it costs for a managed cluster.


I find that K3s is great for getting started. It has traefik included and it's less of a learning curve to actually be productive, vs diving in with K8s and having to figure out way more pieces.


I've just started to look into it, but it seems like the project has been focusing on improving the onboarding experience since it has a reputation for being a huge pain to set up. Do you think it has gotten easier lately?


No. Not easier in my opinion. And some of the fires you only learn after getting burnt badly. [1]

Note: my experience was all with cloud-provided Kubernetes, never running my own. So it was already an order of magnitude easier. Can't even imagine rolling my own. [2]

[1] My personal favorite. Truly egregious, despite how amazing k8s is. https://github.com/kubernetes/kubernetes/issues/63371#issuec...

[2] https://github.com/kelseyhightower/kubernetes-the-hard-way


Rancher makes this quite painless; worth a look if you have to run on-prem, or in an IaaS cloud for some reason.


Out of curiosity why do you feel swarm fizzled out?

I've deployed swarm in a home lab and found it really simple to work with, and enjoyable to use. I haven't tried k8s, but I often see viewpoints like yours stating that k8s is vastly superior.


Of course it doesn't have the advantage of #2, but I've found ECS to be far easier to grok and implement.


According to the article you are wrong about "infrastructure as code". Kubernetes is infrastructure as data, specifically YAML files. Puppet and Chef are infrastructure as code.

Edit: not sure why the down votes, I was just trying to point out what seems like a big distinction that the article is trying to make.


Maybe? I'm too lazy to formally verify if the YAML files k8s accepts are Turing complete. With kustomize they might very well be.

How about "infrastructure-as-some-sort-of-text-file-versioned-in-my-repository". It's a mouthful, but maybe it'll catch on.


They don't do loops or recursion. They don't even do iterative steps in the way that Ansible YAML has plays/tasks.

Yes, higher-level tools like Kustomize or Jsonnet or whatever else you use for templating the files are Turing-complete - but that's at the level of you on your machine generating input to Kubernetes, not at the level of Kubernetes itself. That's a valuable distinction - it means you can't have a Kubernetes manifest get halfway through and fail the way that you can have an Ansible playbook get halfway through and fail; there's no "halfway." If something fails halfway through your Jsonnet, it fails in template expansion without actually doing anything to your infrastructure.

(You can, of course, have it run out of resources or hit quota issues partway through deploying some manifest, but there's no ordering constraint - it won't refuse to run the "rest" of the "steps" because an "earlier step" failed, there's no such thing. You can address the issue, and Kubernetes will resume trying to shape reality to match your manifest just as if some hardware failed at runtime and you were recovering, or whatever.)


Infrastructure-as-config?


What’s the material difference between well-formatted data and a DSL? Why does this matter?


Weird that you're getting downvoted.

The difference between code and data is pretty big.

One implies an expectation that the user is going to write some kind of algorithm whereas the other is basically a config file.


I think you could think of "infrastructure as code" as he described it as a superset of "infrastructure as data". Both have the benefit of being able to be reproducibly checked into a repo. Declarative systems like Kubernetes/"infrastructure as data" just go even further in de-emphasizing the state of the servers and make it harder to get yourself into unreproducible situations.


Seems like a nitpick? Infra as data seems like a subset of infrastructure as code


Do you have a good tutorial for doing Django or a standard 3-tier web app on Kubernetes? We are using kubernetes at my workplace, but it seems way too complicated to consider for something like that. Maybe if I can bridge the gap between architectures it will help.


Out of curiosity, are you using terraform to deploy k8s, your app stack on k8s, or both?


To follow this - what do you feel K8S provides on top of terraform?

We used K8S on a large project and I felt like it really, really wasn't necessary.


Nothing. We use Terraform to provision a simple auto-scaling cluster with load balancers and certs; it does exactly the same thing, but there is no Docker and no k8s. A few million fewer lines of Go code turning yaml files into segfaults.


Consistency and standardized interfaces for AppOps regardless of the hyper-cloud I use. Kubernetes basically has an equivalent learning curve, but you only have to do it once


They operate at different layers. K8s sits on top of the infrastructure which terraform provisions. It's far more dynamic and operates at runtime, compared to terraform which you execute ad-hoc from an imperative tool (and so only makes sense for the low level things that don't change often).


So terraform is a higher-order, meta-Kubernetes. It's very rarely used, but who provisions the cluster itself? That's terraform.

So terraform creates the cluster, DNS and VPC. Then k8s runs pretty much everything.


How are you deploying the workloads into the cluster? Manual kubectl or Helm, GitOps with something like Flux, something else?


Spinnaker. Huge. Clunky. But excellent if you can justify it.


Alas, still rely on bash for that. Practically a one liner.

Mainly just kustomize piped into kube apply.
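
Roughly (the overlay path is illustrative):

    kustomize build overlays/production | kubectl apply -f -
    # or with the kustomize built into kubectl:
    kubectl apply -k overlays/production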

But, but, but. Having to create a one-off database migration script imperatively.


Ooh!


Some others are Tekton, Argo, and Knative. You could also use Jenkins with the K8s deploy plugin (from the MS Azure DevOps team).


Why would you want to provision your own k8s cluster, if you can use EKS, AKS or similar?


I don't. I use a cloud cluster. But that still has to be provisioned? You need to choose size, node pool, VPC, region, etc.


EKS, GKE and the like have a number of limitations. For example: they can be pretty far behind in the version of K8S they support (GKE is at 1.15 currently, EKS at 1.16; K8S 1.18 was released at the end of March this year).


One example I worked on myself: when you need to train lots of ML models and need lots of video cards. Those would be damn expensive in the cloud!


imo, all swarm needed was a decent way to handle secrets and it would have been the k8s of its day

