Kubernetes is our generation's Multics (oilshell.org)
589 points by genericlemon24 | 538 comments



I'd be curious what a better alternative looks like.

I'm a huge fan of keeping things simple (vertically scaling 1 server with Docker Compose and scaling horizontally only when it's necessary) but having learned and used Kubernetes recently for a project I think it's pretty good.

I haven't come across too many other tools that were so well thought out while also guiding you in how to break down the components of "deploying".

The ideas of a pod, deployment, service, ingress, job, etc. are super well thought out and flexible enough to let you deploy many types of things, but the abstractions are also good enough that you can hide a ton of complexity once you've learned the fundamentals.

For example, you can write about 15 lines of straightforward YAML configuration to deploy any type of stateless web app once you set up a decently tricked out Helm chart. That's complete with running DB migrations in a sane way, updating public DNS records, SSL certs, CI/CD, live-preview pull requests that get deployed to a sub-domain, zero-downtime deployments and more.


> once you set up a decently tricked out Helm chart

I don't disagree but this condition is doing a hell of a lot of work.

To be fair, you don't need to do much to run a service on a toy k8s project. It just gets complicated when you layer on all the production-grade stuff like load balancers, service meshes, access control, CI pipelines, o11y, etc. etc.


> To be fair, you don't need to do much to run a service on a toy k8s project.

The previous reply is based on a multi-service, production-grade workload. Setting up a load balancer wasn't bad. Most cloud providers that offer managed Kubernetes make it pretty painless to get their load balancer set up and working with Kubernetes. On EKS with AWS that meant using the AWS Load Balancer Controller and adding a few annotations. That includes HTTP to HTTPS redirects, www to apex domain redirects, etc. On AWS it took a few hours to get it all working, complete with ACM (SSL certificate manager) integration.
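
For reference, the Ingress ended up looking roughly like this (a sketch assuming the AWS Load Balancer Controller v2.x; the hostname and certificate ARN are placeholders):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: myapp
      annotations:
        # internet-facing ALB listening on both ports
        alb.ingress.kubernetes.io/scheme: internet-facing
        alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
        # redirect HTTP to HTTPS at the load balancer
        alb.ingress.kubernetes.io/ssl-redirect: "443"
        # ACM certificate used to terminate TLS (placeholder ARN)
        alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:111111111111:certificate/placeholder
    spec:
      ingressClassName: alb
      rules:
        - host: www.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: myapp
                    port:
                      number: 80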

The cool thing is when I spin up a local cluster on my dev box, I can use the nginx ingress instead and everything works the same with no code changes. Just a few Helm YAML config values.

Maybe I dodged a bullet by starting with Kubernetes so late. I imagine 2-3 years ago would have been a completely different world. That's also why I haven't bothered to look into using Kubernetes until recently.

> I don't disagree but this condition is doing a hell of a lot of work.

It was kind of a lot of work to get here, but it wasn't anything too crazy. It took ~160 hours to go from never using Kubernetes to getting most of the way there. This also includes writing a lot of ancillary documentation and wiki style posts to get some of the research and ideas out of my head and onto paper so others can reference it.


o11y = observability


You couldn't create a parody of this naming convention that's more outlandish than the way it's actually being used.


Yes you can! Accessibility gets abbreviated to a11y, which is about as inaccessible as it gets.

Only if you've never seen it before. The word "accessibility" is incredibly inaccessible to non-native speakers and native speakers with learning disabilities or dyslexia. There are some doubled characters in there, but which ones? Also it sounds like there's an "a" or "uh" sound in there, but somehow it's all "i"s except one is an "e"? "a11y" is four letters (well, two of them are digits but who's counting?) and clearly refers to one particular concept.

Likewise "i18n" (internationalization/internationalisation) and "l10n" (localization/localisation) avoid confusion over whether it's "ize" or "ise", which is literally the problem those concepts try to solve.

I can somewhat excuse "k8s" with "nobody can remember how kubernetes is spelled let alone pronounced" (Germans insist pronouncing the "kuber" part the same way "kyber/cyber" is pronounced in other Greek loanwords, with a German "ü" umlaut) but I admit that one is a stretch and "visual puns" like "k0s" ("minimal", you see?) and "k3s" (the digit 3 looks like half of an 8 so it's "lightweight", right?) are a bit beyond the pale for me.


>The word "accessibility" is incredibly inaccessible to non-native speakers

There are at least a dozen languages where the English word "accessibility" translates to the same word spelled slightly differently.


I'm not sure what your point is. I qualified my claim very explicitly and what you said doesn't contradict any of it.

I'm not saying it's difficult to understand. I'm saying it's an unwieldy word and "a11y" is easier to remember and write correctly.


You specifically called it out as being "inaccessible" (ie, difficult to understand) to non-native speakers (of English).

Also, "a11y" looks too much like the English word "ally". That, IMO, is more likely to cause reading difficulties, particularly with non-native speakers and people with dyslexia.


o11y? In my head it sounds like it's a move in "Tony Hawk: Pro K8er"


It's the Wingdings of naming conventions.

You don't like n7ms?

I was originally confused because I thought the debugger `ollydbg` was being referenced.

https://en.wikipedia.org/wiki/OllyDbg


You still have to do all that prod-grade stuff; K8s just creates a cloud-agnostic API for it, so people can use the same terms and understand each other.


> That's complete with DB migrations in a safe way

How?! Or is that more a "you provide the safe way, k8s just runs it for you" kind of thing, than a freebie?


Thanks, that was actually a wildly misleading typo haha. I meant to write "sane" way and have updated my previous comment.

For saFeness it's still on us as developers to do the dance of making our migrations and code changes compatible with running both the old and new version of our app.

But for saNeness, Kubernetes has some neat constructs to help ensure your migrations only get run once even if you have 20 copies of your app performing a rolling restart. You can define your migration in a Kubernetes job and then have an initContainer trigger the job while also using kubectl to watch the job's status to see if it's complete. This translates to only 1 pod ever running the migration while other pods hang tight until it finishes.
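
Roughly, one variant of that pattern looks like this (here the Job is created alongside the Deployment, e.g. by the chart, and each pod's init container just waits for it; names, images and timeouts are illustrative, and the init container's service account needs RBAC permission to read Jobs):

    # The migration runs exactly once as a Job
    apiVersion: batch/v1
    kind: Job
    metadata:
      name: myapp-migrate
    spec:
      backoffLimit: 1
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: migrate
              image: registry.example.com/myapp:1.2.3   # hypothetical app image
              command: ["/app/migrate.sh"]              # hypothetical migration entrypoint
    ---
    # Every app pod waits for the Job to finish before starting the web container
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: myapp
      template:
        metadata:
          labels:
            app: myapp
        spec:
          initContainers:
            - name: wait-for-migrations
              image: bitnami/kubectl:latest
              command:
                - kubectl
                - wait
                - --for=condition=complete
                - --timeout=300s
                - job/myapp-migrate
          containers:
            - name: web
              image: registry.example.com/myapp:1.2.3
              ports:
                - containerPort: 8080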

I'm not a grizzled Kubernetes veteran here but the above pattern seems to work in practice in a pretty robust way. If anyone has any better solutions please reply here with how you're doing this.


Hahaha, OK, I figured you didn't mean what I hoped you meant, or I'd have heard a lot more about that already. That still reads like it's pretty handy, but way less "holy crap my entire world just changed".


> You can define your migration in a Kubernetes job and then have an initContainer trigger the job while also using kubectl to watch the job's status to see if it's complete.

A much simpler way is to run the migration in the init container itself. Most SQL migration frameworks know about locks and transactions, so concurrent migrations won't run anyway.
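
Something like this in the pod template, assuming the migration tool takes its own lock (image and command are placeholders):

    spec:
      initContainers:
        - name: migrate
          image: registry.example.com/myapp:1.2.3   # same image as the app
          command: ["/app/migrate.sh"]              # framework grabs its own advisory lock
      containers:
        - name: web
          image: registry.example.com/myapp:1.2.3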


I thought about doing that for a while too.

I think the value in the init+job+watcher approach is that you don't need to depend on a framework being smart enough to lock things, which makes it suitable and safe to run with any tech stack, worry free. It also avoids potential edge cases if a framework's locking mechanism fails, and an edge case in this scenario could be really bad.

It does come at the cost of a little more complexity (a 30-line YAML job plus ClusterRole/ClusterRoleBinding resources for the RBAC stuff on the watcher), but fortunately that's a one-time thing you need to set up.


It's simpler than that for simple scenarios. `kubectl run` can set you up with a standard deployment + service. Then you can describe the resulting objects, save the YAML, and adapt/reuse it as you need.


> For example you can write about 15 lines of straight forward YAML configuration to deploy any type of stateless web app once you set up a decently tricked out Helm chart.

I understand you might outsource the Helm chart creation, but this sounds like oversimplifying a lot to me. Then again, maybe I'm spoiled by running infra/software in a tricky production context and I'm too cynical.


It's not too oversimplified. I have a library chart that's optimized for running a web app. Then each web app uses that library chart. Each chart has reasonable default values that likely won't have to change so you're left only having to change the options that change per app.

That's values like the number of replicas, which Docker image to pull, resource limits and a couple of timeout related values (probes, database migration, etc.). Before you know it, you're at 15ish lines of really straightforward configuration like `replicaCount: 3`.
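
A per-app values file ends up looking something like this (a rough sketch, not the exact keys):

    replicaCount: 3
    image:
      repository: registry.example.com/myapp
      tag: "1.2.3"
    resources:
      requests:
        cpu: 250m
        memory: 256Mi
      limits:
        memory: 512Mi
    probes:
      readinessTimeoutSeconds: 5
    migrations:
      timeoutSeconds: 300
    ingress:
      host: myapp.example.com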


> I'd be curious what a better alternative looks like.

https://github.com/purpleidea/mgmt/

It's just not finished yet. With < 0.01% of the funding kube has, it has many times more design and elegance. Help us out. Have a look and tell me what you think. =D


My two cents is that docker compose is an order of magnitude simpler to troubleshoot or understand than Kubernetes, but the problem that Kubernetes solves is not that much more difficult.


As a Kubernetes outsider, I get confused why so much new jargon had to be introduced. As well as so many little new projects coupled to Kubernetes with varying degrees of interoperability. It makes it hard to get a grip on what Kube really is for newcomers.

It also has all the hallmarks of a high-churn product where you need to piece together your solution from a variety of lower-quality information sources (tutorials, QA sites) rather than a single source of foolproof documentation.


> I get confused why so much new jargon had to be introduced.

Consider the source of the project for your answer (mainly, but not entirely, bored engineers who are too arrogant to think anybody has solved their problem before).

> It also has all the hallmarks of a high-churn product where you need to piece together your solution from a variety of lower-quality information sources (tutorials, QA sites) rather than a single source of foolproof documentation.

This describes 99% of open source libraries in use. The documentation looks good because auto-doc tools produce a prolific amount of boilerplate documentation. In reality the result is documentation that's very shallow, and often just a re-statement of the APIs. The actual usage documentation of these projects is generally terrible, with few exceptions.


> Consider the source of the project for your answer (mainly, but not entirely, bored engineers who are too arrogant to think anybody has solved their problem before).

This seems both wrong and contrary to the article (which mentions that k8s is a descendant of Borg, and in fact if memory serves many of the k8s authors were borg maintainers). So they clearly were aware that people had solved their problem before, because they maintained the tool that had solved the problem for close to a decade.


Kubernetes docs are pretty good, detailed, and kept up to date - a lot more than just API auto-documentation.

I find it's low quality libraries that tend to have poor documentation. Perhaps that's 99% of open source libraries.


I second this. I like "silent" new tech, which doesn't need to introduce dozens of new "concepts".

- containers focus on what you can do; they're easy to understand and you can start in 5 minutes

- kubernetes is the opposite, where verbose tutorials lose time explaining to me how it works, rather than what I can do with it.


I always find it surprising that I have yet to see or touch Kubernetes (and I work as an SRE with container workloads for several years now), and yet HN threads about it are full of people who apparently think it's the only possible solution and are flabbergasted that people don't pray to it nightly.

https://news.ycombinator.com/item?id=27910185

https://news.ycombinator.com/item?id=27910481 - weird comparison to systemd

https://news.ycombinator.com/item?id=27910553 - another systemd comparison

https://news.ycombinator.com/item?id=27913239 - comparing it to git


I think one part of this is the lack of accepted nomenclature in CS - naming conventions are typically not enforced, unlike if you had to produce an engineering drawing for it and have it conform to a standard.

For engineering, the common way is to use a couple of descriptive words + a basic noun, so things do get boring quite quickly but are very easy to understand - say something like Google 'Cloud Container Orchestrator' instead of Kubernetes.


If only branding wasn't involved.

The Kubernetes documentation site is the source of truth, and pretty well written, though obviously no set of docs is perfect.

The concepts and constructs do not usually change in breaking ways once they reach beta status. If you learned Kubernetes in 2016 as an end user, there are certainly more features but the core isn’t that different.


So the basic problem with *nix is its permission model. If we had truly separable security/privilege/resource domains then Linux wouldn't have needed containers and simple processes and threads could have sufficed in place of Borg/docker/Kubernetes.

There's a simpler and more powerful security model: capabilities. Capabilities fix 90% of the problems with *nix.

There's currently no simple resource model. Everything is an ad-hoc, human-driven heuristic for allocating resources to processes and threads, and it's a really difficult problem to solve formally because it has to go beyond algorithmic complexity and care about the constant factors as well.

The other *nix problem is "files". Files were a compromise between usability and precision but very few things are merely files. Devices and sockets sure aren't. There's a reason the 'file' utility exists; nothing is really just a file. Text files are actually text files + a context-free grammar (hopefully) and parser somewhere, or they're human-readable text (but probably with markup, so again a parser somewhere).

Plenty of object models have come and gone; they haven't simplified computers (much less distributed computers), so we'll need some theory more powerful than anything we've had in the past to express relationships between computation, storage, networks, and identities.


Containers never solved the permission model. They solved the packaging and idempotency problem.

I really dislike when people assume containers give them security, it’s the wrong thing to think about.

Containers allowed us to deploy reproducibly, that’s powerful.


Absolutely true.

Docker replaced .tar.gz and .rpm, not chroots.

Most of the time the chroot functionality of Docker is a hindrance, not a feature. We need chroots because we still haven't figured out packaging properly.

(Maybe Nix will eventually solve this problem properly; some sort of docker-compose equivalent for managing systemd services is lacking at the moment.)


Er, just as a historical note, one of the primary uses of chroots was for packaging. Just like how Docker does it. That, in a sense, was even the original motivation. The security usage of chroots was a later innovation.

> Containers never solved the permission model. They solved the packaging and idempotency problem

Disagree. Containers are primarily about separation and decoupling. Multiple services on one server often have plenty of ways to interact and see each other and are interdependent in non-trivial ways (e.g. if you want to upgrade the OS, you upgrade it for all services together). Services running each in its own container provides separation by default.

OTOH, containers as a technology have nothing to do with packaging, reproducibility and deployment. It's just that these changes arrived together (e.g. with Docker), so they are often associated, but you can have e.g. LXC containers that are managed in the same way as traditional servers (by ssh-ing into a container).


LXC, freebsd jails & Solaris zones et al. are not the same as docker containers though.

The former were built with security in mind. The latter was most assuredly not.


I mean, containers can provide isolation. Linux has had a hard time getting that to be reliable because it started with the wrong model: building containers subtractively rather than additively. Though even starting with the right model, until you have isolation for every last bit of shared context that the OS provides (harder to identify than it may seem at first blush!) you won't have a complete solution. And yes, software-based containers will tend to have some leakage. Even sharing hardware with hardware isolation features might not be enough (hello row hammer).

It would be good to have containers aim to provide the maximum possible isolation.


> I really dislike when people assume containers give them security, it’s the wrong thing to think about.

to be fair, there is lots of published text around suggesting that this _is_ the case. many junior to semi-experienced engineers i've known have at some point thought it's plausible to "ssh into" a container. they're seen as light-weight VMs, not as what they are - processes.

> Containers allowed us to deploy reproducibly, that’s powerful.

and it was done in the most "to bake an apple pie from scratch, you must first create the universe" way possible.


But you can ssh into a container.

You just need to install sshd and launch it. You also need to create a user and set a password if you want to actually log in.

Why? Because containers aren't a single process. It's a group of processes sharing a namespace.

And you can totally use a container as a light-weight VM. While most containers have bash or a your application as pid 1, there is nothing stopping you launching a proper initrd as pid 1 and it will act much like a proper OS.

Though, just because you can, doesn't mean you should.


I think you mean init, not initrd. An initrd is a RAM disk image loaded by Linux containing kernel file system and network drivers and is typically used to help minimize the size of the main kernel image.

It is possible to do that though. I'm perhaps getting too caught up on 'plausible'.

> There's a simpler and more powerful security model; capabilities. Capabilities fix 90% of the problems with *nix.

What do you think about using file descriptors as capabilities? Capsicum (for FreeBSD, I think) extends this notion quite a bit. Personally I feel it is not quite "right", but I haven't sat down and thought hard about what is missing.

> we'll need some theory more powerful than anything we've had in the past to express relationships between computation, storage, networks, and identities.

Do you have any particular things in mind which points in this direction? I would like to understand what the status quo is.


I haven't looked at Capsicum specifically, but from the simple overview I read it sounds like it is more similar to dropping root privileges when daemonizing than the basis for a whole-OS security model. E.g. there isn't (in my limited reading) a way to grant a new file descriptor to a process after it calls cap_enter. Consider a web browser that wants to download or upload a file; there should be a way for the operator to grant that permission to the browser from another process (the OS UI or similar) after it starts running.

To be effective capabilities also need a way to be persistent so that a server daemon doesn't have to call cap_enter but can pick up its granted capabilities at startup. Capsicum looks like a useful way to build more secure daemons within Unix using a lot of capability features.

I also think file descriptors are not the fundamental unit of capability. Capabilities should also cover processes, threads, and the objects managed by various other syscalls.

> Do you have any particular things in mind which points in this direction? I would like to understand what the status quo is.

Unfortunately I don't have great suggestions. The most secure model right now is seL4, and its capability model covers threads, message-passing endpoints, and memory allocation (subdivision) and retyping as kernel memory to create new capabilities and objects. The kernel is formally verified, but afaik the application/user level is not fleshed out as a convenient development environment nor as a distributed computing environment.

For distributed computing, a capability model would have to solve distributed trust issues, which probably means capabilities based on cryptographic primitives, which for practical implementations would have to extend full trust between kernels in different machines for speed. But for universality it should be possible to work with capabilities at an abstraction level that allows both deep-trust distributed computers and more traditional single-machine trust domains without having to know or care which type of capability to choose when writing the software, only when running it.

I think a foundation for universal capabilities needs support for different trust domains and a way to interoperate between them.

   1. Identifying the controller for a particular capability, which trust domain it is in, and how to access it.
   2. Converting capabilities between trust domains as the objects to which they refer move.
   3. Managing any necessary identity/cryptographic tokens necessary to cross trust domains.
   4. Controlling the ability to grant or use capabilities across trust domains.
A simple example: a caller wants to invoke a capability on a utility process that produces an output, and the caller wants to receive a capability to read that output.

   The processes may not live on the same machine.
   The processes may not be in the same trust domain.
   The resulting object may be on a third machine or trust domain.
   The caller may have inherited privacy enforcement on all owned capabilities that necessitates e.g. translating the binary code of the second process into a fully homomorphically encrypted circuit which can run on a different trust domain while preserving privacy and provisioning the necessary keys for this in the local trust domain so that the capability to the new object can actually read it.
   The process may migrate to a remote machine in a different trust domain in the middle of processing, in which case the OS needs to either fail the call (making for an unfortunately complicated distributed computer) or transparently snapshot or rollback the state of the process for migration, transmit it and any (potentially newly encrypted) data, and update the capabilities to reflect the new location and trust domain.

   Basically if the capability model isn't capable of solving these issues for what would be very simple local computing then it's never going to satisfy the OP's desire for a more simple distributed computation model.
I think it's also clear why *nix is woefully short of being able to accomplish this. *nix is inherently local, has a single trust domain, and forces userland code to handle interaction with other trust domains, except in the very limited model of network file systems (and in the case of NFS, essentially an enforced single trust domain with synchronized user/group IDs).

Windows has capabilities. It's the combination of handles (file, process, etc.) and access tokens.

But you'll note no one is really deploying Windows workloads to the cloud. Why? Well, because you'd still have to build a framework for managing all those permissions, and it hasn't been done. Also, you might end up with the SVCHOST problem, where you host many different services/apps/whatever in one very threaded process because you can.

Capabilities aren't necessarily simpler. Especially if you can delegate them without controls -- now you have no idea what the actual running permissions are, only the cold start baseline.

No, I think the permissions thing is a red herring. Very much on the contrary, I think workload division into coarse-grained containers are great for permissions because fine-grained access control is hard to manage. Of course, you can't destroy complexity, only move it around, so if you should end up with many coarse-grained access control units then you'll still have a fine-grained access control system in the end.

Files aren't really a problem either. You can add metadata to files on Linux using xattrs (I've built a custom HTTP server that takes some response headers for static resources, like Content-Type, from xattrs). The problem you're alluding to is duck-typing as opposed to static typing. Yes, it's a problem -- people are lazy, so they don't type-tag everything in highly lazy typing systems. So what? Windows also has this problem, just a bit less so than Unix. Python and JS are all the rage, and their type systems are lazy and obnoxious. It's not a problem with Unix. It's a problem with humans. Lack of discipline. Honestly, there are very few people who could use Haskell as a shell!

> Plenty of object models have come and gone;

Yeah, mostly because they suck. The right model is Haskell's (and related languages').

> so we'll need some theory more powerful than anything we've had in the past ...

I think that's Haskell (which is still evolving) and its ecosystem (ditto).

But at the end of the day, you'll still have very complex metadata to manage.

What I don't understand is how all your points tie into Kubernetes being today's Multics.

Kubernetes isn't motivated by Unix permissions sucking. We had fancy ACLs in ZFS in Solaris and still also ended up having Zones (containers). You can totally build an application-layer cryptographic capability system, running each app as its own isolated user/container, and to some degree this is happening with OAuth and such things, but that isn't what everyone is doing, all the time.

Kubernetes is most definitely not motivated by Unix files being un-typed either.

I hope readers end up floating the other, more on-topic top-level comments in this thread back to the top.


The alternatives to Kubernetes are even more complex. Kubernetes takes a few weeks to learn. To learn alternatives, it takes years, and applications built on alternatives will be tied to one cloud.

See prior discussion here: https://news.ycombinator.com/item?id=23463467

You'd have to learn AWS autoscaling group (proprietary to AWS), Elastic Load Balancer (proprietary to AWS) or HAProxy, Blue-green deployment, or phased rollout, Consul, Systemd, pingdom, Cloudwatch, etc. etc.


Kubernetes uses all those underlying AWS technologies anyway (or at least an equivalently complex thing). You still have to be prepared to diagnose issues with them to effectively administrate Kubernetes.


At least with building to k8s you can shift to another cloud provider if those problems end up too difficult to diagnose or fix. Moving providers with a k8s system can be a weeks long project rather than a years long project which can easily make the difference between surviving and closing the doors. It's not a panacea but it at least doesn't make your system dependent on a single provider.


If you can literally pick up and shift to another cloud provider just by moving Kubernetes somewhere else, you are spending mountains of engineering time reinventing a bunch of different wheels.

Are you saying you don't use any of your cloud vendor's supporting services, like CloudWatch, EFS, S3, DynamoDB, Lambda, SQS, SNS?

If you're running on plain EC2 and have any kind of sane build process, moving your compute stuff is the easy part. It's all of the surrounding crap that is a giant pain (the aforementioned services + whatever security policies you have around those).


I use MongoDB instead of DynamoDB, and Kafka instead of SQS. I use S3 (the Google equivalent since I am on their cloud) through Kubernetes abstractions. In some rare cases I use the cloud vendor's supporting services but I build a microservice on top of it. My application runs on Google cloud and yet I use Amazon SES (Simple Email Service) and I do that by running a small microservice on AWS.


Sure, you can use those things. But now you also have to maintain them. It costs time, and time is money. If you don't have the expertise to administrate those things effectively, it may not be a worthwhile investment.

Everyone's situation is different, of course, but there is a reason that cloud providers have these supporting services and there is a reason people use them.


> But now you also have to maintain them.

In my experience it is less work than keeping up with cloud provider's changes [1]. You can stay with a version of Kafka for 10 years if it meets your requirements. When you use a cloud provider's equivalent service you have to keep up with their changes, price increases and obsolescence. You are at their mercy. I am not saying it is always better to set up your own equivalent using OSS, but I am saying that makes sense for a lot of things. For example Kafka works well for me, and I wouldn't use Amazon SQS instead, but I do use Amazon SES for emailing.

[1] https://steve-yegge.medium.com/dear-google-cloud-your-deprec...


While in general I agree with your overall argument, when it comes to:

> cloud provider's equivalent service you have to keep up with their changes, price increases and obsolescence

AWS S3 and SQS have both gone down significantly in price over the last 10 years and code written 10 years ago still works today with zero changes. I know because I have some code running on a Raspberry Pi today that uses an S3 bucket I created in 2009 and haven't changed since*.

(of course I wasn't using an rPi back then, but I moved the code from one machine to the next over the years)


But "keeping up with changes" applies just as much to Kubernetes, and I would argue it's even more dangerous because an upgrade potentially impacts every service in your cluster.

I build AMIs for most things on EC2. That interface never breaks. There is exactly one service on which provisioning is dependent: S3. All of the code (generally via Docker images), required packages, etc are baked in, and configuration is passed in via user data.

EC2 is what I like to call a "foundational" service. If you're using EC2 and it breaks, you wouldn't have been saved by using EKS or Lambda instead, because those use EC2 somewhere underneath.

Re: services like SQS, we could choose to roll our own but it's not really been an issue for us so far. The only thing we've been "forced" to move on is Lambda, which we use where appropriate. In those cases, the benefits outweigh the drawbacks.


It’s time and knowledge.

It can be simple but first you have to learn it.

Given that life is finite and you want to accomplish some objective with your company (and it's not training devops professionals), it's quite interesting having the ability to outsource a big part of the problems that need to be solved to get there.

Given this perspective, it's much better to use managed services. That lets you focus on the code (and maintenance) specific to your problem.


And don't you have specific YAML for "AWS LB configuration option" and stuff? The concepts in different cloud providers are different. I can't imagine it's possible to be portable without some jQuery-type layer expressing concepts you can use that are built out of the native ones. But I'd bet the different browsers were more similar in 2005 than the different cloud providers are in 2021.


Sure, there is configuration that goes into using your cloud provider's "infrastructure primitives". My point is that Kubernetes is often using those anyway, and if you don't understand them you're unprepared to respond when your cloud provider has an issue.

In terms of the effort to deploy something new, for my organization it's low. We have a Terraform module that creates the infrastructure, glues the pieces together, tags stuff, and makes sure everything is configured uniformly. You specify some basic parameters for your deployment and you're off to the races.

We don't need to add yet more complexity with Kubernetes-specific cost tracking software; AWS does it for us automatically. We don't have to care about how pods are sized and whether those pods might or might not fit on nodes. Autoscaling gives us consistently sized EC2 instances that, in my experience, have never run into issues because of a bad neighbor. Most importantly of all, I don't have the upgrade anxiety that comes from having a ton of services stacked on one Kubernetes cluster, all of which may suffer if an upgrade does not go well.


> At least with building to k8s you can shift to another cloud provider if those problems end up too difficult to diagnose or fix.

You're saying that the solution to k8s is complicated and hard to debug is to move to another cloud and hope that fixes it?


> You're saying that the solution to k8s is complicated and hard to debug is to move to another cloud and hope that fixes it?

Not in the slightest. I'm saying that building a platform against k8s lets you migrate between cloud providers, because the cloud provider's system might be causing you problems. These problems are probably related to your platform's design and implementation causing an impedance mismatch with the cloud provider.

This isn't helpful knowledge when you've only got four months of runway and fixing the platform or migrating from AWS would take six months or a year. It's not like switching a k8s-based system is trivial but it's easier than extracting a bunch of AWS-specific products from your platform.


It takes almost as much time and effort to move K8s as it does to reimplement one cloud implementation on another cloud, and your system engineers still have to learn an entirely new IaaS/PaaS. You don't really save anything. The only thing K8s does for you is allow the developers' operation of the system to be the same after it's migrated.

> The only thing K8s does for you is allow the developers' operation of the system to be the same after it's migrated.

I mean, yeah, that’s exactly what’s required to happen, and it’s a good thing because only your system engineers need to do most of the legwork. If you have a team of system engineers, you probably have a much bigger cohort of application engineers.


Indeed. When we did a cloud migration, we first moved all our apps to a (hosted) k8s cluster, and then to a cloud k8s cluster. This made the migration so much easier.

Only the k8s admins need to know that though, not the users of it.


"Only the k8s admins" implies you have a team to manage it.

A lot of things go from not viable to viable if you have the luxury of allocating an entire team to it.


Fair point. But this is where the likes of EKS and GKE come in. It takes away a lot of the pain that comes from managing K8s.

That hasn't been my experience. I use Kubernetes on Google cloud (because they have the best implementation of K8s), and I have never had to learn any Google-proprietary things.


In my experience, Kubernetes on AWS is always broken somewhere as well.

Oh it's Wednesday, ALB controller has shat itself again!


Cloud agnosticism is, in my experience, a red herring. It does not matter, and the effort required to move from one cloud to another is still non-trivial.

I like using the primitives the cloud provides, while also having a path to - if needed - run my software on bare metal. This means: VMs, decoupling the logging and monitoring from the cloud services (use a good library that can send to CloudWatch, for example; prefer open source solutions when possible), do proper capacity planning (and have the option to automatically scale up if the flood ever comes), etc.


> The alternatives to Kubernetes are even more complex. Kubernetes takes a few weeks to learn.

Learning Heroku and starting to use it takes maybe an hour. It's more expensive and you won't have as much control as with Kubernetes, but we used it in production for years for a fairly big microservice-based project without problems.


This feels like a post ranting against systemd, written by someone who likes init.

I understand that K8 does many things but its also how you look at the problem. K8 does one thing well, manage complex distributed systems such as knowing when to scale up and down if you so choose and when to start up new pods when they fail.

Arguably, this is one problem that is made up of smaller problems that are solved by smaller services just like SystemD works.

Sometimes I wonder if the Perlis-Thompson Principle and the Unix Philosophy have become a way to force a legalistic view of software development or are just out-dated.


I don't find the comparison to systemd to be convincing here.

The end-result of systemd for the average administrator is that you no longer need to write finicky init scripts that run to tens or hundreds of lines. They're reduced to unit files which are often just 10-15 lines. systemd is designed to replace old stuff.

The result of Kubernetes for the average administrator is a massively complex system with its own unique concepts. It needs to be well understood if you want to be able to administrate it effectively. Updates come fast and loose, and updates are going to impact an entire cluster. Kubernetes, unlike systemd, is designed to be built _on top of_ existing technologies you'd be using anyway (cloud provider autoscaling, load balancing, storage). So rather than being like systemd, which adds some complexity and also takes some away, Kubernetes only adds.


> So rather than being like systemd, which adds some complexity and also takes some away, Kubernetes only adds.

Here are some bits of complexity that managed Kubernetes takes away:

* SSH configuration

* Key management

* Certificate management (via cert-manager)

* DNS management (via external-dns)

* Auto-scaling

* Process management

* Logging

* Host monitoring

* Infra as code

* Instance profiles

* Reverse proxy

* TLS

* HTTP -> HTTPS redirection

So maybe your point was "the VMs still exist" which is true, but I generally don't care because the work required of me goes away. Alternatively, you have to have most/all of these things anyway, so if you're not using Kubernetes you're cobbling together solutions for these things which has the following implications:

1. You will not be able to find candidates who know your bespoke solution, whereas you can find people who know Kubernetes.

2. Training people on your bespoke solution will be harder. You will have to write a lot more documentation whereas there is an abundance of high quality documentation and training material available for Kubernetes.

3. When something inevitably breaks with your bespoke solution, you're unlikely to get much help Googling around, whereas it's very likely that you'll find what you need to diagnose / fix / work around your Kubernetes problem.

4. Kubernetes improves at a rapid pace, and you can get those improvements for nearly free. To improve your bespoke solution, you have to take the time to do it all yourself.

5. You're probably not going to have the financial backing to build your bespoke solution to the same quality caliber that the Kubernetes folks are able to devote (yes, Kubernetes has its problems, but unless you're at a FAANG then your homegrown solution is almost certainly going to be poorer quality if only because management won't give you the resources you need to build it properly).


Respectfully, I think you have a lot of ignorance about what a typical cloud provider offers. Let's go through these each step-by-step.

> SSH configuration

Do you mean the configuration for sshd? What special requirements would you have that Kubernetes would help fulfill?

> Key management

Assuming you mean SSH authorized keys since you left this unspecified. AWS does this with EC2 instance connect.

> Certificate management (via cert-manager)

AWS has ACM.

> DNS management (via external-dns)

This is not even a problem if you use AWS cloud primitives. You point Route 53 at a load balancer, which automatically discovers instances from a target group.

> Auto-scaling

AWS already does this via autoscaling.

> Process management

systemd and/or docker do this for you.

> Logging

AWS can send instance logs to CloudWatch. See https://docs.aws.amazon.com/systems-manager/latest/userguide....

> Host monitoring

In what sense? Amazon target groups can monitor the health of a service and automatically replace instances that report unhealthy, time out, or otherwise.

> Infra as code

I mean, you have to have a description somewhere of your pods. It's still "infra as code", just in the form prescribed by Kubernetes.

> Instance profiles

Instance profiles are replaced by secrets, which I'm not sure is better, just different. In either case, if you're following best practices, you need to configure security policies and apply them appropriately.

> Reverse proxy

AWS load balancers and target groups do this for you.

> HTTPS

AWS load balancers, CloudFront, do this for you. ACM issues the certificates.

I won't address the remainder of your post because it seems contingent on the incorrect assumption that all of these are "bespoke solutions" that just have to be completely reinvented if you choose not to use Kubernetes.


> I won't address the remainder of your post because it seems contingent on the incorrect assumption that all of these are "bespoke solutions" that just have to be completely reinvented if you choose not to use Kubernetes.

You fundamentally misunderstood my post. I wasn't arguing that you had to reinvent these components. The "bespoke solution" is the configuration and assembly of these components ("cloud provider primitives" if you like) into a system that suitably replaces Kubernetes for a given organization. Of course you can build your own bespoke alternative--that was the prior state of the world before Kubernetes debuted.


That's not really any different for Kubernetes.

You still need to figure out where your persistent storage is.

You still have to send logs somewhere for aggregation.

You have the added difficulty of figuring out cost tracking in Kubernetes since there is not a clear delineation between cloud resources.

You have to configure an ingress controller.

You want SSL? Gotta set that up, too.

You have to figure out how pods are assigned to nodes in your cluster, if separation of services is at all a concern (either for security or performance reasons).

Kubernetes is no better with the creation of "bespoke solutions" than using what your cloud provider offers.

Compare this tutorial for configuring SSL for Kubernetes services to an equivalent for configuring SSL on an AWS load balancer. Is Kubernetes really adding value here?

https://blog.karmacomputing.co.uk/kubernetes-cluster-with-ss... https://aws.amazon.com/premiumsupport/knowledge-center/assoc...


Kubernetes is far better for each of the above tasks because it is a consistent approach and set of abstractions rather than looking through the arbitrary "everything store" of the cloud providers. I really don't have any interest in relying on 15 different options from cloud providers, I want to get going with a set of extensible, composable abstractions and control logic. Software should not be tied to the hardware I rent or the marketing whims of said entity.

Yes, there is choice and variety among Kubernetes extensions, but they all have fundamental operational assumptions that are aligned because they sit inside the Kubernetes control and API model. It is a golden era to have such a rich set of open and elegant building blocks for modern distributed systems platform design and operations.


Well, first of all, note how much shorter your list is than the original. So vanilla Kubernetes is already taking care of lots of things for us (SSH configuration, process management, log exfiltration, etc). Moreover, we're not talking about vanilla Kubernetes, but managed Kubernetes (I've been very clear and explicit about this) so most of your points are already handled.

> You still need to figure out where your persistent storage is.

Managed Kubernetes comes with persistent storage solutions out of the box. I don't know what you mean by "figure out where it is". On EKS it's EFS, on GKE it's FileStore, and of course you can use other off-the-shelf solutions if you prefer, but there are defaults that you don't have to laboriously set up.

> You still have to send logs somewhere for aggregation.

No, these too are automatically sent to CloudWatch or equivalent (maybe you have to explicitly say "use cloudwatch" in some configuration option when setting up the cluster, but still that's a lot different than writing ansible scripts to install and configure fluentd on each host).

> You have the added difficulty of figuring out cost tracking in Kubernetes since there is not a clear delineation between cloud resources.

This isn't true at all. Your cloud provider still rolls up costs by type of resource, and just like with VMs you still have to tag things in order to roll costs up by business unit.

> You have to configure an ingress controller.

Nope, this also comes out of the box with your cloud provider. It hooks into the cloud provider's layer 7 load balancer offering. It's also trivial to install other load balancer controllers.

> You want SSL? Gotta set that up, too. ... Compare this tutorial for configuring SSL for Kubernetes services to an equivalent for configuring SSL on an AWS load balancer. Is Kubernetes really adding value here?

If you use cert-manager and external-dns, then you'll have DNS and SSL configured for every service you ever create on your cluster. By contrast, on AWS you'll need to manually associate DNS records and certificates with each of your load balancers. Configuring LetsEncrypt for your ACM certs is also quite a lot more complicated than for cert-manager.
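
With both controllers installed, one annotated Ingress per service is roughly all it takes (a sketch; the issuer name and hostname are placeholders):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: myapp
      annotations:
        # cert-manager issues and renews a certificate for the host below
        cert-manager.io/cluster-issuer: letsencrypt-prod
        # external-dns creates/updates the DNS record for the ingress
        external-dns.alpha.kubernetes.io/hostname: app.example.com
    spec:
      tls:
        - hosts:
            - app.example.com
          secretName: myapp-tls
      rules:
        - host: app.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: myapp
                    port:
                      number: 80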

> Kubernetes is no better with the creation of "bespoke solutions" than using what your cloud provider offers.

I hope by this point it's pretty clear that you're mistaken. Even if SSL/TLS is no easier with Kubernetes than with VMs/other cloud primitives, we've already addressed a long list of things you don't need to contend with if you use managed Kubernetes versus cobbling together your own system based on lower level cloud primitives. And Kubernetes is also standardized, so you can rely on lots of high quality documentation, training material, industry experience, FAQ resources (e.g., stack overflow), etc which you would have to roll yourself for your bespoke solution.


Right, I really dislike systemd in many ways ... but I love what it enables people to do and accept that, for all my grumpiness about it, it is overall a net win in many scenarios.

k8s ... I think is often overkill in a way that simply doesn't apply to systemd.


If you have to manage a large distributed software code base or set of datacenters, Kubernetes is a win in that it provides a consistent, elegant solution to a nearly universal set of problems.

Systemd comparatively feels like a complete waste of time given the heat it has generated for the benefit.


> The end-result of systemd for the average administrator is that you no longer need to write finicky, tens or hundreds of line init scripts.

Wouldn't the hundreds of lines of finicky, bespoke Ansible/Chef/Puppet configs required to manage non-k8s infra be the equivalent to this?


In my work, absolutely yes. Using Kubernetes has saved us sooo much nonsense. Yes we have a mix of Terraform and k8s manifests to deploy to Azure Kubernetes Service, but it works out pretty well in the end.

Honestly most of the annoyance is Azure stuff. Kubernetes stuff is pretty joyful and, unlike Azure, the documentation sometimes even explains how it works.


I can't say I have had the same experience.

Kubernetes cluster changes potentially create issues for all services operating in that cluster.

Provisioning logic that is baked into an image means changes to one service have no chance of affecting other services (app updates that create poor netizen behavior, notwithstanding). Rolling back an AMI is as trivial as setting the AMI back in the launch template and respinning instances.

There is a lot to be said for being able to make changes that you are confident will have a limited scope.


Does Kubernetes infrastructure also not require some form of configuration?

Yes, there is a trade off here. You are trading a staggeringly complex external dependency for a little bit of configuration you write yourself.

The Kubernetes master branch weighs in at ~4.6 million lines of code right now. Ansible sits at ~286k on their devel branch (this includes the core functionality of Ansible but not every single module). You could choose not to even use Ansible and just write a small shell script that builds out an image which does something useful in less than 500 lines of your own code, easily.

Kubernetes does useful stuff and may take some work off your plate. It's also a risk. If it breaks, you get to keep both of the pieces. Kubernetes occupies the highly unenviable space of having to do highly available network clustering. As a piece of software, it is complex because it has to be.

Most people don't need the functionality provided by Kubernetes. There are some niceties. But if I have to choose between "this ~500 line homebrew shell script broke" and "a Kubernetes upgrade went wrong" I know which one I am choosing, and it's not the Kubernetes problem.

Managed Kubernetes, like managed cloud services, mitigate some of those issues. But you can still end up with issues like mismatched node sizes and pod resource requirements, so there is a bunch of unused compute.

TL;DR of course there are trade-offs, no solution is magic.


Fair, I was just pointing out that there was more to the analogy. Systemd, like init, also requires configuration, though it is more declarative than imperative, similar to k8s. Some people may prefer this style and consider it easier to manage; however, my opinions here are not that strong.

Kubernetes removes the complexity of keeping a process (service) available.

There’s a lot to unpack in that sentence, which is to say there’s a lot of complexity it removes.

Agree it does add as well.

I’m not convinced k8s is a net increase in complexity after everything is accounted for. Authentication, authorization, availability, monitoring, logging, deployment tooling, auto scaling, abstracting the underlying infrastructure, etc…


> Kubernetes removes the complexity of keeping a process (service) available.

Does it really do that if you just use it to provision an AWS load balancer, which can do health checks and terminate unhealthy instances for you? No.

Sure, you could run some other ingress controller but now you have _yet another_ thing to manage.


Do AWS load balancers distinguish between "do not send traffic" and "needs termination"?

Kubernetes has readiness checks and liveness checks for a reason. The readiness check is a gate for "should receive traffic" and the liveness check is a gate for "should be restarted".
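
In pod spec terms it's something like this (paths and thresholds are just illustrative):

    containers:
      - name: web
        image: registry.example.com/myapp:1.2.3
        # readiness: while this fails the pod is removed from Service
        # endpoints (no traffic) but is not restarted
        readinessProbe:
          httpGet:
            path: /ready
            port: 8080
          periodSeconds: 5
        # liveness: after repeated failures the kubelet restarts the container
        livenessProbe:
          httpGet:
            path: /live
            port: 8080
          periodSeconds: 10
          failureThreshold: 3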


If that’s all you use k8s for, you don’t need it.

Myself, I need to set up a bunch of other cloud services for day 2 operations.

And I need to do it consistently across clouds. The kind of clients I serve won’t use my product as a SaaS due to regulatory/security reasons.


Multi-cloud is one of the few compelling use cases I can think of for Kubernetes.

That said, there are relatively few organizations that actually require it.


> K8 does one thing well, manage complex distributed systems such as knowing when to scale up and down if you so choose and when to start up new pods when they fail.

K8s does the very simple stateless case well, but for anything more complicated you are on your own. Stateful services are still a major pain, especially those with leader elections. There is no feedback to K8s about the application state of the cluster, so it can't know which instances are less disruptive to shut down or which shard needs more capacity.


> I understand that K8 does many things but its also how you look at the problem. K8 does one thing well, manage complex distributed systems such as knowing when to scale up and down if you so choose and when to start up new pods when they fail.

Also, in the sense of "many small components that each do one thing well", k8s is even more Unix-like than Unix in that almost everything in k8s is just a controller for a specific resource type.


I'm not sure that "fewer concepts" is a win. "Everything is a file" went too far with Linux, where you get status from the kernel by reading what appears to be various text files. But that runs into all the complexities of maintaining the file illusion. What if you read it in small blocks? Does it change while being read? If not, what if you read some of it and then just hold the file handle. Are you tying up kernel memory? Holding important locks? Or what?

Orchestration has a political and business problem, too. How does Amazon feel about something that runs most jobs on your own bare metal servers and rents extra resources from AWS only during overload situations? This appears to be the financially optimal strategy for compute-bound work such as game servers. Renting bare iron 24/7 at AWS prices is not cost effective.


> "Everything is a file" went too far with Linux

Having had a play with a few variants on this theme, I think kernel based abstractions are the mistake here. It's too low level and too constrained by the low-level details of the API, as you've said yourself.

If you look at something like PowerShell, it has a variant of this abstraction that is implemented in user mode. Within the PowerShell process, there are provider plugins (DLLs) that implement various logical filesystems like "environment variables", "certificates", "IIS sites", etc...

These don't all implement the full filesystem APIs! Instead they have various subsets. E.g. some providers only implement atomic reads and writes, which is what you want for something like kernel parameters, but not generic data files.


I feel like we've already seen some alternatives and the industry, thus far, is still orienting towards k8s.

Hashicorp's stack, using Nomad as an orchestrator, is much simpler and more composable.

I've long been a fan of Mesos' architecture, which I also think is more composable than the k8s stack.

I just find it surprising an article that is calling for an evolution of the cluster management architecture fails to investigate the existing alternatives and why they haven't caught on.


We had someone explore K8s vs Nomad and they chose K8s because the Nomad docs are bad. They got much further with K8s in the same timeboxed spike.


Setting up the right parameters/eval criteria to exercise inside of a few week timebox (I'm assuming this wasn't a many month task) is extremely difficult to do for a complex system like this. At least, to me it is--maybe more ops focused folks can do it quicker.

Getting _something_ up and running quickly isn't necessarily a good indicator of how well a set of tools will work for you over time, in production work loads.


It was more about migrating the existing microservices, which run in Docker Compose today, than some example app. Getting the respective platforms up was not the issue. I don't think weeks were spent, but they were able to migrate a complex application to K8s in less than a week. They couldn't get it running in Nomad, which was tried first due to its supposed simplicity over K8s.


Several years ago -- so pre-K8s too -- I was tasked with setting up a Nomad cluster and failed miserably. Nomad and Consul are designed to work together but also designed distinctly enough that it was a bloody nightmare trying to figure out what order of priority things needed to be spun up in and how they all interacted with each other. The documentation was more like a man page where you'd get a list of options but very little guidance on how to set things up, unlike K8s, whose documentation has a lot of walk-through material.

Things might have improved massively for Nomad since but I honestly have no desire to learn. Having used other Hashicorp tools since, I see them make the same mistakes time and time again.

Now I'm not the biggest fan of K8s either. I completely agree that it's hugely overblown for most purposes despite being sold as a silver bullet for any deployment. But if there's one thing K8s does really well it's describing the different layers in a deployment and then wrapping that up in a unified block. There's less of the "this thing is working, but is this other thing?" when spinning up a K8s cluster.


For me when exploring K8s vs Nomad, Nomad looked like a clear choice. That was until I had to get Nomad + Consul running. I found it all really difficult to get running in a satisfactory manner. I never even touched the whole Vault part of the setup because it was all overwhelming.

On the other side, K8s was a steep learning curve with lots of options and 'terms' to learn, but there was never a point in the whole exploration where I was stuck. The docs are great, the community is great, and the number of examples available allows us to mix and match lots of different approaches.


There is a trap in distributed system design - seeking to scale up from a single-host perspective. An example: we have Apache and want to scale it up, so we put it in a container and generate its configuration so we can run several of them in parallel.

This leads to unnecessarily heavy systems - you do not need a container to host a server socket.

Industry puts algorithms and Big O on a pedestal. Most software projects start as someone building algorithms, with deployment and interactions only getting late attention. This is a bit like building the kitchen and bathroom before laying the foundations.

Algorithm-centric design creates mathematically elegant algorithms that move gigabytes of I/O across the network for every minor transaction. Teams wrap commodity resource schedulers around carefully tuned worker nodes, and discover their performance is awful because the scheduler can’t deal in the domain language of the big picture problem.

I think it is interesting that the culture of Big O interviews and k8s both came out of Google.


Do you have any examples/ideas of what a non algorithm-first approach might look like?

Not sure if this is helpful, there are some notes at cthulix.com.

The problem is the devops culture that has burdened development teams with having to juggle a lot of complexity. The solution is having some separation of concerns. Development teams should not have to spend a lot of time on devops. That's something that should just work that you buy from someone. You pay for the privilege of doing more interesting things.

Kubernetes becomes a problem when you have people who are not operations people with many years of experience with this stuff trying to do this while learning how to do it at the same time. The related problem is that having people spend time on this is orders of magnitudes more expensive than it is to run an actual cluster, which is also not cheap.

A week of devops time easily equates to months or years of cloud hosting for a modestly sized setup using e.g. Google Cloud Run. And let's face it, it's never just a week. Many teams have full-time devops people costing $100-200K/year, each. Great if you are running a business generating millions in revenue. Not so great if you are running a project that has yet to generate a single dollar of revenue and is a long time away from actually getting there. That describes most startups out there.

I actually managed to stay below the Cloud Run free tier for a while, making it close to free. Took me 2 minutes to set up CI/CD. Comes with logging, auto scaling, alerting, etc. Best of all, it freed me up to do more interesting things. Technically I'm using Kubernetes. Except of course I'm not. I spent zero time fiddling with Kubernetes-specific config. All I did was tell Google Cloud Run to create a CI/CD pipeline from this git repository and scale it. A 3-minute job to click together. The service was up and running right after the build succeeded. Great stuff. That's how devops should be: spend a minimum of time on it in exchange for acceptable results.


"Development teams should not have to spend a lot of time on devops. That's something that should just work that you buy from someone."

This is the fundamental disagreement. DevOps was a reaction to developers who built software that was nearly impossible to operate because they treated Ops as servants who were paid to do the dirty work, rather than peers with a set of valuable skills that cover a scope beyond what many Dev teams have. And it was a reaction to Ops being ground down into becoming the "department of no", when really they should be at the table with the development team as a way towards a collaborative reality check. A model where one team gets to completely ignore the complexities of operational reality is a broken, inhumane, and unsustainable model.

That said, it's also unsustainable to expose all complexity to dev teams that don't have the skills or incentive to manage this. Progressive disclosure and composable abstractions are the tool to remedy this. Kubernetes was never intended to be exposed directly to app developers, it was a system developer's platform toolkit. Exposing it is misunderstanding + laziness on the part of some operations teams. The intent was always to build higher PaaS-like abstractions such as Knative (which is what Google Cloud Run is based on).


As a frontend developer, I love to run applications in production, being able to get a terminal to my server, set up metrics, and do all these devopsy things.

But it is a totally different experience doing this with App Engine, Heroku, Tsuru, etc. than with a custom in-house Kubernetes plus a thousand custom homemade tools, 10 different repositories with custom undocumented YAML files, and another 3000 "gotchas" of things that don't work yet ("we're on it", "we need to migrate to the new version", etc.).

So I sympathize with the parent comment in the sense that, in this custom-built mountain of stuff, I don't want to do devops... if you give me an easy-to-use, well-tested, well-documented, stable production infrastructure like the ones I mentioned, then I'm all in.

I also agree with you on your last paragraphs about not exposing the raw thing to the developers. This is the key.

The problem is when the systems gurus want you to understand everything they understand to the same level, your frontend coworkers want you to be on the latest version of every library, your product manager wants you to perfectly understand the product, your manager expects you to be the best at dealing with people, and you still have to smile and be happy about team building... oh, and don't forget the Agile Coach expecting you to also be good at all the team dynamics and card games.

I'm all in on operating the applications my team builds. Having to operate custom in-house Kubernetes clusterfucks is not my job.


100%. I spent 5+ years of my life helping Cloud Foundry take off, and saw the enormous benefits of having your own private Heroku.

But the market overwhelmingly decided it wanted to play with a lower level foundation (those CF instances mostly are still chugging along running hundreds of thousands of containers, but they’re in their own world… “legacy”?).

Let’s own it and not delude ourselves that the current state of Kubernetes is the end state. It’s like saying the Linux syscall interface is too complex for app developers. Well yes! It’s for system developers. We as an industry are working to improve that.


Treating ops as a separate janitorial service and how that goes south is nicely captured in this article:

https://machinesplusminds.blogspot.com/2012/08/the-carpets-a...


> Great if you are running a business generating millions of revenue.

It's not even great in that situation. Millions in profit, perhaps, but that $200K+ would probably be better spent elsewhere - enhancing functionality, increasing sales, support, etc.


One point where the analogy fails is that Multics was never particularly popular. Although it was historically influential (especially but not purely through its influence on Unix), it was only ever a small player in the market. It was positioned as an operating system for high-end multi-million dollar mainframes, but in that market IBM was king (with thousands of sites), and Multics wasn't even near being second place (with a mere 80 sites at its peak). Even for its vendor, GE/Honeywell, it was an also-ran – Honeywell ended up preferring GCOS as the solution for that market, which is part of why it killed Multics off. GCOS was no doubt technically inferior, but it was a simpler system which made more frugal use of system resources.

By contrast, k8s is wildly popular. I have no idea how many installations of it exist in the world, but it probably numbers into the millions.


I'm pretty biased since I gave k8s trainings and operate several kubes for my company and clients.

I'll take two pretty different contexts to illustrate why for me k8s makes sense.

1- I'm part of the cloud infrastructure team (99% AWS, a bit of Azure) for a pretty large private bank. We are in charge of security and conformity of the whole platform while trying to let teams be as autonomous as possible. The core services we provide are a self-hosted Gitlab along with ~100 CI runners (Atlantis and Gitlab-CI, that many for segregation), SSO infrastructure and a few other little things. Team of 5, I don't really see a better way to run this kind of workload with the required SLA. The whole thing is fully provisioned and configured via Terraform along with its dependencies and we have a staging env that is identical (and the ability to pop another at will or to recreate this one). Plenty of benefits like almost 0 downtime upgrades (workloads and cluster), off-the-shelf charts for plenty of apps, observability, resource optimization (~100 runners mostly idle on a few nodes), etc.

2- Single VM projects (my small company infrastructure and home server) for which I'm using k3s. Same benefits in terms of observability, robustness (at least while the host stays up...), IaC, and resource usage. A stable, minimalist, hardened host OS with the ability to run whatever makes sense inside k3s. I had to set up similarly small infrastructures for other projects recently with the constraint of relying on more classic tools so that it's easier for the next ops person to take over, and I ended up rebuilding a fraction of the k8s/k3s features with much more effort (did that with Docker and directly on the host OS for several projects).

Maybe that's because I know my hammer well enough for screws to look like nails but from my perspective once the tool is not an obstacle k8s standardized and made available a pretty impressive and useful set of features, at large scale but arguably also for smaller setups.


99% AWS? You can do Gitlab runners and pretty much everything else with ECS+Fargate. You wouldn't even need to maintain any nodes, clusters, etc!

We have both Nomad (Consul + Vault + Nomad) and Kubernetes (hosted and on prem) running, both excel at different things.

I love Nomad's flexibility and ease of use: a simple HCL file and I (and all the devs) can debug and understand what is going on with the deployment without wasting a whole sprint; debugging and understanding the systems is trivial. However, I agree parts of the documentation should be fixed and can confuse people who want to start up, and it's also relatively "new" insofar as there is a small but growing community around it. I love Kubernetes because of the community: if there's a Helm chart for a service, it's going to work in 80% of the cases. If however there are bugs in the Helm chart, or something is not quite on the beaten path, then good luck. Most of the time wasted on Kubernetes was due to the inexperience of the operators and the esoteric bugs that can happen now and then. Building on top of things that have been done before is a great way to win time and flexibility, but it shouldn't be an excuse not to understand them (Helm charts as an example).

In both cases, you always need an ops team to take care of the clusters. For Nomad, 2-3 people are enough. For Kubernetes you will need 5+ people depending on the size and locality of the cluster, if you want to do things right, that is. If your dev team is managing them, it's already game over and just a question of time until you've made yourself more real problems than you initially had.

What bugs me the most, however, is the cargo culting around the tools serving as a "beating around the bush" technique to avoid doing actual work. They're just that, tools: if you have to deploy a Rails or Django app with an SQLite database, just do it on metal with a two-liner "ci/cd" and grow from there. If it gets bigger, sure, go for Kubernetes to manage the deployments and auto scale, but be damn sure that you can debug anything that goes wrong within minutes/hours. If things go wrong and there's no hit on your googled error code, you essentially fall from your highest level of abstraction and are at the mercy of consultants that will both waste your time in writing requirements and waste your money by taking more time than was initially planned and agreed upon (my experience, sample size N=6).


One of the most relevant and amazing blogs I have read in recent times.

I have been working for a firm that has been onboarding multiple small-scale startups and lifestyle businesses to Kubernetes. My opinion is that if you have a Ruby on Rails or Python app, you don't really need Kubernetes. It is like bringing a bazooka to a knife fight. However, I do think Kubernetes has some good practices embedded in it, which I will always cherish.

If you are not operating at huge scale, in terms of operations and/or teams, it actually comes at a high cost in productivity and tech debt. I wish there were an easier tech that would bridge going from one VM, to a bunch of VMs, to a bunch of containers, to Kubernetes.


> Kubernetes is our generation's Multics

Prove it. Create something simpler, more elegant and more principled that does the same job. (While you're at it, do the same for systemd which is often criticized for the same reasons.) Even a limited proof of concept would be helpful.

Plan9 and Inferno/Limbo were built as successors to *NIX to address process/environment isolation ("containerization") and distributed computing use cases from the ground up, but even these don't come close to providing a viable solution for everything that Kubernetes must be concerned with.


I can claim electric cars will beat out hydrogen cars in the long run. I don't have to build an electric car to back up this assertion. I can look at the fundamental factors at hand and project out based on theoretical maximums.

I can also claim humans will have longer lifespans in the future. I don't need to develop a life extending drug before I can hold that assertion.

Kubernetes is complex. Society still worked on simpler systems before we added layers of complexity. There are dozens of layers of abstraction above the level of transistors; it is not a stretch to think that there is a more elegant abstraction yet to be designed, without it having to "prove" itself to zozobot234.


Claiming Kubernetes is Multics, and that UNIX is around the corner, is a worthless claim without actual data or argument to back it up.

To me, Kubernetes is the new UNIX, centered around a small number of core ideas: controller loops, Pods, level-triggered events, and a fully open, well-standardized, declarative, and extensible RESTful API.

The various clouds and predecessor cloud orchestrators were the infinitely complicated beasts.

OP just linked to a few rants about the complexity of the CNCF ecosystem (not Kubernetes), and an extended cranky rant / thought exercise by the MetalLB guy. The latter is the closest to an actual argument against Kubernetes, but there's a LOT of things to disagree with in that post.


What are the "fundamental factors at hand" with Kubernetes and software orchestration? How do you quantify these things?

> comments are intended to add color on the design of the Oil language and the motivation for the project as a whole.

Comments are also easier to write than code. He really does seem obligated to prove Kubernetes is our generation's Multics, and that's a good thing.


The successor will probably be a more integrated platform that provides a lot of the stuff you currently need sidecars, etc. for.

Probably a language with good IPC (designed for real distributed systems that handle failover), some unified auth library, and built-in metrics and logging.

A lot of real-life k8s complexity is trying to accommodate many supplemental systems for that stuff. Otherwise it's a job scheduler and haproxy.



Nomad also doesn't have a lot of features that are built into Kubernetes, features that otherwise require other Hashicorp tools. So now you have a Vault cluster, a Consul cluster, a Nomad cluster, then HCL to manage it all, and probably a Terraform Enterprise cluster. So what have you gained, besides the same amount of complexity with fewer features?


I think Nomad sounds like the direction the OP blog post is proposing to move in: a set of largely independent tools which can each address some aspect of the problem kubernetes is trying to solve.

> a set of largely independent tools which can each address some aspect of the problem kubernetes is trying to solve.

But Kubernetes is already this. Sure, the core is a lot bigger than something like Nomad, but some of it is replaceable, and there are plenty of simpler alternatives to the built-in pieces.

And anyway, my point still stands. What's the point of having 20 different independent systems that address the aspects K8s is trying to solve versus one big system that addresses all the headaches? To me having 20 different systems that potentially have many fundamental differences is more complex than a single system that has the same design philosophies and good integration across the board.


AWS's cloud primitives are certainly better. Of course it's not FOSS, though it proves orchestration can be done more simply.

https://ably.com/blog/no-we-dont-use-kubernetes

For local development (a must imo), just rock a docker-compose.yml that emulates your cloud setup orchestrated with Terraform/CloudFormation.


This is absolutely not an alternative, not even close. AWS is exactly that: Amazon Web Services. Do you need to host your stuff somewhere else one day? Good luck re-inventing everything from scratch.

I am sort of a k8s hater myself, because I've seen very simple and straight-forward production pipelines, reasonably well understood by admins, turn into over-complicated shit with buggy deploy pipelines literally 10 times slower that no one really understands. All of this to manage maybe 10 nodes per service. All of that said, I cannot deny that these new solutions are something that the previous generation of Ansible scripts and AWS primitives were not. Now we can move all of it to pretty much any infrastructure without changing much. And as much as I hate it, I don't really have an answer to "what else, if not kubernetes?" that doesn't feel a little bit dishonest. I seriously would like to hear one.


Comment on your first point— I have done the work you speak of (porting AWS-specific code to other cloud providers). It is absolutely possible and relatively painless if you design for that feature at the outset. Almost all of the lower level AWS services have a counterpart in the other ecosystems.

So if you build the right interface abstractions around those components, it gets you a long way.


If you are running, say, a monolith in a container on Fargate, fronted by an ALB, talking to RDS Postgres or Aurora, there is not much complexity in moving that anywhere.

Needs to have a really serious branding first.

Like Yolodyne Cybernetrix


I feel like k8s sits in the same space as git: one of those tools that is ridiculously complex, obtuse, and user-unfriendly, but at the same time worth sucking it all up, because the win from consolidating your knowledge into something that is an industry standard is far greater than whatever particular things one doesn't like about how it works.

It is a fascinating dynamic, however, that generates these outcomes where a large number of people collectively settle on something that the majority of them seem to hate.


> A distributed OS that follows the Perlis-Thompson Principle would have fewer concepts.

Kubernetes is a relatively simple system with few concepts. You have manifests stored in etcd, behind the API server, and various controllers that act on these manifests. Some controllers (Deployment, StatefulSet, etc.) come standard out of the box, some are custom and added later. The basic unit of computation is a Pod, and DNS is provided with Services. Cluster administrators need to worry about the networking and storage layers, not cluster users. Honestly, that's pretty much it! Really not so complicated.
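
To make that concrete, here's roughly what a cluster user touches day to day -- a minimal sketch of a Deployment plus a Service (the name and image are made up):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web                  # hypothetical app name
    spec:
      replicas: 3                # desired state; the controller reconciles toward it
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
          - name: web
            image: nginx:1.21    # hypothetical image
            ports:
            - containerPort: 80
    ---
    apiVersion: v1
    kind: Service                # stable DNS name + virtual IP in front of the pods
    metadata:
      name: web
    spec:
      selector:
        app: web
      ports:
      - port: 80
        targetPort: 80

kubectl apply that, and the Deployment controller does the rest.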

Now, does that help you write a manifest for the Deployment controller? No, and neither does it help you autoscale the Deployment via writing a manifest for the HorizontalPodAutoscaler controller, or setting up a load balancer by writing a manifest for the Ingress controller. But I wouldn't call the UNIX model complex because Linux distributions and package managers add complexity.
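
For what it's worth, the autoscaling manifest itself isn't the hard part either. A minimal sketch (hypothetical names, assumes metrics-server is installed) looks like:

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: web
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: web                    # the Deployment above
      minReplicas: 2
      maxReplicas: 10
      metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 70   # scale out when average CPU crosses 70% of requests

The hard part is knowing which knobs matter for your workload, which is exactly the distribution/package-manager layer I'm talking about.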


Kubernetes gets a lot of shade, and rightfully so. It’s a tough problem. I do hope we get a Ken Thompson or Rich Hickey-esque solution at some point.


I see the shade thrown at k8s... but honestly I don't know how much of it is truly deserved.

k8s is complex not unnecessarily, but because k8s is solving a large host of problems. It isn't JUST solving the problem of "what should be running where". It's solving problems like "how many instances should be where? How do I know what is good and what isn't? How do I route from instance A to instance b? How do I flag when a problem happens? How do I fix problems when they happen? How do I provide access to a shared resource or filesystem?"

It's doing a whole host of things that are often ignored by shade throwers.

I'm open to any solution that's actually simpler, but I'll bet you that by the time you've reached feature parity, you end up with the same complex mess.

The main critique I'd throw at k8s isn't that it's complex, it's that there are too many options to do the same thing.


I think part of the shade throwing is that k8s has a high lower bound of scale/complexity, an "entry fee", where it actually makes sense. If your scale/complexity envelope is below that lower bound, you're fighting k8s, wasting time, or wasting resources.

Unfortunately unless you've got a lot of k8s experience that scale/complexity lower bound isn't super obvious. It's also possible to have your scale/complexity accelerate from "k8s isn't worthwhile" to "oh shit get me some k8s" pretty quickly without obvious signs. That just compounds the TMTOWTDI choice paralysis problems.

So you get people that choose k8s when it doesn't make sense and have a bad time and then throw shade. They didn't know ahead of time it wouldn't make sense and only learned through the experience. There's a lot of projects like k8s that don't advertise their sharp edges or entry fee very well.


> I think part of the shade throwing is that k8s has a high lower bound of scale/complexity, an "entry fee", where it actually makes sense. If your scale/complexity envelope is below that lower bound, you're fighting k8s, wasting time, or wasting resources.

Maybe compared to Heroku or similar, but compared to a world where you're managing more than a couple of VMs I think Kubernetes becomes compelling quickly. Specifically, when people think about VMs they seem to forget all of the stuff that goes into getting VMs working which largely comes with cloud-provider managed Kubernetes (especially if you install a couple of handy operators like cert-manager and external-dns): instance profiles, AMIs, auto-scaling groups, key management, cert management, DNS records, init scripts, infra as code, ssh configuration, log exfiltration, monitoring, process management, etc. And then there's training new employees to understand your bespoke system versus hiring employees who know Kubernetes or training them with the ample training material. Similarly, when you have a problem with your bespoke system, how much work will it be to Google it versus a standard Kubernetes error?
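
As a rough sketch of what I mean by "largely comes with": with cert-manager and external-dns installed, a single Ingress like this (hostnames and issuer name are made up, and the exact annotations depend on how those operators are configured) gets you the DNS record and the TLS cert without bespoke glue:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: web
      annotations:
        cert-manager.io/cluster-issuer: letsencrypt-prod           # assumes a ClusterIssuer with this name exists
        external-dns.alpha.kubernetes.io/hostname: app.example.com
    spec:
      tls:
      - hosts:
        - app.example.com
        secretName: web-tls      # cert-manager creates and renews this secret
      rules:
      - host: app.example.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80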

Also, Kubernetes is really new and it is getting better at a rapid pace, so when you're making the "Kubernetes vs X" calculation, consider the trend: where will each technology be in a few years. Consider how little work you would have to do to get the benefits from Kubernetes vs building those improvements yourself on your bespoke system.


Honestly, the non-k8s cloud software is also getting excellent. When I have a new app that I can't containerize (network proxies mostly) I can modify my standard terraform pretty quickly and get multi-AZ, customized AMIs, per-app user-data.sh, restart on failures, etc. with private certs and our suite of required IPS daemons, etc. It's way better than pre-cloud things. K8s seems also good for larger scale and where you have a bunch of PD teams wanting to deploy stuff with people that can generate all the YAML/annotations etc. If your deploy #s scale with the number of people that can do it, then k8s works awesomely. If you have just 1 person doing a bunch of stuff, simpler things can let that 1 person manage and create a lot of compute in the cloud.


K8s is the semi truck of software: great for semi-scale things, but often used when a van would do just fine.


To me, usefulness is less to do with scale and more to do with number of distinct services.

If you have just a single monolith app (such as a wordpress app) then sure, k8s is overkill. Even if you have 1000 instances of that app.

It's once you start having something like 20+ distinct services that k8s starts paying for itself.


Especially with 10 distinct development teams that all have someone smart enough to crank out some YAML with their specific requirements.


Kubernetes is an aircraft carrier, where most people just need a skiff.

> how many instances should be where?

Are you referring to instances of your application, or EC2 instances? If instances of your application, in my experience it doesn't really do much for you unless you are willing to waste compute resources. It takes a lot of dialing in to effectively colocate multiple pods and maximize your resource utilization. If you're referring to EC2 instances, well, AWS autoscaling does that for you.

Amazon and other cloud providers have the advantage of years of tuning their virtual machine deployment strategies to provide maximum insulation from disruptive neighbors. If you are running your own Kubernetes installation, you have to figure it out yourself.

> How do I know what is good and what isn't?

Autoscaling w/ a load balancer does this trivially with a health check, and it's also self-healing.

> How do I route from instance A to instance b?

You don't have to know or care about this if you're in a simple VPC. If you are in multiple VPCs or a more complex single VPC setup, you have to figure it out anyway because Kubernetes isn't magic.

> How do I flag when a problem happens?

Probably a dedicated service that does some monitoring, which as far as I know is still standard practice for the industry. Kubernetes doesn't make that go away.

> How do I fix problems when they happen?

This is such a generic question that I'm not sure how you felt it could be included. Kubernetes isn't magic, your stuff doesn't always just magically work because Kubernetes is running underneath it.

> How do I provide access to a shared resource or filesystem?

Amazon EFS is one way. It works fine. Ideally you are not using EFS and prefer something like S3, if that meets your needs.

> It's doing a whole host of things that are often ignored by shade throwers.

I don't think they're ignored; I think you assume they are because those things aren't talked about. They aren't talked about because they aren't the issue with Kubernetes.

The problem with Kubernetes is that it is a massively complex system that needs to be understood by its administrators. The problem it solves overlaps nearly entirely with existing solutions that it depends on. And it introduces its own set of issues via complexity and the breakneck pace of development.

You don't get to just ignore the underlying cloud provider technology that Kubernetes is interfacing with just because it abstracts those away. You have to be able to diagnose and respond to cloud provider issues _in addition_ to those that might be Kubernetes-centric.

So yes, Kubernetes does solve some problems. Do the problems it solves outweigh the problems it introduces? I am not sure about that. My experience with Kubernetes is limited to troubleshooting issues with Kubernetes ~1.6, which we got rid of because we regularly ran into annoying problems. Things like:

* We scaled up and then back down, and now there are multiple nodes running 1 pod and wasting most of their compute resources.

* Kubernetes would try to add routes to a route table that was full, and attempts to route traffic to new pods would fail.

* The local disk of a node would fill up because of one bad actor and impact multiple services.

At my workplace, we build AMIs that bake-in their Docker image and run the Docker container when the instance launches. There are some additional things we had to take on because of that, but the total complexity is far less than what Kubernetes brings. Additionally, we have the side benefit of being insulated from Docker Hub outages.


I think a large part of the problem is that systems like Kubernetes are designed to be extensible with a plugin architecture in mind. Simple applications usually have one way of doing things but they are really good at it.

This begs the question of whether there is a right or wrong way of doing things, and whether a single system can adapt fast enough to the rapidly changing underlying strategies, protocols, and languages to always be at the forefront of what is considered best practice at all levels of development and deployment.

These unified approaches usually manifest themselves as each cloud provider's best-practice playbooks, but each public cloud is different. Unless something like Kubernetes can build a unified approach across all cloud providers and self-hosting solutions, it will always be overly complex, because it will always be changing for each provider to maximize their interest in adding their unique services.


Having used Kubernetes for a while, I'm of the opinion that it's not so much complex as it is foreign, and when we learn Kubernetes we're confronted with a bunch of new concepts all at once even though each of the concepts are pretty simple. For example, people are used to Ansible or Terraform managing their changes, and the "controllers continuously reconciling" takes a bit to wrap one's head around.

And then there are all of the different kinds of resources and the general UX problem of managing errors ("I created an ingress but I can't talk to my service" is a kind of error that requires experience to understand how to debug because the UX is so bad, similarly all of the different pod state errors). It's not fundamentally complex, however.

The bits that are legitimately complex seem to involve setting up a Kubernetes distribution (configuring an ingress controller, load balancer provider, persistent volume providers, etc) which are mostly taken care of for you by your cloud provider. I also think this complexity will be resolved with open source distributions (think "Linux distributions", but for Kubernetes)--we already have some of these but they're half-baked at this point (e.g., k3s has local storage providers but that's not a serious persistence solution). I can imagine a world where a distribution comes with out-of-the-box support for not only the low level stuff (load balancers, ingress controllers, persistence, etc) but also higher level stuff like auto-rotating certs and DNS. I think this will come in a few years but it will take a while for it to be fleshed out.

Beyond that, a lot of the apparent "complexity" is just ecosystem churn--we have this new way of doing things and it empowers a lot of new patterns and practices and technologies and the industry needs time and experience to sort out what works and what doesn't work.

To the extent I think this could be simplified, I think it will mostly be shoring up conventions, building "distributions" that come with the right things and encourage the right practices. I think in time when we have to worry less about packaging legacy monolith applications, we might be able to move away from containers and toward something more like unikernels (you don't need to ship a whole userland with every application now that we're starting to write applications that don't assume they're deployed onto a particular Linux distribution). But for now Kubernetes is the bridge between old school monoliths (and importantly, the culture, practices, and org model for building and operating these monoliths) and the new devops / microservices / etc world.


I have borg experience and my experience with k8s was extremely negative. Most of my time was spent diagnosing self-inflicted problems caused by the k8s framework.

I've been trying nomad lately and it's a bit more direct.


I think that's because Borg comes with a team of engineers who keep it running and make it easy.

I've had a similar experience with Cassandra. Using Cassandra at Netflix was a joy because it always just worked. But there was also a team of engineers who made sure that was the case. Running it elsewhere was always fraught with peril.


Yes, several of the big benefits are that the people who run Borg (and the ecosystem) run it well (for the most part), and the ability to find them in chat and get them to fix things for you (or explain some sharp edge).


I have borg experience and I think Kubernetes is great. Before borg, I would basically never touch production -- I would let someone else handle all that because it was always a pain. When I left Google, I had to start releasing software (because every other developer is also in that "let someone else handle it" mindset), and Kubernetes removed a lot of the pain. Write a manifest. Change the version. Apply. Your new shit is running. If it crashes, traffic is still directed to the working replicas. Everyone on my team can release their code to any environment with a single click. Nobody has ever ssh'd to production. It just works.

I do understand people's complaints, however.

Setting up "the rest" of the system involves making a lot of decisions. Observability requires application support, and you have to set up the infrastructure yourself. People generally aren't willing to do that, and so are upset when their favorite application doesn't work their favorite observability stack. (I remember being upset that my traces didn't propagate from Envoy to Grafana, because Envoy uses the Zipkin propagation protocol and Grafana uses Jaeger. However, Grafana is open source and I just added that feature. Took about 15 minutes and they released it a few days later, so... the option is available to people that demand perfection.)

Auth is another issue that has been punted on. Maybe your cloud provider has something. Maybe you bought something. Maybe the app you want to run supports OIDC. To me, the dream of the container world is that applications don't have to focus on these things -- there is just persistent authentication intrinsic to the environment, and your app can collect signals and make a decision if absolutely necessary. But that's not the way it worked out -- BeyondCorp style authentication proxies lost to OIDC. So if you write an application, your team will be spending the first month wiring that in, and the second month documenting all the quirks with Okta, Auth0, Google, Github, Gitlab, Bitbucket, and whatever other OIDC upstreams exist. Big disaster. (I wrote https://github.com/jrockway/jsso2 and so this isn't a problem for me personally. I can run any service I want in my Kubernetes cluster, and authenticate to it with my FaceID on my phone, or a touch of my Yubikey on my desktop. Applications that want my identity can read the signed header with extra information and verify it against a public key. But, self-hosting auth is not a moneymaking business, so OIDC is here to stay, wasting thousands of hours of software engineering time a day.)

Ingress is the worst of Kubernetes' APIs. My customers run into Ingress problems every day, because we use gRPC and keeping HTTP/2 streams intact from client to backend is not something it handles well. I have completely written it off -- it is underspecified to the point of causing harm, and I'm shocked when I hear about people using it in production. I just use Envoy and have an xDS layer to integrate with Kubernetes, and it does exactly what it should do, and no more. (I would like some DNS IaC though.)
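
To illustrate the underspecification: with the nginx ingress controller, keeping gRPC working ends up hinging on controller-specific annotations like the one below (a sketch with made-up names; other controllers want their own, incompatible knobs), which the Ingress spec itself says nothing about:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: grpc-api                  # hypothetical name
      annotations:
        # nginx-specific; not part of the Ingress API itself
        nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
    spec:
      rules:
      - host: grpc.example.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: grpc-api
                port:
                  number: 50051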

Many things associated with Kubernetes are imperfect, like Gitops. A lot of people have trouble with the stack that pushes software to production, and there should be some sort of standard here. (I use ShipIt, a Go program to edit manifests https://github.com/pachyderm/version-bump, and ArgoCD, and am very happy. But it was real engineering work to set that up, and releasing new versions of in-house code is a big problem that there should be a simple solution to.)

Most of these things are not problems brought about by Kubernetes, of course. If you just have a Linux box, you still have to configure auth and observability. But also, your website goes down when the power supply in the computer dies. So I think Kubernetes is an improvement.

The thing that will kill Kubernetes, though, is Helm. I'm out of time to write this comment but I promise a thorough analysis and rant in the future ;)


Helm's biggest problem is...

Let me rephrase that. ONE of Helm's biggest problems is that it uses text-based templating, instead of some sort of templating system that understands the thing it's actually trying to template.

This makes some things much MUCH harder than they should need to be.

It makes it really hard to have your configuration bridge things like "you have this much RAM" or "this is the CPU you have" to flags or environment variables that your code can understand.

It also makes it hard to compose configuration.

As much as I don't like BCL, it is depressingly good at being a job configuration language for "run things in the cloud".


I think you actually touch on three good points here. One is that "foo: {{ var }}" is not a hygienic template. If var is equal to "bar\nbaz: quux", you've injected hard-to-debug additional keys into the output. The next is that there are common pieces that Kubernetes defines, and they are all demoted to map[string]interface{}. For example, a lot of charts have "resources" attached to applications, and those are (in Go land) v1.ResourceRequirements. But it could be anything in Helm, it's just a JSON object. So Helm itself can't say "you typed 1000M cpu, but probably meant 1000m cpu". And finally, each chart has total latitude to name anything whatever it wants. One chart could say "myapp: { cpu: 42 }" and another configures that as "yourapp: { resources: { requests: { cpu: 42 } } }". You get to learn Kubernetes all over again for each app. With zero documentation, usually, except a values.yaml to cut-n-paste from. (My success rate is low. Every Helm app I've installed has required me to read the source code to get it to do what I want. But, other people have better luck, to be fair.)
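
A concrete sketch of that first problem (a made-up chart and values):

    # hypothetical chart template:
    #   foo: {{ .Values.foo }}
    # values.yaml:
    #   foo: "bar\nbaz: quux"
    # rendered output (plain text substitution, no YAML awareness):
    foo: bar
    baz: quux          # an extra key nobody intended
    # {{ .Values.foo | quote }} would prevent this, but nothing enforces it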

On top of all that, the value that Helm delivers to people is "you don't have to read the documentation for Deployment to make a Deployment". But then you have to debug that, and you have another layer of complexity bundled on top of your already weak understanding of the core.

Like I get that Kubernetes asks you a lot of questions just to run a container. But they are all good questions, and the answers are important. Just answer the questions and be happy. (Yes, you need to know approximately how much memory your application uses. You needed to know that in the old pet computer era too -- you had to pick some amount to buy at the memory store. Now it's just a field in a YAML file, but the answer is just as critical. A helm chart can set guesses, and if that makes you feel better, maybe that's the value it delivers. But one day, you'll find the guess is wrong, and realize you didn't save any time.)
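
For the record, the "questions" mostly boil down to a block like this (the numbers are made up; the units are the part that trips people up):

    resources:
      requests:
        cpu: 250m        # millicores: 1000m == 1 CPU; "250M" would mean something very different
        memory: 256Mi    # Mi = mebibytes; the suffix casing matters here too
      limits:
        cpu: "1"
        memory: 512Mi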


And crucially, once you have given a resource limit, there's no way to (trivially) feed that back into an environment variable or flag to signal that to the app runtime (which, IIRC, is Really Handy for Java-based apps and can seriously improve the performance of Go-based ones).

Twice today I had to explain to coworkers that "auth is one of the hardest problems in computer science".

For gRPC and HTTP/2: you're doing end to end gRPC (IE, the TCP connection goes from a user's browser all the way to your backend, without being terminated or proxied)?


I don't think I have raw HTTP/2 streams from user to service anywhere. My preference is to have Envoy in the middle doing routing/statistics, and so the TCP session is not preserved from frontend to backend. Each request/response could be handled by a different backend instance. (I don't think Envoy strictly requires this, however; upgrade/websockets work somehow. But maybe only on HTTP/1.1.) This is generally what people want their load balancer to do; a common complaint is that gRPC opens long-lived streams (channels, actually, using their term), and so one client can overload one backend, when the other 100 replicas could happily handle their request/replies. (gRPC's mechanism for state between requests and replies is server stream/client stream/bidirectional stream, which is different than channels. The individual messages in streams can't be split between backends, and so the load balancer won't interfere with that.)

At work we have a service that communicates to clients over gRPC (the CLI app is a gRPC client). We typically deploy that as two ports on the load balancer, one for gRPC and the other for HTTPS. Again, the TCP connection isn't actually preserved while transiting the load balancer, but it's logically a L4 operation -- one client channel is one server channel. If the backend becomes unhealthy, you'll have to open a new channel to the load balancer to get a different backend. (This doesn't really come up for us, because people mostly run a single replica of the service.)


There are some attempts to gradually find alternatives to Helm while remaining compatible with it. See https://carvel.dev/ for example.

There is a lot of innovation possible in this space.


> The thing that will kill Kubernetes, though, is Helm. I'm out of time to write this comment but I promise a thorough analysis and rant in the future ;)

Too much of a cliffhanger! Now I want to know your POV :)


Ever since Microsoft acquired the company behind Helm and https://news.ycombinator.com/item?id=11922299 (try clicking the article link), it has been used as a showcase when onboarding azure customers, to somehow prove that "yeah azure is hip and we love open source".

So, yes, we need to know.


I don't know why anyone uses Helm. I've done a fair amount of stuff with k8s and never saw the need. The built-in kustomize is simple and flexible enough.

I use Helm because I haven't found another tool that deletes resources in the cluster when I delete them from the yaml. kubectl --prune is unstable and super buggy. I would love to ditch Helm. Is there a tool I should know about that covers this?

Take a look at Kapp on https://carvel.dev/ for this, possibly.

kpt live apply prunes resources

Ditto.

Granted, I have to assume that borg-sre, etc. etc. are doing a lot of the necessary basic work for us, but as far as the final experience goes?

95% of cases could be better solved by a traditional approach. NixOps maybe.


If Antoine de Saint-Exupery was right that: "Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away." then IT as an industry is heading further and further away from perfection at an exponentially accelerating rate.

The only example I can think of where a modern community is actively seeking to simplify things is Clojure. Rich Hickey is very clear on the problem of building more and more complicated stuff and is actively trying to create software by composing genuinely simpler parts.


I'd argue that achieving perfection is not a linear process. Sometimes you have to add way too many things before you can remove all of the useless things.

Nobody is puppeteering some grand master plan, we're on a journey of discovery. When we're honest with ourselves, we realize nobody knows what will stick and what won't.


Absolutely, but dogma and "best-practices" anchor design discussions around today's norms. People get very defensive about tools they've invested in and that kind of dogma stunts imagination for different and better solutions.

Discovery is very rarely an accidental process so we can't take for granted that it will be inevitable.

I think it's important to recognize that most people are not interested in discovery at all. Practitioners are often not explorers, and that's okay. They may find incremental improvements through their practice, but paradigm shifting innovation comes from those willing to swim against the stream of popular opinion.

Discovery has to be an intentional pursuit of those brave enough to imagine a future beyond Multics/Kubernetes/etc despite the torrent of opinionated naysayers telling them they are foolish for even trying.


I guess I completely disagree. Discovery is nothing except a series of accidents and happenstance.

Nobody gets anything difficult right on the first try, and there’s an arrogance in thinking we could.


If you understand the quote to mean that the process of achieving perfection can only consist of removing things rather than adding them, how do you know whether you've really achieved perfection or just reached a local optimum?

Jonathan Blow has also been vocal in that regard.

Consider looking into Fuchsia's component framework for thoughts on what a distributed application looks like inside an operating system. https://fuchsia.dev/fuchsia-src/concepts/components/v2/intro...

Okay, right off the bat, the author is already giving himself answers:

> Essentially, this means that it [k8s] will have fewer concepts and be more compositional.

Well, that's already the case! At its base, k8s is literally a while loop that converges resources to wanted states.

You CAN strip it down to your liking. However, as it is usually distributed, it would be useless to distribute it with nothing but the scheduler and the API...

I do get the author's point. At a certain point it becomes bloated. But I find that when used correctly, it is adequately complex for the problems it solves.


After reading the title I worried this was going to be yet another k8s bashing post. Pleasantly surprised to see this take because it’s a refreshing look at kube and I strongly agree. I think it’s the absolute best way to deploy large systems today, especially if you’re a polyglot organization. But it can be tough to grok without lots of labbing and experimentation - it’s hard to approach.

We are really at the infancy of containerization. Kube is a springboard for doing the next big thing.


It looks to be getting more complex too. I understand the sales pitch for a service mesh like Istio, but now we're layering something fairly complicated on top of K8S. Similar for other aspects like bolt-on secrets managers, logging, deployment, etc., run through even more abstractions.



Whatever Kubernetes flaws, the analogy is clearly wrong. Multics was never a success and never had wide deployment so Unix never had to compete with it. Once an OS is widely deployed, efforts to get rid of it have a different dynamic (see the history of desktop computing, etc). Especially, getting rid of any deployed, working system (os, application, language, chip-instruction-set, etc) in the name of simplicity is inherently difficult. Everyone agrees things should be stripped down to a bare minimum but no one agrees on what that bare minimum is.


Agreed; I think a better analogy for Kubernetes is XML. So many wasted meetings about where to split up namespaces and should every last thing be an attribute or a subtag; none of that added business value. JSON took all those decisions off the table. And yes, huge industrials validly complained that JSON didn't cover X or Y or Z, but for most users JSON is a much better solution than XML.

Kubernetes reminds me a lot of XML; there are too many decision points adding unnecessary complexity for the average user's needs. Too many foot guns. Too many unintuitive things.

People keep on describing it as "declarative", which seems to be about as true as saying that Java is a functional language. Hopefully someday we'll have something actually declarative, and much more intuitive, something more like AWS's CDK.


How is Kubernetes not declarative?

I don’t disagree about the exposed complexity, that’s a fundamental decision Kubernetes made about openness and extensibility. Everything is on a level playing field, there are no private APIs.


As I recall, running "kubectl edit deployment..." doesn't do anything except edit the definition of the config. Instead, to have it take effect you seem to have to manually kill pods, and the new pods will come up with the edited config. If it were declarative, it should detect what needs to be changed, and automatically update accordingly. Same thing with editing a config. It's possible it was the funnel my local DevOps forced on me (and lacking needed permissions at every turn), but my experience was that if you removed deployments, configs, etc on the next deployment, nothing would be cleaned up and you had to manually remove. Again, that's not declarative.

In my experience Terraform and CDK are much more declarative; where you never issue commands to delete a pod or a load balancer or similar. Instead you describe what you want, and their engine figures out what it needs to add or remove or change to get to that state.


That’s not accurate, Kubectl edit (or an apply on an existing resource) does immediately detect what needs changing.

For example if you edit a deployment, it will create a new ReplicaSet and new pods and do a gradual rollout from the old one.
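
E.g. (a sketch, with hypothetical image tags): changing only the image field in the live Deployment and saving is enough; no pods need to be deleted by hand.

    # the only change made via kubectl edit / kubectl apply:
    spec:
      template:
        spec:
          containers:
          - name: web
            image: myapp:1.2.4    # was myapp:1.2.3
    # the pod template hash changes, so the Deployment controller creates a new
    # ReplicaSet and gradually scales the old one down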

There’s corner cases where a controller won’t let you edit certain fields of a resource because they didn’t cover that case, but that’s relatively rare.

Deleting a pod, which IME isn't too common day to day but can be useful to recover from some failure conditions (usually low level problems with node, storage, or network), is also a demonstration of declarative reactions at work: if it was created by a controller it will be immediately recreated. Pods are meant to be ephemeral.

Terraform certainly is declarative but it isn’t typically used as an engine that enables high availability and autoscale by scanning its declarative state and comparing to the real world. This is what Kubernetes excels at - continually scanning and reacting to changes in the world. Terraform I have found to be tricky to run continuously, any out of band state change can lead to it blowing away your resources.


That's not been my experience at all. Have had to manually delete pods all the time. Is it possible that this was something fixed in newer versions?

Example case: DevOps pushed out a new version of Istio (without talking with anyone) and even though the container configs are referencing the new version of Istio, only half of the pods in the namespace got restarted, so we get paged because a number of services can't make any network connections with the other services. Had to manually delete all the pods, and then the new pods all came up with the right version of Istio and are able to communicate again.

On a side note: how is it at all acceptable to have a networking "mesh" that isn't backwards compatible? I can count on no hands the number of times that my fargate/lambda services couldn't communicate because half of my fleet is running a different version of VPC. Thus far my experience with Istio is that it has never added any business value (for projects I've been involved in), and only adds complexity, headaches, and downtime.

Back to the declarative thing: I'm fairly confident I've edited service configs, added service configs, edited the container image, and container environment variables, and never saw kubernetes restart anything automatically; had to manually delete.


Istio is a whole different and very advanced beast, maintained outside of the Kubernetes core, and not for the faint of heart.

The issue there is that it literally needs to rewrite the pod YAML to inject the sidecar envoy proxy. So say you want to upgrade Istio. Well Istio needs to change the Pod spec, and it doesn’t do this automatically. If you look at the upgrade instructions here: https://istio.io/latest/docs/setup/upgrade/in-place/#upgrade...

Step 6 is “After istioctl completes the upgrade, you must manually update the Istio data plane by restarting any pods with Istio sidecars:

$ kubectl rollout restart deployment”

Istio can be useful (most security teams want it for Auto-mTLS, it also could save you from firewall hell by using layer 7 authorization policies, and can do failover across DCs pretty well) but is crazy to use on its own as unsupported vanilla OSS without a distro like Solo, Tetrate, Tanzu, Kong, etc., or without significant automation to make upgrades transparent. Istio is often very frustrating to me because of cases like yours: it's too easy to make a mess of it. There are much easier approaches that cover 80% (an ingress controller like Contour or nginx + cert-manager).

On editing configs, one area Kubernetes does NOT react to is ConfigMaps and Secrets being updated. Editing an Image or Env var in a ReplicaSet or Deployment will definitely trigger a pod recreate (I see this daily).

Though take a look at Kapp (https://carvel.dev/kapp/) which provides clearer rollout visibility and can version ConfigMaps + trigger reactions to them updating, also there is Reloader https://github.com/stakater/Reloader
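
There's also the common Helm-side workaround for the ConfigMap case: hashing the config into a pod-template annotation so a config change forces a rollout (a sketch, assuming the chart has a templates/configmap.yaml):

    spec:
      template:
        metadata:
          annotations:
            checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}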


It's called "Images and Feelings", but I quite dislike using a the Cloud Native Computing Foundation's quite busy map of services/offerings as evidence against Kubernetes. That lots of people have adopted this, and built different tools & systems around it & to help it is not a downside.

I really enjoy the Oil Blog, & was really looking forward when I clicked the link to having some good real criticism. But it feels to me like most of the criticism I see: highly emotional, really averse/afraid/reactionary. It wants something easier and simpler, which is so common.

I cannot emphasize enough, just do it anyways. There's a lot of arguments from both sides about trying to assess what level of complexity you need, about trying to right size what you roll with. This outlook of fear & doubt & skepticism I think does a huge disservice. A can do, jump in, eager attitude, at many levels of scale, is a huge boon, and it will build skills & familiarity you will almost certainly be able to continue to use & enjoy for a long time. Trying to do less is harder, much harder, than doing the right/good/better job: you will endlessly hunt for solutions, for better ways, and there will be fields of possibilities you must select from, must build & assemble yourself. Be thankful.

Be thankful you have something integrative, be thankful you have common cloud software you can enjoy that is cross-vendor, be thankful there are so many different concerns that are managed under this tent.

The build/deploy pipeline is still a bit rough, and you'll have to pick/build it out. Kubernetes manifests are a bit big in size, true, but it's really not a problem, it really is there for basically good purpose & some refactoring wouldn't really change what it is. There's some things that could be better. But getting started is surprisingly easy, surprisingly not heavy. There's a weird emotional war going on, it's easy to be convinced to be scared, to join in with reactionary behaviors, but I really have seen nothing nearly so well composed, nothing that fits together so many different pieces well, and Kubernetes makes it fantastically easy imo to throw up a couple containers & have them just run, behind a load balancer, talking to a database, which covers a huge amount of our use cases.


I like this title so much I am finally going to give this shell a try. One thing I notice right away is readline. Could editline also be an option? (There are two "editlines", the NetBSD one and an older one at https://github.com/troglobit/editline.) Next thing I notice is the use of ANSI codes by default. Could that be a compile-time option, or do we have to edit the source to remove it?

TBH I think the graphical web browser is the current generation's Multics. Something that is overly complex, corporatised, and capable of being replaced by something simpler.

I am not steeped in Kubernetes or its reason for being but it sounds like it is filling a void of shell know-how amongst its audience. Or perhaps it is addressing a common dislike of the shell by some group of developers. I am not a developer and I love the shell.

It is one thing that generally does not change much from year to year. I can safely create things with it (same way people have made build systems with it) that last forever. These things just keep running from one decade to the next no matter what the current "trends" are. Usually smaller and faster, too.


Kubernetes is designed similarly to the shell: the APIs are a uniform interface, designed for stability, while resources are composable and extensible through it.

If you use the stable APIs, your code will run for decades. My hypothetical deployment from 2016 will not need touching (beyond image updates for CVEs) to keep running in 2026 or 2036.


I think that all this boils down to a rather simple dilemma for modern cloud-native infrastructure platforms [in terms of developer experience, i.e., external APIs etc., not internal architecture; and this is not even limited to this class of systems - it is a general concept for all software systems]: a) universal, highly configurable & complex (K8s), or b) highly opinionated and [relatively] simple (e.g., Nomad/Waypoint, Heroku, Apollo, CapRover, Dokku, Porter, AWS Elastic Beanstalk, Digital Ocean's App Platform, Fly, Render). Obviously, there exists a middle-ground category as well: relatively simple, but still opinionated and moderately or highly (e.g., OpenShift) configurable platforms. Thus, the optimal choice depends on the relevant team's or organization's priorities with respect to those attributes (configurability, complexity, level & scope of opinionation), as well as the level of organizational standardization for IT environments, economic factors, vendor lock-in considerations and, perhaps, something else that I forgot to mention.

No, Multics was easier to understand, easier to manage, and more reliable.

However Multics didn't offer automatic/elastic cloud scaling, which seems to be the main selling point of modern, usually very complicated, container orchestration systems, nor was it designed for building distributed systems.

However, if modern Linux had a Multics-style ring architecture, it could replace many of the uses for virtualization and containers.


Add the two cents of http://adamierymenko.com/ports.html

"Since we chose the path of virtualization and containerization we've allowed the multi-tenancy facilities in Unix to atrophy and it would take a little bit of work to bring them back into form."


I wish that were so.

Multics made a big splash in the literature but in terms of use it was an obscure os on an obscure mainframe. It had nothing on TOPS-20 or VM/CMS.

Unfortunately many of us are suffering with Kube.


Hi Andy: if you see this, I'm the other 4d polygon renderer! I read the kubernetes whitepaper after RC and ended up spending a lot of the last year on it. Maybe if I had asked you about working with Borg I could have saved myself some trouble. Glad to see you're still very active!

Hi :) Yeah I think it's an interesting topic, and I'm not saying anyone should necessarily be doing something different. But if it "feels wrong", then that's not too surprising to me :) I'd be interested in hearing about any k8s experiences.

Sure, you don't have to use k8s. You can roll your own solutions to what it solves.

Your own custom-built solution will work, but what about in 5 years? 10 years? When it all becomes legacy, what then?

Will you find the talent who'll want to fix your esoteric environment, just like those COBOL devs?

Will anyone respond to your job posts to fix your snowflake environment? Will you pay above-average wages to fix your snowflake ways of solving problems that k8s standardized?

I bet your C-level is thinking this. What's to say they won't rip out all of your awesomeness and replace it with standard k8s down the line as it dominates the market share?

When you're laid off in the next recession, is your amazing problem-solving on your snowflake environment going to help you when everyone else is fully well versed with k8s?


Whoa dude, ease up on the Kool-Aid

Personally, I think this is an extremely mild version of the dire situation that most teams working with legacy systems often find themselves in.

Is it really that complex compared to an operating system like Unix though? I mean there's nothing simple about Unix. To me the question is, is it solving a problem that people have in a reasonably simple way? And it seems like it definitely does. I think the hate comes from people using it where it's not appropriate, but then, just don't use it in the wrong place, like anything of this nature.

And honestly its complexity is way overblown. There are like 10 important concepts, and most of what you do is run "kubectl apply -f somefile.yaml". I mean, services are DNS entries, deployments are collections of pods, and a pod is a self-contained server. None of these things are hard?
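To illustrate with a sketch (names made up), a Service really is little more than a stable DNS name plus a label selector:

    apiVersion: v1
    kind: Service
    metadata:
      name: web                # resolvable in-cluster as web.<namespace>.svc.cluster.local
    spec:
      selector:
        app: web               # traffic goes to whatever pods carry this label
      ports:
        - port: 80             # port the Service exposes
          targetPort: 8080     # port the container actually listens on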


What’s complex about *nix? All you need to understand are device files, POSIX permissions and ACLs, cgroups, tcp/udp sockets, nginx/haproxy, thread/process scheduling, (virtual) memory, PAM, dbus, syslog, pipes, unix sockets, 30 filesystem options, nfs, userspace vs. kernel space, sysvinit or 10 flavors of systemd files, iptables/ufw, networkmanager, ssh, selinux, chroot, flatpak, snaps, rpm, deb, ansible/chef/puppet.

Oh deploying on the cloud? Cloudformation/AzureRM as well.

Pretty easy. No damn complex k8s needed.


The irony in your comment is that tools like NetworkManager, snaps, and systemd are Kubernetes-like, and are severely disliked by experienced unix admins due to their needless complexity and poor usability.

Well, given that Multics was much more secure than UNIX ever was, and written in a proper systems programming language that everyone (except UNIX folks) is trying to get back to, it probably isn't that bad after all.

> proper systems programming language that everyone (except UNIX folks) is trying to get back to

Wikipedia: Written in PL/I, Assembly language

????


I advise you to learn about the safety capabilities regarding strings, arrays, pointer manipulation and references, numerics and enumerations in PL/I versus C.

Additionally, you can go over to Multicians and read the security assessment reports of Multics vs. UNIX done by DoD back in the day.


I agree a lot with his premise, that Kubernetes is too complex, but not at all with his alternative to go even lower level.

And the alternative of doing everything yourself isn't too much better either, you need to learn all sorts of cloud concepts.

The better alternative is a higher level abstraction that takes care of all of this for you, so an average engineer building an API does not need to worry about all these low level details, kind of like how serverless completely removed the need to deal with instances (I'm building this).


That sounds like knative

I haven't heard of that. Took a look and it still seems too low level. I think we need to think much bigger in this space. Btw, we're not approaching this from a Kubernetes angle at all.

my problem with k8s is that you learn OS concepts, and then k8s / docker shits all over them.


Yes, this is a core part of the design issue and argument I'm making.

The new concepts are leaky abstractions -- they wrap the old ones badly. You still have to understand both to understand the system. Networking in k8s seems to really suffer from this.

And the new concepts and old concepts don't compose. They create combinatorial problems, i.e. O(M*N) amounts of glue code.


It's a double whammy, you get the complexity of Kubernetes, and then you get to exec into a docker image that has been stripped of any useful debugging tools under the guise of security.

It's even better when it's a busybox-based image, for that Linksys-router/80s-unix troubleshooting experience.


K8s abstracts away much more complexity than it exposes, which is the hallmark of a great API. History will surely view it amongst the greatest APIs of all time.

Anyone want to fill me in on what this "Perlis-Thompson Principle" is?

I still have to explain it properly, but there is a pretty good sketch on a recent blog post, linked from this comment. (You will probably end up chasing a lot of comment threads, but it's mostly there.)

https://news.ycombinator.com/item?id=27914632

It's an argument about avoiding O(M*N) glue code. O(M*N) amounts of code are expensive to write, and contain O(M*N) numbers of bugs.



I had to Google it and scroll a blog post.

This whole article is, well, a little silly. It says that Kubernetes will disappear and be replaced by something simpler, because it's very difficult to create reliable systems that use it.

But...there are tons of reliable systems at Google, all using Borg, and that has a lot of features Kubernetes doesn't have.

Stripping down Kubernetes doesn't reduce complexity. It just shifts it.


I don't agree. I worked at Google for over 10 years, during the time when SREs started to make as much as or more money than SWEs. There's a reason for that.

I also disagree that the systems are reliable. From the outside, most of the stateless services are fast and reliable; the stateful ones less so. From the inside, no: internal services were unreliable and slow. (This could have changed in the last 5 years, but there was a clear trend in one direction in my time there.) There were many more internal services on Borg than external ones.


i thought that kubernetes is our generation's JCL (job control language on IBM mainframes); there is a remote similarity in how we write descriptors for tasks, submit them for execution, and wait until the mainframe has considered our specification. (suddenly feeling old because of this comparison ...)

Yup lol, I've had this same thought. It's like neo-Tuxedo which is basically a mainframe TPS for UNIX

https://en.wikipedia.org/wiki/Tuxedo_(software)


it's funny when you think of it, most of all this distributed system magic was already there on the old mainframe, in some form. And it was there for ages...

Eh. Kubernetes is complex, but I think a lot of that is that computing is complex, and Kubernetes doesn't hide it.

Your UNIX system runs many daemons you don't have to care about. Whereas something like lockserver configuration is still a thing you have to care about if you're running Kubernetes.


Related: https://www.youtube.com/watch?v=3Ea3pkTCYx4

Key insight can be summarized as "code the perimeter"


(author here) Yes exactly! This is what I'm calling the Perlis-Thompson principle, although it still needs to be fully formed and explained. There are obvious objections to it (which I have some answers to).

Sketch of the argument here, with links: http://www.oilshell.org/blog/2021/07/blog-backlog-1.html#con...

Here's my comment which links the "Unix vs. Google" video (and I very much agree based on my first hand experience with Google's incoherent architecture, which executives started to pay attention to in various shake-ups.)

https://lobste.rs/s/euswuc/glue_dark_matter_software#c_sppff...

It links to my comment about the "narrow waist" idea in networks and operating systems, which is a closely related concept regarding scaling your "codebase" and interoperability.

I have been looking up the history of this idea. I found a paper co-authored by Eric Brewer which credits it to Kleinrock:

http://bnrg.eecs.berkeley.edu/~randy/Papers/InternetServices... (was this ever published? I can't find a date or citations)

But I'm not done with all the research. I'm not sure if it's worth it to write all this up, but I think it's interesting, and I will learn something by explaining it clearly and going through all the objections.

I'm definitely interested in the input of others. I have about 10 different resources where people are getting at this same scaling idea, but I can use more arguments / examples / viewpoints.


Going to post a lovely update for Docker Swarm here - Swarm simplifies/reduces the possibility space compared to K8s, but I consider that a feature, not a drawback. With Mirantis actively hiring and extending support for SwarmKit, it should be considered a viable 'batteries included' alternative to K8s:

https://github.com/docker/roadmap/issues/175#issuecomment-82...
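For anyone who hasn't tried it: a Swarm deployment is roughly just a Compose file plus `docker stack deploy`. A minimal sketch (image and names are placeholders):

    # docker-compose.yml, deployed with: docker stack deploy -c docker-compose.yml myapp
    version: "3.8"
    services:
      web:
        image: nginx:alpine
        ports:
          - "80:80"
        deploy:                  # the deploy: section is what Swarm acts on
          replicas: 3
          update_config:
            parallelism: 1       # rolling updates, one task at a time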


Most of this stuff is completely over my head, and I'm certainly no kubernetes expert, but I'm working on a project that's deployed with kubernetes, and one of the steps in our process is running our e2e tests, also in a separate kubernetes deploy. These tests (using Cypress) have proven to be extremely flakey on the server. Locally they work fine, though. I was wondering if Cypress is simply crap, but this article makes me wonder if kubernetes might be the real culprit here.

Kubernetes for sure. But it will force you to write more resilient software. Since we migrated to Kubernetes, we have had to implement automatic retry strategies in every network exchange, HTTP request, and database transaction, because the managed Kubernetes of a major cloud provider is a train wreck.

Two amazing quotes that really resonate with me:

> The industry is full of engineers who are experts in weirdly named "technologies" (which are really just products and libraries) but have no idea how the actual technologies (e.g. TCP/IP, file systems, memory hierarchy etc.) work. I don't know what to think when I meet engineers who know how to setup an ELB on AWS but don't quite understand what a socket is...

> Look closely at the software landscape. The companies that do well are the ones who rely least on big companies and don’t have to spend all their cycles catching up and reimplementing and fixing bugs that crop up only on Windows XP.


this is bound to happen. the more complicated the stack you use becomes, the fewer details you understand about the lower levels.

who, today, can write or optimize assembly by hand? How about understand the OS internals? How about write a compiler? How about write a library for their fav language? How about actually troubleshoot a misbehaving *nix process?

All of these were table stakes at some point in time. The key is not to understand all layers perfectly. The key is to know when to stop adding layers.


Totally get your point! But I worry the industry is becoming bloated with people who can glue a few frameworks together building systems we depend on. I wish there was more of a focus on teaching and/or learning fundamentals rather than frameworks.

Regarding your points, I actually would expect a non-junior developer to be able to write a library in their main language and understand the basics of OS internals (to the point of debugging and profiling, which would include troubleshooting *nix processes). I don't expect them to know assembly or C, or be able to write a compiler (although I did get this as a take-home test just last week).


I think learning the fundamentals is a worthy pursuit, but in terms of getting stuff done well, you realistically only have to grok one level below whatever level of abstraction you're operating at.

Being able to glue frameworks together to build systems is actually not a negative. If you're a startup, you want people to leverage what's already available.


I agree. An ideal is far from reality.

I like to get deep into low level stuff, but my employer doesn't care if I understand how a system call works or whether we can save x % of y by spending z time on performance profiling that requires good knowledge of Linux debugging and profiling tools. It's quicker, cheaper and more efficient to buy more hardware or scale up in public cloud and let me use my time to work on another project that will result in shipping a product or a service quicker and have direct impact on the business.

My experience with the (startup) business world is that you need to be first to ship a feature or you lose. If you want to do something then you should use the tools that will allow you to get there as fast as possible. And to achieve that it makes sense to use technologies that other companies utilise because it's easy to find support online and easy to find qualified people that can get the job done quickly.

It's a dog-eat-dog world and startups in particular have the pressure to deliver and deliver fast since they can't burn investor money indefinitely; so they pay a lot more than large and established businesses to attract talent. Those companies that develop bespoke solutions and build upon them have a hard time attracting talent because people are afraid they won't be able to change jobs easily and these companies are not willing to pay as much money.

Whether you know how a boot process works or how to optimise your ELK stack to squeeze out every single atom of resource is irrelevant. What's required is to know the tools to complete a job quickly. That creates a divide in the tech world where on one side you have high-salaried people who know how to use these tools but don't really understand what goes on in the background and people who know the nitty-gritty and get paid half as much working at some XYZ company that's been trading since the 90s and is still the same size.

My point is that understanding how something works underneath is extremely valuable and rewarding, but isn't required to be good at something else. Nobody knows how Android works, but that doesn't stop you from creating an app that will generate revenue and earn you a living. Isn't the point of constant development of automation tools to make our jobs easier?

EDIT: typo


IMO the problem with this is that when you go from startup -> not a startup, you go from creating an MVP to something that works with a certain amount of uptime, has performance requirements, etc. Frameworks will still help you with those things, but if you need to solve a performance issue it's gonna be hard to debug if you don't know how the primitives work.

Let's say you have a network performance issue because the framework you were using was misusing epoll, set some funky options with setsockopt, or turned on Nagle's algorithm. A person can figure it out, but it's gonna be a slog, whereas if they had experience working with the lowest-level tools they could have an intuition about how to debug the issue.

An engineer doesn't have to write everything with the lowest-level primitives all the time, but if they have NEVER done it then IMO that's an issue.


I agree with what you said, but isn't the goal to survive the seed stage and find product-market fit and customers at all costs? If you get that, you can raise money and hire engineers to rewrite your stack. If you fail to get customers, you might have a really maintainable codebase but no money, and hence bankruptcy.

The point being that maybe it’s fine if there are a lot of people who only know how to glue frameworks together if they know enough to build useful products. Let all of them try; some of them might very well make it.


This totally matches my experience from two different perspectives.

1. Working as a programmer perspective: I worked at a company with good practices but so-so revenue. What happens: horribly underpaid salary, nice laptop (but not the one I want), nice working conditions. I am now working at a company with pretty great revenue and mediocre practices. What happens: good salary, I get the laptop I want (not the one I need), working conditions are mediocre.

2. UX perspective (I did a bootcamp for fun): UX'ers make throwaway prototypes all the time in order to validate a certain hypothesis. When that's done, they create the real thing (or make another bigger throwaway prototype).

I feel this is the best approach, from a business standpoint. This also means you have different kind of developers and it depends on the stage what kind they are. I'd separate it as prototype stage, mid-stage and massive scale stage.


That’s exactly what was covered in the Systems track of my CS undergrad. I’m always confused when people dismiss their own degree as irrelevant or primarily mathematical… we were coding and debugging toy schedulers, virtual memory managers, file systems, TCP stacks, IRC and mail servers, locking primitives, etc. in C.

I really like the way you've put it "Glue a few X together".

This is what most software development is becoming. We are no longer building software; we are gluing/integrating prebuilt software components or using services.

You no longer solve fundamental problems unless you have a very special use case or do it for fun. You mostly have to figure out how to solve higher-level problems using off-the-shelf components. It's both good and bad if you ask me (depends on which part of the glass you're looking at).


I also would have loved discovering electricity or information theory. Somehow it's convenient that people standing on each other's shoulders across a few generations made processors out of that, but it sadly puts the bar pretty high for going further nowadays.

Thankfully I can use these cool processors to build the next CandyCrush and shine in our modern and innovative society.


This is something that I can’t show numbers for, but it seems likely that the absolute number of people who do “build software” has increased with time; it’s just that the number of “gluing frameworks” jobs has increased by a lot more, so you’re probably just in the wrong category. It seems difficult to think that there aren’t thousands of network engineers keeping the internet backbone humming along.

It's like building a house. Should I have the HVAC guy do the drywall and the drywall guy do the HVAC? Clearly software engineering isn't the same as building a house, but if you have an expert in JAX-WS/SOAP and a feature that needs to connect to some legacy SOAP healthcare system... have him do that, and let the guy that knows how to write an MPI write the MPI.

At the risk of falling down an analogy rabbit hole, I'll be upset if the HVAC guy assumes that air will flow freely throughout the house and has no understanding of walls, or if the drywall guy blindly screws into my air ducts. No abstraction is perfect; some knowledge of the other layers is necessary to do a proper job. Unfortunately, in software, it seems like our abstractions are particularly leaky, and knowledge of other layers is frequently necessary to do a proper job. In house building, issues are usually contained by physical proximity, whereas the same is obviously not true in software, particularly networked software.

the hvac guy does not know how drywall is made and would struggle to produce a piece of drywall. As a matter of fact, so would the drywall guy. They don't build their own materials, they use materials they buy from Home Depot.

This isn't a bad analogy. Like modern houses, software has gotten large, specific, and more complex in the last 30 some odd years.

Some argue it's unnecessary complexity, but I don't think that's correct. Even individuals want more than a basic GeoCities website. Businesses want uptime, security, flashiness, etc... in order to stand out.


I've (unfortunately) been through a few house refurbishments by now, and the good workers are the ones who also know a bit about the other domains in house refurbishing. The HVAC guy will know about wiring, and the good drywall guy will know a bit of the layman's job as well. They don't necessarily have to, but the good ones will.

> How about understand the OS internals? How about write a compiler? How about write a library for their fav language? How about actually troubleshoot a misbehaving *nix process?

That's what I expect from someone who graduated from a serious CS/Engineering program.


you're mixing up having an idea of how the OS works (i.e. conceptual/high level) with having working knowledge and being able to hack into the OS when needed. I know this may sound like moving the goalposts, but it really does not help me that I know conceptually that there is a file system if I don't work with it directly and/or know how to debug issues that arise from it.

> having working knowledge and being able to hack into the OS when needed.

I'm going to parrot the GP: "That's what I expect from someone who graduated from a serious CS/Engineering program."

I know there are a lot of really bad CS programs in the US, but some experience implementing OS components in a System course so that they can "hack into the OS when needed" is exactly what I would expect out of a graduate from a good CS program.


I think your expectations are out of alignment with what's happening. I know software engineers who graduated with CS degrees from schools like MIT, Urbana-Champaign, and Stanford who took Operating System classes but could not realistically "hack into the OS". If those programs aren't consistently imparting that knowledge to students without an explicit interest, I don't see how others can be expected to...

> I know software engineers who graduated with CS degrees from schools like MIT, Urbana-Champaign, and Stanford who took Operating System classes but could not realistically "hack into the OS".

That's surprising. Recent grads?


By "into" I assume you meant "on". The OS courses at UIUC (not a wine, btw :)), MIT, and Stanford def prepare you for some kernel hacking if needed.

"into" was quoting an earlier poster and hasty typos abound :)

The discussion centers on the following expectation of graduates from strong CS programs.

> having working knowledge and being able to hack into the OS when needed.

Now, the courses from the listed schools may prepare some students, but I am simply reporting that I have met numerous graduates who state very explicitly:

- they are not comfortable with a variety of operating system concepts

- they are not comfortable interacting with operating systems in any depth

I don't have a big diverse data set, but the impression given is that if you expect this level of expertise you will be disappointed regularly. If the strongest CS programs pre-selecting for smart and driven students can't reliably impart that skillset, why would I expect other schools to?


IDK, I think the convo is hard to have without explicit goalposts.

For context, the original quote was:

> How about understand the OS internals? How about write a compiler? How about write a library for their fav language? How about actually troubleshoot a misbehaving *nix process?

Writing a compiler, writing a library for their fav language, and troubleshooting a misbehaving *nix process are all examples of things I would definitely expect a CS major to have done at some point.

A SoTA compiler for Rust or whatever? Ok, no. But, you know, a compiler.

Ditto for library -- better than the standard lib? Ok, no. But, you know, a standard lib that's good enough.

ditto for debugging *nix processes. Not a world-class hacker, just, you know, capable of debugging a process.

I guess the other examples in that quote seem to suggest that "OS internals" probably means something like "knowledge at the level of a typical good OS course".

And who knows what those people meant by "comfortable interacting with operating systems in any depth". There could also be some reverse D-K effect going on here... "I got a B- in CMU's OS course" still puts you very well into the category of "understand the OS internals", IMO.


> who ... understand the OS internals? ... How about write a library for their fav language? How about actually troubleshoot a misbehaving *nix process?

Ex-Amazon here. You are describing standard skills required to pass an interview for an SDE 2 in the teams I've been in at Amazon.

Some candidates know all the popular tools and frameworks of the month but do not understand what an OS does, or how a CPU works or networking and do not get hired because they would struggle to write or debug internal software written from scratch.

[added later] This was many years ago when the bar raiser thing was in full swing and in teams working on critical infrastructure.


LoL. Also Ex-Amazon here. I can tell you for a fact that most SDE2s I've worked with had zero clue on how the OS works. What you're describing may have been true 5-10 years ago, but I think is no longer true nowadays (what was that? raising the bar they called it). A typical SDE2 interview will not have questions around OS internals in it. Before jumping on your high horse again: I've done around 400 interviews during my tenure there and I don't recall ever failing anyone due to this.

Also, gate-keeping is not helpful.


> Also, gate-keeping is not helpful.

This term is really getting over-used. The purpose of job interviews is to decide who gets to pass through the gate. It is literally keeping of a gate.


The term is perfectly apt and descriptive here, because gatekeeping isn't about the keeping of a gate; it's about the inappropriateness of the criteria that are used.

Software engineers, even the ones that are so superpowered that they :gasp: got a job at Amazon once in their life, can go an entire successful career without knowing how to use a kernel debugger, or understand iptables or ifconfig, or understand how virtual memory works.

Some engineers might need to know some of those things, but it is absolutely bonkers to claim that you could never progress past level 2 at Amazon without knowing such things. I know this because I once taught a senior principal engineer at Amazon how to use traceroute.

For many roles at Amazon (particularly the tens of thousands of SDE positions that will end up working with the JVM all day long), asking such low-level questions about how OSes work is about as useful a gatekeeping device as asking whether white cheese tastes better than yellow cheese. And that's why the term gatekeeping is used.


Yikes. Do you think Amazon engineers are overall just dumber or just less used to the lower abstractions? After all, I can’t even ssh into the machines my code runs on nowadays.

newer engineers are less used to lower level abstraction. anecdotal, but that’s what I observed

Yes they do. There is too much software to be written. A person with adequate knowledge of higher abstractions can produce just fine code.

Yes, if there is a nasty issue that needs to be debugged, understanding the lower layers is super helpful, but even without that knowledge you can figure out what's going on if you have general problem-solving abilities. I certainly have figured out a ton of issues in the internals of tools that I don't know much about.

Get off your high horse.


Says one guy. Sorry, there's lots of people who make a living writing software who don't know what an OS does. Gatekeeping helps nobody.

Current big tech here (not Amazon) and very few know lower-level things like C, systems, or OS stuff. Skillsets and specializations are different. Your comment is incredibly false. Even on mobile, if someone is, for instance, a JS engineer, they probably don't know Objective-C, Swift, Kotlin, Java, or any native APIs. And the folks who do native mobile can't write JavaScript to save their lives and are intimidated by it.

I agree with you, as opposed to the other ex-amazon comments you've had (I had someone reach out to interview me this week if that counts? ;)).

Playing devil's advocate, I guess it depends on what sort of software you're writing. If you're a JS dev then I can see why you might not care about pointers in C. I know for sure that, as a Haskell/C++ dev, I run from JS errors like the plague.

However, I do think that people should have a basic understanding of the entire stack from the OS up. How can you be trusted to choose the right tools for a job if you're only aware of a hammer? How can you debug an issue when you only understand how a spanner works?

I think there's a case for an engineering accreditation, distinct from a CS degree, as we become ever more dependent on software.


But the value isn't equal. If you think of the business value implemented in code as the "picture" and the runtime environment provided as the "frame", the frame has gotten much larger and the picture much smaller, as far as what people are spending their time on. (Well, not for the golang folks who just push out a systemctl script and a static binary, but for the k8s devops experts.) I have read entire blogs on k8s and so on where the end result is just "hello world." In the old days, that was the end of the first paragraph. Now a lot of YAML and Docker files and so on are needed just to get to that hello world.

Unix was successful initially because it was a good portable abstraction for managing hardware resources - compute, storage, memory, and network - over a variety of actual physical implementations. Many of the problems people are addressing in k8s, running "a variety of containers efficiently on a set of hosts", are similar to problems unix solved in the 80s. I'm not really saying we should go back; Docker is certainly a solution to "dependency control and process isolation" when you can't have a good static binary that runs a number of identical processes on a host. But the knowledge of what a socket is or how schedulers work is valuable in fixing issues in docker-based systems. (I'm actually more experienced in Mesos/docker rather than k8s/docker, but the bugs are from containers spawning too many GC threads or whatever.)

If someone is trying to debug that LB and doesn't know what a socket is, or debug latency in apps in the cluster and not know how scheduling and perf engineering tools work, then it's going to be hard for them, and extremely likely that they will just jam 90% solution around 90% solution, enlarging the frame to do more and more, instead of actually fixing things, even if their specific problem was easy to fix and would have had a big pay off.


Kubernetes is complicated because it carries around Unix with it and then duplicates half the things and bolts some new ones on.

Erlang is[0] what you can get when you try to design a coherent solution to the problem from a usability and first-principles sort of idea.

But some combination of Worse is Better, Path Dependence, and randomness (hg vs git) has led us here.

[0] As far as what I've read about its design philosophy.


Who is using K8s for Hello World levels of complexity?

Complex problems often have complex solutions, the algorithm we need to run as developers is - what's the net complexity cost of my system if I use this tool?

If the tool isn't removing more complexity than it's adding, you probably shouldn't use it.


(author here) The key difference is that a C compiler is a pretty damn good abstraction (and yes Rust is even better without the undefined behavior).

I have written C and C++ for decades, deployed it in production, and barely ever looked at assembly language.

Kubernetes isn't a good abstraction for what's going on underneath. The blog post linked to direct evidence of that which is too long to recap here; I worked with Borg for years, etc.


K8s may have its time and place but here is something most people are ignoring: in 80% of the time you don't need it. You don't need all that complexity. You're not Google, you don't have the scale or the problems Google has. You also don't have the discipline AND the tooling Google has to make something like this work (cough cough Borg).

For the things that are 1:1 comparable, the Borg abstraction leaks in pretty much the same places as the Kubernetes abstraction. In slightly different ways. The "kubernetes abstraction" spans a larger space than the Borg abstraction does (note, I count "Chubby" and "GSLB" as "not Borg"), so there are more abstraction leaks as a whole in Kubernetes.

Source, I was a Google SRE for 5 years (Ads, Traffic). I ran the in-house kubernetes clusters at a company for 3 years (so, no, no hosted kubernetes, we stood them up either on pretty naked VMs or bare metal).


Assembly aside, all the things you mention are things I would expect a software engineer to understand. As an engineer in my late twenties myself, these are exactly the things I am focusing on. I'm not saying I have a particularly deep understanding of these subjects, but I can write a recursive descent parser or a scheduler. I value this knowledge quite highly, since it's applicable in many places.

I think learning AWS/kubernetes/docker/pytorch/whatever framework is buzzing is easy if you understand Linux/networking/neural networks/whatever the underlying less-prone-to-change system is.


Is there a networking-for-developers style course that you would recommend?

The one at your local university. Either one named something like "Introduction to Networking" or "Introduction to Distributed Systems", depending on what you want to learn.

You could also read some books. Rami Rosen's "Linux Kernel Networking - Implementation and Theory" is quite detailed.

The "UNIX and Linux System Administration Handbook" (Nemeth et al.) covers a lot superficially and will point you in the right direction to continue studying. It's very practical-minded.

For low-level socket programming, you can probably read "Advanced Programming in the UNIX environment". It might be more detail than you need though.

At the other extreme, if you want to study distributed systems, you could read Steen & Tanenbaum's "Distributed Systems".


disclaimer: I don't mean this to come across as arrogant or anything (I'm just ignorant).

I'm totally self-taught and have never worked a programming job (only programmed for fun). Do professional SWEs not actually understand or have the capability to do these things? I've hacked on hobby operating systems, written assembly, worked on a toy compiler and written libraries... I just kind of assumed that was all par for the course


The challenge is that lower level work doesn't always translate into value for businesses. For instance, knowledge of sockets is very interesting. On one hand, I spent my youth learning sockets. For me to bang out a new network protocol takes a few weeks. For others, it can take months.

This manifested in my frustration when I led the building of a new transport layer using just sockets. While the people working with me were smart, they had limited low-level experience for debugging things.


I understand that that stuff is all relatively niche/not necessarily useful in every day life (I know nothing about sockets or TCP/IP) - I just figured your average SWE would at least be familiar with the concepts, especially if they had formal training. Guess it just comes down to individual interests


I think you may have missed the point (as probably a lot of people did) I was trying to make. It's one thing to know what assembly is and to even be able to dabble in a bit of assembly, it's another thing to be proficient in assembly for a specific CPU/instruction set. It's orders of magnitude harder to be proficient and/or actually write tooling for it vs understanding what a MOV instruction does or to conceptually get what CPU registers are.

Professional SWEs are professional in the sense that they know what needs to happen to get the job done (but I am not surprised when someone else does not get or know something that I consider "fundamental")


yes, some intermediate devs I've worked with are unable to do almost anything except write code. e.g. unable to generate an ssh key without assistance or detailed cut and paste instructions.


Shit, I google or manpage or tealdeer ssh key generation every single time....

Pretty much any command I don't run several times a month, I look up. Unless ctrl+r finds it in my history.


Maybe I should apply for some senior dev roles then :)


Many/most senior devs do not have the experience you described. But there are often a lot of meetings, reports, and managing other devs.


Yes, you absolutely should, unless you are already making a ton of money in a more fulfilling job.


It's extremely common. And many of them are fairly productive until an awkward bug shows up.


> who, today, can write or optimize assembly by hand? How about understand the OS internals? How about write a compiler? How about write a library for their fav language? How about actually troubleshoot a misbehaving *nix process? All of these were table stakes at some point in time.

All of these were still table stakes when I graduated from a small CS program in 2011. I'm still a bit horrified to discover they apparently weren't table stakes at other places.


> who, today, can write or optimize assembly by hand? How about understand the OS internals? How about write a compiler? How about write a library for their fav language? How about actually troubleshoot a misbehaving *nix process?

Any one of the undergraduates who take the systems sequence at my University should be able to do all of this. At least the ones who earn an A!


And maybe to learn the smell of a leaking layer?


> who, today, can write or optimize assembly by hand? How about understand the OS internals? How about write a compiler? How about write a library for their fav language? How about actually troubleshoot a misbehaving *nix process?

But developers should understand what assembly is and what a compiler does. Writing a library for a language you know should be a common development task. How else are you going to reuse a chunk of code needed for multiple projects?

You certainly also need to have a basic understanding of unix processes to be a competent developer, I would think.


there is a huge difference between understanding what something is and actually working with it / being proficient with it. huge.

I understand how a car engine works. I could actually explain it to someone who does not know what is under the hood. Does that make me a car mechanic? Hell no. If my car breaks down, I go to the dealership and have them fix it for me.

My car/car engine is ASM/OS Internals/writing a compiler/etc.


While I will not pretend to be an expert at either of those, having at least a minimal understanding of all of these is crucial if you want to pretend to be a software engineer. If you can't write a library, or figure out why your process isn't working, you're not an engineer, you're a plumber, or a code monkey. Not to say that's bad, but considering the sheer amount of mediocre devs at FAANG calling themselves engineers, it just really shines a terrible light on our profession.


abstraction layers exist for this reason. as much of a sham as the 7-layer networking model is, it's the reason you can spin up an http server without knowing tcp internals, and you can write a webapp without caring (much) about whether it's being served over https, http/2, or SPDY.


I would make a big distinction between "without knowing" and "without worrying about." Software productivity is directly proportional to the amount of the system you can ignore while you are writing the code at hand. But not knowing how stuff works makes you less of an engineer and more of an artist. Cause and effect and reason are key tools, and not knowing about the TCP handshake or windowing just makes it difficult to answer fundamental questions about how your code works. It means things will be forever mysterious to you, or interesting in the sense of biology, where you gather a lot of data, rather than mathematics, where pure thought can give you immense power.


To be an engineer, you need the ability to dive deeper into these abstractions when necessary, while most of the time you can just not think about them.

Quickly getting up to speed on something you don't know yet is probably the single most critical skill to be a good engineer.


All true. The problems start getting gnarly when Something goes Wrong in the magic black box powering your service. That neat framework that made it trivial to spin up an HTTP/2 endpoint is emitting headers that your CDN doesn't like and now suddenly you're 14 stack layers deep in a new codebase written in a language that may not be your forte...


While I wouldn't judge someone for not knowing anything about layer 1 or 2, knowing something about MTUs, traffic congestion, and routing should be taught at any basic level of CS school. Not caring if it's served over http2? Why the hell would you? Write your software to take advantage of the platform it's on, and the stack beneath it. The simple fact of using http2 might change your organisation from serving one fat file from a CDN to serving many files that load in parallel and more quickly. By not caring about this, you just... waste it all to make yet another shitty-performing webapp. In the same way, I don't ask you to know the TCP protocol by heart, but knowing just the basics means you can open up wireshark and debug things.

Once again: if you don't know your stack, you're just wasting performance everywhere, and you're just a code plumber.


> knowing something about MTUs

isn't that why MTU discovery exists?

> Write your software to take advantage of the platform it's on, and the stack beneath it

sure, but those bits are usually abstracted away still. otherwise cross-compatibility or migrating to a different stack becomes a massive pain.

> The simple fact of using http2 might change your organisation from one fat file served from a CDN, into many that load in parallel and quicker.

others have pointed out things like h2 push specifically; that was kind of what i meant with the "(much)" in my original comment. Even then, with something like nginx supporting server push on its end, whatever it's fronting could effectively be http/2-unaware and still reap some of the benefits. I imagine it won't be long before there are smarter methods to transparently support this stuff.


But this does matter to web developers! For example, http/2 gives you multiplexed requests over one connection and server push. If you don't know this, you might not take advantage of it and end up with subpar performance. http/3 is going to be built on the UDP-based QUIC, won't even support http://, will need an `Alt-Svc:` header, and removes the http/2 prioritisation stuff.

God knows how a UDP-based http is going to work but these are considerations a 'Software Engineer' who works on web systems should think about.


Someone writing the framework should absolutely be intimately familiar with it, and should work on making these new capabilities easy to use from a higher level where your typical web dev can make use of it without much thought, if any.

Err, no. Look at most startups and tell me how many of them care if they’re serving optimized content over HTTP/2?

you know. deep down inside: we are all code monkeys. Also, as much as people like to call it software engineering, it's anything but engineering.

In 95% of cases if you want to get something/anything done you will need to work at an abstraction layer where a lot of things have been decided already for you and you are just gluing them together. It's not good or bad. It is what it is.


This reminds me of Jonathan Blow's excellent talk on "Preventing the Collapse of Civilization":

https://www.youtube.com/watch?v=ZSRHeXYDLko


I honestly can't tell if this is sarcasm or not.

Which says a lot about the situation we find ourselves in, I guess.


It's not sarcasm. A lot of things simply do not have visibility and are not rewarded at the business level - therefore the incentives to learn them are almost zero

Likewise I don’t know what to think when I meet frequent flyers who don’t know how a jet turbine functions! :)

It is a process of commodification.


The people flying the airplane do understand it though. At least they are supposed to. Some recent accidents make one wonder.


Pilots generally do have some level of engineering background, in order to be able to understand possible in-flight issues, but they're not analogous to software engineers. They're analogous to software operators. Software engineers are analogous to aerospace engineers, who absolutely do understand the internals of how turbines work because they're the people who design turbines.

The problem with software development as a discipline is its all so new we don't have proper division of labor and professional standards yet. It's like if the people responsible for modeling structural integrity in the foundation of a skyscraper and the people who specialize in creating office furniture were all just called "construction engineers" and expected to have some common body of knowledge. Software systems span many layers and domains that don't all have that much in common with each other, but we all pretend we're speaking the same language to each other anyway.


I really like your analogy, I'm stealing it. As a pilot (devops), during interviews I'm often asked about deep aeronautics internals (some graph/tree question) of whatever plane that aeronautical (software) engineer built, and it's always annoyed me that that's a game I have to play. Same realm but completely different fields, separate and yet closely intertwined. This happens quite frequently.

I sometimes hate-joke/fantasize about nailing an SE candidate with an obscure BGP or esoteric DNS question and then being outwardly disappointed in his response, watching him realize he's going to lose this job over something I found completely reasonable to ask, but ultimately entirely useless to his position


It doesn't help that most of it is completely abstract and intangible. You can immediately spot the difference between a skyscraper and a chair, but not many can tell the difference between an e2e encrypted chat app and a support chat app. Each is an 'app', but they are about as different as a chair and a skyscraper in architecture and systems.

Software has been around for longer than aeroplanes

Developers who can only configure AWS are software operators using a product, not software engineers. There’s nothing wrong with that but if no one learns to build software, we’ll all be stuck funding Mr Bezos and his space trips for a long time.


> Software has been around for longer than aeroplanes

Huh?


Ada Lovelace wrote the first program in 1842; it was another 61 years before the Wright brothers' inaugural flight

But it was never actually executed. Too tightly coupled to the hardware layer :/

I think the important point here is that even pilots don't know the full mechanics of a modern jet engine (AFAIK at least, I don't have an ATPL so I'm not 100% on the syllabus). They may know basics like the Euler turbine equation and be able to run some basic calculations across individual rows of blades, but they most likely will not fully understand the fluid mechanics and thermodynamics involved (and especially not the trade secrets of how the entire blades are grown from single crystals).

This is absolutely fine, and one can draw parallels in software: a mid-level software engineer working in an AWS-based environment won't generally need to know how to parse TCP packet headers, despite the software/infrastructure they work on requiring them.


> and especially not the trade secrets of how the entire blades are grown from single crystals

Wait, what? Are you telling me that jet turbine blades are one single crystal instead of having the usual crystal structure in the metal?


I'm not a materials guy personally so won't be the best person to explain the exact science behind them, but they're definitely a really impressive bit of engineering. I had a quick browse of this article and it seems to give a pretty good rundown of their history and why their properties are so useful for jet engines https://www.americanscientist.org/article/each-blade-a-singl...


Wow... Mind-blowing stuff. Long but worth reading.

They are grown as single metal crystals in order to avoid the weaknesses of joints. They are very strong!



Yes and no, for a private pilot license you are taught through intuition and diagrams. No Navier Stokes, no Lattice Boltzmann, no CFD. The FAA does not require you to be able to solve boundary condition physics problems to fly an aircraft.


Modern jet pilots certainly know much less about airplane functions than they did in the 1940s, and modern jet travel is much safer than it was even a decade ago.


Software today is more like jets in the 1940s than modern day air travel. Still crashing a lot and learning a lot and amazing people from time to time.


Many of them know the checklists for their model of aircraft. The downside of the checklists is that they sometimes explain the "what" and not the "why". They are supposed to be taught the why in their simulator training. Newer aircraft are going even further in that direction of obfuscation to the pilots. I expect future aircraft to even perform automated incident checklist actions. To your point, not everyone follows the checklists when they are having an incident as the FDR often reports.


most pilots probably don't know how any specific plane's engine works beyond what inputs give what outcomes, plus a few edge cases. larger aircraft have most of their functions abstracted away, with some models effectively pretending to act like older ones to ship them out faster (commercial pilots have to be certified per plane iirc, so a more familiar plane = quicker recertification), which has led to a couple of disasters recently as the 'emulation' isn't exact. this is still a huge net benefit, as larger planes are far more complicated than a little cessna and much harder to control with all that momentum, mass, and airflow.


Perhaps it is not about a jet engine, but I find this beautiful presentation extremely fascinating:

https://www.faa.gov/regulations_policies/handbooks_manuals/a...


"I don't know what to think when I meet engineers who know TCP/IP but don't quite understand how photons are transmitted over fiber."

"I don't know what to think when I meet engineers who know UNIX but don't quite understand assembly."

What you quoted is tantamount to the lament of a dinosaur that has ample time to observe the meteor approaching and yet refuses to move away from the blast zone.

Less facetiously: the history of progress in most domains, and especially computing, is in part a process of building atop successive layers of abstraction to increase productivity and unlock new value. Anyone who doesn't see this really hasn't been paying attention.


> Look closely at the software landscape. The companies that do well are the ones who rely least on big companies and don’t have to spend all their cycles catching up and reimplementing and fixing bugs that crop up only on Windows XP.

Can we provide an example that isn't also a big company? I can't really think of big companies that don't either dogfood their own tech or rely on someone bigger to handle the things they don't want to (Apple spends $30m a month on AWS, as an example [0]). You could also make the argument that no matter what route you take, you're "relying on" some big player in some big space. What OS are the servers in your in-house data center running? Who's the core maintainer of whatever dev frameworks you subscribe to? (Note: an employee of your company being the core maintainer of a bespoke framework that you developed in-house and use is a much worse problem to have than being beholden to AWS ELB, as an example.)

This kinda just sounds like knowledge and progress. We build abstractions on top of technologies so that every person doesn't have to know the nitty gritty of the underlying infra, and can instead focus on orchestrating the abstractions. It's literally all turtles. Is it important, when setting up a MySQL instance, to know how to write a lexer and parser in C++? Obviously not. But lexers and parsers are a big part of MySQL's ability to function, right?

[0]. https://www.cnbc.com/2019/04/22/apple-spends-more-than-30-mi...


I guess I don’t really understand what a socket is? It’s a magic thingy that allows two computers/processes to communicate and sometimes has trouble with NAT.

I know how to use it certainly, but how the hell it is implemented is more or less black magic to me.

Now that’s not to say I couldn’t learn how a socket works. It’s just never been at all relevant to performing my job.


Yes, but you should at least have some basic troubleshooting skills, like running netstat to see a socket stuck in SYN_SENT or whatever, to get an idea of whether there is a network connectivity issue to your endpoint.

The second quote resonates well with the old Joel Spolsky blog post "Fire and Motion" [1]. Chasing new technologies is exactly what your huge competitors want: you keep adopting XML databases and CORBA (in the olden days), NoSQL just a few years ago, and today Kafka, crypto, AI, and a kajillion AWS products, instead of working on your business.

[1] https://www.joelonsoftware.com/2002/01/06/fire-and-motion/


I hope you and the author realise that sockets are a library. And used to be products! They're not naturally occurring.
