Funny, we're running it sanely without doing that. We've separated our clusters by use case (delivery vs. back-end), working toward a "cell-based" architecture.
> Managed Kubernetes (EKS on AWS, GKE on Google) is very much in its infancy and doesn’t solve most of the challenges with owning/operating Kubernetes (if anything it makes them more difficult at this time)
...some details on the challenges they don't solve, or indeed make more difficult, would be good.
But yep, K8s is complex. So, to paraphrase `import this`, you only want to use it when your systems are complicated enough that the complexity is worth it.
At my current shop, we struggle to maintain k8s clusters with an eight-person team. We inherited the debt of a previous team that had deployed k8s, and their old legacy stuff was full of dependency rot. We have new clusters, and we update them regularly, but the migration has taken nearly half a year so far and we still don't have everything moved over.
You do need good teams to move fast, and good leaders to prioritize paying down tech debt.
I think the way we've approached it achieves the same goal as just giving each team their own cluster to avoid them messing up other teams.
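For anyone curious what "same goal, one cluster" looks like in practice: per-team isolation inside a shared cluster is usually a namespace plus a ResourceQuota and an RBAC binding. A rough sketch (the names `team-a` and `team-a-devs` are made up for illustration; the `edit` ClusterRole is one of Kubernetes' built-in user-facing roles):

```yaml
# Hypothetical per-team isolation in a shared cluster:
# a dedicated namespace, a quota capping what the team can consume,
# and a RoleBinding scoping the team's access to that namespace only.
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "20"       # example limits; tune per team
    requests.memory: 64Gi
    pods: "100"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-edit
  namespace: team-a
subjects:
  - kind: Group
    name: team-a-devs        # hypothetical IdP group
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                 # built-in ClusterRole
  apiGroup: rbac.authorization.k8s.io
```

The quota keeps one team from starving the others, and the namespaced RoleBinding keeps them from touching anything outside their own slice, which covers most of what separate clusters would buy you without the per-cluster operational overhead.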