Except that most devs who learn this stuff but do not use it daily (or ever) (and why would they, they are devs) will know just enough to have opinions and too little for those opinions to make sense. You (in general; maybe YOU do) do not understand the env your code runs on: it is layers upon layers upon layers with millions of LoC in between; you know some abstraction, and maybe you know a bit more about that abstraction than others, but you still do not really understand it. If you run Java or .NET Core or whatever else is popular and well supported, your day-to-day programming won't depend on whatever env it runs on; if you write best-practice code in those envs, writing different code depending on whether it runs in k8s or on bare metal is... weird in almost every case. Someone on the team should know how to tweak the knobs and which things you should not do (use the filesystem for persistence, and other trivial things), but the average dev or data scientist really doesn't need to know about any of it in significant detail.
But I am curious where you have seen modern runtimes fail and where the code was the issue (not tweaks to the JVM settings); any concrete examples where well-written, best-practice code worked on the laptop but failed in k8s?
> But I am curious where you have seen modern runtimes fail and where the code was the issue (not tweaks to the JVM settings); any concrete examples where well-written, best-practice code worked on the laptop but failed in k8s?
Not sure about OP, but most of the times I have seen devs have issues with Kubernetes, it was in tweaking the knobs around deployments, including security. Startup vs. readiness vs. liveness probes, rolling updates, auto-scaling, pod security policies and such are usually all new to developers, and each has a lot of different options. Most devs just want "give me the one that works, with good defaults", and need a higher-level abstraction.
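To make that concrete, here is a minimal sketch of the application side of those probes, using only the JDK's built-in HTTP server; the /livez and /readyz paths are just illustrative names, nothing standard. The k8s side then points livenessProbe and readinessProbe (plus optionally a startupProbe) at paths like these, and picking probe types, delays and thresholds is exactly the knob-turning most devs would rather get as a good default.

    import com.sun.net.httpserver.HttpExchange;
    import com.sun.net.httpserver.HttpServer;
    import java.io.IOException;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.util.concurrent.atomic.AtomicBoolean;

    public class HealthEndpoints {
        // Flips to true once startup work (config, connections, cache warm-up) is done.
        static final AtomicBoolean ready = new AtomicBoolean(false);

        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);

            // Liveness: the process is up and able to respond at all.
            // If this keeps failing, k8s restarts the container.
            server.createContext("/livez", ex -> respond(ex, 200, "ok"));

            // Readiness: only return 200 once initialization has finished.
            // Until then k8s keeps the pod out of the Service endpoints.
            server.createContext("/readyz", ex ->
                respond(ex, ready.get() ? 200 : 503, ready.get() ? "ready" : "warming up"));

            server.start();

            Thread.sleep(5_000);   // stand-in for real startup work
            ready.set(true);
        }

        private static void respond(HttpExchange ex, int code, String body) throws IOException {
            byte[] bytes = body.getBytes();
            ex.sendResponseHeaders(code, bytes.length);
            try (OutputStream os = ex.getResponseBody()) {
                os.write(bytes);
            }
        }
    }

The probe spec on the k8s side then adds initialDelaySeconds, periodSeconds, timeoutSeconds, failureThreshold and friends, which is where the "lot of different options" part bites.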
But at most companies I have seen, those are handled by people in specific roles who are also on the team. Not all devs on the team need this knowledge; depending on the service, you staff for it. We have monoliths and microservices running on ECS and EKS, and we have one person who does the knob turning and one person (me) who can take over if need be. I see no need to burden the others with this, dare I say it, crap, because it is just not useful or needed for writing the business functionality that our clients want, need and pay for.
OP seemed to imply that coders need to know this stuff because their code might not work otherwise: if that means turning knobs on the outside (runtimes/containers), then sure, someone does, but it doesn't have to be the devs; their comment about the JVM implies something else, though, and I am curious what that is.
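On the JVM point specifically, here is a tiny sketch of what "knobs on the outside" looks like, assuming a container-aware JVM (JDK 10+, or 8u191+): the same class reports different CPU and memory limits depending on the pod spec and JVM flags, and those flags live entirely outside the application code.

    public class RuntimeKnobs {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            // On a container-aware JVM these reflect the cgroup limits of the
            // pod, not the host: the same build prints different numbers on a
            // laptop than under a k8s cpu/memory limit.
            System.out.println("availableProcessors = " + rt.availableProcessors());
            System.out.println("maxMemory (MiB)     = " + rt.maxMemory() / (1024 * 1024));
            // Flags such as -XX:MaxRAMPercentage=75, or a JAVA_TOOL_OPTIONS env
            // var set in the deployment, move these numbers around from the
            // outside, with no change to the code.
        }
    }

Run it on the laptop and then inside a pod with a memory limit and the outputs diverge with no code change in between, which is the kind of knob-turning one person on the team can own.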