and to the GitHub projects themselves
One of the main design ideas behind the original Borg system (which Kubernetes is inspired by) was resource utilisation and simplicity. Now we just stuff all the compute power we've got with unnecessary proxy systems that provide minimal value. I truly believe we have lost our way.
That doesn’t mean that k8s itself is all that complex. It just needs a lot of libraries for all the stuff you might do with it, where any given library will be dead code to 99% of people; and an architecture flexible enough to allow it to drive and track the state of those libraries.
The latter is more of a microkernel for distributed control systems.
My hunch is that this means OAM may be a dead letter in the long run. Abstracting away from Kubernetes-as-scheduler is fine and good. But Kubernetes-as-microkernel is close to doing that already. Whatever my gripes about the details, the mechanisms and affordances are consistent and predictable. That's very valuable.
I think the quotes in the article make very good points.
I've introduced Kubernetes at multiple companies, usually with good results.
But Kubernetes is a relatively low level runtime. It requires a lot of knowledge if you want to use it correctly, even with hosted Kubernetes offerings.
Application developers want to specify how the application works without having to learn a whole lot about Kubernetes internals. They also want the freedom to run something on different platforms without a lot of changes.
Think Elastic Beanstalk or Google App Engine, but provider-independent. With the freedom to run it locally, maybe on a cheap self-hosted cluster for dev deployments and on AWS for production. This is a lot of work right now, even with the hosted Kubernetes offerings.
Also, enterprises often want to offload decision-making and rely on both common standard solutions and outside support when things go wrong. This is hard with k8s due to the flexibility of the platform.
I agree that it is too low level and too complex to provide a good company wide foundation on its own.
OAM could be a very valuable tool to have.
Also, the points about Dapr resonate. Building a complex event-driven architecture with many services is what a lot of enterprises want, but it is really hard to do well. There is a similar effort by IBM, but I don't remember the name right now.
Cloud Foundry is my reference model for the power of clear, safe boundaries between roles. Good fences make good neighbours.
Right now I'm writing a book about Knative and one of my goals is to require zero prior Kubernetes experience or knowledge. It's turning out to be trickier than I'd first thought. I can't tell if that's genuinely because of the close adoption of idioms and patterns or whether it's just that I've spent a lot of time around Kubernetes for the past 2-ish years and can't faithfully recreate my original ignorance.
The problem is migration. Old apps don't handle k8s very well. An application that follows the k8s model should be really easy to deal with. Either it is one of a bunch of effectively identical apps, and you connect to one at random, or it is part of a master/replica group. Older applications need help with that, or expect multiple connections to the same host, or... any number of anti-patterns.
(Databases excluded - the Borg version of StatefulSets operates very differently, significantly because the trade-off of speed/reliability of persistent storage sucks in the cloud world right now.)
Utilization - yes; simplicity - no. Kubernetes is a paragon of beautiful and concise design compared to Borg, especially if you get into the cluster-management side of things.
The complexity that you see comes from unifying compute, networking, storage, and now even cluster management under a single bundle of APIs. Borg never did that on my watch, and OpenShift did this in a way more complex way.
This doesn't include practical or useful documentation on them and doesn't cover CRDs (like what Istio provides).
The amount of complexity someone has to learn to use Kubernetes is large and growing.
I really like the idea of systems to simplify the learning curve and onboarding.
A fair amount of bulk rests at the feet of Golang, not Kubernetes per se. Vast sweeping vistas of Kubernetes code are generated from a handful of files because generics are silly and code generation is just swell.
OAM is essentially a YAML file. It can be put in a service catalog or marketplace and deployed from there. But what’s maybe most important, says Russinovich, is that the developer can hand off the specification to the ops team and the ops team can then deploy it without having to talk to the developer.
I think the answer is that this can't be settled once and for all, because there are economies from specialisation and diseconomies from coordination.
My general view these days is that for a sufficiently large org, you will inevitably have a degree of separation between application engineering and platform engineering. What's important is to establish relatively clear contracts between them that avoid transmitting unnecessary complexity and delay across the Conway boundary.
Good interfaces and technical contracts between the roles make this a lot easier. It used to be that getting something up and running was difficult because the incidence of cost and control fell on opposite sides more than once. For example: As a Dev, I need a place to run my software (cost). But someone else needs to provision and install it (control). Or: as an Operator, I don't want to be paged at 3am (cost). But someone else wrote the app (control).
When you draw the boundaries so that devs get self-service and ops get to (mostly) ignore what's inside the self-service boxes, a lot of this tension largely vanishes. That's not so much about whether folks sit in the same team and more about using technology to dissolve the tensions altogether.
Disclosure: I work for Pivotal, we have been known to dabble in this sort of thing.
It doesn't inspire a great deal of confidence that the Azure CTO promotes this development model - if he isn't quoted out of context that is.
The desire to have software engineers focus only on business logic is strong. This tends to result in the creation of an internal “platform team” that provides services to dev teams. Effectively an internal PaaS.
OAM helps enable this pattern.
I think that like folks at Azure, we evolved the approach based on experiences dogfooding various platforms, applying our existing thinking about product development and lots of learning from industry peers.
I am however overall very sympathetic to the idea that Kubernetes never drew a crisp line between roles.
Disclosure: I'm writing a book about Knative.
If you’re talking about orgs where software is tossed over the wall from dev to ops, then I agree. The goal here is to empower the internal ops function to build self-service platforms with clean interfaces so developers can do what they do best, which is write code and business logic.
That is the exact mentality that I see as totally dysfunctional.
OAM is just a contract between dev and ops: ops can describe what they provide (Traits) in a way devs understand, and devs can describe what they want (Components) in a way that is easy for ops to manage. That's all.
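Roughly, and only as a hand-written sketch (the kinds and field names here are illustrative, not copied verbatim from the spec - check the GitHub repo for the real schema), the two halves of that contract look something like this:

# dev side: what the component is
apiVersion: core.oam.dev/v1alpha1
kind: ComponentSchematic
metadata:
  name: frontend
spec:
  workloadType: core.oam.dev/v1alpha1.Server
  containers:
    - name: web
      image: example/frontend:1.0     # hypothetical image
      ports:
        - containerPort: 8080

# ops side: how it runs here, expressed as traits bound in the application configuration
apiVersion: core.oam.dev/v1alpha1
kind: ApplicationConfiguration
metadata:
  name: frontend-prod
spec:
  components:
    - componentName: frontend
      traits:
        - name: autoscaler            # trait names and properties vary by runtime
          properties:
            minimum: 2
            maximum: 10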
I'd need to look at the regs directly, but I believe just having different roles would be enough, i.e. not being logged in as a production admin all day long, and doing devops and CI/CD, would probably allow a dev to support production under break glass circumstances.
Yes that is what Mark meant. At Microsoft we care deeply about empowering ISVs — we always have. OAM is designed to allow components from an ISV to be bound to a runtime environment by a separate consumer, where the concept of traits can provide configurability for things like ingress, autoscaling, secrets management, etc.
And it’s never been a goal to disintermediate ops people from other companies’ devs. We’ve been doing just the opposite for ages, with things like VM “appliances” and Docker images, created so that the interface between “the devs at the company that created this” and “the ops people deploying it” can be more formally defined by some kind of deployment-time configuration API. This is another case of that.
Yes, you “shouldn’t” use text templating systems for YAML and you “should” use well-tested YAML parsers and emitters. But I am very skeptical here, especially given that this wonky format is supposed to be the interface between teams that aren’t talking to each other.
am i just old?
There's always going to be a need for some structured static configuration file format. Be it XML, INI, TOML, JSON, YAML, properties files, or whatever.
From my perspective, YAML and JSON have been more successful and well-liked than XML because they map much more directly to the basic data types common to all programming languages. How do you represent a list or a map in XML? Well, it depends...
Besides missing straightforward ways to map common data structures, XML is also way more verbose and much harder to read and write by hand than YAML and JSON. And no, there really is no way to easily map between XML and these languages. Again, how do you specify a list in XML?
Add to that the fact that for most use cases, marshaling and unmarshaling YAML can be handled directly with common libraries. But to parse XML into your internal data structures? You're going to have to write code, or decide on some schema to encode your data in before converting to XML. So XML didn't actually solve your problem of "how do we serialize this data?" It just provided a framework within which it was possible to write further standards.
Add onto all of this, that XML pretty early on started adding layers of confusing and contradictory standards and associated tools--XML Schema, XML Namespaces, XPath, XSLT. And still, none of those things solved the underlying problem. They just provided the framework.
Add to that the fact that XML is much, much more expensive to parse than JSON and YAML...
So I guess I don't get why you are confused. XML addresses a different set of problems than YAML, and it does so in an overly-complicated manner that's both human- and machine-unfriendly.
what i care about is writing yaml for the purpose of configuration and not having any feedback about whether the data i've prepared by hand is actually a valid configuration. not requiring a schema is a bug in the spec for this use case, and all those nice acronyms and shortcuts you've listed above are guilty of it. i'd like my configs strongly typed and well documented, and none of the above helps developers do that - and that's where my confusion comes from.
For example, consider this map of regions in YAML:
northamerica: [ca, us, mx]
scandinavia: [dk, no, se, ax, fi, fo, gl, is, sj]
Writing a parser is also a bit of a nightmare, because there are a bunch of features which can turn a bit dangerous if you’re not careful—things like cyclic graphs or declaring types of objects. These are complete non-issues for the other formats I listed above—they’re all trees, and it’s very unusual for parsers to let you instantiate unintended types with those formats.
I know this is rhetorical, but I've been bitten by this enough times, so for those who don't know: `no` will translate to a boolean false.
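With a YAML 1.1 parser (still the default behaviour of several widely used libraries), that Scandinavia line comes back with a surprise in the middle of the list:

scandinavia: [dk, no, se, ax, fi, fo, gl, is, sj]
# loads as ["dk", false, "se", "ax", "fi", "fo", "gl", "is", "sj"]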
I understand thinking YAML makes the wrong tradeoffs, but if you think it's less friendly than XML, then you haven't really worked with XML.
Yes. YAML was a damn mess compared to the others. You can get a rough estimate of how much by looking at the size of the specs—the XML spec is a fair bit shorter than YAML’s, and if you drop the part about DTDs (which are used less these days) the difference is even bigger. The TOML spec is far, far shorter than either one and the JSON spec makes the TOML spec look big.
I write a lot of parsers. I think it’s fun.
> …but if you think it's less friendly than XML, then you haven't really worked with XML.
If you want to talk about formats, let’s talk about formats. If you make claims that I must be inexperienced because I disagree with you, then it’s just rude.
I have done a few reasonable size projects with heavy XML use. A build system, some work with RPCs, and a web app where I wrote a ton of data for it in XML format, by hand. I also wrote an XML pretty-printer and a YAML pretty-printer. I did a conversion of the XML build system to YAML. I thought it was a bad tradeoff, so I reverted it. Since then I’ve migrated to Bazel. All this experience is a mix of hobby projects and professional.
The bad for XML—it’s more verbose. You have to decide on your own mapping between XML and data. That’s it, as far as I’m concerned.
My personal sense of it is YAML is in a pretty awkward place—it only makes sense for human authoring, not data exchange. My experience with it is that people will naturally want to automatically generate things that they would otherwise have to write by hand. So if you draw a Venn diagram, the YAML use cases are “human authored but not machine generated”.
If we think of using these formats for configuration, then the BIG problem is the sliding scale between pure-data approaches to configuration and using code for configuration. As systems mature and get more complex, the configs often acquire features of programming languages, or parts of the config gets rewritten in code. This is where YAML really suffers. XML is a bit easier, either to extend to add these kind of features or to emit from code.
<book title="XML Cookbook" author="John Doe" />
<book title="XML Cookbook">
<title>The <strong>Awesome</strong> <abbr>XML</abbr> Cookbook</title>
<person name="Jane Doe" />
<person name="Tim Pickens />
title: XML cookbook
authors:
  - name: Jane Doe
  - name: Tim Pickens

title: XML cookbook
authors: [ Jane Doe, Tim Pickens ]
This doesn’t add up to XML hate, for me. The way I would probably write the document is:
I wouldn’t use YAML as a basis for comparison. YAML has a fair number of oddities and inconsistencies that led me to stay away from it. XML is at least consistent and simple, there are not really any surprises to speak of and there are plenty of tools for modifying XML documents even when you don’t have the schema. For YAML, although there’s a spec, it’s complicated enough that different implementations are inconsistent with each other and there seems to be some inertia at work here.
There’s also the downright bizarre set of regexes that YAML uses to recognize bare strings as other types, that means that '3.3.0' is a string, but '3.3' is a number. If I write 'ni' that’s a string but 'no' is a boolean. I personally find it harder to read or author YAML due to all these rules. You also have to be a bit more careful to sanitize YAML input due to things like the way !! is handled by various libraries, or the way YAML allows object cycles. It gives you too much rope to hang yourself, has too many surprises, and too many footguns. The fact that YAML is a bit more concise just isn’t enough of an advantage.
# Quiz: What value does this give you when parsed?
MAC Address: 11:02:03:04:05:06
XML is workable in a lot of situations and in some cases the verbosity makes it a bit more self-documenting than e.g. JSON.
TOML would be my choice for config files that I maintain.
+1 on YAML having its fair share of problems. I like to think of them as our collective problems, since nothing has emerged to replace YAML yet. If something does I’m certain OAM could be adapted to it.
I’ve personally seen some promising exploration of config through Turing-complete languages like TypeScript. See Pulumi.
I think if you have this viewpoint, you have probably defined your problem too narrowly, and may want to revisit some of your requirements.
I took a look at some of the example config files in the GitHub repo, and what I see are future problems when people deploying applications need to use some kind of templating system to deploy multiple variations of an application (e.g. production / development, or different replicas).
If you show these configuration files and ask a developer to whip up some templates, they are almost certainly going to reach for a text templating system, and down the road you will see broken configs. This creates an additional burden for tech leads who will need to educate their team on how to avoid the various traps when generating YAML files.
I could be deeply mistaken here about the nature of how these config files work, but that is my first impression. What I would be looking for if I were evaluating this software is an alternative configuration format which is more “bulletproof” like JSON or XML, since I know that there is excellent tooling for these formats, and they don’t suffer from the same kinds of traps and other defects as YAML does.
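To make the trap concrete (a made-up sketch, not taken from the OAM examples): text-templating a YAML file looks harmless until a substituted value needs quoting, contains a colon, or spans more than one line.

# hypothetical text template for a component config
component:
  name: {{ .Name }}
  version: {{ .Version }}          # an unquoted "3.10" comes out as the number 3.1
  description: {{ .Description }}  # breaks the document if the value contains ":" or a newline

Emitting the structure through a real YAML library (or at minimum quoting every templated scalar) avoids all of this, but in my experience that's not what people reach for first.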
If you're using a language that doesn't have an open-source package/module for YAML, I'd be surprised.
But I balk at the idea that there is a general good to be had from further separating dev and ops. Isn't the whole point of kubernetes to provide useful control abstractions over the complexities involved in deploying components on the back end? You seem to be implying that once an engineer has developed a thing it needs to be tossed over a wall so that a specialist can write the k8s manifests, determine the runtime resource requirements, provision ingress, etc. Maybe that general approach is necessary in the enterprise environments which seem to be Azure's primary target market, I don't know. But I do know that on our much smaller team the back end engineers have become thoroughly comfortable with performing those tasks for their own applications with a little assist here and there from devops. Most of them never have to touch kubectl because deployment to test and production environments is handled by ci/cd pipelines, so really their concerns are focused on writing proper manifests to create the environment their thing needs. I don't think that is too complex a task for someone whose daily job is writing back-end server components.
The challenge I'm wrestling with is that the smaller teams you're referencing -- the ones who can write the Kubernetes YAML for Ingresses and HPAs -- aren't representative of mainstream enterprise developers. More importantly, they're not representative of the millions of new developers who we need to empower with simpler code-to-cloud solutions that are also built on a layered, industry-standard foundation (e.g. Kubernetes).
How do we empower these new developers without separating concerns and reducing cognitive overhead around ops?
Yeah, it's an interesting question, and I think to some extent it is addressed by the devops model. If you take the entirety of a kubernetes deployment manifest as an example, it represents a range of concepts that cross over from the application to the infrastructure sphere. On the one hand back end engineers should certainly be able to specify their own entrypoints, probes, environments, etc. On the other they are probably not qualified to set affinity and tolerations, and might not be familiar with the finer details of probe timing properties or update strategies, again just to give a few examples.
On our team we recognize that a range of inputs are needed from different perspectives to achieve the final deployment spec. The pipeline is based on kustomize and patches so back end engineers can write the patches that set up things like I mentioned above, and devops (which in our group is a role combined somewhat with SRE) can assist in fine tuning those, and in writing patches to set the infrastructure level concerns like what nodepool the thing has affinity for. It's worked well for us but I am not expert in the enterprise arena.
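A rough sketch of how that division of labour looks in practice (names and labels are made up; the real patches carry more detail):

# kustomization.yaml - two patch files layered over the same base Deployment
resources:
  - deployment.yaml
patchesStrategicMerge:
  - app-patch.yaml      # written by the back end engineers
  - infra-patch.yaml    # written by devops/SRE

# app-patch.yaml - application-level concerns
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api
spec:
  template:
    spec:
      containers:
        - name: my-api
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8080

# infra-patch.yaml - infrastructure-level concerns
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api
spec:
  template:
    spec:
      nodeSelector:
        nodepool: general-purpose   # hypothetical node pool label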
Build & Run is becoming very popular.
Operations teams are scaling down their monitoring to just infrastructure, because the applications are more opaque than ever. Incidents at the application level are no longer fixable by Ops, because they have no idea what the developers deployed or how it's configured.
Developers now need to take responsibility for application monitoring, patch management and incident management. Meaning that we're shifting more work to a group of people that were already in short supply.
I don't think we're necessarily moving in the right direction. There are certainly benefits to development and operations working in tandem, but currently we're just moving operations to developers without much consideration for the people who need to do the actual work.
In my opinion your company/solution needs to be somewhat limited for "DevOps" to make sense. For everyone else, it's two separate roles.
Realistically, if the platform isn't down, I wouldn't need to wake up my platform engineer, just the dev. But the point is, the dev should build more reliable software so they don't have to wake up.
However, you're right, often these are two different teams. There are some people that can do both, but they're few and far between.
But I don’t think it’s necessarily that much harder to hire developers than it is to hire devops. Some managers have told me that hiring for devops is harder.
And developers should feel some of the pain—I’m not saying page them in the middle of the night, but they should be doing daytime on-call rotations. This helps align incentives and makes developers aware of reliability issues in the systems that they create.
> In my opinion your company/solution needs to be somewhat limited for "DevOps" to make sense. For everyone else, it's two separate roles.
When I think DevOps, I already think of it as a role separate from development. You have devs, and then you have devops. You can combine both roles into one, and that makes sense for smaller / earlier stage projects, but otherwise I think of devops as a separate role.
Kind of a mess because devops varies so much between companies and isn’t even consistently named.
But my experience is that it’s not necessary to wake up two groups of people—it’s either a developer that gets woken up because the project is too small or too early to be supported by operations, or it’s someone in devops/SRE/production engineer who gets woken up. There’s a lot of practices that need to be put into place to make this work, but it’s doable.
Better call the second role "Ops" then, and wait for someone to propose merging it with "Dev" again.
There's so much mental confusion and meaningless use of language now, about the term "DevOps" that was a fairly simple suggestion about bringing DEVelopment and OPerationS together. That sentiment is literally in the word, I don't know how it could be plainer.
Then functionally they're going to end up as software developers. And you're going to treat them as such or they're going to leave. Which is what the notion of a "devops engineer" evolved to be, yet every good-to-great "devops engineer" I've ever worked with identified as a software developer and could be hired at any developer role they wanted.
If you want a warm-body ops team, they're not going to be experts. If you want experts, in 2019 they're going to rapidly stop being just-an-ops-team.
> Some managers have told me that hiring for devops is harder.
It's easier to hire "devops" if your idea of "devops" is "somebody with an AWS Associate cert." It's much harder to hire "devops" if your idea of "devops" is "thinks systematically and is capable of debugging a problem nose-to-tail without waking up a developer."
I am the latter, and there are remarkably few of these.
> And developers should feel some of the pain—I’m not saying page them in the middle of the night, but they should be doing daytime on-call rotations.
I feel like you've got this wildly backwards. Developers should be first-line on-call in almost every situation modulo detectable hardware failures. Because here's the thing--I've been doing this a long time as a mostly-unbiased consultant (I'm in-house now and it still holds), and in most environments, operations/infrastructure 1) break things less, and 2) break things quickly, so there's usually relatively little time between the break and the working-hours fix.
If a developer can't solve a developer problem because they think it's actually infrastructural, they can escalate. Making that ops/infra group be first-line support for application pages is both inefficient and unfair.
Merging the two as you said, is DevOps.
Anyway, that's a bit sad IMHO. In my opinion, running what you build is great to discover ways to improve your software.
Regardless of whether you want to touch ops or not: you still might want to talk to ops, or vice versa.
Even if you have strict roles for devs and ops (and no mixed roles), you might be in a team where both are present and hopefully talk to each other.
(Also… in English, “we” does not always include the listener. Whether the listener is included is unspecified.)
It's better if that data is unearthed and fed back into the development process, it's better if that feedback loop is closed, it's better if that separation is removed.
That's one of the reasons for "DevOps" as originally formulated.
We built OAM for you, and for your counterparts in ops who want to help you innovate faster. :)
OAM though, perplexes me. Even the justification - that k8s is out of scope of the developer, and is handled by ops - speaks to a misunderstanding of some of the advantages clusters provide. Some research  tells me that developers author "components" which encapsulate singular parts of an architecture, and are connected by operators into an application. And then auto-scaling is handled by "traits", which are entirely separate? It's interesting, but it seems to largely replicate (and act as a translation layer for) existing k8s features. Maybe that's the real value prop here, and the techcrunch article buried the lede - by defining components and traits through OAM you can move to any cloud-based model that supports the manifest spec (thus avoiding the high learning curve of something like k8s). Even so, I have a sinking feeling that most of this will be folded into the ever-changing definition of devops, so small teams that manage an app's entire lifecycle (and are told to get on board with the new-fangled dealio by management) will have another layer of indirection to debug when something inevitably goes wrong. If Azure comes out with an implementation of OAM that uses their VSIs, I'll get interested. Then it'll at least provide real value/choice, and hopefully make it easier to migrate existing workloads that don't need an entire machine for themselves onto a k8s cluster.
At the same time, we aim to improve application modeling on systems like Kubernetes that currently focus more on container infrastructure.
disclosure: I'm one of the spec authors.
On the other hand, if you can pair already-written software with a collection of VMs and place an OAM layer between the machines and business logic, the portability of your code between OAM-compatible vendors becomes a selling point for the standard. I know a large project like this is a team effort, but can you shed any light on the reasons behind your decision-making and prioritization towards kubernetes?
I'll have to check out linkerd, thanks for the rec!
Gabe from the Azure team.
VSIs? I'd love to understand this better.
>> He also argues that Kubernetes itself is too complicated for enterprise developers. “At this point, it’s really infrastructure-focused,” he said. “You want a developer to focus on the app. What we saw when we talked to Kubernetes shops, they don’t let developers near Kubernetes.”
Not sure what he means by "get near" but it's really not that complicated to use kubectl to interrogate and modify the cluster. All of the back end engineers on our small team are comfortable with it.
And I'm sure it's easier to throw money at a project like this than to actually fix that problem.
Tell me you are saving money after you hire them :)
K8s isn't bulletproof and requires significant care, not to mention constant upgrades.
In Docker Swarm, that's a Stack. K8s has no equivalent. So there's no answer to "how do i deploy/update my flask api and celery workers together"
This makes it especially easy for anyone transitioning from Docker Swarm to Kubernetes.
In Swarm, the stack is an atomic unit.
That's what Microsoft is trying to fix - the atomicity of an abstraction level that is higher than K8s Services.
Not sure why they didn't do this as part of the k8s core committee process.
My take is that the equivalent in k8s is its declarative management with configuration files. Just kubectl apply -f your.yaml and you're good to go.
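For example (a minimal sketch with made-up names and images), put both workloads in one file and a single apply updates them together:

# stack.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: flask-api
  template:
    metadata:
      labels:
        app: flask-api
    spec:
      containers:
        - name: api
          image: example/flask-api:1.0       # hypothetical image
          ports:
            - containerPort: 8000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: celery-worker
spec:
  replicas: 3
  selector:
    matchLabels:
      app: celery-worker
  template:
    metadata:
      labels:
        app: celery-worker
    spec:
      containers:
        - name: worker
          image: example/celery-worker:1.0   # hypothetical image

kubectl apply -f stack.yaml creates or updates both Deployments in one go. It isn't atomic the way a Swarm stack is, but it is a single declarative unit you can version and re-apply.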
You nailed it. We think that microservices building blocks enabled by extensible side-cars has a lot of potential. We’d love you to take it for a spin and provide some feedback on GitHub. :)
* Dapr: an open-source project to make it easier to build microservices:
* Announcing the Open Application Model (OAM):
When Docker came out, I was just finishing an in-house-Heroku style project at my employer, based on LXC containers. I watched as Deis came along, promising to give the same benefits in an open platform. I was sad to see Deis go away after the Microsoft acquisition. The industry got all excited about Kubernetes, but it seemed to me like we were backsliding from progress that had been made toward a 12 Factor platform (https://www.12factor.net/). It's very encouraging to see that coming back.
Glad you like what you see. The original Deis team worked on much of it. I’m happy to say we have a lot more innovation coming in the PaaS space.
RedHat OpenShift (yes yes IBM) is gaining momentum because of it. But the field is still wide open. And the burden will fall on small ISVs to provide open solutions around data portability, redundancy, security, and a host of other concerns.
In the early days of Cloud technologies, the original vision was highly commoditized. You'd pull a Docker container from a registry hub and perhaps not even know where it ran. Perhaps even an exchange would handle pricing and performance. Instead, we have the Big 4 vendors in a highly fragmented space offering roughly the same services.
One of the authors of the OAM spec mentioned that an OAM abstraction might enable developers to deploy to k8s, Docker Swarm or Apache Mesos with the same OAM config, provided that all platforms (or vendors) implement/support the OAM API.
They have to work on projects like this to ensure developers can easily deploy to Azure so they can hedge their revenue.
disclosure: am one of the OAM authors.
Creating a new actor is done with a local call, for example http://localhost:3500/v1.0/actors/myactor/50/method/getData to call the getData method on myactor with id=50.
The sidecar pattern is shared by a service mesh, so I understand the comparison. However, Dapr is focused on enabling in-IDE experiences versus intercepting and proxying network traffic like a service mesh does.
It will be interesting to see how OAM evolves, especially since it is coming from the same team that is leading the charge on Helm and CNAB (https://cloudblogs.microsoft.com/opensource/2018/12/04/annou...).
Here is what we have learnt about this space in the last one and a half years.
Application portability -
While the point about application portability is true, if an organization has decided to adopt Kubernetes, the portability requirement is solved by Kubernetes itself to a large extent. If your application platform workflows are built as Kubernetes YAMLs, then they can be run on any cluster. Kubernetes YAMLs of built-in resources (Pod, Secret, etc.) and Custom Resources (MySQL, Postgres, etc.) can be leveraged to create such Kubernetes-native platform workflows. Our learning has been that a solution that focuses on solving the platform workflows problem on Kubernetes needs to augment existing Kubernetes tools such as Helm, Kustomize, etc. We have been developing such a tool (https://github.com/cloud-ark/kubeplus). Check out this blog post which provides detailed comparison between existing tools. https://medium.com/@cloudark/discovery-and-binding-of-kubern...
On separation of concern between Devs & Ops -
Again with adoption of Kubernetes, our observation has been that Kubernetes YAML and the tooling around it such as Helm is becoming a common language of communication between Devs & Ops. In our view the goal for anyone developing new tools/frameworks in this space should be to help break the barriers that have existed between Dev and Ops teams further. One way we are trying to do this is by extending the vocabulary of ‘as-Code’ systems from the Infrastructure world to the ‘platform’ world of application development teams.
Check out some of our work in this space primarily focusing on Kubernetes Custom Resources here - https://cloudark.io/platform-as-code
Glad to see you engage. Thanks to the enterprise sub & monthly credit, Azure is def my favorite cloud right now :)