Microsoft launches new open-source projects around Kubernetes and microservices (techcrunch.com)
223 points by bastichelaar on Oct 17, 2019 | 124 comments




Somehow these enterprise people have turned Kubernetes, which basically was an advanced job scheduler for compute resources, into an unnecessarily complex monster (just look at the LOC of kube-api). And that's still pretty decent compared to all the other projects in the ecosystem (looking at OpenShift, Istio ...). I really don't believe a lot of people have a valid use case for projects like OAM when you look at the complexity, failure cases and administrative tasks these systems introduce.

One of the main design ideas behind the original Borg system (which Kubernetes is inspired by) was utilisation of resources and simplicity. Now we just stuff all the compute power we have with unnecessary proxy systems that provide minimal value. I truly believe we have lost our way.


Fundamentally, k8s isn’t a compute job scheduler—it’s an IaaS state converger, like Terraform, or AWS CloudFormation. As such, it needs to know how to model—and manipulate—the state of pretty much any IaaS resource you have. (And, unlike the alternatives, it’s also extensible with custom convergible resource types.)

That doesn’t mean that k8s itself is all that complex. It just needs a lot of libraries for all the stuff you might do with it, where any given library will be dead code to 99% of people; and an architecture flexible enough to allow it to drive and track the state of those libraries.


I think Kubernetes-the-scheduler and Kubernetes-the-patterns-with-reified-examples have diverged and are going to continue diverging.

The latter is more of a microkernel for distributed control systems.

My hunch is that this means OAM may be a dead letter in the long run. Abstracting from Kubernetes-as-scheduler is fine and good. But Kubernetes-as-microkernel is close to doing that already. Whatever my gripes about the details, the mechanisms and affordances are consistent and predictable. That's very valuable.


Keeping different perspectives and roles in mind is important here.

I think the quotes in the article make very good points.

I've introduced Kubernetes at multiple companies, usually with good results.

But Kubernetes is a relatively low level runtime. It requires a lot of knowledge if you want to use it correctly, even with hosted Kubernetes offerings.

Application developers want to specify how the application works without having to learn a whole lot about Kubernetes internals. They also want the freedom to run something on different platforms without a lot of changes.

Think Elastic Beanstalk or Google App Engine, but provider independent. With the freedom to run it locally, maybe on a cheap self-hosted cluster for dev deployments and on AWS for production. This is a lot of work right now, even with the hosted Kubernetes offerings.

Also enterprises often want to offload the decision making and rely both on common standard solutions and outside support if things happen. This is hard with k8s due to the flexibility of the platform.

I agree that it is too low level and too complex to provide a good company wide foundation on its own.

OAM could be a very valuable tool to have.

Also the points about Dapr resonate. Building a complex event-driven architecture with many services is what a lot of enterprises want, but it is really hard to do it well. There is a similar effort by IBM, but I don't remember the name right now.


I strongly agree with the vision of abstracting away the infrastructure details, having worked on, around or adjacent to Cloud Foundry over the past 5 years. These days I lurk on the fringes of Knative.

Cloud Foundry is my reference model for the power of clear, safe boundaries between roles. Good fences make good neighbours.

Right now I'm writing a book about Knative and one of my goals is to require zero prior Kubernetes experience or knowledge. It's turning out to be trickier than I'd first thought. I can't tell if that's genuinely because of the close adoption of idioms and patterns or whether it's just that I've spent a lot of time around Kubernetes for the past 2-ish years and can't faithfully recreate my original ignorance.


To put it really simply: there's not a lot to kubernetes internals that you should know. The abstractions in k8s are really, really good.

The problem is migration. Old apps don't handle k8s very well. An application that follows the k8s model should be really easy to deal with. It either is one of a bunch of effectively identical apps, and you connect to one at random, or it is part of a master/replica group. Older applications need help with that, or expect multiple connections to the same host, or... any number of anti-patterns.

(Databases excluded - the Borg version of Statefulsets operates very differently, significantly because the trade-off of speed/reliability of persistent storage sucks in the cloud world right now.)
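
For the first case above - a bunch of effectively identical replicas you connect to at random - a minimal sketch looks something like this (names and image are placeholders):

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: api
  spec:
    replicas: 3
    selector:
      matchLabels:
        app: api
    template:
      metadata:
        labels:
          app: api
      spec:
        containers:
          - name: api
            image: example/api:1.0
            ports:
              - containerPort: 8080
  ---
  apiVersion: v1
  kind: Service
  metadata:
    name: api
  spec:
    selector:
      app: api
    ports:
      - port: 80
        targetPort: 8080
Any of the three pods can serve any request, so the Service just picks one.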


> One of the main design ideas behind the original Borg system [...] was utilisation of resources and simplicity.

Utilization - yes, simplicity - no. Kubernetes is a paragon of beautiful and concise design compared to Borg, especially if you get into the cluster management side of things.

The complexity that you see comes from unifying compute, networking, storage and now even cluster management under a single bundle of APIs. Borg never did that on my watch, and OpenShift did this in a way more complex way.


If you print the Kubernetes API (only the latest stable parts you'd need to run workloads) to PDF at 8.5x11 (~A4), you'll get over 1,100 pages. That's just the descriptions of all the objects in the API you work with - the manifests.

This doesn't include practical or useful documentation on them and doesn't cover CRDs (like what Istio provides).

The amount of complexity someone has to learn to use Kubernetes is large and growing.

I really like the idea of systems to simplify the learning curve and onboarding.


> just look at the LOC of kube-api

A fair amount of bulk rests at the feet of Golang, not Kubernetes per se. Vast sweeping vistas of Kubernetes code are generated from a handful of files because generics are silly and code generation is just swell.


> OAM is essentially a YAML file. It can be put in a service catalog or marketplace and deployed from there. But what’s maybe most important, says Russinovich, is that the developer can hand off the specification to the ops team and the ops team can then deploy it without having to talk to the developer.
This statement sounds very backwards to me. Isn't this increasing the separation between devs and ops which we want to get rid of?


> This statement sounds very backwards to me. Isn't this increasing the separation between devs and ops which we want to get rid of?

I think the answer is that this can't be settled once and for all, because there are economies from specialisation and diseconomies from coordination.

My general view these days is that for a sufficiently large org, you will inevitably have a degree of separation between application engineering and platform engineering. What's important is to establish relatively clear contracts between them which avoids unnecessary complexity and delay being transmitted across the Conway boundary.

Good interfaces and technical contracts between the roles make this a lot easier. It used to be that getting something up and running was difficult because the incidence of cost and control fell on opposite sides more than once. For example: As a Dev, I need a place to run my software (cost). But someone else needs to provision and install it (control). Or: as an Operator, I don't want to be paged at 3am (cost). But someone else wrote the app (control).

When you draw the boundaries so that devs get self-service and ops get to (mostly) ignore what's inside the self-service boxes, a lot of this tension largely vanishes. That's not so much about whether folks sit in the same team and more about using technology to dissolve the tensions altogether.

Disclosure: I work for Pivotal, we have been known to dabble in this sort of thing.


Well said, Jacques.


Agree. This looks to me as if the audience is traditional enterprises who haven't yet adopted DevOps, and the goal is to make Kubernetes more accessible to them. But those are likely better served by either a PaaS or serverless solution.

It doesn't inspire a great deal of confidence that the Azure CTO promotes this development model - if he isn't quoted out of context that is.


Gabe from the Azure team here. The goal of OAM is to promote better layering of the development and operations functions inside any org. This is modeled from what we’ve learned about high-functioning teams operating Kubernetes at scale, plus what we’ve learned running services inside Microsoft.

The desire to have software engineers focus only on business logic is strong. This tends to result in the creation of an internal “platform team” that provides services to dev teams. Effectively an internal PaaS.

OAM helps enable this pattern.


Pivotal Labs in fact has an entire practice focusing on teaching and evolving the practice of "Platform as a Product". Operations is more than "just" operations, it's an engineering practice. You're delivering capabilities to a customer, looking for ways to make everyone's lives easier and better and faster.

I think that like folks at Azure, we evolved the approach based on experiences dogfooding various platforms, applying our existing thinking about product development and lots of learning from industry peers.


Yes precisely! OAM is meant to provide some consistency and standardization to this practice, while fully expecting each platform to have unique capabilities and requirements that can be surfaced through the model. It's really illuminating how often we hear "oh hey we're trying to do something very similar to OAM!"


I think the tricky part of the sale will be in abstracting away Kubernetes; or at least, not adopting its conventions for registration and discovery of resources/endpoints. You can keep those and not use Kubernetes, which is more or less how Knative went about it (insofar as it's possible to create an implementation that's Knative without using Kubernetes).

I am however overall very sympathetic to the idea that Kubernetes never drew a crisp line between roles.

Disclosure: I'm writing a book about Knative.


I concur, that was my reaction as well. Every place I've worked at where there was a line drawn in the sand like this showed a huge amount of dysfunction, chair spinning and finger pointing.


Gabe from the Azure team here.

If you’re talking about orgs where software is tossed over the wall from dev to ops, then I agree. The goal here is to empower the internal ops function to build self-service platforms with clean interfaces so developers can do what they do best, which is write code and business logic.


> "developers can do what they do best, which is write code and business logic"

That is the exact mentality that I see as totally dysfunctional.


Unfortunately, that's not something OAM can provide for you.

OAM is just a contract between dev and ops: ops can describe what they provide (Traits) in a way devs understand, and devs can describe what they want (Components) in a way that's easy for ops to manage. That's all.
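
Very roughly, as a sketch of that contract (the field names below follow the shape of the early v1alpha1 examples - treat them as illustrative, not authoritative):

  apiVersion: core.oam.dev/v1alpha1
  kind: ComponentSchematic        # written by dev: what the app is
  metadata:
    name: frontend
  spec:
    workloadType: core.oam.dev/v1alpha1.Server
    containers:
      - name: web
        image: example/frontend:1.0
        ports:
          - name: http
            containerPort: 8080
  ---
  apiVersion: core.oam.dev/v1alpha1
  kind: ApplicationConfiguration  # written by ops: how and where it runs
  metadata:
    name: frontend-prod
  spec:
    components:
      - componentName: frontend
        instanceName: frontend-prod
        traits:
          - name: manual-scaler   # a Trait the platform offers
            properties:
              replicaCount: 3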


This isn't the problem that kubernetes solves, and pretty much every bank or big org I've been involved with has little fiefdoms fighting against teamwork... err I mean DevOps.


In finance at least, there are often regulators that get in the way. For example, in my current company, we’d love to support our anemic Ops team, however a developer having any kind of influence on the deployment process is strictly forbidden by our regulator.


I'm not entirely sure this is a regulatory requirement so much as a solution to the requirement.

I'd need to look at the regs directly, but I believe just having different roles would be enough - i.e. not being logged in as a production admin all day long while doing devops and CI/CD would probably allow a dev to support production under break-glass circumstances.


As he mentions marketplace, maybe he's talking about product developers and ops teams in other organisations? Elastic, for example, publish an OAM that helps X organisation to get started and they can customise the implementation?


Gabe from the Azure team here.

Yes that is what Mark meant. At Microsoft we care deeply about empowering ISVs — we always have. OAM is designed to allow components from an ISV to be bound to a runtime environment by a separate consumer, where the concept of traits can provide configurability for things like ingress, autoscaling, secrets management, etc.


You’re thinking of the separation between devs and ops at the same company. I think the text you’re quoting is talking about the separation between third-party vendors’ devs, and first-party downstream ops people.

And it’s never been a goal, to disintermediate ops people from other companies’ devs. We’ve been doing just the opposite for ages, with things like VM “appliances” and Docker images, created so that the interface between “the devs at the company that created this” and “the ops people deploying it” can be more formally defined by some kind of deployment-time configuration API. This is another case of that.


Not only that, but YAML as configuration has some pretty significant drawbacks in that the natural (but disastrous) trend is towards using text templating systems to generate YAML files. It’s also a bit difficult to parse or write correctly.

Yes, you “shouldn’t” use text templating systems for YAML and you “should” use well-tested YAML parsers and emitters. But I am very skeptical here, especially given that this wonky format is supposed to be the interface between teams that aren’t talking to each other.
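
A tiny illustration of the trap, with a hypothetical Go-template-style snippet:

  # template.yaml (hypothetical)
  env:
    - name: COUNTRY
      value: {{ .country }}

  # rendered with country = "no": a YAML 1.1 parser reads the value as boolean false
  env:
    - name: COUNTRY
      value: no

  # rendered with country = "host: 8080": the document no longer parses at all
  env:
    - name: COUNTRY
      value: host: 8080
A proper YAML emitter would quote those values; a text template happily produces all three.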


i honestly have trouble understanding the love for yaml when decades ago everything was xml and it was almost universally hated. technically you can define a yaml-xml isomorphism and as a consequence they both should be treated as unsuitable, and yet... yaml somehow is cool and xml isn't.

am i just old?


I don't really understand what you're saying here. It's definitely not a matter of being "cool" or not.

There's always going to be a need for some structured static configuration file format. Be it XML, INI, TOML, JSON, YAML, properties files, or whatever.

From my perspective, YAML and JSON have been more successful and well-liked than XML because they map much more directly to the basic data types common to all programming languages. How do you represent a list or a map in XML? Well, it depends...

Besides missing straightforward ways to map common data structures, XML is also way more verbose and much harder to read and write by hand than YAML and JSON. And no, there really is no way to easily map between XML and these languages. Again, how do you specify a list in XML?

Add to that the fact that for most use cases, marshaling and unmarshaling YAML can be handled directly with common libraries. But to parse XML into your internal data structures? You're going to have to write code, or decide on some schema to encode your data in before converting to XML. So XML didn't actually solve your problem of "how do we serialize this data?" It just provided a framework within which it was possible to write further standards.

Add onto all of this, that XML pretty early on started adding layers of confusing and contradictory standards and associated tools--XML Schema, XML Namespaces, XPath, XSLT. And still, none of those things solved the underlying problem. They just provided the framework.

Add to that that XML is much, much more expensive to parse than JSON and YAML...

So I guess I don't get why you are confused. XML addresses a different set of problems than YAML, and it does so in an overly-complicated manner that's both human- and machine-unfriendly.


serialization is explicitly not the problem being discussed. JSON also sucks at serialization but much less than YAML or XML both. i don't care that much about serialization.

what i care about is writing yaml for the purpose of configuration and not having any feedback about whether the data i've prepared by hand is actually a valid configuration. not having to have a schema is a bug in the spec for this use case, and all those nice acronyms and shortcuts you've listed above are guilty of it. i'd like my configs strongly typed and well documented, and none of the above helps developers do that - and that's where my confusion comes from.


The fact that they're isomorphic to a machine partly misses the point. YAML is immensely more friendly for a human to write (and read). YAML is used when people need to write declarative instructions to machines, and it does a good job of that. XML is much more of a pain to read and write by hand.


I used to think YAML was friendly for humans to read. Then I wrote a parser for it, and discovered all the weird corners, edge cases, etc. I now consider it to be a fairly user-hostile format, which should be avoided in favor of just about everything else (XML, JSON, TOML, text protobuf, etc are all more friendly).

For example, consider this map of regions in YAML:

    regions:
      northamerica: [ca, us, mx]
      scandinavia: [dk, no, se, ax, fi, fo, gl, is, sj]
Spot the error!

Writing a parser is also a bit of a nightmare, because there are a bunch of features which can turn a bit dangerous if you’re not careful—things like cyclic graphs or declaring types of objects. These are complete non-issues for the other formats I listed above—they’re all trees, and it’s very unusual for parsers to let you instantiate unintended types with those formats.
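
For example, with parsers that honor arbitrary tags (PyYAML's old unsafe yaml.load is the classic case - this is implementation behavior, not something every library allows), a config file can construct objects or even run code during parsing:

  innocent_looking_key: !!python/object/apply:os.system ["echo this ran while parsing"]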


> Spot the error!

I know this is rhetorical, but I've been bitten by this enough times, so for those who don't know: `no` will translate to a boolean false.


Am I the only one who likes single quotes around literal strings?


yaml is not nice, but just quote every string that is a string and many corner cases go away.
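
E.g. the regions snippet upthread stops being a trap once the codes are quoted:

  regions:
    northamerica: ['ca', 'us', 'mx']
    scandinavia: ['dk', 'no', 'se', 'ax', 'fi', 'fo', 'gl', 'is', 'sj']   # 'no' stays a string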


Thanks. I was staring at the snippet wondering. I'm not all that familiar with YAML, so I thought perhaps all the values needed to be quoted rather than just written as is.


Curious if you wrote parsers for the other languages you claim are easier. YAML has problem areas, particularly around implicit booleans, but languages without any comment syntax (i.e. JSON) cannot be considered human-friendly. And XML is not even the same sort of language as the rest of these.

I understand thinking YAML makes the wrong tradeoffs, but if you think it's less friendly than XML, then you haven't really worked with XML.


> Curious if you wrote parsers for the other languages you claim are easier.

Yes. YAML was a damn mess compared to the others. You can get a rough estimate of how much by looking at the size of the specs—the XML spec is a fair bit shorter than YAML’s, and if you drop the part about DTDs (which are used less these days) the difference is even bigger. The TOML spec is far, far shorter than either one and the JSON spec makes the TOML spec look big.

I write a lot of parsers. I think it’s fun.

> …but if you think it's less friendly than XML, then you haven't really worked with XML.

If you want to talk about formats, let’s talk about formats. If you make claims that I must be inexperienced because I disagree with you, then it’s just rude.

I have done a few reasonable size projects with heavy XML use. A build system, some work with RPCs, and a web app where I wrote a ton of data for it in XML format, by hand. I also wrote an XML pretty-printer and a YAML pretty-printer. I did a conversion of the XML build system to YAML. I thought it was a bad tradeoff, so I reverted it. Since then I’ve migrated to Bazel. All this experience is a mix of hobby projects and professional.

The bad for XML—it’s more verbose. You have to decide on your own mapping between XML and data. That’s it, as far as I’m concerned.

My personal sense of it is YAML is in a pretty awkward place—it only makes sense for human authoring, not data exchange. My experience with it is that people will naturally want to automatically generate things that they would otherwise have to write by hand. So if you draw a Venn diagram, the YAML use cases are “human authored but not machine generated”.

If we think of using these formats for configuration, then the BIG problem is the sliding scale between pure-data approaches to configuration and using code for configuration. As systems mature and get more complex, the configs often acquire features of programming languages, or parts of the config gets rewritten in code. This is where YAML really suffers. XML is a bit easier, either to extend to add these kind of features or to emit from code.


XML is just a canonical form and proper subset of SGML always requiring quotes around attribute values, all start- and end-element tags explicitly specified, no short references (Wiki syntaxes), nor other constructs which can be (unambiguously) omitted in SGML as directed by a DTD grammar. As such, XML is a machine format rather than a format intended for editing by humans, and it's odd to complain about XML being unfriendly to edit when that's what SGML is for.


my point is that yaml isn't easier to write at all; it isn't as verbose but it bites you all over the place with unexpected behavior, and the fact that validating the schema isn't the same as checking the syntax is super frustrating: you can create a valid yaml file with a typo and it'll be either an invalid or a noop configuration. i'd rather have a proper DSL, preferably strongly typed.


I didn’t understand the XML hate either. It was just a bit annoying to parse, depending on the language and ecosystem you used. It was a little verbose, but so what?


The biggest problem to me is that XML is not a data serialization language, it's a document markup language. In documents, the distinction between attributes and content makes sense. In data serialization, the choice of whether a given datum is an attribute or a text content appears rather arbitrary. Should I write this?

  <book>
    <title>XML Cookbook</title>
    <author>Jane Doe</author>
  </book>
Or this?

  <book title="XML Cookbook" author="Jane Doe" />
Now attributes don't work when there are multiple values, so I guess I should use attributes for single values and child nodes for lists:

  <book title="XML Cookbook">
    <author>Jane Doe</author>
    <author>Tim Pickens</author>
  </book>
But that rule also has problems. If I decide to include markup in the title, it suddenly needs to be a child node again:

  <book>
    <title>The <strong>Awesome</strong> <abbr>XML</abbr> Cookbook</title>
    <author>Jane Doe</author>
    <author>Tim Pickens</author>
  </book>
Also, "author" is a misleading name for a field that is actually an array, so should I actually use an "authors" node to make that clearer?

  <book>
    <title>XML Cookbook</title>
    <authors>
      <author>Jane Doe</author>
      <author>Tim Pickens</author>
    </authors>
  </book>
Or maybe:

  <book>
    <title>XML Cookbook</title>
    <authors>
      <person name="Jane Doe" />
      <person name="Tim Pickens />
    </authors>
  </book>
Now compare to this to YAML:

  book:
    title: XML cookbook
    authors:
      - name: Jane Doe
      - name: Tim Pickens
Or even just:

  book:
    title: XML cookbook
    authors: [ Jane Doe, Tim Pickens ]
I need to make way fewer design choices when writing that down. In fact, I probably don't need to design anything since that's already the data structure that I've written down as a type somewhere in my code. That's why it's a good idea to use a data serialization language for, well, data serialization.


Thank you for articulating this, but I’m familiar with these complaints. XML does give you a lot of freedom to format your data in different ways, which can get you into traps. I’ve run into those traps before, like the decision between attributes and child nodes.

This doesn’t add up to XML hate, for me. The way I would probably write the document is:

  <book>
    <title>XML Cookbook</title>
    <author>Jane Doe</author>
    <author>Tim Pickens</author>
  </book>
This is a fairly boring way to write out a document and while you can bikeshed all you want, I don’t see the possible bikeshedding as a major drawback. The above is concise and easy to understand.

I wouldn’t use YAML as a basis for comparison. YAML has a fair number of oddities and inconsistencies that led me to stay away from it. XML is at least consistent and simple, there are not really any surprises to speak of and there are plenty of tools for modifying XML documents even when you don’t have the schema. For YAML, although there’s a spec, it’s complicated enough that different implementations are inconsistent with each other and there seems to be some inertia at work here.

There’s also the downright bizarre set of regexes that YAML uses to recognize bare strings as other types, that means that '3.3.0' is a string, but '3.3' is a number. If I write 'ni' that’s a string but 'no' is a boolean. I personally find it harder to read or author YAML due to all these rules. You also have to be a bit more careful to sanitize YAML input due to things like the way !! is handled by various libraries, or the way YAML allows object cycles. It gives you too much rope to hang yourself, has too many surprises, and too many footguns. The fact that YAML is a bit more concise just isn’t enough of an advantage.

    # Quiz: What value does this give you when parsed?
    MAC Address: 11:02:03:04:05:06
For data serialization, I would stick to something like Protocol Buffers. You get a text and binary format, a schema, consistency across implementations, and good tooling.

XML is workable in a lot of situations and in some cases the verbosity makes it a bit more self-documenting than e.g. JSON.

TOML would be my choice for config files that I maintain.


I've grown to like Avro, mostly because of its ability to support schema evolution for reader and writer independently. You get the usual niceties around binary wire format, schema, dynamic parsing and/or code generators etc.


Thank you... this pretty much sums up most of my disgust regarding XML in general. And while JSON is more universal, YAML is much more accessible for humans.


Gabe from the Azure team.

+1 on YAML having its fair share of problems. I like to think of them as our collective problems, since nothing has emerged to replace YAML yet. If something does I’m certain OAM could be adapted to it.

I’ve personally seen some promising exploration of config through Turing-complete languages like TypeScript. See Pulumi.


> …nothing has emerged to replace YAML yet…

I think if you have this viewpoint, you have probably defined your problem too narrowly, and may want to revisit some of your requirements.

I took a look at some of the example config files in the GitHub repo, and what I see are future problems when people deploying applications need to use some kind of templating system to deploy multiple variations of an application (e.g. production / development, or different replicas).

If you show these configuration files and ask a developer to whip up some templates, they are almost certainly going to reach for a text templating system, and down the road you will see broken configs. This creates an additional burden for tech leads who will need to educate their team on how to avoid the various traps when generating YAML files.

I could be deeply mistaken here about the nature of how these config files work, but that is my first impression. What I would be looking for if I were evaluating this software is an alternative configuration format which is more “bulletproof” like JSON or XML, since I know that there is excellent tooling for these formats, and they don’t suffer from the same kinds of traps and other defects as YAML does.


YAML is pretty close to the same as JSON for import/export functionality. There are even converters to transform through JSON for this data... there's no reason not to use it similarly.

If you're using a language that doesn't have an open-source package/module for YAML, I'd be surprised.


With a text editor that understands YAML, I have yet to see a significantly better option for software configuration files. TOML is a decent option as well.


Gabe from the Azure team here. We do want to promote more separation between dev and ops. Taking on too much conceptual overhead is hard for any engineer. By having developers focus on business logic and operators focus on platform concerns, both can be more productive.


There is a whole layer of on-prem care and feeding of kubernetes with which I have no experience (having run workloads on GKE for four years), and it certainly makes sense that there is some specialization of that stuff and separation between the people who implement those things and the people who develop back end apps.

But I balk at the idea that there is a general good to be had from further separating dev and ops. Isn't the whole point of kubernetes to provide useful control abstractions over the complexities involved in deploying components on the back end? You seem to be implying that once an engineer has developed a thing it needs to be tossed over a wall so that a specialist can write the k8s manifests, determine the runtime resource requirements, provision ingress, etc. Maybe that general approach is necessary in the enterprise environments which seem to be Azure's primary target market, I don't know. But I do know that on our much smaller team the back end engineers have become thoroughly comfortable with performing those tasks for their own applications with a little assist here and there from devops. Most of them never have to touch kubectl because deployment to test and production environments is handled by ci/cd pipelines, so really their concerns are focused on writing proper manifests to create the environment their thing needs. I don't think that is too complex a task for someone whose daily job is writing back-end server components.


> But I do know that on our much smaller team the back end engineers have become thoroughly comfortable with performing those tasks for their own applications with a little assist here and there from devops.

The challenge I'm wrestling with is the smaller teams you're referencing -- the ones who can write the Kubernetes YAML for Ingress, HPAs -- they aren't representative of mainstream enterprise developers. More importantly, they're not representative of the millions of new developers who we need to empower with simpler code-to-cloud solutions that are also built on a layered, industry-standard foundation (e.g. Kubernetes).

How do we empower these new developers without separating concerns and reducing cognitive overhead around ops?


>> The challenge I'm wrestling with is the smaller teams you're referencing -- the ones who can write the Kubernetes YAML for Ingress, HPAs -- they aren't representative of mainstream enterprise developers.

Yeah, it's an interesting question, and I think to some extent it is addressed by the devops model. If you take the entirety of a kubernetes deployment manifest as an example, it represents a range of concepts that cross over from the application to the infrastructure sphere. On the one hand back end engineers should certainly be able to specify their own entrypoints, probes, environments, etc. On the other they are probably not qualified to set affinity and tolerations, and might not be familiar with the finer details of probe timing properties or update strategies, again just to give a few examples.

On our team we recognize that a range of inputs are needed from different perspectives to achieve the final deployment spec. The pipeline is based on kustomize and patches so back end engineers can write the patches that set up things like I mentioned above, and devops (which in our group is a role combined somewhat with SRE) can assist in fine tuning those, and in writing patches to set the infrastructure level concerns like what nodepool the thing has affinity for. It's worked well for us but I am not expert in the enterprise arena.
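
As a rough sketch of that split (file names and the nodepool label are just examples):

  # kustomization.yaml
  resources:
    - base/deployment.yaml
  patchesStrategicMerge:
    - probes-and-env.yaml   # owned by the back end engineers
    - scheduling.yaml       # owned by devops/SRE

  # scheduling.yaml - the ops-side patch
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: api
  spec:
    template:
      spec:
        nodeSelector:
          cloud.google.com/gke-nodepool: backend-pool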


Wouldn't this be because they want the ops team to be Azure and don't want the developer to talk to customer support?


As a developer I don't want to touch ops. So "we" does not exist.


I think the idea that Ops needs to wake up in the middle of the night to deal with shit outages because a developer is unwilling to stand behind their code is probably outdated.

Build & Run is becoming very popular.


We're reaching a point where two groups of people, rather than one, are required to wake up in the middle of the night. Previously just the operations people needed to get up; now developers need to be on-call as well.

Operations teams are scaling down their monitoring to just infrastructure, because the applications are more opaque than ever. Incidents at the application level are no longer fixable by Ops, because they have no idea what the developers deployed or how it's configured.

Developers now need to take responsibility for application monitoring, patch management and incident management, meaning that we're shifting more work to a group of people that were already in short supply.

I don't think we're necessarily moving in the right direction. There's certainly benefits to development and operations working in tandem, but currently we're just moving operations to developers without much consideration for the people that need to do the actual work.

In my opinion your company/solution needs to be somewhat limited for "DevOps" to make sense. For everyone else, it's two separate roles.


Yes and no.

Realistically, if the platform isn't down, I wouldn't need to wake up my platform engineer, just the dev. But the point is, the dev should build more reliable software so they don't have to wake up.

However, you're right, often these are two different teams. There are some people that can do both, but they're few and far between.


I would say that the problem in this situation is hard to avoid. If you can have an operations team who are experts on the application as well, and have the team more closely integrated with the development team, then you generally don’t need to wake up developers at night. I spent a few years on a team set up like this and it worked very well. If your project isn’t big enough for its own operations team you can share a team between a few different projects.

But I don’t think it’s necessarily that much harder to hire developers than it is to hire devops. Some managers have told me that hiring for devops is harder.

And developers should feel some of the pain—I’m not saying page them in the middle of the night, but they should be doing daytime on-call rotations. This helps align incentives and makes developers aware of reliability issues in the systems that they create.

> In my opinion your company/solution needs to be somewhat limited for "DevOps" to make sense. For everyone else, it's two separate roles.

When I think DevOps, I already think of it as a role separate from development. You have devs, and then you have devops. You can combine both roles into one, and that makes sense for smaller / earlier stage projects, but otherwise I think of devops as a separate role.

Kind of a mess because devops varies so much between companies and isn’t even consistently named.

But my experience is that it’s not necessary to wake up two groups of people—it’s either a developer that gets woken up because the project is too small or too early to be supported by operations, or it’s someone in devops/SRE/production engineer who gets woken up. There’s a lot of practices that need to be put into place to make this work, but it’s doable.


> You have devs, and then you have devops.

Better call the second role "Ops" then, and wait for someone to propose merging it with "Dev" again.

There's so much mental confusion and meaningless use of language now, about the term "DevOps" that was a fairly simple suggestion about bringing DEVelopment and OPerationS together. That sentiment is literally in the word, I don't know how it could be plainer.


> If you can have an operations team who are experts on the application as well

Then functionally they're going to end up as software developers. And you're going to treat them as such or they're going to leave. Which is what the notion of a "devops engineer" evolved to be, yet every good-to-great "devops engineer" I've ever worked with identified as a software developer and could be hired at any developer role they wanted.

If you want a warm-body ops team, they're not going to be experts. If you want experts, in 2019 they're going to rapidly stop being just-an-ops-team.

> Some managers have told me that hiring for devops is harder.

It's easier to hire "devops" if your idea of "devops" is "somebody with an AWS Associate cert." It's much harder to hire "devops" if your idea of "devops" is "thinks systematically and is capable of debugging a problem nose-to-tail without waking up a developer."

I am the latter, and there are remarkably few of these.

> And developers should feel some of the pain—I’m not saying page them in the middle of the night, but they should be doing daytime on-call rotations.

I feel like you've got this wildly backwards. Developers should be first-line on-call in almost every situation modulo detectible hardware failures. Because here's the thing--I've been doing this a long time as a mostly-unbiased consultant (I'm in-house now and it still holds), and in most environments, operations/infrastructure 1) break things less, and 2) break things quickly, so there's usually relatively little time between the break and the working-hours fix.

If a developer can't solve a developer problem because they think it's actually infrastructural, they can escalate. Making that ops/infra group be first-line support for application pages is both inefficient and unfair.


DevOps is not a role. You are talking about a modern ops role in this dialogue.

Merging the two, as you said, is DevOps.


"we" doesn't necessarily include all.

Anyway, that's a bit sad IMHO. In my opinion, running what you build is great to discover ways to improve your software.

Regardless if you want to or don't want to touch ops: You still might want to talk to ops or vice-versa. Even if you have strict roles for devs and ops (and no mixed roles), you might be in a team where both are present and hopefully talk to each other.


Like… you don’t want to talk to the ops team at all? You don’t want them to talk to you?

(Also… in English, “we” does not always include the listener. Whether the listener is included is unspecified.)


From experience, I think he just means he doesn't want to work on operations tasks. I don't think he has anything against people doing ops.


The truth of what the code that you develop actually does, is in production; often buried in log stores and timing stats and other "ops" places.

It's better if that data is unearthed and fed back into the development process, it's better if that feedback loop is closed, it's better if that separation is removed.

That's one of the reasons for "DevOps" as originally formulated.


Well, as an ops person, I don't want to touch your code, then.


Gabe from the Azure team here.

We built OAM for you, and for your counterparts in ops who want to help you innovate faster. :)


Dapr looks like Microsoft's answer to Istio - anecdotally, I've never gotten Istio to work (as recently as a few months ago) so I'll have to give this one a try at some point to see if they've done a better job.

OAM, though, perplexes me. Even the justification - that k8s is out of scope of the developer, and is handled by ops - speaks to a misunderstanding of some of the advantages clusters provide. Some research [1] tells me that developers author "components" which encapsulate singular parts of an architecture, and are connected by operators into an application. And then auto-scaling is handled by "traits", which are entirely separate? It's interesting, but it seems to largely replicate (and act as a translation layer for) existing k8s features.

Maybe that's the real value prop here, and the techcrunch article buried the lede - by defining components and traits through OAM you can move to any cloud-based model that supports the manifest spec (thus avoiding the high learning curve of something like k8s). Even so, I have a sinking feeling that most of this will be folded into the ever-changing definition of devops, so small teams that manage an app's entire lifecycle (and are told to get on board with the new-fangled dealio by management) will have another layer of indirection to debug when something inevitably goes wrong.

If Azure comes out with an implementation of OAM that uses their VSIs, I'll get interested. Then it'll at least provide real value/choice, and hopefully make it easier to migrate existing workloads that don't need an entire machine for themselves onto a k8s cluster.

[1] https://cloudblogs.microsoft.com/opensource/2019/10/16/annou...


The overarching idea with OAM is to standardize the model by which applications are composed and operated, regardless of the environment you end up working in. So as you go from one platform to another, you have a consistent experience and a transferable process. We fully expect the implementing platform capabilities to differ, and the model is designed around this assumption. I think some standardization here is valuable.

At the same time, we aim to improve application modeling on systems like Kubernetes that currently focus more on container infrastructure.

disclosure: I'm one of the spec authors.


Interesting - if you don't mind indulging me a little, it was surprising to me that you chose kubernetes as the first implementation for this standard. After all, if a large part of OAM's functionality is provided by k8s out of the box (auto-scaling, for example) it's not very useful to someone who knows the existing tech. Kubernetes is, by and large, already portable with minimal dev resources between the major cloud vendors and/or on-prem resources.

On the other hand, if you can pair already-written software with a collection of VMs and place an OAM layer between the machines and business logic, the portability of your code between OAM-compatible vendors becomes a selling point for the standard. I know a large project like this is a team effort, but can you shed any light on the reasons behind your decision-making and prioritization towards kubernetes?


OAM is the app management API for K8s, though not only for K8s, so implementing it first on K8s is a natural choice. You may ask: why do I need an app management API for K8s at all? Well, have you ever tried to expose the full Deployment API for developers to describe their app? In my personal experience it gets super messy ...


Dapr is definitely not Microsoft's answer to Istio (see e.g. [1]). But if you are looking for something like Istio but that actually works out of the box (zero config), I'd highly recommend checking out Linkerd.

[1] https://news.ycombinator.com/item?id=21283956


Good to know - I haven't been keeping up with this thread.

I'll have to check out linkerd, thanks for the rec!


> If Azure comes out with an implementation of OAM that uses their VSIs, I'll get interested.

Gabe from the Azure team.

VSIs? I'd love to understand this better.


Virtual Server Instances - VMs :)


I don't see a link to dig into the Open Application Model, and I don't have time to search for one tonight. Maybe kubernetes can benefit from a higher level of abstraction than helm charts provide, but I would need to see some use cases. I did get a kick out of:

>> He also argues that Kubernetes itself is too complicated for enterprise developers. “At this point, it’s really infrastructure-focused,” he said. “You want a developer to focus on the app. What we saw when we talked to Kubernetes shops, they don’t let developers near Kubernetes.”

Not sure what he means by "get near" but it's really not that complicated to use kubectl to interrogate and modify the cluster. All of the back end engineers on our small team are comfortable with it.


Yes. If you don't trust your developers to use controls like kubectl, you have a bigger problem.

And I'm sure it's easier to throw money at a project like this than to actually fix that problem.


I think they meant get near editing Kubernetes yaml files (or some abstracted equivalent thereof)


Here's a link to the Open Application Model spec.

https://openappmodel.io/


Thanks!


All of this is well and good. Now find folks who understand the entire stack and can support it.

Tell me you are saving money after you hire them :)


I really don't understand how this point isn't more widely acknowledged. k8s expertise is expensive. I'm seeing companies jump in head-first without doing any cost-benefit analysis, and winding up in some really difficult situations.


First mover advantage. And when the recession hits you fire them, and your pipeline should be mostly automated by then... The rest are still paying for big monolithic servers. And slow deployments.


That only works until your first outage :)

K8s isn't bulletproof and requires significant care. Not to mention constant upgrades.


The bit about significant care may be true in on-prem installations, I don't know, but GKE for example is pretty darn bulletproof. As for upgrades, they do come along frequently and you don't want to get too far behind. Rolling nodepools has made the process a lot less labor and time intensive for us.


From my perspective using GKE or any other k8s service isn't you running k8s. It's you paying devops salaries to release managers.


Basically this is a helm chart, which speaks to one of the problems of k8s: a typical application consists of multiple services that need to be deployed together.

In Docker Swarm, that's a Stack. K8s has no equivalent. So there's no answer to "how do i deploy/update my flask api and celery workers together"


Actually, Docker can now deploy stacks as a k8s “stack” resource: https://www.docker.com/blog/simplifying-kubernetes-with-dock...

This makes it especially easy for anyone transitioning from Docker Swarm to Kubernetes.


True. However the fact remains that k8s doesn't have a primitive here - so the toolsets (helm, docker stacks) have to manage it on a best-effort basis.

In Swarm, the stack is an atomic unit.

That's what Microsoft is trying to fix - the atomicity of an abstraction level that is higher than K8s Services.

Not sure why they didn't do this as part of the k8s core committee process.


Swarm does not have a concept of a stack. This is purely client-side.


Thanks for the link. I haven't heard about this before but it seems really useful.


> In Docker Swarm, that's a Stack. K8s has no equivalent.

My take is that the equivalent in k8s is its declarative management with configuration files. Just kubectl apply -f your.yaml and you're good to go.
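
E.g. for the flask api / celery workers case upthread, one file with both Deployments (names and images are placeholders), applied and updated together with a single kubectl apply -f app.yaml:

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: flask-api
  spec:
    replicas: 2
    selector:
      matchLabels:
        app: flask-api
    template:
      metadata:
        labels:
          app: flask-api
      spec:
        containers:
          - name: api
            image: example/myapp:1.2.3
  ---
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: celery-worker
  spec:
    replicas: 3
    selector:
      matchLabels:
        app: celery-worker
    template:
      metadata:
        labels:
          app: celery-worker
      spec:
        containers:
          - name: worker
            image: example/myapp:1.2.3
            command: ["celery", "-A", "myapp", "worker"]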


Ideally yes, however in practice a template engine is nice.


I don't think there is anything stopping you from bundling multiple services/deployments in the same helm chart. Many of the "curated" helm charts do it: https://github.com/helm/charts


I’ve really thought that something like this was needed for a long time. Why aren't there standardised containers for user management and permissions, authentication, scheduling, event processing, caching, say... there are off-the-shelf containers for simple things like proxying but nothing that understands your application. And it looks like they have taken it further to include Functions and Actors, which might be useful and allow easier scaling of certain things. Let’s hope this goes some way to addressing that.


MarkF from the Azure team. We thought so too. Having seen developers reinvent the same capabilities time and time again, and seen the frustration when my favorite framework X did not have a certain capability, we wanted to provide a distributed system building block approach. And one that can be just dropped in with local calls without having to recompile in many different libraries. It is an approach that we have found to provide easier extensibility and also support.


Gabe from the Azure team here.

You nailed it. We think that microservices building blocks enabled by extensible side-cars has a lot of potential. We’d love you to take it for a spin and provide some feedback on GitHub. :)


Links to related discussions:

* Dapr: an open-source project to make it easier to build microservices: https://news.ycombinator.com/item?id=21272098

* Announcing the Open Application Model (OAM): https://news.ycombinator.com/item?id=21272082


This is exciting!

When Docker came out, I was just finishing an in-house-Heroku style project at my employer, based on LXC containers. I watched as Deis came along, promising to give the same benefits in an open platform. I was sad to see Deis go away after the Microsoft acquisition. The industry got all excited about Kubernetes, but it seemed to me like we were backsliding from progress that had been made toward a 12 Factor platform. (https://www.12factor.net/) . It's very encouraging to see that coming back.


Gabe from the Azure team here. I’m also the guy who founded Deis ;)

Glad you like what you see. The original Deis team worked on much of it. I’m happy to say we have a lot more innovation coming in the PaaS space.


We've been working on something similar for a few years now. Micro is a runtime for microservices https://github.com/micro/micro. We primarily focused on Go and are moving towards multilanguage via a http api, proxy and SDKs much like dapr.


Worry about vendor lock-in is real.

RedHat OpenShift (yes yes IBM) is gaining momentum because of it. But the field is still wide open. And the burden will fall on small ISVs to provide open solutions around data portability, redundancy, security, and a host of other concerns.

In the early days of Cloud technologies, the original vision was highly commoditized. You'd pull a Docker container from a registry hub and perhaps not even know where it ran. Perhaps even an exchange would handle pricing and performance. Instead, we have the Big 4 vendors in a highly fragmented space offering roughly the same services.


I haven't used k8s that much, so I can't see what features OAM (or the implementation, rudr) adds to k8s. Could someone provide me with some insights?


In a previous HN discussion on OAM [1], it was mentioned that the idea is to provide a higher-level abstraction on Kubernetes that enables developers to configure and deploy distributed applications in a platform-agnostic way.

One of the authors of the OAM spec mentioned that an OAM abstraction might enable developers to deploy to k8s, Docker Swarm or Apache Mesos with the same OAM config, provided that all platforms (or vendors) implement/support the OAM API.

[1] https://news.ycombinator.com/item?id=21272082


At first glance it looks similar to operators. Except operators are more suitable for system software, and this thing aims to simplify the deployment of business software.


I think Microsoft is becoming an open-source company!


That's what they want you to think!


They have to become an open source company because they can't risk putting all their eggs in the software basket.

They have to work on projects like this to ensure developers can easily deploy to Azure so they can hedge their revenue.


Trying to manage an “application” in microservices world is misunderstanding the whole concept of microservices, IMO.


Interestingly, what we found through our research was that nobody could agree on a definition of "application" in a microservices world. The way OAM is designed currently doesn't enforce a rigid "application" structure for services. We have a concept of application scopes that can be used to place application-like boundaries around groups of services (modeled as "components" in OAM). For example, grouping services in a "health" scope where the health of each service in the group is evaluated when any one of the services is upgraded as a trigger for automated rollback is something application scopes are designed for.
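
As a rough, purely illustrative sketch (the field names here are hypothetical, not the literal spec syntax), a health scope might look like:

  # hypothetical shape, for illustration only
  kind: ApplicationScope
  metadata:
    name: checkout-health
  spec:
    type: health
  ---
  kind: ApplicationConfiguration
  spec:
    components:
      - componentName: cart-service
        applicationScopes: [checkout-health]
      - componentName: payment-service
        applicationScopes: [checkout-health]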

disclosure: am one of the OAM authors.


First I thought Dapper, but that was Google, and it was about distributed tracing... but it sounds very close (and has some relation - e.g. you need a sidecar to capture network traffic and propagate tokens for distributed tracing - that is, unless you want to change your source code to do so)...


Gabe here from the Azure team. Happy to answer questions about Dapr and OAM.


How does Dapr (or the Actors in Dapr) relate to Microsoft Orleans?


MarkF from Azure team. The actors in Dapr are based on the same virtual actor concept that Orleans has, meaning that they are activated when called and eventually garbage collected. If you are familiar with Orleans, Dapr actors will be familiar. The difference with Dapr is that because it is programming language agnostic with an http/gRPC API the actors can be called from any language (although there are also friendly language SDKs on top).

Creating a new actor follows a local call like http://localhost:3500/v1.0/actors/<actorType>/<actorId>/meth...

for example http://localhost:3500/v1.0/actors/myactor/50/method/getData to call the getData method on myactor with id=50


Dapr looks like a slightly beefier sidecar-based service mesh.


MarkF from the Azure team. Dapr is not a service mesh, however it will work with service meshes such as Istio, Linkerd etc. Dapr does provide direct service-to-service invocation which you can use in place of a service mesh if you want, however Dapr does not handle the network policies or traffic management that service meshes do. Dapr is a side-car that is language agnostic and, using http or gRPC, provides distributed system building blocks via open APIs for asynchronous pub-sub, stateful services, service discovery and invocation, actors and distributed tracing. All of this is extensible, so you can add new building block capabilities and use only the ones you care about.
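
For example, a state store building block is wired in with a component definition roughly like this (values are placeholders; check the docs for the exact fields):

  apiVersion: dapr.io/v1alpha1
  kind: Component
  metadata:
    name: statestore
  spec:
    type: state.redis
    metadata:
      - name: redisHost
        value: localhost:6379
      - name: redisPassword
        value: ""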


Gabe from the Azure team here.

The sidecar pattern is shared by a service mesh, so I understand the comparison. However Dapr is focused on enabling in-IDE experiences versus intercepting and proxying networking traffic like a service mesh.


Referring to Microsoft's original announcement post of OAM, (https://cloudblogs.microsoft.com/opensource/2019/10/16/annou...), the problem of defining application workflows and their dependencies that OAM is addressing is genuine. We have also faced this problem as part of providing Kubernetes-native platform stacks to our customers.

It will be interesting to see how OAM evolves, especially since it is coming from the same team that is leading the charge on Helm and CNAB (https://cloudblogs.microsoft.com/opensource/2018/12/04/annou...).

Here is what we have learnt about this space in the last one and a half years.

Application portability - While the point about application portability is true, if an organization has decided to adopt Kubernetes, the portability requirement is solved by Kubernetes itself to a large extent. If your application platform workflows are built as Kubernetes YAMLs, then they can be run on any cluster. Kubernetes YAMLs of built-in resources (Pod, Secret, etc.) and Custom Resources (MySQL, Postgres, etc.) can be leveraged to create such Kubernetes-native platform workflows. Our learning has been that a solution that focuses on solving the platform workflows problem on Kubernetes needs to augment existing Kubernetes tools such as Helm, Kustomize, etc. We have been developing such a tool (https://github.com/cloud-ark/kubeplus). Check out this blog post which provides detailed comparison between existing tools. https://medium.com/@cloudark/discovery-and-binding-of-kubern...

On separation of concern between Devs & Ops - Again with adoption of Kubernetes, our observation has been that Kubernetes YAML and the tooling around it such as Helm is becoming a common language of communication between Devs & Ops. In our view the goal for anyone developing new tools/frameworks in this space should be to help break the barriers that have existed between Dev and Ops teams further. One way we are trying to do this is by extending the vocabulary of ‘as-Code’ systems from the Infrastructure world to the ‘platform’ world of application development teams. Check out some of our work in this space primarily focusing on Kubernetes Custom Resources here - https://cloudark.io/platform-as-code


Dapr sounds like it could be cool. Almost like a cloud function but with more intelligence.


MarkF from the Azure team here. The idea is to provide a function-like experience with any programming language and then have common capabilities like saving state and sending events that your app can use with localhost calls. And make all this extensible with components.


>MarkF from the Azure team here.

Glad to see you engage. Thanks to enterprise sub & monthly credit azure is def my favorite cloud right now :)



