Hacker News

> kubernetes went ahead and implemented an oop system in golang

I don’t think this was ever an objective, can you clarify what you mean by “oop system” and where we implemented it?

We aggressively used composition of interfaces, up to a point, in the “Go way”, but certainly in terms of serialization modeling (for which I am heavily accountable for the early decisions) we also leveraged structs as behavior-less data. Everything else was more “whatever works”.

> why? because they felt go's assumption about the problem space (that it can be modeled and solved as imperative programs)

You’d have to articulate which subsystems you feel support this statement - certainly a diverse set of individuals were able to rapidly adapt their existing mental models to Go and ship a mostly successful 1.0 MVP in about 12 months, which to me reflects the essential principle of Go much more:

pragmatism and large team collaboration




Presumably, the poster is referencing the patterns described in this talk: https://archive.fosdem.org/2019/schedule/event/kubernetesclu...

> Unknown to most, Kubernetes was originally written in Java. If you have ever looked at the source code, or vendored a library you probably have already noticed a fair amount of factory patterns and singletons littered throughout the code base. Furthermore Kubernetes is built around various primitives referred to in the code base as “Objects”. In the Go programming language we explicitly did not ever build in a concept of an “Object”. We look at examples of this code, and explore what the code is truly attempting to do. We then take it a step further by offering idiomatic Go examples that we could refactor the pseudo object oriented Go code to.


I have a lot of respect for Kris, but in this context, as the person who approved the PR adding the “Object” interface to the code base (and participated in most of the subsequent discussions about how to expand it), it was not done because we felt Go lacked a fundamental construct or forced an imperative style. We simply chose a generic name because, at the time, “not using interface{}” seemed to be idiomatic Go.

The only real abstraction we needed from Go was a zero-cost serialization framework, which I think is an example of a feature that would compromise Go’s core principles if it were added.


I have nothing against go or kubernetes, I was simply citing the linked article in the thread, which at one point states:

> [Golang] Kubernetes, one of the largest codebases in Golang, has its own object-oriented type system implemented in the runtime package.

I would agree that Golang is overall a great language well-suited for solving a large class of problems. The same is true for the other languages I cited. This isn't about immutable properties of languages so much as it is about languages and problems fitting or not fitting well together.


That’s fair - as the perpetrator of much of that abstraction, I believe that highlighting the type-system aspect of this code obscures the real problem we were solving, which Go is spectacularly good at: letting you get within inches of C performance, with reasonable effort, for serializing and mapping between versions of structs.

I find it amusing that the runtime registry code is cited as an example of anything other than the fact that it’s possible to overcomplicate a problem in any language. We thought we’d need more flexibility than we did in practice, because the humans involved (myself included) couldn’t anticipate the future.


But do you really need “near C” performance for a container orchestrator? I would think your bottleneck would be talking to etcd.


Our bottleneck was the serialization of objects to bytes to send to etcd. etcd CPU should be about 0.01-0.001 of control plane CPU, and control plane apiserver CPU has been dominated by serialization since day 1 (except for brief periods around when we introduced inefficient custom resource code, due to the generic nature of that code).


Huh, I'd have guessed that distributed synchronization in etcd (which goes over the network, involves multiple events, and uses Raft, which has unbounded worst-case latency) would be the limiting step.


Kubernetes at this point is a general API server that happens to include a container scheduler.



