
AWS Controllers for Kubernetes - yonasb
https://github.com/aws/aws-controllers-k8s
======
sytse
Kelsey Hightower had a good take in
[https://twitter.com/kelseyhightower/status/12963119515307048...](https://twitter.com/kelseyhightower/status/1296311951530704896)
"AWS Controllers for Kubernetes is pretty dope. You can leverage Kubernetes to
manage AWS resources such as API gateways and S3 buckets. Think Terraform but
backed by Kubernetes style APIs and "realtime" control loops."

And in the thread he mentions Crossplane as the cross-cloud way to do this
[https://twitter.com/kelseyhightower/status/12963213771342315...](https://twitter.com/kelseyhightower/status/1296321377134231552)

~~~
takeda
I thought the whole point of running K8s in AWS was to have infrastructure not
tied to their offerings.

If you use something like that, why use K8S and not just use AWS services
natively?

~~~
scarface74
The entire idea of your infrastructure not being tied to your provider is
completely lost when you are at any type of scale. I have a feeling that
anyone who thinks it is easy or even usually worth it has never been part of
planning a large scale migration.

At the same time, you usually end up spending more money and having worse
results when you don’t go all in.

As far as why use EKS vs ECS - the “native service”? They seem to have feature
parity; ECS is easier to use for the uninitiated. But there are so many people
who know k8s, and that knowledge is portable.

Which brings up my second point. Most software engineers don’t care about
cloud mobility as much as they claim. They care about career mobility. There
is a much better chance that you will leave a company and move to a company on
a different provider than your company will. I’m not saying it’s a bad thing
to focus on technologies that give you as an individual the most optionality.

~~~
dilyevsky
> The entire idea of your infrastructure not being tied to your provider is
> completely lost when you are at any type of scale

Pretty categorical statement, and I don’t find it to be true at all if you
avoid using anything managed except block devices and the VMs themselves. That
is, unless “any type of scale” means tens of thousands of VMs and petabytes of
storage, in which case it is indeed a moot endeavor, since someone who is still
on public cloud at that point apparently likes to give all their money to
cloud vendors anyway.

~~~
scarface74
If all you’re doing is using a cloud provider to host a bunch of VMs, you’ve
already lost the plot. Once you use any cloud provider as a glorified colo
without changing any of your processes you’re already spending more than a
colo and not getting any of the benefits of managed services.

Have you ever budgeted a project plan involving a large migration?

~~~
parasubvert
“Managed services” in cloud are a Faustian bargain.

It’s also totally untrue that you need to adopt any specific proprietary
features of public cloud to change your IT processes and break out of the ITIL
straitjacket into a sane process. It’s just an abstraction, and there are
plenty to choose from that work with whatever you already have.

First of all, they’re not actually managed services as we typically expect.
The support experience is much more like a “hosted service”, with very
circumscribed boundaries around feature scope, mostly because it is managed
entirely by robots rather than by humans actually managing anything.

Secondly, they’re lock-in for debatable benefit.

Third, they’re almost always the way the clouds make their margin and you’re
stuck with their design decisions, and have little control plane
adjustability. Running Google Cloud SQL is expensive and crappy, I would much
rather run my own DB on a Kubernetes cluster via a mature and commercially
supported Kubernetes operator. Because it’s automated, reliable, and if I need
to, I can get into the weeds.

Because the dirty secret is that all of the managed cloud services _are just
software_, no different from what you install; they’ve just closed-sourced
certain components to make it seem magical. And yes, you can file a ticket to
get them to fix it, but remember the law of outsourced availability: you still
own your SLO to your customer, you still own your end-to-end availability. All
the “managed cloud” will do is cut you a check for the wasted pennies on a
bill, while your customers leave and your business tanks.

If you treat cloud like a commodity player that you can fire (not likely -
more “shrink investments in favor of an alternative”) every few years if their
quality goes down or price goes up, you’re in a much better position for cost,
performance, and reliability. Same as it always was. The “one throat to choke”
theory of outsourcing was always a business-school bullshit line that never
really worked, and it has now been adopted by tech experts who want to
ensure their cloud-specific skills are always relevant, regardless of whether
all these proprietary features are really a good idea.

“Speed + having no idea what you’re doing = use all the managed services!”
seems to be the cloud pundit elevator pitch. But as with anything, if you
don’t know what you’re doing, at some point you will be caught being a sucker.

The multi-cloud, open source ecosystem continues to grow over many proprietary
cloud services for these reasons.

~~~
scarface74
You act as if the people needed to manage servers are free. How much time are
you spending to avoid “cloud lock-in”? How much money are you spending on
extra support staff, or are you making your developers do double duty? Is
doing the grunt work of managing servers and services helping you win against
your competitors? Is it helping you get to market faster?

I’m not just speaking about cloud providers. Do you know how many healthcare
systems are so tightly “locked in” to their EMR/EHR systems that it would make
you cry?

So is “multi cloud” saving you money?

~~~
echelon
Managed Cloud is Oracle all over again.

Engineers who job hop don't care, but they leave their orgs paying Amazon
forever.

I'm doing this right now, but I know a boondoggle when I see one.

~~~
scarface74
Would you also tell hospitals not to lock themselves into their EMR/EHR? Would
you tell Enterprises to get rid of their entire dependence on Microsoft and go
Linux only?

~~~
parasubvert
Don’t be patronizing.

Management needs to act, and if locking yourself in is the only choice, go
for it.

In the case of IT, composing across clouds is pretty common because it solves
problems. Note I am not talking about brokerage or constant migration or other
such nonsense. I’m talking about treating your clouds as a portfolio where you
have levels of investment for particular reasons, and can make long-run
decisions about upping or lowering investment in each. You compose across them.
Salesforce for CRM, AWS for my front end apps, Google for my analytics, Azure
and O365 for internal apps, whatever. It’s reality.

Forcing everything onto one cloud, boy, I dunno. For an SMB, sure.

My point is, don’t tell me how tasty and good for me the shit sandwich I am
forced to eat is. Users of EPIC hate it, that’s a market opportunity.

------
mfer
One consequence is the accidental deletion of AWS things...

If a CRD is deleted, the CRs it describes are also deleted. So deleting a CRD
(even accidentally) could end up deleting resources in AWS (e.g., backups).
So, be careful.

Some things being managed by Kubernetes would be really cool. Other things
being managed by k8s could break things if something goes wrong. I would plan
accordingly.

~~~
digitallogic
Also, not all AWS resources follow the same deletion semantics. Example: S3
buckets. They report as deleted fairly quickly, but their name may not be
available again for an hour or so.

In this case the delete will appear to succeed, but the recreation, if done
with the same name, may fail.
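A client recreating a bucket under the same name has to tolerate that window, e.g. by retrying on the name-conflict error. A minimal sketch (the commented boto3 usage and matching on the `BucketAlreadyExists` error string are illustrative assumptions):

```python
import time

def retry(fn, is_retryable, attempts=5, base_delay=0.5):
    """Call fn(), retrying with exponential backoff while the raised
    exception is one is_retryable() says may clear up on its own."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception as exc:
            if attempt == attempts - 1 or not is_retryable(exc):
                raise
            time.sleep(base_delay * (2 ** attempt))

# Hypothetical boto3 usage -- recreating a just-deleted bucket name:
# s3 = boto3.client("s3")
# retry(lambda: s3.create_bucket(Bucket="my-bucket"),
#       is_retryable=lambda e: "BucketAlreadyExists" in str(e))
```

A level-triggered controller effectively bakes this retry into its reconcile loop, which is part of why the mismatch in deletion semantics matters less there than in one-shot scripts.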

~~~
freedomben
Great example. I worry about adding a layer of abstraction over provisioning
resources this way.

Of course we have to _try_ this because it's a badass (tho obvious in
hindsight) idea, but in practice it might have some downsides.

------
compsciphd
I started building something along this line a few years ago: the ability to
control AWS VMs and treat them as pods (i.e., they can back a service and
access other services) to have a hybrid (VM/container) infrastructure that is
all managed in the Kubernetes way. Future work would have been to try to
manage other resources similarly.

Sadly the startup's interest changed and then it went under (but the freedom I
was given to explore there was the best experience I have ever had).

[https://github.com/apporbit/infranetes](https://github.com/apporbit/infranetes)

------
buzer
Some prior discussion:
[https://news.ycombinator.com/item?id=24219448](https://news.ycombinator.com/item?id=24219448)

------
just-juan-post
You can create an S3 bucket but you can't set permissions on it.

Pass, I'll check back in a year.

------
cagenut
On the one hand this could be such a cool and powerful concept.

On the other hand my brain segfaults on the recursive loop of how the layer-
inversion gets modeled as IaC with a CI/CD pipeline. I guess it could work if
you were very strict about having your provider-infra layer
(CloudFormation/Terraform) do only the bare minimum to get your kube
environment up, and then within that kube environment you used something like
ACK to provision any cloud-provider resources that your kube-managed
apps/pipelines needed.

Yet another case where I'm like "I don't know if kube should be the answer to
everything, but I sure as shit won't miss <x>".

~~~
rektide
Yes, the goal is very much to bring provisioning & operations of all resources
onto the core platform we're using, Kubernetes. I mentioned in another reply
how excited I am to be able to manage things like SQS queues alongside my app,
via Helm charts, rather than needing separate machinery to manage/operate my
app & the various resources it needs.

And now there is a control loop. So if Ben in support accidentally deletes my
queue, it'll get recreated.

Breaking away from the proprietary platform underlay is going to be great.
Managing things more consistently is going to be really great.
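With ACK-style controllers, the queue becomes just another manifest in the chart. A rough sketch of what such a resource might look like (the API group, version, and field names here are assumptions, not copied from the ACK docs):

```yaml
# Sketch of an ACK-managed SQS queue declared alongside the app.
# The controller's reconcile loop keeps the real queue matching this spec,
# recreating it if it disappears.
apiVersion: sqs.services.k8s.aws/v1alpha1   # assumed group/version
kind: Queue
metadata:
  name: my-app-queue
spec:
  queueName: my-app-queue
  messageRetentionPeriod: "86400"           # field name is illustrative
```

Because it's declarative, a `helm upgrade` or `kubectl apply` re-asserts the desired state and the control loop converges toward it, rather than replaying a one-shot provisioning script.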

------
HereBeBeasties
See also KubeForm - [https://kubeform.com/](https://kubeform.com/) which will
do a lot of this already, via Terraform.

------
nikolay
A Terraform Kubernetes Operator sounds like a better idea.

------
MuffinFlavored
How is this different than Terraform?

~~~
harpratap
Traditional Infrastructure-as-Code solutions were based on edge triggers:
create ingress, delete ingress. But what about when the ingress misbehaves or
is in an unrecoverable state?

Kubernetes introduced level-triggered "controllers" that reconcile on a
periodic resync. The user defines a desired state and the controller does its
best to keep the infra in that state at all times. (Although Terraform has
also moved toward this design in recent times.)

This establishes a consistent experience. Everyone knows you just need to do
kubectl get my-resource to check your desired state, and any issues will be
surfaced in the resource's status and the controller's logs. You can combine
multiple controllers to achieve your desired application design. For example,
Knative has its own kind called "Service", which has some custom components,
some inherited from Istio, and things like ReplicaSets from the default
Kubernetes controllers.
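The edge-vs-level distinction can be sketched with a toy reconcile pass (Python for brevity; real controllers are Go and driven by watches plus resync, and these names are illustrative, not any controller's actual API):

```python
def reconcile(desired, observed):
    """One pass of a level-triggered control loop: compare desired state to
    observed state and return the actions needed to converge, regardless of
    which change events (if any) were actually seen along the way."""
    actions = []
    for name, spec in desired.items():
        if name not in observed:
            actions.append(("create", name, spec))   # missing -> recreate
        elif observed[name] != spec:
            actions.append(("update", name, spec))   # drifted -> repair
    for name in observed:
        if name not in desired:
            actions.append(("delete", name, None))   # unwanted -> remove
    return actions
```

An edge-triggered system only reacts to the events it happens to witness; the loop above is re-run periodically against the full state, so a missed event (or a resource someone deleted out of band) is repaired on the next pass.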

------
brian_herman__
This reminds me of this classic comic.
[https://www.catmuseumsf.org/images/print/comix/bill.jpg](https://www.catmuseumsf.org/images/print/comix/bill.jpg)

~~~
femto113
Or almost any Cathy strip [https://mirabiledictu.org/cathy-ack-2/](https://mirabiledictu.org/cathy-ack-2/)

