
The AWS Controllers for Kubernetes - bdcravens
https://aws.amazon.com/blogs/containers/aws-controllers-for-kubernetes-ack/
======
solatic
Currently no support for provisioning IAM permissions. ACK will be happy to
construct an S3 bucket for you that is then inaccessible, unless you use
dangerous IAM wildcard permissions.

Team is concerned about the security ramifications of setting up IAM
permissions from ACK: [https://github.com/aws/aws-controllers-k8s/issues/22#issuecomment-595816197](https://github.com/aws/aws-controllers-k8s/issues/22#issuecomment-595816197)

Look, it'll be great when it matures... but this is very much in the developer
preview stage. Caveat emptor.

------
Legogris
Am I the only one who's pessimistic about this? One of the big upsides of
Kubernetes is having portable workloads, and provisioning cloud-provider-specific
resources (whose lifecycles very likely outlive clusters!) in
Kubernetes just seems wrong to me. Kubernetes is great for managing,
orchestrating, and directing traffic for containerized workloads, but it really
shouldn't be The One Tool For Everything.

Coupling everything together like this just seems to make things less
manageable.

IMO infrastructure, including managed services, is better provisioned through
tools like Terraform and Pulumi.

~~~
solatic
The issue (or benefit, depending on your perspective) with Terraform is that
it's a one-shot CLI binary. If you're not running the binary, then it's not
doing anything. If you want a long-running daemon that responds to non-human-
initiated events, then Terraform isn't a good tool.

Any time you try to declaratively define state, if you don't have a daemon
enforcing the declarative state, then you will suffer from drift. One approach
is the one Terraform has - assume that drift is possible, so ask the user to
run a separate plan stage, and manually reconcile drift if needed. Another
approach is the controller approach, where the controller tries to do the
right thing and errors/alerts if it doesn't know what to do.
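The difference between the two approaches can be sketched roughly like this (a minimal Python illustration; `plan`, `reconcile`, and the flat state dicts are invented for this sketch, not ACK's or Terraform's actual internals):

```python
def plan(desired, actual):
    """Terraform-style: compute a one-shot diff; a human reviews and applies it."""
    return {k: v for k, v in desired.items() if actual.get(k) != v}

def reconcile(desired, fetch_actual, apply_changes):
    """Controller-style: a loop that keeps correcting drift as it appears.
    Yields each computed diff so the sketch can be stepped through;
    a real controller blocks on watch events instead of looping eagerly."""
    while True:
        diff = plan(desired, fetch_actual())
        if diff:
            apply_changes(diff)  # converge toward the declared state
        yield diff
```

With `plan` alone, drift sits uncorrected until a human runs it again; `reconcile` keeps converging on its own.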

~~~
redwood
This is why Hashicorp needs to accelerate their cloud offering.

Frankly, I get the sense they got a little too addicted to central-ops-driven,
on-prem-style deals for Vault, but in the public cloud they need to be
front and center with SaaS, which is a long road. They have a rudimentary
Terraform SaaS, I believe, but none for Vault as far as I'm aware. I see a lot
of folks going straight to cloud provider services because of this.

You sum it up well... in these times you don't want to run a daemon.

~~~
t3rabytes
They used to have a managed Vault offering! But then it disappeared one day
never to return.

------
thinkersilver
Kubernetes is becoming the lingua franca of building infrastructure. Through
CRDs and the kube API spec I can

\- start a single application

\- deploy a set of interconnected apps

\- define network topologies

\- define traffic flows

\- define vertical and horizontal scaling of resources

And now I can define AWS resources.

This creates an interesting scenario where infrastructure can be defined by
k8s API resources without k8s necessarily building it. For example,
podman can start containers off a K8s deployment spec. It's an API-first
approach and it's great for interoperability. The only downside is managing the
yaml and keeping it consistent across the interdependencies.
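As a toy illustration of that API-first point: a Deployment spec is just data that any tool can walk, not only the k8s control plane. The manifest and helper below are hypothetical:

```python
# A Kubernetes Deployment manifest is plain data; any runtime can consume it.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "web"},
    "spec": {
        "replicas": 2,
        "template": {
            "spec": {
                "containers": [
                    {"name": "app", "image": "nginx:1.19"},
                    {"name": "sidecar", "image": "envoyproxy/envoy:v1.15.0"},
                ]
            }
        },
    },
}

def images_to_run(manifest):
    """Return the (name, image) pairs a non-k8s runtime (e.g. podman) would start."""
    containers = manifest["spec"]["template"]["spec"]["containers"]
    return [(c["name"], c["image"]) for c in containers]
```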

~~~
soulnothing
I really wish fabric8[1], and more specifically the Kotlin k8s DSL[2], were
getting more traction.

It removes the downside of YAML all over the place. It's missing the package
management features of Helm, but I have several jars acting as baseline
deployments and provisioning. It works really well, and I have an entire
language, so I can map over a list of services instead of doing templating. The
other big downside is that a Java run takes a minute or two to kick off.
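Mapping over a list of services instead of templating looks roughly like this; sketched here in Python rather than the Kotlin DSL, with made-up service names and a made-up registry:

```python
# Generate one Deployment manifest per service by mapping over a plain list,
# instead of templating YAML. Services, ports, and the registry are invented.
services = [("api", 8080), ("worker", 9090)]

def deployment_for(name, port):
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": {"app": name}},
        "spec": {
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [
                    {"name": name,
                     "image": f"registry.example/{name}:latest",
                     "ports": [{"containerPort": port}]},
                ]},
            },
        },
    }

manifests = [deployment_for(n, p) for n, p in services]
```

Adding a service is appending to the list; the shared shape lives in one function instead of a template.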

I was resistant to k8s for a long time. Complexity was secondary to cost, but
Digital Ocean has a relatively cheap offering now. This commonality and
persistence of tooling is great.

I want metrics: a simple annotation. I want a secret injected from Vault: just
add an annotation. It's also cloud agnostic, so this logic can be deployed
anywhere someone provides a k8s offering.

EKS was very powerful compared to running service accounts on non-managed
clusters. It removed the need to pass an access key pair to the application;
the service account just ran with a corresponding IAM role.

[1] [https://github.com/fabric8io/kubernetes-client](https://github.com/fabric8io/kubernetes-client)
[2] [https://github.com/fkorotkov/k8s-kotlin-dsl](https://github.com/fkorotkov/k8s-kotlin-dsl)

~~~
thinkersilver
It's been a while since I've looked at Fabric8 but it had good java -> k8s
integration and was great for writing k8s tools.

It appears, though, that Fabric8 is most useful for standalone Java projects
without complex dependencies on non-Java projects, or for small Java shops. It
overlaps with where Jenkins X is going, which has made major strides in the
last 24 months.

The original team that worked on Fabric8, led by James Strachan, all moved on
from Red Hat, and many of them are working on Jenkins X.

------
harpratap
Glad to see AWS finally embracing Kubernetes too. Google did a similar thing a
while back: [https://cloud.google.com/config-connector/](https://cloud.google.com/config-connector/)
So I guess this solidifies Kubernetes as the de facto standard of Cloud
Platforms.

~~~
zxienin
Azure as well: [https://github.com/Azure/azure-service-operator](https://github.com/Azure/azure-service-operator)

~~~
FridgeSeal
Bold of you to assume that:

* the magic permissions ghost that runs in Azure, whose job it is to inexplicably deny you access to things, won’t interfere

* Said Azure service will stay up long enough to be useful

* you finish writing the insane amount of config Azure services seemingly require before the heat death of the universe.

* Azure decides that it likes you and won’t arbitrarily block you from attaching things to your cluster/nodes because it’s the wrong moon cycle/day of the week/device type/etc

* you can somehow navigate Azure’s kafka-esque documentation to figure out which services you’re actually allowed to do this with.

It is only a slight exaggeration to say that Azure is the most painful and
frustrating software/cloud product I’ve used in a long time, probably ever,
and I earnestly look forward to having literally any excuse to migrate off it.

~~~
xiwenc
I feel your pain, my friend. Azure quality is terrible compared to
competitors:

* no good working examples in docs

* docs hard to read

* docs are not consistent with reality

* the web portal UX is inconsistent and outright weird (when you navigate through resource group, you can scroll back horizontally to previous context/screen; what a joke)

* there are a gazillion preview API versions that never get released officially.

* and if you’re lucky to work with azure devops, it’s like building a house of cards with different card types and sizes

I’ve worked with AWS and GCP in the past. Indeed, Azure is often chosen by
CIOs rather than by the people who have to work with the service every day.

~~~
FridgeSeal
Oh my god the web UX, how did I forget about that: for the life of me I cannot
figure out why they make all the interfaces scroll sideways. Why? Who does
that?

Docs being hard to read and inconsistent with reality is a big point. My
favourite mismatch is the storage classes one: it turns out there are actually
2 different grades of SSD available, but their examples and docs only mention
premium SSDs. I only discovered "normal" SSDs because Azure happens to
auto-create a storage class with them in your Kubernetes cluster. The adventure
of figuring out whether you can attach a premium SSD to an instance is a whole
new ball game; trying to find which instances _actually_ allow you to attach
them is like looking for a needle in a haystack. Why are they so difficult
about it? AWS is like "you want an EBS volume of type io1? There you go,
done". Azure: "oh no, you _can't_ have premium ssd. Because reasons".

~~~
ahoka
Actually, there are three kinds of SSD storage in Azure: Standard, Premium,
and Ultra. I’m assuming you need to provision an ‘s’ VM because the
regular instances lack the connectivity for the faster storage, but that’s
just guessing.

~~~
FridgeSeal
Oh, I forgot about the Ultra ones.

I found a few instance types when I went looking, but their interface does not
make it easy to figure out which ones are premium-eligible. I do remember
the price going up not insignificantly for a premium-capable machine, which
feels a bit like double-dipping if you’re also paying extra for the SSD.

------
sytringy05
I can't decide whether I think this is a good idea or not. Conceptually I like
that I can get an S3 bucket/RDS db/SQS queue by using kubectl, but I'm not sure
that's the best way to manage the lifecycle, especially for something like a
container registry that likely outlives any given k8s cluster.

~~~
closeparen
Why are these clusters going away?

~~~
sytringy05
We rebuild ours all the time. New config, k8s version upgrade, node OS
patching.

~~~
closeparen
Interesting. I'm only familiar with Mesos/Aurora, which is often considered
outdated next to Kubernetes, but it can do all those things in place.

Do you end up with a "meta-kubernetes" to deploy kubernetes clusters and
migrate services between them?

~~~
harpratap
You definitely can do the same with Kubernetes too; it's just that the scope is
too large, and it doesn't have a good reputation for rolling updates of the
control plane.

> Do you end up with a "meta-kubernetes" to deploy kubernetes clusters and
> migrate services between them?

Congratulations, you just discovered the ClusterAPI

------
ransom1538
Here is my container: run it. Where is my url? The end.

No, I don't want Terraforms, puppets, yaml files, load balancers, nodes, pods,
k8s, chaos monkeys, Pulumies, pumas, unicorns, trees, portobilities, or
shards.

I love cloudrun and fargate. Cloudrun has like 5 settings, I wish it had like
2.

~~~
throwaway894345
I too want simplicity, but Fargate still requires a load balancer in most
cases. Further, you’ll probably need a database (we’ll assume something like
Aurora so you needn’t think about sharding or scale so much) and S3 buckets at
some point, and security obligates you to create good IAM roles and policies.
You’ll need secret storage and probably third-party services to configure.
Things are starting to get complex and you’re going to want to be able to know
that you can recreate all of this stuff if your app goes down or if you simply
want to stand up other environments and keep them in sync with prod as your
infra changes, so you’re going to want some infra-as-code solution (Terraform
or CloudFormation or Pulumi etc). Further, you’ll probably want to do some
async work at some point, and you can’t just fork an async task from your
Fargate container (because the load balancer isn’t aware of these async tasks
and will happily kill the container in which the async task is running because
the load balancer only cares that the connections have drained) so now you
need something like async containers, lambdas, AWS stepfunctions, AWS Batch,
etc.

While serverless can address a lot of this stuff (the load balancer, DNS, cert
management, etc. configuration could be much easier or built into Fargate
services), some of it you can’t just wave away (IAM policies, third-party
service configuration, database configuration and management, etc.). You need
some of this complexity, and you need something to help you manage it, namely
infra-as-code.

~~~
nojvek
Cloud Run is one of my favorite cloud services. It’s so easy to use and cheap
for low-traffic things. I set one up last year; GCP bills me 5 cents a month
(they have no shame billing in cents).

[https://issoseva.org](https://issoseva.org) hasn’t ever gone down.

------
hardwaresofton
At the risk of being early, RIP CloudFormation.

I posited that this was the benefit in knowing Kubernetes all along, and
possibly the ace up GCP's sleeve -- soon no cloud provider will have to offer
their own interface, they'll all just offer the one invented by Kubernetes.

~~~
weiming
There is also the AWS CDK
([https://aws.amazon.com/cdk/](https://aws.amazon.com/cdk/)), which
essentially lets you use your favorite language, like TypeScript or Python, to
generate CloudFormation, with an experience similar to Terraform. We've been
experimenting with it instead of TF, hoping it's here to stay.

~~~
Normal_gaussian
Take a look at pulumi; it provides a programmatic interface and related
tooling on top of terraform.

~~~
hardwaresofton
Took the comment right out of my keyboard (?) -- these days whenever I talk
about devops with people, I bring up Pulumi. HCL and almost all config-
languages-but-really-DSLs are a death sentence.

I am very unlikely to pick Terraform for any personal projects ever again.
Imagine being able to literally drop down to the AWS SDK or CDK in the middle
of a script and then go back to Pulumi land? AFAIK this is basically not
possible with Terraform (and Terraform charges for API access via Terraform
Cloud? Or it's like a premium feature?)

------
ponderingfish
Orchestration tools are the way forward, especially when it comes to on-demand
video compression: it's helpful to have the tools to spin up 100s of servers
to handle peak loads and then scale down to nothing. Kubernetes is so helpful
in this.

~~~
jtsiskin
Would AWS spot instances be useful here?

~~~
big-malloc
Currently the cluster autoscaler supports using a pool of spot instances based
on pricing, which is super helpful for test clusters, and there are some other
tools available to handle your spot nodes being evicted when Amazon needs them
back.

------
wavesquid
This is great!

Are other companies doing similar things? e.g. I would love to be able to set
up Cloudflare Access for services in k8s

~~~
sytringy05
GCP (Config Connector) and Azure (Service something) both have similar things.
I've not heard of it happening outside a managed k8s env.

~~~
harpratap
[https://crossplane.io](https://crossplane.io) is doing a multi-cloud one

~~~
zxienin
I like their work, but their OAM centricity is too heavy an opinionation.

~~~
bassamtabbara
disclaimer: I'm a maintainer on Crossplane.

OAM is an optional feature of crossplane - you don't have to use it if you
don't want to

~~~
zxienin
Good to know; that at least warms me up to Crossplane further. The messaging
might need updating, including within the docs. I mean, Crossplane being _the_
OAM implementation, coupled with OAM sprinkled all over the docs, gave me a
very different impression.

This aside, I think the Crossplane work is interesting.

------
zxienin
There is now a secular push towards using custom operators instead of OSB. I
wonder what finally caused this.

~~~
jacques_chester
A mix of factors, I think.

1\. OSBAPI is not widely known outside of the Cloud Foundry community it came
from. In turn that's because Cloud Foundry is not widely known either. Its
backers never bothered to market Cloud Foundry or OSBAPI to a wider audience.

2\. It imposes a relatively high barrier to entry for implementers. You need
to fill in a lot of capabilities before your service can appear in a
conformant marketplace. With CRDs you can have a prototype by lunchtime. It
might be crappy and you will reinvent a whole bunch of wheels, but the first
attempt is easy.

3\. Fashion trends. The first appearance of OSBAPI in Kubernetes-land used API
aggregation, which was supplanted by CRDs. Later implementations switched to
CRDs but by then the ship was already sailing.

4\. RDD (resume-driven development). You get more points for writing your own
freeform controller than for implementing a standard that isn't the latest,
coolest, hippest thing.

It's very frustrating as an observer without any way to influence events.
OSBAPI was an important attempt to save a great deal of pain. It provided a
uniform model, so that improvements could be shared amongst all
implementations, so that tools could work against standard interfaces in
predictable ways, so that end-users had one and only one set of concepts,
terms and tools to learn. It also made a crisp division between marketplaces,
provisioning and binding.

What we have instead is a mess. Everyone writing their own things, their own
way. No standards, no uniformity, different terms, different assumptions,
different levels of functionality. No meaningful marketplace concept.
Provisioning conflated with binding and vice versa.

It is a medium-sized disaster, diffuse but very real. And thanks to the
marketing genius of enterprise vendors who never saw a short-term buck in
broad spectrum developer awareness, it is basically an invisible disaster.
What we're heading towards now is seen as _normal_ and _ordinary_. And it
drives me bonkers.

~~~
zxienin
I’d agree on the mess. I also find it on the over-engineered side. Do I really
need service discovery for services that I already know of, from AWS, GCP...?

~~~
jacques_chester
If you want a little from column A and a little from column G, having a single
interface is pretty helpful. It's easier to automate and manage.

------
Niksko
The approach of generating the code from the existing Golang API bindings
means that hopefully this project will get support for lots of resources
pretty quickly.

Excited about this, though you do wonder whether it'll suffer the same fate as
CloudFormation: the CloudFormation team finds out about new feature launches
at the same time the general public does. If the Kubernetes operator lags
behind, you're going to have to fall back to something else when you need
cutting-edge features.

------
moondev
Seems odd that there is no controller for EC2, or even one planned on the
roadmap: [https://github.com/aws/aws-controllers-k8s/projects/1](https://github.com/aws/aws-controllers-k8s/projects/1)

~~~
alexeldeib
It's not weird at all. A prime use case for this is to use Kubernetes itself
as the compute layer, orchestrating peripheral AWS components with Kubernetes
as the common control plane.

You can orchestrate entire application stacks (pods, persistent storage, cloud
resources as CRDs) using this approach.

~~~
harpratap
There is fairly decent demand for orchestrating VMs using Kubernetes
(KubeVirt); many legacy apps are too expensive to be rewritten in a cloud-native
way.

------
etxm
This is nice from the app-manifest perspective because you can declare your
database right alongside your deployment.

The provisioning time of a deployment and an RDS instance is very different,
though. This is probably most useful when you’re starting a service up for the
first time. That’s also when it’s not going to work as expected, due to the
latency of RDS starting up while your app crashes repeatedly waiting for that
connection string.

This would be really nice for buckets and near-instant provisioned resources,
but it’s also kinda scary that someone could nuke a whole data store because
they got trigger-happy with a weird deployment and deleted and reapplied it.

My feelings, they are mixed. :D

------
MichaelMoser123
Kubernetes is supposed to be cloud-vendor agnostic; the cloud vendors counter
that by offering extension operators that create some tie-in to Kubernetes
deployments of their making.

I guess the 'kubernetes' way would be to create a generalized object for
'object store', which would be implemented by means of S3 on AWS and blob
storage on Azure.

With this approach you can only use the features common to all platforms; you
would have a problem with features exclusive to AWS, for instance, or you
would need some mapping from a generalized CRD object to a specific
implementation on each platform.
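That mapping, lowering a generalized object to a provider-specific resource, might be sketched like this (purely hypothetical kinds and field names, not from any real project):

```python
# Hypothetical lowering of a generalized "ObjectStore" object to a
# provider-specific resource; every kind and field here is invented.
def lower(object_store, provider):
    name = object_store["metadata"]["name"]
    if provider == "aws":
        return {"kind": "S3Bucket", "bucketName": name}
    if provider == "azure":
        return {"kind": "BlobContainer", "containerName": name}
    raise ValueError(f"no mapping for provider {provider!r}")

generic = {"kind": "ObjectStore", "metadata": {"name": "invoices"}}
```

Provider-exclusive features are exactly where this breaks down: there is no generic field to lower them from.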

------
1-KB-OK
Interesting how they enforce namespaced scope for ACK custom resources. This
is a logical design choice, but it makes things trickier for operators.

Say I have an operator watching all namespaces on the cluster. Since CRDs are
cluster-scoped, it makes sense for some operators to be installed as
singletons. A CR for this operator gets created in some namespace, and it
wants to talk to S3 -- does it have to bring along its own S3 credentials, and
is only that CR allowed to use the S3 bucket? You can imagine a scenario where
multiple CRs across namespaces want access to the same S3 bucket.

------
la6471
Everything from DNS to the AWS SDKs gets reinvented in Kubernetes. It is the
most anal approach to infrastructure design I have seen in the last three
decades. A good design builds on the things that are already there and does
not go around trying to change every well-established protocol in the world.
KISS.

------
sytse
This feels like Crossplane.io but limited to only AWS. Kelsey seems to think
the same:
[https://twitter.com/kelseyhightower/status/1296321377134231552](https://twitter.com/kelseyhightower/status/1296321377134231552)

------
toumorokoshi
This has been posted a couple times, but GCP has an equivalent that's been
around for a while:

[https://cloud.google.com/config-connector/docs/overview](https://cloud.google.com/config-connector/docs/overview)

disclaimer: I work at GCP.

------
sunilkumarc
Wow. Now we can directly manage AWS services from Kubernetes.

Github: [https://aws.github.io/aws-controllers-k8s/](https://aws.github.io/aws-controllers-k8s/)

On a different note, I was recently looking to learn AWS concepts through
online courses. After much research, I finally found this e-book on Gumroad,
written by Daniel Vassallo, who worked on the AWS team for 10+ years. I found
this e-book very helpful as a beginner.

This book covers most of the topics you need to learn to get started.

If someone is interested, here is the link :)

[https://gumroad.com/a/238777459/MsVlG](https://gumroad.com/a/238777459/MsVlG)

