
Container-Native Multi-Cluster Global Load Balancing with Cloud Armor on GCP - talonx
https://blog.jetstack.io/blog/container-native-multi-cluster-glb/
======
rossmohax
> This means the Service must be created first, then when the corresponding
> NEG is created the name can be queried and added to the Terraform project.

These kinds of reverse dependencies, where app-level changes have to be
reflected in infrastructure, are a source of endless bugs and headaches.

To do it right, you'd want to encode app and infra changes in the same
ubiquitous tool, but "infrastructure as code" tools such as Terraform suck
big time at app-level deploys.

Pulumi takes steps in the right direction, where it is actually not that
painful to manage everything; that is what I ended up using for a very similar
problem, configuring GCP's load balancers for my k8s apps.
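For context on the reverse dependency: once GKE creates the NEG, it writes its
name into a `cloud.google.com/neg-status` annotation on the Service, and that
is the value that has to be fed into Terraform. A minimal sketch of extracting
it (the annotation value below is a made-up sample; the exact schema may vary
by GKE version):

```python
import json

# Sample value of the cloud.google.com/neg-status annotation that GKE
# writes onto a Service after the NEG is created (illustrative only).
neg_status = json.dumps({
    "network_endpoint_groups": {"80": "k8s1-abc123-default-mysvc-80-deadbeef"},
    "zones": ["us-central1-a", "us-central1-b"],
})

def neg_names(annotation: str) -> dict:
    """Return a mapping of service port -> NEG name from the annotation JSON."""
    return json.loads(annotation)["network_endpoint_groups"]

# The NEG name for port 80 is what gets wired into the Terraform backend service.
print(neg_names(neg_status)["80"])
```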

~~~
solatic
> To do it right, you'd want to encode app and infra changes in the same
> ubiquitous tool

Well, that's exactly what we do with Dhall, which lets us put static types on
infrastructure tooling that prints out unpredictable output but in a
predictable format and transform that output into the format that other
dependent tools expect. Everything is then glued together with simple shell
scripts and run in a CI environment.

Once you understand the pattern - use an uber-config plus typed tool outputs
to generate downstream config and apply all config in an idempotent way - you
wonder why you ever did things any other way.

~~~
rossmohax
I looked at Dhall before. It is a pure language for static configs, right? It
can't help me express "deploy a k8s resource, wait for an annotation to
appear, take the annotation value and use it to deploy an infra resource".
Wrapping multiple tools in bash is exactly the thing I wish I didn't have to
do :(

If Dhall enables safer bash glue, I'd be happy to read how
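For what it's worth, the "deploy, wait for annotation, use value" step can at
least be made robust with a small poll loop. A hedged sketch in Python; the
`get_annotation` callable is a stand-in for a real Kubernetes API read, and
the fake below only exists to demonstrate the loop:

```python
import time

def wait_for_annotation(get_annotation, key, timeout=300, interval=5):
    """Poll until the annotation appears, or raise after `timeout` seconds.

    get_annotation is any callable returning the resource's annotations dict;
    in real use it would wrap a Kubernetes API read (e.g. via kubectl or a
    client library).
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        value = get_annotation().get(key)
        if value is not None:
            return value
        time.sleep(interval)
    raise TimeoutError(f"annotation {key!r} never appeared")

# Toy stand-in: the annotation "appears" on the third poll.
calls = {"n": 0}
def fake_get():
    calls["n"] += 1
    return {"cloud.google.com/neg-status": "{...}"} if calls["n"] >= 3 else {}

print(wait_for_annotation(fake_get, "cloud.google.com/neg-status", interval=0))
```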

------
cbushko
This is a decent article on how load balancing between GCP load balancers
and Kubernetes works.

We use Terraform end to end and tried the HTTP/HTTPS L7 load balancer first in
our setup, but I had a heck of a time with:

- There is no API for the annotation mentioned in the article, so if you miss
it while setting up a backend, nothing works.

- gRPC was hard to get going. Most gRPC examples out there use some port such
as 50051, so you assume that is needed. gRPC does work over 443.
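On the gRPC-over-443 point: with GKE's L7 load balancer, the backend protocol
is set per named port via the Service's `cloud.google.com/app-protocols`
annotation, roughly like this (a hedged sketch; the service name, port name,
and target port are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-grpc-svc        # placeholder name
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
    # Tell the GCLB to speak HTTP/2 (which gRPC needs) to this port.
    cloud.google.com/app-protocols: '{"grpc-port":"HTTP2"}'
spec:
  ports:
  - name: grpc-port
    port: 443
    targetPort: 8080       # placeholder container port
  selector:
    app: my-grpc-app
```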

We currently use Istio's ingressgateway as a load balancer that you set in
your Kubernetes setup. It works, but I don't know if it is better or not. We
had to run an ingressgateway pod as a DaemonSet on every node so that we could
get the real IP addresses from requests for security logging. That was a pain.
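On the real-IP pain: for a `LoadBalancer` Service, the usual lever is
`externalTrafficPolicy: Local`, which preserves the client source IP by only
forwarding to nodes with a local endpoint, which is also why a DaemonSet on
every node pairs with it. A hedged sketch (the service name and ports here are
placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway   # placeholder; the gateway's Service
spec:
  type: LoadBalancer
  # Skip the cross-node SNAT hop so the pod sees the real client IP;
  # traffic only goes to nodes that have a gateway pod, hence the DaemonSet.
  externalTrafficPolicy: Local
  ports:
  - port: 443
    targetPort: 8443
  selector:
    app: istio-ingressgateway
```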

------
pmlnr
Is the title peak buzzword-era, or is there more to come?

~~~
minxomat
This might be an elaborate demo for the upcoming thisblogpostdoesnotexist ML
text synthesis project.

Just for fun, I used Grover to generate an article based on the headline:

> What is the cheapest way to optimize the cloud? Going globally? Using open-
> source cloud computing platforms? Choosing AWS in the U.S. and for the
> remaining countries? Choosing SaaS? Or, using AWS and establishing a global
> presence on the EC2 Container Manager platform?

> I know because I did all four. And I won’t do them again.

> I am not arguing that globalizing your compute resources through a community
> cloud is any less important than globalizing your compute resources through
> your enterprise cloud (though I am somewhat of a skeptic about the value of
> community clouds). The important point is that migrating to a global cloud
> requires a state-of-the-art approach to container-native load balancing,
> container-native load balancing on AWS, and container-native load balancing
> on other clouds. And you need to be prepared to pay for it.

~~~
DangitBobby
Okay, but unless Grover can accurately describe how to deploy services and
load balancers on GCP, generate valid kubernetes yaml that matches the
contents of the article, and generate an infrastructure diagram that matches
the infrastructure described in the article, then this was certainly not
generated by Grover. It in no way resembles something generated by some ML
text synthesis. I'm not sure how anyone who read it could come to the
conclusion that it was.

------
vemacs
This tool makes what the OP is trying to do much easier:
[https://github.com/GoogleCloudPlatform/gke-autoneg-controller](https://github.com/GoogleCloudPlatform/gke-autoneg-controller)
You can have GKE configure the NEG on the backend service directly, and not
have to interrogate K8s for the name of the NEG to add to the backend service
via Terraform.
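The idea is that you annotate the Service with the target backend service and
the controller does the association for you. A hedged sketch, roughly as I
remember the controller's README; the annotation key and JSON schema have
changed between versions, so treat the names below as assumptions and check
the repo:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-svc                          # placeholder
  annotations:
    cloud.google.com/neg: '{"exposed_ports": {"80": {}}}'
    # Consumed by gke-autoneg-controller; key and schema are from memory
    # and may differ between controller versions.
    controller.autoneg.dev/neg: '{"backend_services":{"80":[{"name":"my-backend-service","max_rate_per_endpoint":100}]}}'
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: my-app
```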

------
whatsmyusername
I've yet to see a compelling argument for the complication of Kubernetes or
the 'it's not AWS' nature of GCP over a bog-standard AWS ALB with ECS cluster
tasks attached to it.

~~~
minxomat
Because k8s mostly prevents lock-in. A much better idea is to run EKS on
Fargate. You still won't have to manage a cluster, but at least you can now
use standard k8s manifests that work in any other cluster instead of the ECS
homebrew stuff. You can keep ALB, because ALB can run as an Ingress
Controller.
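As a sketch of the ALB-as-Ingress setup: the AWS Load Balancer Controller
provisions an ALB from a standard Ingress manifest, roughly like this (the
app name and port are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app                            # placeholder
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    # "ip" mode targets pods directly, which is what Fargate requires.
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app
            port:
              number: 80
```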

~~~
whatsmyusername
EKS billing is hilariously bad (Managed SFTP is the same way). It's obvious
they built it as a feature parity thing and they don't want you to actually
use it.

I can run all of my container infrastructure for a property for the cost of
the control plane for EKS (before you attach hosts, and before counting the
time spent figuring out why hosts won't attach to it). It's just bad.

------
je42
When I experimented with GCP, I found it weird that Cloud Run would not work
together with the GLB. That was really a surprise.

------
fulafel
Why would GCP restrict the ports like this?

~~~
tn890
GCP has a lot of port restrictions.

A place I used to work at migrated from GCP to AWS because they don't allow
port 25 incoming; if you're doing e-mail, you 100% need this.

~~~
jkaplowitz
They block it outbound, not inbound - email receiving should work fine in GCP.
I believe that's the only mandatory firewall restriction, and even that one
can have exceptions made for use cases such as your former employer, as the
other reply to you said.

The point is to make sure that anything sending email from GCP has someone
properly attending to it, so that GCP does not become a source of spam, rather
than to prevent email service providers or the like from using GCP.

For many companies that aren't an email service provider, it's often a better
use of IT funds to send via one anyway, given how much work is involved in
maintaining security and deliverability of an outbound email server in 2020.
Most of those support sending through ports that aren't 25, and Google has
some nice free tier deals for GCP customers.

------
jonstewart
Is it web scale?

------
znpy
CLOUD NATIVE DEVOPS RAFT-BASED BLOCKCHAIN-ENABLED MULTI CLOUD RESILIENT
CONTAINERIZED MULTI-REGION MULTI-AZ static blog generator.

~~~
bryanlarsen
I downvoted this because even though I like it, it belongs at the bottom of
the conversation below any real discussion of the actual article.

~~~
tyfon
Personally I think it belongs at the top. When the article looks like it is
generated by the buzzword generator, that should be the real discussion.

~~~
bryanlarsen
It's quite a good article with a solution for a real problem. That it looks
like a buzzword generator to HN says more about HN than it does about the
article, IMO.

