
Sample cloud-native application with microservices - zdw
https://github.com/GoogleCloudPlatform/microservices-demo
======
warp_factor
Completely ridiculous. I have been in the Kubernetes community for a couple
of years now, and there is a crazy amount of unneeded micro-optimization for
problems that nobody except the top 1% of users actually has.

Istio mesh is a good example.

~~~
cle
Why is everyone freaking out? This is a demo of the technology, its complexity
is contrived to demonstrate various facets.

This kind of architecture is not unreasonable for larger companies with many
teams, which is where the technology itself becomes useful as well. So in that
context, this architecture is entirely reasonable.

~~~
tedunangst
There's a fine line between "this is possible" and "this is how it should be".

~~~
_pmf_
The line is fine, but this is already miles behind it.

------
marcrosoft
How the heck did we get here as an industry. The complexity is ridiculous.

~~~
drdaeman
Not just complexity, but quantity of tools that do the same thing. Helm,
Forge, Ksonnet, Operators, now this Skaffold (haven't heard of it before)...
Sure, there are some differences, but still... I'm currently moving one app
from GAE to GKE and saw recipes for using all of those to describe the
infrastructure.

Quantity itself is not an issue - it's great to have options. Incompatibility,
segmentation, and uncertainty about the future are issues, though. E.g.
ksonnet and Forge are dying. Some say Helm isn't particularly healthy either -
is that just speculation, or will it die in the next year or two, leaving all
those chart repos dead code for archaeologists?

Maybe that's just me, but modern DevOps feels like JavaScript world from a few
years ago. Things are born in abundance, promise lots, and die before they're
even v1.0.

~~~
bpicolo
Pretty much all of these tools feel like the wrong abstraction level for app
devs.

I have to imagine that some k8s controller/git server enabling git deployments
to k8s with just a node-size config is the optimal end state.

I’ve mentioned this before in related threads: app devs want Heroku. They want
a managed platform where they type “git push” and they have an application
with all the fixings. Heroku got the UX right long ago, and the rest of the
ecosystem is still playing catch-up.

~~~
drdaeman
The problem is, this isn't easy, and K8s doesn't do anything about it. You
need a whole CI/CD pipeline to build Docker images first.

However, there is [https://gitkube.sh/](https://gitkube.sh/) that looks
somewhat similar. Can't vouch for or against - haven't used it at all.

------
reustle
Could someone please point me in the direction of some solid documentation
about deciding on when / how to split out microservices? So many of the cases
I see them used are overkill and just make development and devops far more
complicated for the scale of the application & amount of data/users being
processed. I find myself usually comfortable with splitting out Auth
(Cognito?), Payments (Stripe?), and not much else.

~~~
alpha_squared
The way I've generally thought about it and have seen it done successfully in
practice is to create a microservice for each component that scales
independently. Auth and payments make sense because they scale independently.
You may get authorization requests and financial transactions at a different
rate than traffic to your application itself.

Similarly, if you run a website that does batch processing of images, for
example, the image processor would be its own microservice, since it scales
independently of website load. You might need to process 100 or even 1,000
images per user, on average, and it doesn't make sense to scale your whole
application when the bulk of the work is image processing.

~~~
jjeaff
That might be a good criterion. But it still depends on your application. Most
web apps have hardly any overhead when under load, so it's essentially just as
efficient to load the whole codebase as a monolith into each node as you scale
up.

~~~
sitkack
Correct, the hierarchical breakdown of services is orthogonal to the scaling
unit of code. If every node in the cluster could execute every function, there
is no need to split things out.

When deployment and coordination become an issue, that is when _deployment_
needs to get split up. But given our current RPC mechanisms, deployment and
invocation are over-coupled so we have to consciously make these decisions
when they could be made by the runtime.

------
ejholmes
I know this was probably well intentioned, but I can't shake the irony of how
overly complex and over-engineered this is. If you're just starting out,
please, please, please don't do this.

~~~
xnxn
Man, this is a frustrating and shallow criticism. _Obviously_ this is over-
engineered. It's a demonstration of how these patterns would fit together in
an application that required them.

~~~
not_kurt_godel
Agree it's not the best criticism, but there's at least some validity to it. I
think the main issue is that this is an extremely heavyweight architecture
that will incur a disproportionate level of administrative overhead - you
would need an ample team of competent full-time devs to build and maintain
this system properly. It's like an advertisement for a giant excavator where
the excavator is featured crushing a soda can. It's an impressive piece of
machinery, but the task being used to demonstrate it is comically mismatched
with its true capabilities.

~~~
xnxn
True, true. That's a fairer way to formulate it. But like, it's supposed to be
an extremely heavyweight architecture. The benefits of microservices are
arguably only apparent when you _have_ an ample team (or teams!) of competent
full-time devs.

You're right they could have... beefed up the soda can, so to speak, but I
don't blame the (presumably) DevRel folks who put this together for hand-
waving it, "now imagine a mountain here".

~~~
not_kurt_godel
> The benefits of microservices are arguably only apparent when you have an
> ample team (or teams!) of competent full-time devs.

I would very much agree with that. Although I'm not against microservices,
they are by no means a quick or efficient way to get things done. Hedging
against future theoretical scaling concerns comes at a high cost - a cost
that's very much worth it if true scale is achieved, but a high cost
nonetheless.

------
ISO-morphism
Microsoft has a similar reference application [1].

There are comments arguing these architectures create a ridiculous amount of
overhead for what would traditionally be a simple application, and others
countering that the point is to show the underlying tech in the context of a
"simple" problem. I think there's a large amount of truth on both sides.

It feels like there's a great paradigm shift on the horizon, and hopefully a
good set of abstractions to build with. We're programming in machine-specific
assembly languages, waiting for a high-level language to come along so we
don't worry about things like calling conventions anymore.

[1] [https://github.com/dotnet-architecture/eShopOnContainers](https://github.com/dotnet-architecture/eShopOnContainers)

~~~
Yhippa
I don't have peers I can bounce my thoughts off of and get feedback on
solutions like these. I've been very curious about serverless microservices
architectures, and this repo has given me a pretty clear way of doing it while
showcasing polyglot microservices and persistence. I get that it looks like it
has a lot of overhead for what it tries to do, but I can grok scaling this up
for an enterprise application.

------
emersonrsantos
"Any intelligent fool can make things bigger and more complex... It takes a
touch of genius - and a lot of courage - to move in the opposite direction." -
Einstein

~~~
jasonvorhe
Anyone can randomly quote Einstein.

------
benatkin
I really despise the phrase "cloud-native", but this is a cool project,
because it shows how you can have a bunch of different platforms (python, go,
node, etc) running together, and how you can set them up locally without
having to worry about how to install them. Also there's no worry about someone
publishing a package to npm that overwrites your documents, because everything
runs in containers.

I'd love to say this is overcomplicated and to just use Docker Compose, but I
don't think Docker Compose is the way to go.

The next thing I'd like to see is how to get this integrated with vscode or
Atom to provide autocomplete without installing everything locally.

~~~
tech_tuna
How about "cloud-amenable"?

------
gravypod
It's unfortunate that each project has a `genproto.sh` and that there's no
tool that autogenerates protocol libraries/modules for any consuming language.
It's a real pain point when trying to get people to look at gRPC. It'd be
amazing if there were a simple way to have:

    .proto(s) -> language code -> language binary/package

Completely automated and ready for an import into a project.

~~~
EdSchouten
Bazel should be pretty good for that:

[https://bazel.build/](https://bazel.build/)

[https://docs.bazel.build/versions/master/be/protocol-buffer....](https://docs.bazel.build/versions/master/be/protocol-buffer.html#proto_library)

We use it at work to build a web app whose backend is written in Go and
frontend in Typescript. All of the code gets built and placed in a Docker
image using these rules:

[https://github.com/bazelbuild/rules_docker](https://github.com/bazelbuild/rules_docker)
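
For illustration, the proto-to-Go step under Bazel looks roughly like this in
a BUILD file. The target names and import path are hypothetical, and rule
attributes vary across rules_go versions, so treat this as a sketch:

```python
# BUILD.bazel (sketch): declare the .proto files once, then derive
# the Go bindings from the same proto_library target. Other languages
# would add their own *_proto_library rules against ":cart_proto".
load("@io_bazel_rules_go//proto:def.bzl", "go_proto_library")

proto_library(
    name = "cart_proto",
    srcs = ["cart.proto"],
)

go_proto_library(
    name = "cart_go_proto",
    proto = ":cart_proto",
    importpath = "example.com/demo/cart",  # hypothetical import path
)
```

Bazel then rebuilds the generated code automatically whenever the `.proto`
changes, which is the "completely automated" pipeline the parent asks for.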

~~~
alpb
Hi, author here. We wanted to keep the demo app as simple as possible for
readers. Adding Bazel would introduce another layer of complexity, whereas most
devs can read a bash script that's a couple of lines long.

------
Omnipresent
Tried gRPC in a production application where one microservice had to make
over a million gRPC connections to another microservice. We experienced a ton
of memory leaks and switched over to HTTP/JSON, which has been working well.
The implementation was done in Scala with Akka. Curious to know if others have
had similar experiences, or if there is a gRPC best practice we're missing.

~~~
achiang
The problem that gRPC solves for you is versioning your messages between your
services.

As your json payloads evolve, you're going to encounter pain trying to keep
your services in sync, whether it comes in the form of writing parsing code to
crack open payloads and do conditional error checking based on the version
(and expected fields), or whether it comes operationally in how you actually
deploy updates to running services.
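
Concretely, that evolution story rests on protobuf's field-numbering rules
rather than on JSON-style version sniffing. A hypothetical before/after (the
`Order` message and its fields are invented for illustration):

```protobuf
// v1 of the message as originally deployed.
message Order {
  string id     = 1;
  int64  amount = 2;
}

// v2 adds a field under a new number. Old services that haven't been
// redeployed simply skip unknown field 3; new services reading old
// payloads see the field's default value. No parsing code changes.
//
// message Order {
//   string id       = 1;
//   int64  amount   = 2;
//   string currency = 3;
// }
```

The constraint that makes this work: never reuse or renumber an existing
field, only add new numbers (or reserve retired ones).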

~~~
zepolen
That's solved by protobuf, not grpc.

------
nzoschke
I really appreciate projects like this.

It’s helpful for casual observers to understand all the pieces of the stack.
You can get an immediate feel if the approach is right for you and your team.

And it’s helpful for engineers to have reference examples to copy when
implementing the patterns themselves.

I’ve been working on a project similar to this for Go, gRPC, and Envoy:

[https://github.com/nzoschke/gomesh](https://github.com/nzoschke/gomesh)

My project doesn’t go into deployment or K8s. If I needed to figure that out,
I’d look at the OP project.

I also have a project that demonstrates Go and Lambda:

[https://github.com/nzoschke/gofaas](https://github.com/nzoschke/gofaas)

If these help even a few engineers learn to be successful with the tools it’s
a win.

------
chvid
Is this satire?

~~~
jbarham
I honestly thought so myself, until I saw that it was an official GCP repo.
Unbelievable...

Sending emails is a one-liner in Django. Am I a sucker for not building my own
email sending service?!

Of course being a Google demo they have to include the obligatory ad server
service, in Java no less. Nonsense like this is why I'm steering my kids away
from considering software development as a career.

~~~
kumaraman
When a company and its platform grow, it's very common to build out an email
sending service, since you can only send so many emails per second.

An example of an email sending service is Sendgrid, who have built an entire
company around this service.

~~~
jbarham
I'm aware, I've used Postmark for years. My point is that building your own
microservice to send emails is ridiculous overkill.

------
oldsj
Let me try to break this down from first principles, since I'm not sure I
agree with the largely negative sentiment ITT about how complex this looks:

Most software projects are long lasting, and have unpredictable and constantly
changing requirements.

Agile is currently the best software development methodology under these
constraints.

Effective Agile teams should not be larger than 10 people total, minus the PO
and Scrum Master leaves 8 devs.

It's not possible to run a large-scale, high-traffic web application with
only 8 devs.

Therefore, it makes sense to split large applications up into chunks small
enough for a 10 person team to manage autonomously.

Since the application had to be split up, we now have to solve the
communication problem: networking, DNS, TLS, latency, bandwidth, etc. We also
have other issues if we’re running at scale: redundancy, monitoring, separate
environments for testing and production, and a local dev environment that
resembles production. There is a huge list of things that are not important
for an early-stage startup to think about, but that are very important and
very difficult for most large enterprises to get right if they want to
consistently and reliably deliver good software.

Google is a large enterprise that operates large-scale web services and has
proven it knows how to get this stuff right.

This repo is a reference architecture, from Google, for running microservices
at scale using modern tools and methodologies. If you think this is
over-engineered, you're just not the target audience, and something like
Heroku is much better suited to your scale.

------
lazyant
Why both ClusterIP and LoadBalancer services to expose the frontend's 8080 as
80?

Also, no persistent data storage (no data for the store owner)? (I guess for
an easier example.)

~~~
kumaraman
The ClusterIP and LoadBalancer services are on separate network interfaces.
The ClusterIP exposes the service on an internal interface, whereas the
LoadBalancer exposes it on a public interface.

[https://kubernetes.io/docs/concepts/services-networking/serv...](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types)
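
A sketch of what that pair of Services looks like (names and ports assumed to
match the frontend described above; treat as illustrative, not the demo's
exact manifests):

```yaml
# Internal: other pods reach the frontend via cluster DNS on port 80.
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: ClusterIP
  selector:
    app: frontend
  ports:
  - port: 80
    targetPort: 8080
---
# External: the cloud provider provisions a public IP for the same
# pods, again mapping port 80 to the container's 8080.
apiVersion: v1
kind: Service
metadata:
  name: frontend-external
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
  - port: 80
    targetPort: 8080
```

Both select the same pods; only the publishing type differs.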

------
esseti
One thing that I still don't get with microservices is where to put the
central control. Usually, a request requires: authentication, authorization,
execution, (error handling), response.

Now, there should be a reverse proxy that first calls authentication; if that
succeeds, calls authorization; if that succeeds, calls the execution. But how
do people do that?

~~~
dilyevsky
Istio (part of that demo) or Spire (SPIFFE) do that for you.
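
For a hand-rolled version of the chain described above, the usual pattern is
middleware composition: each concern wraps the next handler and short-circuits
on failure. A minimal sketch in Go, where the static token and role header are
hypothetical stand-ins for real authn/authz backends:

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

// authenticate rejects requests without the expected bearer token
// (a hypothetical static credential, standing in for a real auth service).
func authenticate(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.Header.Get("Authorization") != "Bearer demo-token" {
			http.Error(w, "unauthenticated", http.StatusUnauthorized)
			return
		}
		next.ServeHTTP(w, r)
	})
}

// authorize rejects authenticated callers lacking the required role
// (a hypothetical header check, standing in for a real authz service).
func authorize(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.Header.Get("X-Role") != "admin" {
			http.Error(w, "forbidden", http.StatusForbidden)
			return
		}
		next.ServeHTTP(w, r)
	})
}

// callChain sends one in-process request through
// authenticate -> authorize -> handler and returns the status code.
func callChain(token, role string) int {
	handler := authenticate(authorize(http.HandlerFunc(
		func(w http.ResponseWriter, r *http.Request) {
			fmt.Fprint(w, "executed")
		})))
	req := httptest.NewRequest("GET", "/", nil)
	req.Header.Set("Authorization", token)
	req.Header.Set("X-Role", role)
	rec := httptest.NewRecorder()
	handler.ServeHTTP(rec, req)
	return rec.Code
}

func main() {
	fmt.Println(callChain("Bearer demo-token", "admin")) // 200
	fmt.Println(callChain("wrong-token", "admin"))       // 401
	fmt.Println(callChain("Bearer demo-token", "guest")) // 403
}
```

A mesh sidecar or API gateway does the same short-circuiting, just outside
the application process.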

------
thunderbong
If this were just for demonstration, they could have done it with just a
couple of microservices.

Why bring the full web stack into the picture?

~~~
alpb
Hi, author of the demo here. We went with a complex app to show a realistic
complex scenario. This way we get non-trivial trace graphs and metrics from
monitoring tools.

------
iamgopal
Calm down. It's a showcase of what is possible, not of what the ideal
solution is.

------
eweise
They should change the name "Go" to "if err != nil"

~~~
why-el
I don't understand why people don't like this. If you hate it so much, why
not write a plugin for your text editor that collapses it? That would make
reading Go code much faster, letting you peek in only when curious. The
proposed `check` keyword won't remove the cognitive load this generates
either.

The way I see it, I'd rather have it. Took maybe 30 seconds to write,
eliminated a whole class of bugs, and if I don't like it, I will write that
plugin to hide it.

~~~
cakoose
What do you mean by "eliminated a whole class of bugs"?

~~~
fierro
I think he meant that it changed the nature of error handling, and thus
changed the way developers think about bugs and the way we encounter them.
Errors are returned from function calls, and callers are expected to check the
value of that error. The error must be assigned into a variable (or
intentionally ignored, which should never pass a code review), and then must
be checked. This forces errors to be handled on the spot, or returned up the
call stack intentionally, usually with some extra info/annotation. You don't
get try/catch scenarios where an error is caught 10 calls above where it
occurred.

Imagine writing Java, and wrapping every function call in a try/catch block,
and inspecting the exception if one was caught, and then handling it or re-
throwing it. That's kind of what we do in Go. There are no "unexpected errors"
because it is clear where all errors originate and how they are to be handled.
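
The handle-on-the-spot pattern described above can be sketched as follows;
the `parsePort` function is a hypothetical example, and the `%w` wrapping verb
assumes Go 1.13+:

```go
package main

import (
	"errors"
	"fmt"
	"strconv"
)

// parsePort converts a string to a TCP port number, handling or
// annotating each error right where it occurs.
func parsePort(s string) (int, error) {
	n, err := strconv.Atoi(s)
	if err != nil {
		// Return the error up the stack with extra context attached.
		return 0, fmt.Errorf("parsePort %q: %w", s, err)
	}
	if n < 1 || n > 65535 {
		return 0, errors.New("port out of range: " + s)
	}
	return n, nil
}

func main() {
	p, err := parsePort("8080")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("listening on", p)

	if _, err := parsePort("not-a-port"); err != nil {
		// The failure surfaces at the call site, not ten frames up.
		fmt.Println("error:", err)
	}
}
```

Every caller of `parsePort` must acknowledge the error value, which is the
explicitness the parent comment describes.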

Go 2 will improve on the `if err != nil` syntax, possibly with a
function-scoped error handler and a new language-level handle/check concept.
Check it out:

[https://dev.to/deanveloper/go-2-draft-error-handling-3loo](https://dev.to/deanveloper/go-2-draft-error-handling-3loo)
[https://github.com/golang/go/wiki/Go2ErrorHandlingFeedback](https://github.com/golang/go/wiki/Go2ErrorHandlingFeedback)

~~~
nmadden
Catching all exceptions is what we used to do in Java - using checked
exceptions. It turns out that in many cases the caller cannot do anything
sensible with most exceptions, except to let them bubble up to a higher layer.
Eventually you reach a point where something can be done - rolling back and
retrying a whole transaction, for example.

Forcing every intermediate layer in the call stack to catch and rethrow that
exception (check and propagate an error code) _sounds_ like a good thing for
explicit error handling, but actually in practice introduces a lot of
boilerplate code that just provides more opportunities for mistakes (like
errors being silently swallowed, or logged multiple times creating confusing
log files).

How many times in a code review do you see `if err != nil { return err }` and
ask yourself if what it is doing is actually appropriate? Most people just
mentally mask out that kind of boilerplate over time.

