
Show HN: Go Micro – A Go microservices development framework - chuhnk
https://go.micro.mu
======
kubanczyk
go mod graph | wc -l

787

Oh my. And the most fascinating part of it:

      - google.golang.org/grpc@v1.17.0
      - google.golang.org/grpc@v1.19.0
      - google.golang.org/grpc@v1.19.1
      - google.golang.org/grpc@v1.20.1
      - google.golang.org/grpc@v1.22.0
      - google.golang.org/grpc@v1.24.0

~~~
wishinghand
I'm unfamiliar with that command. What is this showing?

~~~
winrid
Dependency graph. Piped to wc to show number of lines in output.
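For reference, the commands look like this (run inside a Go module directory; the grep pattern just pulls out the duplicated grpc versions shown above):

```shell
# Count edges in the module dependency graph
go mod graph | wc -l

# List the distinct required versions of one dependency
# (column 2 of `go mod graph` is the required module@version)
go mod graph | awk '{print $2}' | sort -u | grep 'google.golang.org/grpc@'
```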

------
GiorgioG
Not knocking the OP, or his/her effort, but I think the number of microservice
frameworks exceeds the number of organizations that actually have the problems
that microservice architecture was designed to solve.

We're doing it for the project I'm working on at work, and in my opinion it's a
colossal waste of engineering time and effort. We're a big company, but we're
not FAANG. Our user base will likely never even reach 100k users total. But
hey, we're doing this in the name of the industry's current 'best practice.'

I can't wait for the microservice & scrum trends to die off.

~~~
Hitton
You're right that for most use cases microservices aren't necessary, but that
doesn't mean they can't still be useful. Although there is some added cost at
the beginning, the separation of concerns they provide can be quite useful for
security, and in later phases it keeps the monolith from growing too big and
too difficult to maintain/extend.

I see microservices as a return to the Unix philosophy:

>Write programs that do one thing and do it well.

>Write programs to work together.

~~~
roganartu
I would argue that, unless you are supporting many developers working on the
same project simultaneously (as in, hundreds if not thousands), microservices
will actually slow development without improving quality or robustness.

Many things are significantly easier in a monolith: integration testing,
reasoning (and verifying with tests) about how components interact,
refactoring of interfaces, etc. As soon as you pull components out into
microservices, many assumptions developers may not even realise they're making
about developing in a monolith go out the window.

~~~
PopeDotNinja
Every microservice you carve out of a monolith gives you at least two public
APIs you didn't have to worry about before, and makes local development that
much more complicated. I had a situation where I needed to spin up 20
microservices just to wire up an A/B test for marketing, and everyone kept
asking me what was taking so long while refusing to listen to the trade-offs
of their request. Good times.

I vote for punting on microservices until the value proposition is clear.
Otherwise you just end up with a macrolith that makes you dream of the good
old monolith days.

~~~
cle
Of course, this should have been the approach from the beginning, and it
boggles my mind why people pick technologies or paradigms without considering
their requirements and the tradeoffs of their technical decisions.

I think part of it is the hype machine, where people only talk about how
awesome the things they invented are, instead of talking about what problems
they solve, what they don't solve, and what their tradeoffs are. If you are
reading something to evaluate a technology and it doesn't cover all three of
those things, discard what you're reading, because it will mislead you.

------
Someoneelse77
There are breaking changes like every second minor release, functionality gets
removed, and dependent repositories get deleted/renamed/moved by the author.
PRs are discussed on the wrong level, and the author is very opinionated.

Do not use this framework unless you want to end up in an inconsistent mess!

~~~
chuhnk
Thank you for the honest feedback. We're moving fairly quickly with the
evolution of the framework. Some of that results in breaking changes, and
you're right that we haven't established the right channels for communication.
To be quite frank, open source and public library maintenance was never my
goal or part of my experience; I build platforms and remain mostly behind the
scenes. So I apologise for the pain, but hopefully people have found the
framework to be useful despite some of the issues.

~~~
volkandemirel94
I like the way the author releases minor fixes: 1.14.0 - Remove the consul
registry. What are people supposed to do who are running hundreds of
microservices in prod that rely on go-micro with the Consul registry? Is there
any migration path without stopping the world? What about the cost of the
person-hours we need to spend changing the code everywhere? The business wants
to run its services 24x7 and cannot depend on the mood of a third-party
framework author.

~~~
chuhnk
I'm assuming you're from a certain large corporation from which no one
actively engages with our community, comments on PRs, creates issues,
contributes anything for that matter, or pays for support.

As a developer and user of open source I completely understand your pain. As a
maintainer who has built and managed this project alone for the past 4 years I
would tell you that you have many options in how you make use of a completely
free open source project with a liberal Apache 2.0 license.

You are entirely free to fork the project, to pin your dependency to a certain
release, to actively engage in the community, to file an issue when you have
concerns, or, of your own volition, to use something entirely different if you
are unhappy with a free tool.

We are in the process of moving from a totally free open source project
maintained by one person to a small team building a product and business
around these tools. During that period there may be some pain and issues:
we'll move fast, potentially break things, and make many mistakes along the
way, but hopefully people will engage to help us move in the right direction.

If you are a company who relies on this piece of software for the 24x7 uptime
of your business and this adds measurable value to your company then perhaps
it would make sense to engage in some sort of SLA or support agreement for
this critical software that you currently pay nothing for.

------
holografix
Service Discovery, load balancing. Aren't these things that should be done by
the underlying platform?

In other words: if I'm using this with K8s doesn't K8s do that for me? What
major benefits do I still have by using Go Micro?

------
vemv
It'd help a lot if the project showed its rationale, the alternatives
considered, the choices it made, and the corresponding tradeoffs.

Otherwise, one is essentially invited to blindly adopt someone else's design,
which is particularly reckless in distributed systems.

------
tedunangst
I built a microservice of sorts just yesterday with nothing more than net/rpc,
and it wasn't that bad.

------
jrockway
I fear this does too much.

All an application should care about is an API to do what it wants, so all you
need to decide on is a messaging protocol, which is probably going to be gRPC.
(Why gRPC? I picked it out of a hat. JSON is very brittle when service
definitions change, so you want an IDL. Feel free to pick one and then never
care about it again; it doesn't really matter.) Then if you want
publish/subscribe, you write a publish/subscribe service and make API calls to
it. SendMessage / WaitForMessage / etc.

Service discovery and load balancing are already solved problems. Use Envoy
sidecars, Istio, Linkerd, etc. for load balancing, tracing, TLS injection, all
that stuff. Use your "job runner"'s service discovery for service discovery
(think: k8s services, but feel free not to use k8s. It's just an example.)

The tools you really need for success with microservices:

1) A way to quickly run the subset of services you need.

For unit tests, I prefer "fake" implementations of services. Often your app
doesn't need the full API surface of an upstream service. If you have a
StoreKey / RetrieveKey service, an implementation like "map[key]value" is good
enough for tests. Make it super simple so you test your app, not the upstream
app, which already has tests. (Do feel free to write some integration tests as
a sanity check for CI, but keep the code/save/test loop fast and focused!)

For the "try it out in the browser" case, I'm pretty unhappy with the
available tools. You want something like docker-compose without requiring
docker containers to be built. I ended up writing my own thing to do this at
my last job. Each service's directory has a YAML file describing how to run
the service and what ports it needs. The tool can then start up the services,
with Envoy as a go-between for them. That way you get http/2, TLS (important
for web apps because some HTML features are only available from localhost or
if served over https, and your phone is never going to be retrieving your
app's content from localhost), tracing, metrics, a single stream of logs, etc.
I got it optimized to the point where you can just type "my-thing ." and have
your web app working almost like production in under a second. It was great.
I wish I had open-sourced it.
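A per-service descriptor for a tool like that might look something like this. The format is entirely hypothetical (the tool was never released); the field names are invented for illustration:

```yaml
# service.yaml - hypothetical per-service run descriptor
name: user-service
run: go run ./cmd/user-service
ports:
  - name: grpc
    port: 9090
  - name: http
    port: 8080       # Envoy fronts this and adds TLS/http2/tracing
env:
  DATABASE_URL: postgres://localhost:5432/users
```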

2) Observability. You need to know what's going on with every request. What's
failing, what's slow, what's a surprising dependency?

2a) Monitoring. With a fleet of applications, it's unlikely that you'll be
seeking out failures. Rather, they just happen and you don't know how often or
why. So every application needs to export metrics, and these metrics need to
feed alerts so that you can be informed that something is wrong. (An alert
tells you something is abnormal; the dashboard with all the metrics lets you
think of some likely causes to investigate.) Just use Prometheus and Grafana.
They're pretty great.
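For what it's worth, the Prometheus side of this can be as small as a scrape config pointing at each service's /metrics endpoint (job names and ports below are made up):

```yaml
# prometheus.yml - minimal scrape config (illustrative targets)
scrape_configs:
  - job_name: user-service
    static_configs:
      - targets: ['user-service:8080']   # serves /metrics
  - job_name: order-service
    static_configs:
      - targets: ['order-service:8080']
```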

2b) Distributed tracing. You don't have an application you can set a
breakpoint in to pick apart a failing request. So you need to ephemerally
collect and store this information so that when something does break, you have
all the information you would have manually obtained all ready for you, so you
can dive in and start investigating. Just use Jaeger. It's pretty great.
(Jaeger will also give you a service dependency graph based on traces. Great
for checking every once in a while to avoid things like "why is the staging
server talking to the production database?". We don't know why, but at least
we know that it's happening before someone deletes production.)

2c) Distributed logging. You will inevitably produce a lot of interesting logs
that will be like gold when you're debugging a problem that you've been
alerted to. These all need to be in one place, and need to be tagged so that
you can look at one request all at once. The approach I've taken is to use
elasticsearch / fluentd / kibana for this, with the applications emitting
structured logs (bunyan for node.js, zap for go; but there are many many
frameworks like this). I then instructed my frontend proxy (Envoy) to generate
a unique UUID and propagate that in the HTTP headers to the backend
applications, and wrote a wrapper around my logging framework to extract that
from the request context and log it with every log message. (You can also use
the opentracing machinery for this; I personally logged the request ID and the
trace ID; that way I could easily go from looking at Jaeger to looking at
logs, but traces that weren't sampled would still have a grouping key.)

The deeper logs integrate into your infrastructure, the better. As an example,
something I did was to include a JWT signed by the frontend SSO server with
every request. Then my logging machinery could just log the (internal)
username. Then when someone came to my desk and said "I'm trying to foo, but I
get 'upstream connect error or disconnect/reset before headers'", I could just
look for logs by their username. Much easier than trying to figure out what
service that was, or what URL they were visiting.

Anyway, sorry for the long post. My TL;DR is that you must invest in good
tooling no matter what architecture you use. You will be completely
unsuccessful if you attempt microservices without the right infrastructure.
But all this is great for monoliths too. Less debugging, more relaxing!

~~~
chuhnk
Your assumption is that the framework handles this complexity for you. This is
incorrect. It provides abstractions to the developer, which allows them to
build on these while letting the underlying infrastructure be swapped out for
whatever is most appropriate.

The point is that building distributed systems as a whole requires a level of
understanding of this space, but not one that should require you to focus on
infrastructure, or even take it into consideration, while writing software.
Ideally you should be given abstractions that allow you to build distributed
applications and offload operational concerns to the relevant parties, while
still having the sum of the parts work together coherently.

The tools that you mention are infrastructure. And while an environment, a
platform, should be provided that gives you the insights and the relevant
foundation, it really should not be the primary concern of the developer.

Developers should not be forced to reason about infrastructure.

