
Instrumentation in Go - gobwas
https://gbws.io/articles/instrumentation-in-go/
======
Julio-Guerra
Something to note here is that everything is known and done statically in the
source code. This could be addressed through metaprogramming if only the Go
language had some ^^

We have similar needs at Sqreen, but for security monitoring and protection
reasons: we need to dynamically instrument functions at run time, without
asking our users for any code modification. To do so, we instead leverage the
Go compiler to perform compile-time instrumentation that inserts hooks
anywhere interesting. You can read more about this approach at
[https://blog.sqreen.com/dynamic-instrumentation-go/](https://blog.sqreen.com/dynamic-instrumentation-go/)

~~~
gobwas
I have read this article before and I really like the idea of doing so. I
would love to read more about how you guys modified the compiler!

~~~
Julio-Guerra
We haven't modified the compiler, but instead plugged an external
instrumentation tool into it using the `go` tool's `-toolexec` option.

With this option, the go command invokes every toolchain binary (compile, asm,
link, etc.) through the provided program. So you can basically write a proxy
program that intercepts calls to `compile` to do source-code instrumentation,
exactly like you would with go generate, but now done automatically during
compilation on _every_ package.
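To make the idea concrete, here is a minimal sketch of such a `-toolexec` proxy. This is not Sqreen's actual tool; the `instrument` function is a made-up placeholder showing where source rewriting would happen:

```go
// toolexec-proxy: invoked by the go tool as `proxy <tool> <args...>`.
package main

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	if len(os.Args) < 2 {
		return // not invoked by the go tool
	}
	tool, args := os.Args[1], os.Args[2:]
	if filepath.Base(tool) == "compile" {
		// Intercept only the compiler; pass asm, link, etc. through.
		args = instrument(args)
	}
	// Run the real toolchain binary with (possibly rewritten) args.
	cmd := exec.Command(tool, args...)
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}

// instrument is a placeholder: a real tool would parse the .go files
// named in args, inject hooks, write the results to a temp dir, and
// substitute the new paths in the returned argument list.
func instrument(args []string) []string {
	out := make([]string, 0, len(args))
	for _, a := range args {
		if strings.HasSuffix(a, ".go") {
			// rewrite a -> path of instrumented copy (omitted here)
		}
		out = append(out, a)
	}
	return out
}
```

You would then build with something like `go build -toolexec=/path/to/proxy ./...`, and the proxy sees every compile invocation of every package.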

~~~
gobwas
Ah, cool! Thank you so much, didn't know about this parameter :D Will try it
definitely.

------
jeffbee
Not a big fan of this pattern. I feel like the state of the application should
be maintained in the usual way, with just some struct fields or global
variables. It is easy for any reader of the code to understand what is meant
by a statement like `bytesReceived += msgLen`. To observe this state, you can
have a function that reads and exports bytesReceived, on demand and when/if
needed. This provides for the best flexibility and maintainability of the
system, since you will be able to change your observability stack at a later
time, or have more than one of them, without changing the stats-keeping
statements at the point where they appear in the application code. This also
provides the best scalability and performance since you are able to aggregate
separate counters from multiple threads, if needed, minimize or optimize
locking, etc.
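The pattern described above can be sketched roughly like this (illustrative names only, not code from the article):

```go
package main

import (
	"fmt"
	"sync"
)

// Server keeps its stats as plain fields, mutated in the usual way.
type Server struct {
	mu            sync.Mutex
	bytesReceived int64
}

func (s *Server) handle(msg []byte) {
	s.mu.Lock()
	s.bytesReceived += int64(len(msg)) // readable stats-keeping at the point of use
	s.mu.Unlock()
}

// BytesReceived is the observation point: any exporter (expvar,
// Prometheus, logs) can call it on demand without touching the
// recording path.
func (s *Server) BytesReceived() int64 {
	s.mu.Lock()
	defer s.mu.Unlock()
	return s.bytesReceived
}

func main() {
	s := &Server{}
	s.handle(make([]byte, 100))
	s.handle(make([]byte, 28))
	fmt.Println(s.BytesReceived()) // 128
}
```

Swapping the observability stack then only means changing whoever calls `BytesReceived`, not the `+=` statements scattered through the handler code.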

The problem is there are a lot of off-the-shelf observability frameworks for
Go that require the pattern in the article: you emit a value to a hook
function at the point of production. This sucks at any reasonable scale
because you are now required to just take a global lock (or write a channel,
which is the same thing) every time you record a value. This is a significant
contributing reason why Go servers fall apart when given access to more than a
handful of CPU cores.

~~~
gobwas
I think all mentioned issues are related to the implementation of the user
probes, not the pattern.

You mentioned observation methods, but essentially they are exactly the same
as hooks, just inverted (with a bit less overhead from branching and the hook
call). E.g. your example with the bytesReceived counter can be implemented
with an atomic operation and then exported on demand by some other goroutine.
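That atomic-counter variant might look like this (a minimal sketch, names invented for illustration):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

var bytesReceived uint64

// record is the hot path: a single atomic add, no locks.
func record(msgLen int) {
	atomic.AddUint64(&bytesReceived, uint64(msgLen))
}

// export is the observation path, called on demand (e.g. once per
// second) by a separate goroutine or scrape handler.
func export() uint64 {
	return atomic.LoadUint64(&bytesReceived)
}

func main() {
	record(100)
	record(28)
	fmt.Println(export()) // 128
}
```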

~~~
jeffbee
The thing I keep in mind is that state is mutated far more than it is
observed. You might handle 1000 or more requests per second and only export
the number of requests once per second or less. In light of this ratio it’s
important to make the recording path as simple and cheap as possible, and it’s
ok if the observing path has to be complicated to compensate.

I usually refuse to use atomic increments for services because they scale
very poorly. Even a mutex-protected increment scales much better than an
atomic increment.
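One common middle ground between these positions, in the spirit of the "aggregate separate counters from multiple threads" idea mentioned earlier, is sharding the counter so concurrent writers rarely touch the same cache line. A rough sketch (illustrative, assumes callers pass a non-negative worker/shard id):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// shardedCounter spreads increments across several cache lines so
// concurrent writers seldom contend; readers sum all shards.
type shardedCounter struct {
	shards [8]struct {
		n uint64
		_ [56]byte // pad to a 64-byte cache line to limit false sharing
	}
}

// add increments only the caller's shard (e.g. keyed by worker id).
func (c *shardedCounter) add(shard int, delta uint64) {
	atomic.AddUint64(&c.shards[shard%len(c.shards)].n, delta)
}

// total aggregates on the rare observation path.
func (c *shardedCounter) total() uint64 {
	var t uint64
	for i := range c.shards {
		t += atomic.LoadUint64(&c.shards[i].n)
	}
	return t
}

func main() {
	var c shardedCounter
	c.add(0, 100)
	c.add(3, 28)
	fmt.Println(c.total()) // 128
}
```

The recording path stays one atomic op, while the cost of summing shards lands on the infrequent export side, matching the "mutate often, observe rarely" ratio described above.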

~~~
gobwas
I see your point that recording must be much cheaper than exporting, and I
completely agree with it.

Anyway, how will you collect those counters inside your component (to be
observed later on demand)?

------
kstenerud
Or you could just adopt [https://opentelemetry.io/](https://opentelemetry.io/)
and then everyone can benefit from all OTel-enabled libraries without having
to write extra code, and you won't have to rewrite anything in order to change
implementations, since it's just an API.

The Go implementation of OTel makes extensive use of the Context [1] object to
support tracing, logging, and metrics. You write the instrumentation once, and
then the user only needs to decide which exporter to use, and possibly which
filters to apply.

[1] [https://golang.org/pkg/context/](https://golang.org/pkg/context/)

~~~
filipn
Is OpenTelemetry ready for production usage? As I understand it, this project
is a merge of OpenCensus and OpenTracing, but it's still in beta and the
documentation is really lacking. Does anyone have hands-on experience with
this library?

~~~
owaislone
The SDKs for individual languages are in beta and might not be production
ready, but the collector is. You can use any of the OpenTracing or OpenCensus
instrumentation libraries but deploy the OpenTelemetry collector, and once the
OTel SDKs mature, migrate from OC/OT to OTel, or not if you don't want to.

------
majewsky
Turns out you can use any language to write Java code.

------
gtramont
"Logging is a feature", as the authors of Growing Object-Oriented Software
Guided by Tests state. As such, we need to treat it as we would treat any
other feature. One relatively straightforward way of adding logs is by
decorating your objects. Logging decorators.

For example, if `Client` is defined as an interface, we can have two concrete
implementations. One that holds business logic – `NetClient` – and one that
holds logging/developer logic – `VerboseClient`. As we inject dependencies, we
can compose a client like this: `NewVerboseClient(NewClient())`. This way you
may even test your logging logic by composing it with different
implementations of clients (one that always fails, for example) to exercise
the different kinds of logs.
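A minimal sketch of that decorator, using the `Client`, `NetClient`, and `VerboseClient` names from the comment (everything else is invented for illustration):

```go
package main

import (
	"fmt"
	"log"
	"os"
)

// Client is the interface both implementations satisfy.
type Client interface {
	Send(msg string) error
}

// NetClient holds the business logic and knows nothing about logging.
type NetClient struct{}

func (c *NetClient) Send(msg string) error {
	// real network I/O would live here
	return nil
}

// VerboseClient is the logging decorator: it wraps any Client and
// adds logging around each call.
type VerboseClient struct {
	inner Client
	log   *log.Logger
}

func NewVerboseClient(inner Client) *VerboseClient {
	return &VerboseClient{inner: inner, log: log.New(os.Stderr, "client: ", 0)}
}

func (c *VerboseClient) Send(msg string) error {
	c.log.Printf("send %q", msg)
	if err := c.inner.Send(msg); err != nil {
		c.log.Printf("send failed: %v", err)
		return err
	}
	return nil
}

func main() {
	// Composition: NewVerboseClient(&NetClient{}) stacks logging on
	// top of the business-logic client.
	var c Client = NewVerboseClient(&NetClient{})
	fmt.Println(c.Send("hello")) // <nil>
}
```

Substituting an always-failing `Client` implementation for `NetClient` would then let you test the error-logging branch in isolation.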

~~~
gobwas
That's true, such an approach also exists.

But it doesn't work well in Go, for several reasons.

For example, you usually define an interface not in the package where it's
implemented, but where it's needed. Thus you must provide such wrappers in
every place where some (subset) of the `Client`'s methods are used.

Another reason is that such a thing removes the ability to use your struct as
is, without interfaces (and thus sometimes without a heap allocation for your
struct).

And, finally, if you want to adopt such an approach in proprietary/internal
software, where interfaces usually change more often than in libraries, you
will change code in N places instead of 1 (in the best case), where N is the
number of instrumentation methods you want to provide "as a feature".

------
joeberon
Why do Go articles so often have no syntax highlighting?

~~~
guessmyname
> _Why do Go articles so often have no syntax highlighting?_

Rob Pike, one of the 3 original Go architects, has stated in the past he
doesn’t like syntax highlighting [1].

> _Syntax highlighting is juvenile. When I was a child, I was taught
> arithmetic using colored rods
> ([http://en.wikipedia.org/wiki/Cuisenaire_rods](http://en.wikipedia.org/wiki/Cuisenaire_rods)).
> I grew up and today I use monochromatic numerals._ — Rob Pike

Maybe Sergey Kamardin, the author of this article, is following Rob Pike’s
philosophy. That being said, the website has references to a Go library called
Chroma [2], whose purpose is to colorize source code. The library borrows
ideas from a popular JavaScript library called “highlight.js” [3], and that is
why you can also see “highlight” CSS classes. The website downloads this CSS
file [4], which contains a couple of instructions to make the code look the
way it looks.

The official Go Blog popularized this type of design [5] and many Go
programmers have adopted it on their own websites.

[1] [https://groups.google.com/d/msg/golang-
nuts/hJHCAaiL0so/kG3B...](https://groups.google.com/d/msg/golang-
nuts/hJHCAaiL0so/kG3BHV6QFfIJ)

[2]
[https://github.com/alecthomas/chroma](https://github.com/alecthomas/chroma)

[3] [https://highlightjs.org/](https://highlightjs.org/)

[4] [https://gbws.io/scss/syntax.min.css](https://gbws.io/scss/syntax.min.css)

[5] [https://blog.golang.org/go-fonts](https://blog.golang.org/go-fonts)

~~~
joeberon
That is very bizarre

~~~
DangitBobby
I would call it juvenile...

~~~
naikrovek
Well, he has a point. If you need a variable to be colored to know it's a
variable, then I'd argue that you don't really know what you're doing.

If you don't need a variable to be colored to know it's a variable, then why
do you color them?

A counter argument would be that you have this huge visual cortex in your
brain which can use cues like this to help you quickly find what you're
looking for.

A counter argument to _that_ would be that you should know your code well
enough to not need to rely on anything like that.

When I started programming, I had notepad and that was it. I didn't need
colors. I use colors now, though. What's that say? I have no idea.

~~~
DangitBobby
Not everything you do is about need. I think highlighting helps reduce some of
the cognitive load involved in mentally parsing a code block. Especially in a
language you might be unfamiliar with. I also happen to like the way it looks.

------
fromtheabyss
Tracing is a microservices phenomenon, right? In other words, would it benefit
monolithic systems that already log and collect metrics?

~~~
hocuspocus
Even monoliths are dealing with outbound network calls.

I assume it depends on your current solution, but for a new project, I find
that setting up instrumentation and pushing traces to Zipkin or Jaeger is very
straightforward, at least on the JVM. Out of the box I can see the overhead on
top of DB queries and plain HTTP calls. And if some bit of internal logic is
particularly time consuming, I can add a custom span in two lines of code and
figure out what's going on. There are of course other ways to achieve the same
thing, but the experience feels nicer.

