
Making a Go program 70% faster by avoiding common mistakes - numo16
http://blog.fmpwizard.com/blog/go_making_a_program_70_faster_by_avoiding_common_mistakes
======
rndstr
Great job. The title was misleading to me because I thought I'd learn
something about Go, but it basically boils down to these performance
improvements:

    
    
      - store results in variables and don't call the expensive method over and over
      - use batch queries for fetching multiple documents at once from your storage

~~~
emehrkay
It's like seeing this in everyone's JavaScript

    
    
        $('.some_selector').action();
        ...
        $('.some_selector').anotherAction();

~~~
Cthulhu_
Isn't there a decent caching system behind that?

OTOH, .action() could change the whole DOM so that .some_selector matches
something else or nothing at all, so I guess jQuery couldn't make that
assumption/optimization without knowing what .action() does, or without
doing a deep analysis of which DOM elements .action() altered and whether the
selector on the next line is affected.

------
echlebek
Benchmarks are an important part of optimizing a program.

Another important part of optimization is profiling. Profiling lets you see
where your program is spending the bulk of its time. Then you can make
informed decisions about where to direct your efforts.

[https://blog.golang.org/profiling-go-programs](https://blog.golang.org/profiling-go-programs)

------
rileymat2
A bit off topic, but do we have a standard way to talk about performance
improvements in terms of percents?

This went from 131 to 76. The 70% calculation was (start-end)/end. Is this the
standard method? It seems like you would divide by the start timing.

~~~
ant6n
I find reporting like this the wrong way around. It should be a factor
requiring no addition or subtraction, like 1.7x faster.

Similar with compression: if an 8 kB file becomes 2 kB, the compression factor
should be 0.25x.

~~~
greggyb
This might be pedantic, and I'm sorry, but this is something I always wonder
about when I talk about performance.

When we say 1.7x faster, what is a fast? How does fastness increase? It's
nearly 2 fasts instead of one?

I think the compression example you gave is much better: multiply the original
file size by the factor to get the new, compressed size. That beats a
statistic I hear a lot about the compression in a columnstore database we use,
where everyone says "7-10x compression" because that was published first and
is popular.

I'd prefer to see something like your compression example. It took 8 seconds
before, and now takes 2. My optimization makes the operation take 0.25x the
time.

~~~
over
1.7x faster means oldRunningTime / newRunningTime = 1.7. Or in your example, 8
seconds / 2 seconds = 4x faster. Sometimes this is called a speedup of 4x or a
4x speedup.

~~~
greggyb
I totally get how the math works. I don't like the language. We're not
measuring speed or velocity which can be represented as rates. We're measuring
time. Speed would be something like operations per second. We're just
measuring seconds.

------
egeozcan
I'm not against people posting their experiences about finding issues (even
trivial ones) in their code or learning something fundamental about db queries
- I would actually encourage that. What I find a bit annoying is putting the
name of the programming language they use front and center, as if it were
relevant at all, to attract what is usually called "hype".

~~~
laurent123456
Indeed what a clickbait title, even though there's nothing new in the article.
I remember reading something very similar 20 years ago, except it was about
Java.

------
speps
Interesting tool suggested in the comments of the article:
[https://github.com/uber/go-torch](https://github.com/uber/go-torch)

------
blablabla123
> [Go] benchmarks are your friend

> Try not to query your database if possible.

Actually this is quite generic and applies to any platform...

------
al2o3cr
I'd also recommend that the author look into tidying up the `TransactionType`
field at ingest - those `ToLower` calls aren't free either, and if you've got
control of the system writing the data it's easier to just store a consistent
case.

Failing that, `EqualFold` may be worth looking at. It expresses the same
intent, dunno if it's more or less efficient.

------
unfunco
I'm not sure if Go optimises your condition, but you're calling
strings.ToLower(trade.TransactionType) twice; you could memoize the result of
that operation and possibly squeeze a little more performance out of the
impact function.

------
meshko
Sadly the important wisdom is missing from the post -- profile, then optimize.

