
How our QA team leverages Gitlab’s performance testing tool - roleone
https://about.gitlab.com/blog/2020/02/18/how-were-building-up-performance-testing-of-gitlab/
======
awild
This looks like a neat tool, and I can see myself using this!

My main project so far this year has been creating actual performance metrics
rather than guesstimates (this is especially important with the JVM, which
optimizes code at runtime based on your actual payloads). The best tool so far
has been flame graphs [1]: I urge everyone to find an implementation for their
specific language, as these things are actually interactive. It's not just a
nice graphic; it can tell you very directly where you're spending time. We've
found countless minor bugs and wasted cycles.

The best Java integration I could find is async-profiler [2], which can, as
the name implies, be attached to any running JVM. The config is pretty
powerful and intuitive. It's one of those magical things that just work.
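
As a rough sketch of what attaching it looks like (the install path, output
file, and `<pid>` are placeholders; check the project README for the exact
flags your version supports):

```shell
# Attach async-profiler to an already-running JVM for 30 seconds
# and write an interactive flame graph you can open in a browser.
./profiler.sh -d 30 -f /tmp/flamegraph.html <pid>
```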

[1]: http://www.brendangregg.com/FlameGraphs/cpuflamegraphs.html

[2]: https://github.com/jvm-profiling-tools/async-profiler

------
wldlyinaccurate
Really cool to see the Gitlab team focusing on performance. I just wish they'd
look at front-end performance as well as API response times. Something like
WebPageTest or SpeedCurve will measure metrics that better represent the user
experience, like when important parts of the page are rendered or become
interactive. Maybe with visibility into these metrics, the engineering teams
could make Gitlab less painful to use on my old laptop :)

~~~
gygitlab
Hey thanks for the nice comments. I'm the blog's author.

So yeah, you're absolutely right there. Rendering performance is equally
important. We use a different tool for this, SiteSpeed, which we also offer as
an option for users to run in CI:
https://docs.gitlab.com/ee/user/project/merge_requests/browser_performance_testing.html.
We run SiteSpeed pretty much constantly against gitlab.com, but that's more
for monitoring. We (Quality) want to explore running it more generally against
our test environments to start rendering performance testing. Hopefully we'll
have more to share on this in the future.

With GPT, though, we actually updated it recently to increase Web endpoint
coverage, and this has led to some pages getting big performance fixes (the
Merge Request Changes page should be rendering a lot faster today than a few
months back, for example). With the tool we now hit more of these endpoints at
scale, but it only fetches the page source and other assets; it can't render
them, as it's not a browser (a real browser would be way too heavy to run at
scale), hence us looking at starting rendering performance tests as well.
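
To give a flavour of what "at scale" means here: GPT is built around k6, so a
run boils down to something like this (script.js, the VU count, and the
duration are purely illustrative values):

```shell
# Replay an endpoint with 50 concurrent virtual users for 60 seconds.
# script.js is a k6 test script that fetches the target URL.
k6 run --vus 50 --duration 60s script.js
```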

In the meantime I'd love to hear what pages specifically give your old laptop
grief?

