
Ask HN: How do you measure performance of your client-side application? - mgolawski
Hey. I want to ask you guys what your solution is for gathering info about reflows/repainting, layout thrashing and overall performance on the client side of an application. Do you use any tool to gather and store analysis data, classical Chrome dev tools investigation, or maybe some metrics implementation inside your code?
======
hknd
Lighthouse
[https://developers.google.com/web/tools/lighthouse/](https://developers.google.com/web/tools/lighthouse/)

and

Puppeteer
[https://github.com/GoogleChrome/puppeteer](https://github.com/GoogleChrome/puppeteer)
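
The two combine naturally: Puppeteer launches Chrome, and Lighthouse audits through the same DevTools debugging port. A rough sketch, assuming the `puppeteer` and `lighthouse` npm packages are installed (option names may differ between versions):

```javascript
// Sketch: audit a page with Lighthouse against a Puppeteer-launched Chrome.
// Lighthouse only needs the debugging port of the browser Puppeteer started.
function portFromWsEndpoint(wsEndpoint) {
  return Number(new URL(wsEndpoint).port);
}

async function audit(url) {
  // Required lazily so the helper above works without the packages installed.
  const puppeteer = require('puppeteer');
  const lighthouse = require('lighthouse');

  const browser = await puppeteer.launch({ headless: true });
  const port = portFromWsEndpoint(browser.wsEndpoint());
  const { lhr } = await lighthouse(url, { port, output: 'json' });
  await browser.close();
  // lhr.audits holds per-audit results such as first-meaningful-paint.
  return lhr;
}
```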

~~~
VeejayRampay
Lighthouse results seem random; I got three widely different results for the
same website. I would expect the results to vary slightly (taking into account
the time it takes to receive the requests, server load, congestion and
whatnot), but I also have a feeling that the time to first paint and time to
load are impacted by the current load on the machine running the Chrome
instance that runs the Lighthouse tests, or something.

Am I alone in observing this?

~~~
paulirish
I'd love to investigate what's going on with your results. Could you save the
results from a few runs and share them in a github issue?

> I also have a feeling that the time to first paint and time to load are
> impacted by current load on the machine

This is certainly true of any performance test. But it shouldn't be a
significant problem unless you're multitasking during recording.
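
One common-sense way to tame run-to-run noise (not an official Lighthouse feature, just a sketch) is to save the metric from several runs and compare medians rather than single samples:

```javascript
// Take the median of a numeric metric (e.g. first-paint times in ms)
// collected across several Lighthouse runs, so one outlier run
// doesn't dominate the comparison.
function median(values) {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 === 1
    ? sorted[mid]
    : (sorted[mid - 1] + sorted[mid]) / 2;
}

// median([1180, 1240, 3900]) ignores the 3900ms outlier and returns 1240.
```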

~~~
VeejayRampay
I'll see what I can do about reproducing the issue.

------
hypertexthero
I watch over people’s shoulders as they use the application on various devices
and make notes when they get frustrated.

~~~
pc86
How does watching people give you any actionable information "about
reflows/repainting, layout trashing and overall performance on the client
side?"

~~~
dabockster
Uhhh... Don't worry about it if the users are happy?

I mean, unless you're allocating 1GB RAM for a text editor, I say don't worry
about it if the users aren't noticing it. It's super easy to nitpick your own
work to the point of wrecking what you've done right.

EDIT: Oh, in Chrome.

Well, then it depends on what you built and how many libraries you decided to
use.

------
noahcollins
My team's approach has been to automate real devices and report on any
regressions in terms of JavaScript evaluation, layout, paint, garbage
collection, etc. Shameless plug: the team is growing and we're hiring an
experienced front-end dev.
[https://nordstrom.taleo.net/careersection/2m/jobdetail.ftl?j...](https://nordstrom.taleo.net/careersection/2m/jobdetail.ftl?job=270593)
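
The reporting side of a setup like that can be as simple as comparing each trace metric against a stored baseline with a tolerance. A minimal sketch (the metric names and the 10% threshold are illustrative, not the team's actual tooling):

```javascript
// Flag metrics (e.g. scripting/layout/paint ms from a device trace)
// that regressed more than `tolerance` relative to a recorded baseline.
function findRegressions(baseline, current, tolerance = 0.1) {
  const regressions = [];
  for (const [metric, base] of Object.entries(baseline)) {
    const now = current[metric];
    if (now !== undefined && now > base * (1 + tolerance)) {
      regressions.push({ metric, base, now });
    }
  }
  return regressions;
}

// findRegressions({ layout: 100, paint: 50 }, { layout: 130, paint: 52 })
// reports only the layout regression; paint is within tolerance.
```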

~~~
t1o5
Oh no, Taleo, the corporate ATS black hole where resumes go to die. I'd rather
not apply with companies that use Taleo because of various past bad
experiences, the first being that the resume and all the details I type into
Taleo go into HR oblivion. The problem of job applicant tracking & recruitment
is not yet solved.

~~~
cdubzzz
I've recently been through a number of these damn things and have found
SmartRecruiters[0] to be pretty solid. Clean UI and easy application flow. The
companies using it have been good about actually responding to applications as
well, which is always nice.

[0] [https://www.smartrecruiters.com/](https://www.smartrecruiters.com/)

------
joshribakoff
I open chrome's task manager, which shows total memory usage & CPU usage for
each tab in a sortable table. Also in dev tools, I activate the FPS meter
overlay. Then I do things in the app to stress it which will depend on your
app. If it maintains 60fps through all use cases, you don't need to go any
further really. I also use the app for an extended period of time (or
programmatically simulate that) and ensure memory doesn't creep up
continuously, and that CPU is at 0% or close to it when idle.
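
That memory check can be semi-automated: sample `performance.memory.usedJSHeapSize` (a Chrome-only, non-standard field) at intervals while exercising the app, then look at the trend. A sketch; the sampling helper is browser glue, the trend check is plain arithmetic with made-up thresholds:

```javascript
// Returns true if heap samples trend steadily upward: the last sample
// exceeds the first by more than `slack` and most steps are increases.
function looksLikeLeak(samples, slack = 1.2) {
  if (samples.length < 3) return false;
  const rising = samples.slice(1)
    .filter((v, i) => v > samples[i]).length;
  return samples[samples.length - 1] > samples[0] * slack &&
         rising >= (samples.length - 1) * 0.7;
}

// In-browser sampling (Chrome only): push a sample every few seconds
// while exercising the app, then feed the array to looksLikeLeak().
function sampleHeap(samples) {
  if (typeof performance !== 'undefined' && performance.memory) {
    samples.push(performance.memory.usedJSHeapSize);
  }
}
```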

If it drops below 60fps in a way that's an issue, open the performance/timeline
tab in Chrome dev tools & record while repeating that action a few times. In
this tab you can also throttle the CPU, or you could just do a bunch of
CPU-bound work on your main thread every 30ms to simulate a slower CPU.
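
That "CPU-bound stuff every 30ms" trick can be sketched as a busy-wait on a timer; the 10ms/30ms numbers are illustrative, and DevTools' built-in CPU throttle is usually the cleaner option:

```javascript
// Burn the main thread for `ms` milliseconds to simulate a slower CPU.
function busyWait(ms) {
  const end = Date.now() + ms;
  while (Date.now() < end) {
    // spin; nothing else can run on this thread meanwhile
  }
}

// Steal ~10ms out of every 30ms from the page's main thread.
function startFakeSlowCpu() {
  return setInterval(() => busyWait(10), 30);
}
// clearInterval(startFakeSlowCpu()) restores normal speed.
```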

From there you'll know what the issue is, n+1 calls into a jquery plugin,
memory leaks, garbage collections, it even shows paint issues pretty clearly.

From there, you can dig deeper but it depends on your app & frameworks used.
For example if you're using react, you might go into react dev tools or redux
dev tools. For angularJS apps you could use batarang to try to see what watch
queries you have, etc.

For paint issues, use the checkbox in devtools to highlight paints; you can
easily see at a glance if the whole UI is being repainted. If so, it means your
virtual DOM library is detecting changes when it should not. Or maybe you have
a legacy jquery app that just builds up a huge string and does div.innerHTML =
string, which should be rewritten to use a virtual DOM library.

For reflows, I also use the performance tab, it shows up as "recalculate
style" or something like that. Usually it means you're doing $(div).width() or
something similar inside a loop & you can just cache the value to fix it.
Again, it depends on your app. If you have drag & drop widgets & you have 9000
widgets on a page, you're going to have some jank if you're binding jQuery
draggable 9000x. You can use optimizations like not binding until the user
mouses over each widget. To get rid of the jank, one strategy is to make the
jank happen in smaller pieces over time instead of all at once on page load or
component mount.
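
The cached-measurement fix can be generalized: do all DOM reads first, then all writes, so the browser forces at most one layout instead of one per iteration. A plain-DOM sketch (jQuery-free; the element list and style property are illustrative):

```javascript
// Layout thrashing: interleaving read and write forces a reflow each pass:
//   els.forEach(el => { el.style.height = el.offsetWidth / 2 + 'px'; });
// Batched version: one layout for all the reads, then writes only.
function batchResize(els) {
  const widths = els.map(el => el.offsetWidth);   // read phase
  els.forEach((el, i) => {                        // write phase
    el.style.height = widths[i] / 2 + 'px';
  });
}
```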

------
brlewis
Chrome dev tools investigation is decent, and even lets you throttle network
bandwidth to see the effect of slow connections. It doesn't let you throttle
CPU, though, so an important part of identifying performance problems is to
try your app out on a cheap or old phone.

~~~
feifan
CPU throttling is now available:
[https://plus.google.com/+AddyOsmani/posts/NRsAqshb17n](https://plus.google.com/+AddyOsmani/posts/NRsAqshb17n)

It's in the controls near the top of the Performance tab

~~~
brlewis
Oh that's funny. I just saw it in "Highlights from the Chrome 61 update" and
came here to correct myself.

------
z3t4
I've been using Typometer (1) which measures time from input to output. I
think it's important to measure from outside the program. It would be ideal to
have a high-speed camera setup that takes 1000 images per second pointed at
the monitor, and to test from several devices, old ones in particular.

1) [https://pavelfatin.com/typometer/](https://pavelfatin.com/typometer/)
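
An in-page approximation of the same measurement (less honest than a camera, since it misses compositing and display lag) is to compare an event's `timeStamp` with the time of the next animation frame. A sketch, assuming a browser context for the listener; the percentile helper is plain arithmetic:

```javascript
// Approximate keypress-to-next-frame latency in ms. Treat it as a
// lower bound: it can't see GPU compositing or monitor refresh delay.
function measureInputLatency(onSample) {
  document.addEventListener('keydown', (event) => {
    requestAnimationFrame(() => {
      onSample(performance.now() - event.timeStamp);
    });
  });
}

// Summarize samples: worst-case latency matters more than the average.
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  return sorted[Math.min(sorted.length - 1,
                         Math.floor((p / 100) * sorted.length))];
}
```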

------
cvshane
MachMetrics - it's basically a private WebPageTest instance on better
hardware, which runs your tests on a recurring schedule, with nice graphs to
show trends.
[https://www.machmetrics.com](https://www.machmetrics.com)

------
brudgers
I wonder how much business value (versus technical value) there is in those
metrics.

~~~
beckler
Quite a lot, actually.

I went to the NY Velocity conference back in 2014, and these are from my
notes. Unfortunately, I don't have any sources to link back to, except I
believe this was the session where my notes came from:
[https://conferences.oreilly.com/velocity/velocityny2014/publ...](https://conferences.oreilly.com/velocity/velocityny2014/public/schedule/detail/36041)

Also an obligatory warning that correlation is not always causation.

\- Walmart.com: for every second improvement in page load time they
experienced a ~2% conversion increase.

\- Shopzilla: sped up pages from 6s to 1.2s, increased page views by 25%,
increased revenue by 12%.

\- Yahoo: increased traffic by ~9% for every 400ms of improvement

\- Mozilla: made pages 2.2s faster and resulted in 60m more downloads.

\- On average ~57% of users will abandon a site that hasn't loaded in ~3
seconds (again, I don't know the source).

\- When a page takes ~8 seconds to load, conversion rates start to drop by
40-60% (and again, I don't know the source).

\- In a 2012 EEG study (sorry, I don't know which one), they throttled a
desktop connection from 5 Mbps to 2 Mbps and found that about half the
participants had difficulty concentrating and finishing the task asked of
them.

\- From 2013-2014, the median page size has grown ~67% in just a year. About
50-60% of the size is from images (again, I don't know the source).
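
Taking the Walmart figure at face value (and it is only a correlation), a back-of-the-envelope calculation makes the business case concrete. All the inputs below are made-up example numbers:

```javascript
// Rough annual revenue impact of a load-time improvement, using the
// ~2% relative conversion lift per second quoted above and assuming
// revenue scales with conversion rate.
function estimatedLift(annualRevenue, secondsSaved, liftPerSecond = 0.02) {
  return annualRevenue * secondsSaved * liftPerSecond;
}

// A hypothetical $10M/year store shaving 1.5s off load time:
// estimatedLift(10e6, 1.5) → $300,000/year
```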

~~~
brudgers
I get what you are saying. My doubt is because most businesses are not Walmart
(etc).

What matters at the scale of a large retailer (so large as to dwarf Amazon) is
not what matters at most places. It's not just that Walmart is dealing with
fungible goods; it is also that:

\+ Walmart's online store is dealing with legacy pre-internet database
architectures on the back end.

\+ Walmart's online store is competing with Walmart's brick and mortar stores
when it comes to architecting IT infrastructures and allocating resources and
the brick and mortar stores eclipse Walmart's etailing.

Because I think Yahoo is pretty good, having been consistently profitable, I
will forgo the softball snark of "Why would a tech company want to be like
Yahoo?" But there is a real question of whether the best practices of the
companies you list are appropriate for a business in a narrow market.

Going further, if it is an area of concern, choose technologies and page
architectures that don't run the risk. Angular and React were developed to
meet the needs of companies at the scale of Google and Facebook, not a shop
where the collective knowledge does not include _experts_ at page repaint
metrics.

------
acejam
We run an internal private instance of WebPageTest

------
indescions_2017
Headless chrome opens the window to Visual Debugging. See

[https://screenster.io/what-is-end-to-end-testing-and-is-there-a-smarter-way-to-automate-it/](https://screenster.io/what-is-end-to-end-testing-and-is-there-a-smarter-way-to-automate-it/)

------
jboles
With a stopwatch and notepad

