
Web Vitals: essential metrics for a healthy site - feross
https://blog.chromium.org/2020/05/introducing-web-vitals-essential-metrics.html
======
masswerk
Ironically, the page took more than a minute until first paint on an old iPad.
(I even restarted the poor thing, thinking the HTTP stack had somehow died.)
Please mind that not everything is evergreen and that there's not much reason,
from a user's perspective, to scrap perfectly working equipment just because
devs decide to go with the newest standards only. (Think green computing,
which includes electronics waste. This also works both ways: a robust
implementation is more likely to work a few years from now on a then-newer,
probably much different browser engine.)

------
henriquez
These are useful performance indicators for sure, but I don't understand how
these are different from the metrics already built into Chromium's
"Lighthouse" page speed audit (the "Audits" tab in the inspector).

Is the difference now that they will be more heavily weighed in search engine
ranking? The article wasn't super clear on this, and the explanatory links
seemed to push towards drinking a lot of Google kool-aid (eg. to measure the
Core Web Vitals you must use Chrome UX report, and to do that you must use
Google BigQuery, and to do that you must create a Google Account _and_ a
Google Cloud Project. Wow.)

On closer inspection this whole thing is just a thinly veiled advertisement
for GCP. No thanks.

~~~
cramforce
Lighthouse is a lab-metric system while Web Vitals is a "real-user-metric"
system. Both approaches are valid, and have their pros and cons.

Generally, lab metrics are convenient because you can run them whenever you'd
like, including before deploying to prod. But they can never give you the
ground truth of what is really happening out there, which real-user metrics
provide.

E.g. the FID metric only makes sense when someone actually interacts with your
site. Lighthouse cannot know when people would interact and thus cannot
calculate the metric.
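
In the field, FID can be read from the `first-input` performance entry. A minimal sketch (the observer part is browser-only; the helper just subtracts the two timestamps the entry exposes):

```javascript
// FID = gap between when the user first interacted (entry.startTime)
// and when the main thread was free to start handling it
// (entry.processingStart).
const fidFromEntry = (entry) => entry.processingStart - entry.startTime;

// Browser-only: report the first input's delay once it happens.
if (typeof window !== 'undefined' && 'PerformanceObserver' in window) {
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      console.log('FID (ms):', fidFromEntry(entry));
    }
  }).observe({ type: 'first-input', buffered: true });
}
```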

~~~
henriquez
Thanks for the clarification, that makes sense.

It is ironic that the vast majority of these performance issues are caused by
render-blocking and CPU-sucking JavaScript, and so the recommended approach is
to install yet another client side JS library
([https://github.com/GoogleChrome/web-vitals/](https://github.com/GoogleChrome/web-vitals/)), which appears to only
work in Chrome, and then use the library to post client side performance data
to Google Analytics (though to be fair it could be any analytics).

"How badly are all your JS libraries slowing down your page? Find out with
this one weird JS library!"

This seems like one of those things where if you get to the point where you
need this, you're already in too deep.

~~~
igrigorik
@cramforce nailed it. One thing I'll add: I would strongly encourage everyone
to collect "field" (real user measurement) data for each of these metrics via
their own analytics, as that'll give you the most depth and flexibility in
doing root cause analysis on where to improve, etc. The mentions of CrUX and
other Google-powered tools are not to create any dependencies, but to help
lower the barrier to entry for those that may not have RUM monitoring already,
or will need some time to get that in place. For those users, we offer
aggregated insights and lab simulations (Lighthouse) to get a quick pulse on
these vitals.
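
A hypothetical sketch of what "reporting to your own analytics" can look like: `/vitals` is a placeholder endpoint, and the metric shape (`name`, `value`, `id`) mirrors the callback object the web-vitals library passes to its handlers:

```javascript
// Hypothetical RUM reporter: "/vitals" is a placeholder endpoint.
function serializeMetric(metric) {
  return JSON.stringify({
    name: metric.name,   // e.g. "LCP", "FID", "CLS"
    value: metric.value, // ms for LCP/FID, unitless score for CLS
    id: metric.id,       // unique per page load, for aggregation
  });
}

function reportVital(metric) {
  const body = serializeMetric(metric);
  if (typeof navigator !== 'undefined' && navigator.sendBeacon) {
    navigator.sendBeacon('/vitals', body); // survives page unload
  } else if (typeof fetch !== 'undefined') {
    fetch('/vitals', { method: 'POST', body, keepalive: true });
  }
}
```

Owning the endpoint is what gives you the depth for root-cause analysis: you can attach whatever dimensions (page, device, A/B bucket) you need.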

------
compacct27
These are great, but increasingly none of our web app's performance issues
relate to any of these metrics.

Memory leaks, slow typing, UI interactions like opening a modal that shouldn't
be taking 8s+, where's the literature on how _those_ affect user satisfaction?

~~~
dfabulich
> _slow typing, UI interactions like opening a modal that shouldn't be taking
> 8s+, where's the literature on how those affect user satisfaction?_

That's right there in the post. Google calls these things "input delay." One
of the three primary metrics called out in Google's post is FID, "first input
delay," which is the delay on the first user interaction.

(Subsequent input delays are also bad for users, but subsequent input delays
are usually only as bad as the first input delay.)

~~~
kaycebasques
> (Subsequent input delays are also bad for users, but subsequent input delays
> are usually only as bad as the first input delay.)

I'm not sure about this statement. It sounds reasonable but I can't recall
seeing any research about FID being a proxy for all input delay throughout the
entire duration of a session. I would hypothesize that optimizations that
improve FID would also tend to improve input delay in general. My main message
here is just that I haven't seen that research.

~~~
igrigorik
Input delay is bad, period.

It's not a matter of first vs. rest, but an observation that input while the page
is loading is, often, where most of the egregious delays happen: the browser
is busy parsing+executing oodles of script, sites don't chunk script execution
and yield to the browser to process input, etc. As a result, we have FID,
which is a diagnostic metric for this particular (painful) user experience
problem on the web today.

Note that Event Timing API captures _all_ input:
[https://github.com/WICG/event-timing](https://github.com/WICG/event-timing).
First input is just a special case we want to draw attention to due to the
reasons I outlined above. That said, we encourage everyone to track all input
delays on their site, and it's definitely a focus area for future versions of
Core Web Vitals -- we want to make sure users have predictable, fast, response
latency on the web.
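
Tracking all input delays with the Event Timing API mentioned above looks roughly like this sketch (browser-only; `durationThreshold` is in ms, and entries faster than it are not delivered):

```javascript
// Delay between the input happening and the handler getting to run.
const inputDelay = (entry) => entry.processingStart - entry.startTime;

if (typeof window !== 'undefined' && 'PerformanceObserver' in window) {
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      // entry.name is the event type, e.g. "pointerdown" or "keydown".
      console.log(entry.name, 'input delay (ms):', inputDelay(entry));
    }
  }).observe({ type: 'event', durationThreshold: 16, buffered: true });
}
```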

------
splitrocket
Layout Shift is one of my biggest pet peeves.

Especially in search interfaces.

~~~
SketchySeaBeast
Of which Google is a prime offender. 20 minutes ago I repeatedly clicked a
Google ad because it popped in right where my search result had been.

~~~
anderspitman
Is there a name for that little suggestion box that slides into place, and
more importantly is there a way to turn it off? I've trained myself to wait a
couple seconds before clicking on links because it's popped under my cursor so
many times.

~~~
ken
I wrote a CSS rule to hide it, and then Google changed their DOM structure so
it broke. Then I updated my user stylesheet, then they broke it again.

I don’t have the energy to continue, so I’m left with that classic software
dilemma: good underlying technology, or good UI.

------
ArtWomb
This is cool, thanks for shipping! Is First / Largest Contentful Paint
registering WebGL draw calls? My use case is a single page (up to 4K
resolution) that contains a single canvas element and 3D context. Time to
scene rendered to user is obviously of interest.

And any prospect for a full featured WebGL inspector / debugger in future?

------
nicbou
To be frank, I don't care about a 500ms faster site if I spend 15-20 seconds
navigating a convoluted GDPR notice, dodging newsletter prompts and looking
for a one-line answer in a sea of ads and SEO filler text.

I would define essential metrics very differently:

- How fast can the users find the answer they are looking for?

- What percentage of user interactions benefit the users?

- How much information does the website collect about the users?

------
adamcharnock
This seems to be failing the X-Forwarded-For validation on my site. I expect
one X-Forwarded-For header entry on my (Heroku hosted) site, but requests from
the measure tool [1] seem to have two entries in this header.

Example log entry:

    2020-05-05T23:00:07.144973+00:00 heroku[router]: at=info method=GET path="/en/" host=redacted.com request_id=a0c8605f-e67a-4b48-9538-c6bafebaaaaa fwd="107.178.238.42,66.249.84.103" dyno=web.1 connect=1ms service=4ms status=400 bytes=294 protocol=https

Is this expected behaviour?

[1] [https://web.dev/measure/](https://web.dev/measure/)

~~~
sickmate
[https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Forwarded-For#Directives](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Forwarded-For#Directives)

> If a request goes through multiple proxies, the IP addresses of each
> successive proxy is listed. This means, the right-most IP address is the IP
> address of the most recent proxy and the left-most IP address is the IP
> address of the originating client.
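
Applied to the log entry above, a small helper (a sketch, not Heroku-specific) makes the split explicit — the left-most hop is the client, so two entries simply means the measure tool's request passed through one extra proxy:

```javascript
// Split an X-Forwarded-For value: the left-most entry is the originating
// client; each later entry is a proxy the request passed through.
function parseForwardedFor(header) {
  const hops = header.split(',').map((s) => s.trim()).filter(Boolean);
  return { client: hops[0], proxies: hops.slice(1) };
}

console.log(parseForwardedFor('107.178.238.42,66.249.84.103'));
```

So the 400 is likely a validation rule assuming exactly one entry; validating only the left-most value would tolerate extra hops.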

------
tobr
Great, three new TLAs that people will soon be throwing around in Medium posts
without defining them.

------
speedgoose
The images are cut on mobile.

------
dmitriid
What good are all these metrics, and measures, and approaches, and techniques
when Google's own websites en masse wouldn't give two craps about them?

Physician, heal thyself.

~~~
kaycebasques
What Google sites are you referring to and what "do as I say, not as I do"
behavior are you seeing? I'm not doubting you, I'm just asking for specific
URLs so that I can forward the feedback to specific teams.

~~~
dmitriid
Gmail. Youtube.

Google.com is 34 requests and 2 MB of resources. That page contains an image
and an input box.

The original web.dev they rolled out was something like 15 megabytes in size
(thankfully, they fixed that).

Google Domains is 1.3 MB of JS for what is essentially a static site.

Their recent announcement about some advanced video compression they did (I
immediately forgot which, but it was on HN): 55 MB with a 3-second video on it.

Material design. Just the front page is 3 MB of resources. Of that, 1.3 MB is
JavaScript. For a static page.

I can agree though that they have very slowly been getting better with some of
their public properties. When we start talking about
internal/private/customer-oriented pages (the GCP console and Gmail
immediately come to mind), they are just horrendously awful.

