I agree with you: before optimizing performance somewhere near the HTTP layer, there is often much that can be done at the front-end or back-end level. Loading web fonts can actually block a browser until the font has loaded (and often makes the page much harder to read in the meantime) - it is not just that the user has to wait for loading to finish; the browser sometimes freezes completely (this happens occasionally with Opera, for example), which is a bad user experience.
"Perceptible latency" requires some people who will load the page and judge whether it was fast, slow, or sluggish. In my experience this is often dismissed as too expensive or too much work. In fact, it takes just a few people with different browsers and internet connections to get a good idea of whether a website feels fast or slow - BUT: this is not something that is measured precisely.
The TTFB given by the tests mentioned in the article, on the other hand, looks like really hard data, measured exactly in fractions of a second. It can be plotted nicely in a graph, for example.
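To show why TTFB feels like "hard data", here is a minimal sketch of how such a number might be collected, using only Python's standard library. The helper name `measure_ttfb` is mine, not from the article, and note the caveat in the comments: `getresponse()` returns once the status line and headers have been parsed, so this is an approximation of the true first byte.

```python
import http.client
import time

def measure_ttfb(host, path="/", port=80):
    """Approximate time-to-first-byte: from sending the request until
    the first response bytes (status line and headers) have arrived."""
    conn = http.client.HTTPConnection(host, port, timeout=10)
    start = time.perf_counter()
    conn.request("GET", path)
    # getresponse() blocks until the status line and headers are parsed,
    # so this timestamp is slightly after the literal first byte.
    resp = conn.getresponse()
    ttfb = time.perf_counter() - start
    resp.read()   # drain the body so the connection can be reused/closed cleanly
    conn.close()
    return ttfb
```

A single precise-looking number like this is easy to log and graph, which is exactly why it tends to displace the fuzzier "does it feel fast?" question.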
Another way to find out whether a website is fast or slow is to ask the visitors. On one large website I created, there was a survey with some open questions about the new site. One open question was: what do you like about the website? More than 90 percent of the visitors who filled out the survey answered that they had noticed how fast the website loads.
- page render time server side
- time until the user sees something meaningful (you ought to be able to measure this with a headless browser if you have some idea what the first thing is that you want the user to see) - this will get you what people probably think they're getting from the TTFB metric.
- full page load latency
- data transfer latency
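The last two items in the list can be separated with a single instrumented fetch. A rough stdlib sketch, assuming a plain HTTP URL and my own metric names (`ttfb`, `transfer`, `total` are illustrative, not standard terms):

```python
import time
import urllib.request

def fetch_timings(url):
    """Split a full page fetch into first-byte latency and body-transfer
    latency - two of the metrics listed above."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as resp:
        # urlopen() returns once the response headers have been parsed,
        # so this approximates time-to-first-byte.
        first_byte = time.perf_counter() - start
        body = resp.read()            # download the full body
        total = time.perf_counter() - start
    return {
        "ttfb": first_byte,           # server + network latency to headers
        "transfer": total - first_byte,  # time spent moving the body
        "total": total,               # full page load latency (this request only)
        "bytes": len(body),
    }
```

Seeing `transfer` dwarf `ttfb` (or vice versa) tells you whether to spend effort on the server or on payload size - something a TTFB number alone cannot.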
TTFB might be useful if you have data suggesting your web server is a significant bottleneck, but I wouldn't gather it as a matter of course when trying to optimize page load times.