
This is powered by Google Lighthouse, with the benefit that it runs via a web UI instead of a DevTools audit, which is both good and bad.

Good because Lighthouse has some reasonable best practices to follow and a few good performance timings, so lowering the barrier to entry is nice.

Bad because many of Lighthouse's best practices aren't always applicable (our major media customers constantly say "stop telling me I need a #$%ing Service Worker!"). And while Speed Index and Start Render are great, Time to Interactive, First CPU Idle, and Estimated Input Latency are still fairly fluid/poorly defined, and of differing value.

This all also overlooks the value that something like the Browser's User Timings provides (Stop trying to figure out what's a "contentful" or "meaningful" paint, and let me just use performance.mark to tell you "my hero image finished and the CTA click handler registered at X"), which Lighthouse doesn't surface up.
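To make that concrete, here's a minimal sketch of what a page could record itself with the User Timing API (the selectors, mark names, and handler are hypothetical):

  // Hypothetical milestones recorded with the User Timing API.
  function handleCtaClick() { /* ... */ }

  document.querySelector('img.hero')?.addEventListener('load', () => {
    performance.mark('hero-image-loaded');
    // Time from navigation start to the mark; any tool reading User Timings can report it.
    performance.measure('time-to-hero-image', undefined, 'hero-image-loaded');
  });

  document.querySelector('button.cta')?.addEventListener('click', handleCtaClick);
  performance.mark('cta-click-handler-registered');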

What is interesting is the monitoring side. WebPageTest, Lighthouse, PageSpeed Insights, YSlow, etc. are just point-in-time assessments, which are largely commoditized. Tracking this stuff over time and extracting meaningful data from it is valuable, so that's pretty cool.
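For what it's worth, a homegrown version of that tracking is often just the Lighthouse node module run on a schedule, with each score appended to a log. A rough sketch, assuming the standard lighthouse and chrome-launcher packages (the URL and output file are placeholders):

  const fs = require('fs');
  const lighthouse = require('lighthouse');
  const chromeLauncher = require('chrome-launcher');

  (async () => {
    const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
    const result = await lighthouse('https://example.com', {
      port: chrome.port,
      onlyCategories: ['performance'],
    });
    // Append one row per run so the score can be graphed over time.
    const score = result.lhr.categories.performance.score * 100;
    fs.appendFileSync('perf-history.csv', `${new Date().toISOString()},${score}\n`);
    await chrome.kill();
  })();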

Disclaimer: I work in the web performance space. People replace homegrown Lighthouse, Puppeteer, or WPT instances with our commercial software, so I'm biased. That said, I like a lot of the awareness-raising and trail-blazing Google is doing around what performance/UX means.



Why don't you want to implement a basic service worker? A couple of the cache strategies described here seem like a net positive: https://developers.google.com/web/fundamentals/instant-and-o...

(ex: "Cache, falling back to network" for CDN'd libraries, "Cache then network" for content like news articles)


Because for a normal website, that should be taken care of by the browser, not my custom service worker.

This is XMLHttpRequest all over again.


Standard caching headers should be enough?
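For example (illustrative values), fingerprinted static assets can get a long immutable lifetime:

  Cache-Control: public, max-age=31536000, immutable

while the HTML revalidates on every request:

  Cache-Control: no-cache
  ETag: "<hash of the response body>"

That covers much of what a hand-rolled cache-first service worker would do for a typical site.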


> Bad because many of Lighthouse's best practices aren't always applicable

It's tough to create an auditing tool that caters to the web at large. See my other comment in this thread: https://news.ycombinator.com/item?id=18442686

> This all also overlooks the value that something like the Browser's User Timings provides... which Lighthouse doesn't surface up.

Lighthouse does surface up this info in the "User Timing Marks and Measures" audit: https://developers.google.com/web/tools/lighthouse/audits/us...

But I'm getting the impression that you want Lighthouse to surface up this information in a different way. Please feel free to elaborate.

Disclaimer: I write the docs for Lighthouse. I'm speaking from my general knowledge of the project but haven't vetted these comments with my team. So consider all comments my own.


Oh god, that Service Worker BS. It's like how Yahoo's performance profiler penalized you for not using a CDN on a single-page site.



