Google can't pass its own page speed test (reddit.com)
374 points by EvgeniyZh on June 5, 2021 | 163 comments



>Cumulative layout shift is supposed to measure whether your site shifts around while it's loading, but it's so strict that even Google Translate fails the test.

And so it should be. There's hardly anything worse than having things shift around after you start reading, scrolling or clicking on something.

I think it's a very good sign if Google's own sites fail these tests. It means the tests are meaningful.


Google Search fails the cumulative layout shift test.

I can't count the number of times the "People also search for" box pops up in the middle of my search results 3-4 seconds after load. It's just enough time for me to identify the result I want, move the mouse there, and then proceed to click on the new, different link that just loaded under my mouse.

It's infuriating.


Yeah, I have a custom rule in ublock origin to remove it. It's literally the only custom rule I have, but it happened to me so damn often that it ended up being worth the time to identify the element so I could permanently block it.

In case anyone ever needs it:

    google.com##div[id^="eob_"]
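    ! the rule above hides the "People also search for" box (any div whose id starts with "eob_")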


Thank you so much! This is such a small little quality of life improvement.

And for others trying, you need to just add this line to uBlock Origin's "Filters" tab, not the "Rules" tab.


But A/B testing shows people love those links with that behaviour, they follow those links 73.523% of the time!


You kid but I've seen product teams put interaction metrics like this as OKRs and ruin their product.

"Data driven" development so often forgets about common fucking sense.


A fun story from a friend at Spotify: one metric they tracked was the time between sign up and first song play. Then one team decided to automatically start playing a song after sign up. They knocked that OKR out of the park!


This is the result of someone not understanding why they wanted to measure a thing in the first place. I’m assuming that the original intent was ~”sometimes people sign up for Spotify and don’t play anything for a long time, then cancel, possibly because they weren’t getting much value from it.” I’m guessing that someone then decided “Therefore, time between signup and first play is a factor in retention”, which probably isn’t even wrong, but is likely something to not try to directly optimize, or something that needn’t be optimized to the nth degree. In other words, someone taking 15 minutes between signup and first play probably isn’t retained any more effectively than someone taking 15 seconds. I don’t know the exact term for this concept, but it seems like overall engagement is more what they were trying to capture, and somehow the plot was lost, a project was concocted, and auto-play-after-signup was implemented. OKR optimized, but the actual effect on retention is likely zero. Bad OKR and probably indicative of a culture where no one was asking “why” enough / challenging product / marketing initiatives.

I’m making a lot of assumptions here, but I’ve seen this before in a lot of projects where people get worked into a froth optimizing something that will provide no real value to users.


Great work gang!


It's not common sense; contesting your priors and verifying your assumptions is one of the hardest and most important parts of doing data-driven science.

It's also not surprising that when you take a set of random people without science training, they'll just cargo-cult the most visible parts and forget about the hidden, essential ones. It should also not surprise anybody which part they forget about, since the cargo-cult speech is literally about this exact problem, but with trained scientists (did I say it was hard?) instead of random people.


I think our thoughts aren't mutually exclusive.

Good science > intuition/experience/best practices etc > bad science.

I suspect we agree that a lot of product development is based on bad science. Yes doing good science would be best, but let's at least stop doing bad science.


Oh, the problem is that it's not as simple as intuition/experience/etc being better than bad science. Often enough your intuition is just wrong, and bad science is just right.

A better way is to attach confidence values to your intuition, and to change the required quality of science based on the confidence of the priors you are trying to verify. But then, this is hard too.

The real problem here is that, as you complained, those people just aren't competent enough at the work they are doing. I guess my point is only that this shouldn't be surprising, as it's a hard job.


I can’t find the article but I read a piece years ago about Google algorithmically trying to optimize sign-up links / a button, and the algorithm being in a feedback loop with the A/B testing. It talked about finding the perfect, to-the-pixel placement for a button.

That was when I knew design was dead in practice at Google. There are so many other under-optimized parts of the experience that I have no idea how “if I could only find the perfect position on the screen for a button” became the question someone was willing to throw that level of engineering at. It’s missing the forest for the trees x 10^100.


I think I read about their ‘data driven’ decision on the colour of links.

https://www.theguardian.com/technology/2014/feb/05/why-googl...

The problem with this sort of decision making is that it ignores context and is liable to optimise one metric under study at the expense of other, more important things (like user trust and retention). It also tends to bias towards small, easily measured changes evaluated in isolation, so it encourages blind decisions made without context or coordination.

For example a certain colour might mislead users into clicking more by making a link look like the purple default visited link colour. It makes clicks go up but may not increase user satisfaction.


Did these tests rule out that people just click on links on that area of the page?


That’s the joke.


Realized this too right after I wrote it and left it thinking it's good for clarity and posterity.


The actual motivation or causation doesn't matter. All that matters is that some team or product lead can justify a decision or a promotion, or even just an ideology, using the data in some way.


The YouTube app for Android even does this. I have slow Internet, so I always read the description while the video loads. But often when I go to click the description box to expand it, a banner ad loads in its place and pushes the description down.


That box is obviously and clearly malicious.


I don’t know if I would go that far. Do you have evidence of this?


It's the most used webpage on the planet and for sure their 300 IQ engineers run into this daily.

That entire page probably had tens of thousands of hours of UX work put into it.

If it works like that, it's intentional.


I saw a piece a while back talking about how much each byte of code on the Google homepage costs to deliver. I wondered then why the homepage isn't just integrated into their browser, as in literally ship the code for it and load it from local cache. I guess the URL bar already does that.


I'm starting to think this is on purpose. Sometimes I get tricked twice, after pressing back and immediately going to click the correct link this little bastard materializes under the cursor.


At this point you have to assume that is by design.


It's more likely the people who develop and test and prioritize the features live in Mountain View, with 5G and gigabit fiber with ultra-low latency to the data centers.


That's why the dev tools in browsers have an option to simulate slower data speeds to test this kind of thing. Having good connectivity and failing to provide a usable experience for slower connections is just bad dev work.


Or, it’s a symptom of the ownership of the search experience being smeared out through a large organization.


Sure, but these people must have left that golden zone at some point and had their clicks stolen like the rest of us.


When I lived in Mountain View the absolute fastest internet connection I could get at home was 100Mbps Comcast that was never that fast and had all kinds of bizarre and inexplicable random latencies. My internet is faster and infinitely more reliable since I moved to rural Southern Oregon mid-pandemic.


Hahahahaha, my biggest surprise moving to Silicon Valley right down the street from Google was how even in their local neighborhood, Comcast still had a monopoly on internet access. My apartment offered the choice between Comcast or no internet. Even today, FTTH in the Bay Area is not commonly available and indeed we can’t get it in our neighborhood, only FTTN with no guaranteed speeds. U-verse won’t even install at our address, so we’re stuck with Comcast still. Sonic is over AT&T’s last mile and they won’t install at our address.


Yeah. It is like those ads in Android apps that happen to load where the button you were trying to click was.

It is obvious someone at Google is trying to game some metric with accidental clicks.


I don't understand the downvotes you are getting, considering how they have used such tricks whenever possible.


> 3-4 seconds after load. It's just enough time for me to identify the result I want, move the mouse there, and then proceed click on the new, different link that just loaded under my mouse.

You have in duplex in one small fragment enunciated wonderfully why the rodent is never to be used for anything but video games and image editing.


It took me several tries to understand this comment. In the first pass it looks like the result of some kind of Markov chain text generator.


> And so it should be. There's hardly anything worse than having things shift around after you start reading, scrolling or clicking on something.

Oh my god, this is one of my largest complaints. I'm an impatient person so I often go to click something before the page is done - bad me - and so often JS loads and swaps buttons.

Honestly though I've had that complaint about my OSes, too. The number of times I'm typing and something pops up and steals focus, or I go to click something and moments before, it shifts, causing me to click something else.. ugh.


If I was a native widget toolkit designer or web developer I'd implement a rule that all layout changes must be rendered and on the screen 150ms before input handling can use them. Lag compensation for local user input. If you click a button at the same time it moves, you should still click what was on the screen at the time you pressed the button.

Yes, I know this effectively means having to keep two copies of the widget tree (one for input and one for output) and applying all changes as diffs. Don't care. I'll buy more RAM.
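You can get a crude userland approximation of this today with the Layout Instability API (the same "layout-shift" entries CLS is computed from; Chromium-only last I checked). Rough sketch, not battle-tested:

    // Swallow clicks on elements that shifted within the last 150 ms.
    // (Only checks the exact click target, not ancestors -- good enough for a sketch.)
    const QUIET_MS = 150;
    const lastShift = new Map<Node, number>();

    new PerformanceObserver((list) => {
      for (const entry of list.getEntries() as any[]) {
        for (const source of entry.sources ?? []) {
          if (source.node) lastShift.set(source.node, performance.now());
        }
      }
    }).observe({ type: "layout-shift", buffered: true });

    document.addEventListener("click", (e) => {
      const target = e.target as Node | null;
      const movedAt = target ? lastShift.get(target) : undefined;
      if (movedAt !== undefined && performance.now() - movedAt < QUIET_MS) {
        e.preventDefault();   // the thing under the cursor just moved; ignore the click
        e.stopPropagation();
      }
    }, { capture: true });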


When I switched to Mac, that was one of the most refreshing things about OSX: apps weren't allowed to steal focus.

Sadly, that's no longer true.


Is that a very recent change? I don't know that I've ever noticed that on macOS, but I started using this OS around Lion, I think


I know something worse: websites that force you to use their app, providing links that don’t actually take you to that content in the app. Two examples with the same experience: Instagram and Reddit. They block functionality on the mobile web pages in Safari, pop up large buttons to get you to use the app, then those buttons take you to the iOS App Store page even though I already have each installed, then clicking “open” there just opens each app to the default home page, NOT the content I was trying to access but was subsequently blocked from accessing.

Result: I decide that I don’t care about that content anyway and just go do something else. Trying to log in via the mobile site, forgetting my password, going through the reset flow, logging in again because my pw reset happened in Gmail’s in-app Safari context and didn’t save cookies to main Safari, all while being hounded constantly that “it’s better in the APP!”…just keep it.

Pinterest is uniquely bad because the thing you see the photo of doesn’t actually take you to the place where that came from. Add in all the ads, the page wasting tons of space with spacing / white space / low density, the placement on Google for so many things, and I cringe whenever I see a Pinterest result. Quora isn’t much better.

Amazon and eBay at least take you to the right place with their “open in app” bars at the top of the page. Not an iOS dev but I’m assuming they’re using some special Safari APIs to do that instead of useless generic App Store links that don’t pass through context.

I really wish Google / Apple would punish sites for doing this, since it leads to such a bad UX. Please think about that, marketing and product people: I literally already have your apps, but you’ve made the mobile web page so annoying in trying to get me to use the app, that any interesting content I do find via SERP I discard instantly as inaccessible due to the hassle of trying to search for that same content through your apps, because you’ve intentionally broken the mobile web experience.


At my second job around 2004 my boss was very strict about not having the page move on load; it was a big deal. But over the years, especially recently, I've noticed so many sites and apps that shift constantly, and it seems that it's not a priority anymore.

Just yesterday, opening the Amazon app, and the "Orders" button shifted before I could click it!


Web development has gone a long ways backwards over the years. The tech is so complicated that developers focus on their own productivity over basic user experience. It feels like it's been a long long long time since I've even seen a web article focus on good UI, instead it's all about how to handle all the layers of complexity or new tooling.


Is your old boss still around? How is his site?


Google's pages are some of the worst for this.

It's not hard to build pages without these problems, but everyone is so caught up in fancy web design that they ignore UX.


On the other hand, it means that Google has stopped caring about UI performance. Of course, anyone who’s used Google Groups lately could tell you that.


My theory is that they test this on their 10 Gbit company network, and high-end laptops, and it works great. So it's hard to convince anyone that it's a problem.


In the office we are strongly encouraged to try our stuff with the WiFi network degraded to perform like EDGE (apparently nobody cares about GPRS any more). Obviously that hasn't been exercised much since last spring.


Most Google offices have multiple wifi networks for simulating different kinds of degraded connectivity.


I'm not sure it's reasonable to jump from "Google Translate has a jump" to "Google has stopped caring about UI performance".


As I say, compare Google Groups today to Google Groups 10 years ago. Or gmail. Or Calendar. Though Groups is possibly the best example, as it went, more or less overnight, from one of Google's snappiest properties to about its most sluggish.

Google used to be very concerned about UI performance, but they seem to have totally lost interest in it.


The priority is ad display performance; search or news are just an addition to the actual business.


or Gmail for the last 10 years


> There's hardly anything worse than having things shift around after you start reading, scrolling or clicking on something.

Most sites that fail for CLS don't have any CLS that's visible to the user.


YouTube does this notorious thing where the ad above the first search result appears after the results, so you accidentally click the ad.


It's almost as if they get paid per click on that ad...

There's no incentive for them to fix this, unfortunately. Everywhere else on YT there's a nice gray box that keeps the place of a UI element until that element loads.


I don't think anyone would disagree but, as things stand, you can have a page where things DO NOT visibly 'shift around' but still fail the test.

I feel sorry for all the developers who are going to be inundated with complaints because their client's sites have failed yet another perceived important benchmark.


Visible to whom?

Web developers often have no clue how their page appears to others, because they are developing locally, and the production site is close and has few network issues.

So a lot of developers that think everything is fine end up having no clue what most people experience.

When web fonts first started seeing widespread use, I would very frequently see 2-10 second delays when all content had loaded except for the font, including massive images, but I couldn't see any text. When I complained about this trend in the web to web developers I knew, almost none of them even seemed to believe it was possible, or at worst I was just a very unlucky user.


It should be obvious to a developer that he/she should test outside of this closed environment, but I agree that it's possible many do not.

I stand by the statement that you can still fail the test despite having content that does not 'visibly' shift though.


So what would cause this kind of shift?


Gmail shifts almost every time I click on the Promotions tab. I go to click my top unread email and instead end up clicking an ad.


"I think it's a very good sign if Google's own sites fail these tests. It means the tests are meaningful."

I agree, that appears to be a good sign. I am curious to what degree Google will reward fast sites and punish slow ones. I wonder if they are willing to make big shifts in the rankings.


I feel like there should be additional constraints. For example, a page that looks as simple as the Google search page should have a limit on how much code it needs to get there. Basically a bloat alarm, lighting up in red, once bloat is detected. And it would be redder than red for Google itself and things like YouTube.


I think sometimes Windows Settings (the modern Settings app) fails at this too.


> It's causing a fair amount of panic, because 96% of sites fail the test.

Good. That sounds like a realistic estimation of the number of slow and bloated websites. What good are nice animations and designs when they destroy the UX?

You'll never see someone gaming in 4k with hardware that can't render it with more than 15FPS. Yet we see that kind of "tradeoff" every time we browse the web. Users get loading times for the site itself, then processing of JS to redundantly do the browser's job of arranging the 20 layers of <div>s, then loading animations for the actual contents, and then a couple seconds after that you might get the first thumbnails.

And I'm absolutely not surprised Google's pages fail this test as well; everything from Google Images to YouTube got increasingly worse with every new design iteration, with slower loading times as well as an increase in breakage.


The amount of bloat in modern websites has always amazed me. I remember the first computer I ever had, a hand-me-down from my parents with 512MB of RAM and a single-core 1.6GHz CPU (yes, I'm a zoomer and wasn't even born in the good ole days of dialup internet and Windows 95), and all websites I visited ran just fine. I could open many browser tabs and do all the things one normally does in a website. The only main difference, maybe, is that video playback nowadays is done at much higher resolutions and bitrates. And web apps were a very new (or maybe even non-existent) concept. But still, nowadays I see my web browser using 1GB+ of memory with a few tabs open containing some newspaper articles and perhaps a couple of other misc non-media-heavy websites.

This is madness. When not using an ad blocker, the amount of data that a regular website loads that's not relevant to what you need to read (i.e. not the text and images) is huge. I can understand why some complex web apps like Google Docs or whatever the cloud version of MS Office is called may be quite a bit more resource intensive than a magazine article, but there is no reason why a newspaper or cooking recipe site should use memory in the hundreds of megabytes, when the useful content itself that the reader cares about is maybe (with images included) a couple megabytes in total.


The memory requirements for graphics changed dramatically.

A screen at 1024x768, 16 bit color is 1.5 MB.

A screen at 3840x2160, 24 bit color is 24 MB. 32 MB if using 32 bit color.
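That's just width × height × bytes per pixel:

    // framebuffer size = width * height * bytes per pixel
    const mib = (w: number, h: number, bytes: number) => (w * h * bytes) / 2 ** 20;
    mib(1024, 768, 2);  // ~1.5 MiB  (16-bit color)
    mib(3840, 2160, 3); // ~23.7 MiB (24-bit color)
    mib(3840, 2160, 4); // ~31.6 MiB (32-bit color)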

Add to it that graphics became more plentiful with increased bandwidth, and low color GIFs are out of fashion, and you very easily see the memory usage grow by several times just from that fact alone.

Older operating systems also didn't have a compositor. They told the application: "this part of your window has just been damaged by the user dragging another window over it, redraw it".

Modern operating systems use a compositor. Every window is rendered in memory then composed as needed. This makes for a much nicer experience, but the memory cost of that is quite significant.

Take a webpage, and just give the mouse wheel a good spin. It should render lightning fast, which probably means the browser has a good chunk if not all of the page pre-rendered and ready to put on the screen in a few milliseconds. This also is going to take a lot of memory.


32 MB is nothing. Unless you have hundreds of open windows this does not account for gigabytes of memory usage from the browser.


Point is, as resolutions and color depth increased, the amount of memory needed for graphics grew by several times. So a switch from 512MB being enough to several GB being needed is almost unavoidable on account of that alone.


What I am saying is that graphics are not using much of that memory and probably can't be blamed for the increased requirements.

Back in the days, maybe, as 1.5 MB was a significant portion of available RAM. However the average available RAM on computers has grown much faster than the average display size.


They very much can.

Try the old Space Jam site, a wonderful example of the old web: https://www.spacejam.com/1996/

That's a total of 101 kB.

Now try reddit.com/r/spacejam: 13 MB. A single image on there is a megabyte worth of JPEG. It's 3024x4032 in size, which is 36MB in uncompressed RGB.


That all happens on the GPU. Do task managers show memory for both?


What do you mean, "on the GPU"? Where do you think the GPU gets the textures?

I'm not familiar with DirectX, but OpenGL manages textures invisibly behind the developer's back, to the point that it's difficult for a developer to find out how much VRAM there is. OpenGL wants to invisibly manage VRAM on its own, which means that every texture you have exists at least twice: Once in RAM, and once in VRAM. And very possibly 3 times, if the application keeps the original texture data in its own buffer for a bunch of reasons.

So when you look at google.com, that Google logo probably exists in memory at least 3 times: probably as a RGB bitmap (the actual image object the browser works with), in RAM managed by the graphics driver (in whatever format the graphics card likes best), and then on the card's VRAM possibly. It could be more, like if the browser can apply some sort of color correction or other transformation and therefore keeps both the original and retouched version. The original PNG is also probably in the cache, and there exists the possibility of extra copies because some particular part of the system needs images to be in some specific format. Graphics are memory hungry, and it adds up fast.

The nice thing about this is that your GUI doesn't get a whole bunch of horrible artifacts if you hook up a 4K monitor to a laptop that allocates 128MB VRAM to graphics. The 3D rendering layer simply makes do with what VRAM there is by constantly copying stuff from RAM as needed, with the applications not noticing anything.

The bad thing is that this convenience has a cost in RAM. But really, for the better. Can you imagine the pain it would be to program a GUI if every single application had to allocate VRAM for itself and could break the system by exhausting it?


> a hand me down from my parents with 512MB of ram and a single core 1.6GHz cpu

Wow. I'm feeling old. My first computer ran at 1MHz with 16K of memory. :)


An empty Google Docs document (read-only) is 6.5MB, making 187 requests…


The thing I hate most about the modern web experience is something else - somehow, it seems to have become fairly standard to reflow content between a tap and when it registers, but not to use the coordinates from before the reflow.

This seems like such an obvious and elementary bug that I have wondered if I'm missing something. Like, maybe it's deliberately done to increase the number of accidental clicks on ads.

Then again, maybe it's just a side effect of everything being threaded and asynchronous these days.


> When not using an ad blocker

why would any sane person ever do this?

oh right, mobile.

so let me ask a different way: why would any sane person ever browse the web on any platform that does not have effective ad blocking?


My mom's first response after I deployed AdGuard Home in her network was "I don't know what you did but my phone feels faster" lol


I don't mind ads, only have an issue with performance and tracking. My browsing habits are such that very occasionally I stumble upon a website full of ads and it's startling.


Mobile.


Friends don't let friends use anything but Firefox with uBlock Origin, on Android... Sorry iPhone users :-/


On iPhone, I use Wipr which handles blocking the majority of ads.


NextDNS is also an option: like a Raspberry Pi setup without the hassle. A combination of 1Blocker and NextDNS works pretty decently.


My site didn't initially pass the test, and the fix indeed improved the experience on slower devices.


I'm actually surprised how well YT comes out in this. The page is a dumpster fire:

Before Polymer (the current YT framework), the YT video page weighed somewhere around 50KB (10KB compressed) and was ordinary HTML + a 1MB JS player (400KB compressed). As soon as the HTML part loaded, the page was all there.

Now it's 600KB (100KB compressed) of JSON + an additional 8MB desktop_polymer.js (1MB compressed) that needs to be compiled/interpreted before it even starts building the page client-side and anything starts showing up. The 1MB JS player is on top of that.


It truly is a terrible site in terms of performance. Streaming HD content used to be the "hard problem", but now it's displaying the site hosting the videos that's the issue.

You can stream HD content on the lowest end device, or hardware almost 10 years old. The same hardware just isn't powerful enough to let you use the YouTube website in a performant way. I cannot fathom how YouTube doesn't see that as a problem.


How many people, especially among the best advertising demographics, have 10 year old hardware? I have had 6 computers in that timeframe and 4 phones.


Running a 2012 Mac Mini as a HTPC. It's been wonderful for anything h.264 and has no problems even playing 4K files (h.264, it chokes on 265 of course).

But YouTube is increasingly becoming unusable to the point where I just youtube-dl what I want to watch in advance.


I'm amazed at how much slower Youtube has gotten in the past couple of years. That fake paint stuff is terrible too.

Here's something to try, resize youtube horizontally and watch your browser grind to a halt. At least in the case of Chrome for me.


YouTube even fakes FCP by initially showing a lot of dummy grey boxes until they have any idea of what to really show.


These so-called skeleton screens are a common technique and not inherently bad.

When you know an element's exact dimensions but not its contents and have more important things to serve first, it's completely valid to use one.

It gets infuriating when these grey boxes animate about but then decide not to display anything anyway and just collapse. Or when they load and load and load and don't account for network errors. Or when the box size has nothing to do with the size of the element it's a placeholder for.
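The non-infuriating version is basically reserving the element's final size up front and handling the failure case, something like this (the element id, endpoint and height are made up):

    // Reserve the slot's known final size before the async content arrives,
    // so nothing shifts when it lands. Names and URL here are placeholders.
    const slot = document.getElementById("related-items")!;
    slot.style.minHeight = "360px";   // the widget's known final height
    slot.classList.add("skeleton");   // grey placeholder styling

    fetch("/api/related")             // hypothetical endpoint
      .then((r) => r.json())
      .then((items: { title: string }[]) => {
        slot.innerHTML = items.map((i) => `<div class="card">${i.title}</div>`).join("");
      })
      .catch(() => {
        slot.textContent = "Couldn't load this section.";  // don't just collapse silently
      })
      .finally(() => slot.classList.remove("skeleton"));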


> These so-called skeleton screens are a common technique and not inherently bad.

The technique may not be bad by itself, but it's so common mostly among super bloated website behemoths that every time I see those skeleton screens I automatically prepare myself for another 5 seconds of loading.

It's the modern web equivalent of seeing an hourglass cursor on an underpowered Vista machine - not inherently bad but usually a bad omen.


Can't argue with that.

It's easier to randomly place grey boxes around a page than to address the real problems. Plus you get to lecture people with terms like "perceived performance".


Did we forget zombocom so quickly? A big part of the joke was that loading screens (of any kind) on sites are really stupid.


Link in case anyone doesn’t know what this is referring to (I think)

https://www.zombo.com/

(Appears to have been upgraded to HTTPS and other modernities!)


"Do not trust any statistics you did not fake yourself."


It's astonishing how much worse they made it - and intentionally made it worse in browsers without Web Components, like Firefox. Forcing the old non-Polymer website was like hitting the turbo button on an old PC.


It wasn't Web Components (Firefox supports those [0]); it was Shadow DOM v0, the original deprecated Chrome-only predecessor to Web Components. Except it has been removed from Chrome now, so I don't think this is an issue anymore.

[0]: https://caniuse.com/custom-elementsv1


Well, that would be why it's so much slower than it used to be! HN is one of the last bastions of fast sites :(


Has anyone observed a similar increase in overhead after Ionic migrated from Angular components to Web Components?


I don't take this as "Google doesn't care about UX" or performance, as some comments suggest. Google is a large company, and it's not one unified team working on various projects in exact sync.

That said, as Google will start promoting non-AMP content that passes Core Web Vitals, it's become a bigger deal.

I work in media and CLS is a big problem. Most publishers don't come close to passing. As of writing only 5 out of the 80 I track score above 80! (Out of 100, and higher is better)

The publication I run Product/Engineering for hovers around 84 to 85 and we don't have ads.

Full list: https://webperf.xyz/

To save you a click, the top 5 are:

  Rank Site       Score Speed-Index FCP   LCP   Int   TBT
  1 ThoughtCo     87    1856        1.6 s 2.0 s 6.5 s 345 ms
  2 The Markup    86    3621        2.4 s 3.5 s 3.5 s  95 ms
  3 Rest of World 84    3154        1.9 s 4.0 s 4.3 s  79 ms
  4 Investopedia  81    2009        1.6 s 1.9 s 6.5 s 552 ms
  5 The Spruce    80    1877        1.3 s 1.9 s 6.7 s 634 ms
Bottom 5 are...

  Rank Site       Score Speed-Index FCP   LCP    Int    TBT
  77 CNN          11    31408       6.2 s 35.7 s 70.5 s 17,666 ms
  78 Seattle PI   10    27902       6.4 s 14.6 s 57.1 s 11,687 ms
  79 SFGate        9    41064       7.4 s 31.1 s 96.7 s 24,437 ms
  80 NY Mag        8    18222       6.0 s 10.8 s 41.1 s  7,157 ms
  81 Teen Vogue    8    22549       3.4 s  9.1 s 42.3 s  8,968 ms
If you want to help me get our Score to 95 or higher, I am hiring frontend developers :)

https://news.ycombinator.com/item?id=27358113


Even Google has its own SEO teams, and they are not immune to the developers messing up.

And you need technical SEOs, not more front-end devs cranking out the framework du jour.


restofworld.org is living up to its name and is accessible in the rest of the world.

Good on them


That's very nice of you to say. We're trying our best but have more work to do.


Surprised there’s anything lower on that list than SFgate :)


It's funny you mention that - I worked on an SFGate experiment for optimizing ads and web performance in 2017.

I got it down from an 85s load time to a 15s load time, and the Speed Index score from 21,000 to 7,000. Ad revenue went up by 35% too (better UX as well).

Then some jackass on the business side signed a deal for an auto-play video player on every page and the project was killed.

https://docs.google.com/presentation/d/12ds0b4nTxzcDy23te0Zm...

It is a bit sad to see where it is now. The potential was there.


Thank you for the work you put into improving SFGate.


One of the frustrating things about being a web developer is that I can do a ton of advanced optimization to the point of code splitting my js with dynamic import() or doing page level caching, lazyloading, and splitting non-critical CSS. But other departments of my org can squash it by adding a chat plugin with a disgusting level of resource abuse which obliterates all the advances I've made.

My approach is to just explain what I can or can't do, and explain the trade-offs for everything. I'll give the decision-makers the info but I'm not going to be the curmudgeon going to war over something the marketing people say we need to survive.
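For what it's worth, when I do get a say in how a third-party widget loads, deferring it until the browser is idle claws back most of the damage. Rough sketch, with a made-up module path:

    // Load the heavy chat widget only once the browser is idle (or after a
    // timeout), instead of competing with the initial render. "./chat-widget" is made up.
    const loadChat = () => import("./chat-widget").then((m) => m.init());

    if ("requestIdleCallback" in window) {
      requestIdleCallback(() => loadChat(), { timeout: 5000 });
    } else {
      // Safari has no requestIdleCallback; fall back to "a bit after load".
      window.addEventListener("load", () => setTimeout(loadChat, 3000));
    }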


DNS resolution 12ms

First-byte time 64ms

CSS & JavaScript 148ms

Analytics 3,157ms

Images 250ms

someone who is good at the web please help me budget this. my store is slow


Well clearly we can't remove analytics, and if we changed analytics to a different (faster) provider someone would have a cow... We could lower our image sizes, but let's be honest that would result in the dreaded JPGing. Our Javascript is a hand crafted wonder that we just rebuilt for the 6th time in the past decade in a new modern better framework so clearly that cannot be the problem. Therefore I'm leaning towards adding a CDN to lower the first byte time, or yelling at our DNS provider to speed up their side. /s


Just load the analytics async, ezpz


Sad-yet-amusing fact: apparently most of the W3C member [0] websites can't pass the W3C HTML validator [1], and haven't for some years now (possibly it has been the case for as long as both have existed). With that in mind, failures to pass fairly complex tests for things conceived recently, which not everyone is supposed to follow anyway, are far from surprising.

[0] https://www.w3.org/Consortium/membership.html

[1] https://validator.w3.org/


For once, I think this is a good initiative from Google.

Most websites are over-bloated and this is a good incentive to move the web in the right direction.


Would you also think of it as a good initiative if the US government demanded the change?

I don't know. To me it sounds a bit like "you can only be in the AppStore if your app meets these requirements".

If Google was pure in its reasoning, they would allow slow pages to still be listed high in search results for people who don't care about page load times.


> for people who don't care about page load times.

And I am sure there is an incredibly high percentage of people who love nothing more than slow web sites.


If they are ad-free and tracking-free, then you can give me slow web sites.


For some strange reason ad-free and tracking-free sites are usually faster...


There is an enormous difference between a demand from a government that would be backed up by the threat of violence and anything Google is doing. You are free to ignore Google without any risk of being shot or imprisoned.


Yep, the only possibility is that your business might be destroyed, you might be bankrupt, and your family needs to start again from zero.


It seems like this is the case. There is a link to a more informative article in the Reddit post that has a spokesperson from Google who says that if you’re the most relevant, you’re still going to be ranked well.

Article link: https://www.ntara.com/google-core-web-vitals/


Ok, let's extend this. If two pages are equally relevant and equally fast, will they show me the page with the fewest advertisements first?

And why is speed more important than number of advertisements?


Fortunately, speed is almost always inversely proportional to number of ads! So the two mostly go hand in hand, for sites with similar content.


I think the best approach would be to show a small indicator of page load times in the search results so everyone can decide for themselves what they prefer. Then these preferences would influence what is shown on top, creating an optimisation feedback cycle.


I tried a couple pages from SourceHut, which famously prides itself on its fast performance.

The projects page (dynamic, changes with user activity on the site) https://sr.ht/projects: 100

A patch page https://lists.sr.ht/~sircmpwn/sr.ht-dev/patches/23162: 99

A source page https://git.sr.ht/~sircmpwn/pages.sr.ht/tree/master/item/ser...: 100

SourceHut often "feels" a bit slower in that the browser does a full navigation whenever you click, but the absolute time is definitely low and I applaud them for building such a fast (and accessible!) experience.


I've been doing a lot of this work over the last month and LCP has been the hardest metric to move by far. I ended up dropping AdSense on mobile entirely from my site, which was ironically the biggest factor in performance degradation across a number of the stats they track.


With FLoC, the page can get user interests without loading AdSense or using 3rd-party data brokers.

That may help reduce work on page load.


Does it have to load early though? If possible you can load it on demand after the page has fully loaded.
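Something like this, if you're injecting the tag yourself (the script URL is a placeholder):

    // Only inject the ad script once the page has fully loaded.
    window.addEventListener("load", () => {
      const s = document.createElement("script");
      s.src = "https://example.com/ad-tag.js"; // placeholder for the real ad tag URL
      s.async = true;
      document.head.appendChild(s);
    });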


My site's not huge and mobile ads were only ~$150/mo. I might revisit it later, but for now I'm willing to take the path of least resistance.


I like Nolan Lawson's take [1] that this is a good thing, at least when it comes to the test itself: they're willing to push other teams at Google to improve (rather than lowering the bar for everyone else).

I guess you could argue that they should've done that pushing internally, but that's really a concern only for Google itself. As long as it works, it's a win for users.

[1] https://toot.cafe/@nolan/106358723424552836


It infuriates me when Google search result pages jump around as I'm trying to click on links!


While improving performance is a good goal, chasing after perfect scores in Google tests sometimes leads to increased complexity of code, bugs on older browsers and even bigger website size.


Goodhart's law works here too: When a measure becomes a target, it ceases to be a good measure.

One should consider all factors before making a change and not blindly target the test score.


A valid criticism is that Google's own products will presumably not be penalized in search results for these low scores.

I'm happy to be wrong on that assumption, of course, but I think it's a reasonable one to make, and it severely dampens my willingness to agree that Google deserves praise for setting a high bar that it must itself struggle to reach in order to provide real value to users.


This limit of 4 seconds, is that on some specific hardware? Or does Google make this relative to the user's hardware? (I.e., the user who is doing the search)


The desktop performance targets are easy to hit, but mobile tests for Lighthouse are rough: "Simulated Fast 3G" network throttling and a 4x CPU slowdown.

I don't know anyone in the US who is still on 3G, and modern mobile CPUs are not 4x slower than their desktop counterparts.


Guess what, Google does not sell mostly to the US!

Also, for the majority of the world, modern mobile CPUs are far more than 4x slower than desktop ones.


If we broaden "desktop" to include laptops, and assuming we're talking about common/mid-range consumer devices rather than e.g. spec'd out gaming machines, GP's point seems to hold up.

It's still wild to me that the 2013 MacBook Pro that was my daily driver until recently is neck-and-neck on Geekbench with both my Pixel 5 (whose CPU is considered mid-range) and the old iPhone 7 that I use as a test device. It's decisively slower than every iPhone since version 8.

If we move ahead to modern desktops: it looks like iPhones have typically been only 20 - 25% slower than iPad Pros released in the same years, and this year's iPad Pro literally has a desktop processor in it (not even just a laptop processor, now that the iMac uses M1 too).

Based on that, in order for your claim to be true, the majority of the world outside the US would have to be using outdated or very low-end mobile devices and/or modern souped-up desktops that blow the average American's machine out of the water.

Some googling shows that a popular phone in India is the Redmi 8, which is pretty low-end even compared to the Pixel 5, and scores about half the 2013 MBP at multi-core and slightly above 25% at single-core. If the average owner of a phone like this also happened to own a modern (not 8-year-old) mid-range consumer laptop, I could see 4x being overly optimistic.


A) This level of vitriol directed at internet strangers is not super healthy. I hope you find something that helps you chill out a bit.

B) I didn't say they did, but not every product needs to be concerned with the rest of the world. Websites in English targeted at a US market and charging US Dollars for their products probably don't care much about the average mobile processor speed in India (anecdotal source: me).

I would guess that most SaaS businesses are primarily accessed on desktop computers during the work week, but I bet they're now collectively spending millions-if-not-billions of dollars in dev time to make their landing pages load faster on Indian mobile phones for users who are unlikely to ever visit or become customers.

(I pick on India because my site gets a lot of Indian traffic, but feel free to swap in the developing nation of your choice.)


I wouldn't read "guess what" nearly as harsh as you're taking it.


I’ve struggled with this. Lighthouse is a great tool, but the things it dings me most on are Google Ads, Google Fonts, and Google Maps.


As a daily user of Google's Cloud Platform web UI, this doesn't surprise me in the least!


I was willing to cut some slack because I'm sure some operations actually take time on the back end (e.g. creating a VM), but even browsing read-only pages makes my laptop fan spin up like no other website does.


I'm sure they do, but when doing something like removing an instance from an Instance Group you can never be 100% certain that it actually was removed without several refreshes.

Switching to a different project can give you, for example, a list of instances belonging to the previous project.

It's just not what I'd expect from Google considering what they pay engineers.


This is also the case if your site has doubleclick ads or similar, they are the slowest part of the page by a significant amount.

In my experience a site got a lighthouse score of ~50 with doubleclick ads enabled, and a score of 100 with my network ad blocker enabled.

Truly infuriating that they penalize you for using their own solutions. And of course G has no support to speak of to help report or remedy the problem.


It's been a long time since I've looked at integrating ads on a website, but there are far better ways to do it than just throwing up a script tag nowadays. Have you looked into those?


Not totally surprising. A lot of things Google is officially opposed to concerning their "Better Ads Coalition" are things you routinely see in Google Search and Google Display Network.

From the outside, Google looks more and more like an indifferent, multi-headed monster with competing agendas by the day.


Direct link: https://www.ntara.com/google-core-web-vitals/

Dang might want to fix it if he reads this.


It's just not your computer anymore - in countless ways. Processing user input should come before anything else; second is displaying what the user wants. To state the obvious: you should be able to click around in a menu (that doesn't jump all over the place) and switch between content before it is fully loaded. Almost nothing does this. You have to use things like <a> and <form>, features from back when it was your computer.


I wish google would optimize Adsense for Core Vitals. My site gets a 100 score without ads and a score of 80 with Adsense. I’m not willing to give up the ad revenue.


There is a long-standing bug in lighthouse where out-of-process iframes are counted against the main thread. This causes major degradation in lighthouse scores. Even YouTube’s own iframe embed is affected. You could have a perfect-scoring page, put a YouTube embed on it, and the score will be decimated.


It would be nice if any of the web vitals were actually tied to conversion or bounce rates.

I have used the web for a long time and how long a page takes to load generally does not matter to me. Perhaps I am alone in that? No amount of JavaScript squishing will matter if the network from you to the origin site is slow to start with.


The cumulative amount of popups on Google websites and apps has been unbearable for quite some time.


I really don't understand how poorly some websites and browsers seem to do, even in environments where they have no right to have issues - ones with sufficient or high amounts of everything from CPU, GPU and RAM to network...


It's a sad thing, what the current Web has turned into. Google Docs spreadsheets are taking 2 to 3 minutes to load on my laptop. IMHO, people shouldn't drag everything into the browser just because they can.


I've noticed that the Cumulative Layout Shift test reports failures when scrolling in Monaco Editor and other VList implementations. Monaco has buttery smooth scrolling even in giant files.


no website hoses a browser / laptop faster than the adwords dashboard

I looked into it once -- it's some kind of shitty CPU bound loop in dart-to-js logic

we're talking 10-second pauses on an older mac, full crashes on a linux laptop prone to OOM. This is for a site that does one or two basic crud actions. Few other websites do this and adwords is the most consistent offender, i.e. the worst.


Alternatively one department's product conflicts with another department's tool.


Is there somewhere I can paste my website URL to test it with Web Vitals?




I wonder where they measure from, as it claims >1 second of loading time for what is ~500ms in my browser.

And my browser is also absurdly slow, now that I look at it. The HTML at https://lucgommans.nl is 978 bytes, the CSS 2KB, and the ping 39ms. With DNS+TCP+TLS+HTTP, the HTML downloads in 159ms. And then it waits. And waits. Then, after 134ms, it decides that DOM content has been loaded and fires the DOMContentLoaded event (there is no JS on the page). And then it waits some more, until after waiting 177ms it decides to start downloading the CSS which finishes in 44ms. And then it waits again for 40ms, downloads the 33KB background image in 80ms, and finally clocks the total loading time at 508ms.

How fast should it be? The loading times are 283ms altogether (sequentially), how long does parsing 3KB of data take on a modern CPU? Close to zero right?

I remember that around 2007-2015 (iirc), quite regularly there would be news that browser X had optimized the DOM parser, or rewritten the JavaScript engine, or cut off some time with this or that. What even is going on here - did all this go out the window with the multi-process browser where it has to talk to other processes to get things done, or what's up with all this waiting? Does anyone know?



Maybe you're forgetting to test with a clean cache? Completely anecdotal, but I visited your site a few times on my phone and 2 loads with a cleared cache were _way_ slower than the 2 without clearing the cache.


Developer tools in Chrome is the main one; at scale, use Screaming Frog.



