As far as I understand, this code is part of the anti-adblocker code that (slowly) constructs an HTML fragment such as `<div class="ad-interrupting"><video src="blob:https://www.youtube.com/..." class="html5-main-video"></video></div>`. It detects the adblocker once the `ontimeupdate` event hasn't fired for 5 full seconds (the embedded webm file itself is only 3 seconds long), which is the actual goal of this particular code. I do agree that the anti-adblocker attempt itself is still annoying.
I couldn't reproduce the 5s wait in Firefox in multiple scenarios (various combinations of being logged in / not being logged in, with / without an adblocker); playback started immediately in each case (when testing without an adblocker, I used a second video so one would start without an ad). I tested on Linux.
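Based on that description, here's a minimal sketch of how such a detection could work. Everything below is illustrative, not YouTube's actual code; the function name, element structure, and probe URL are all hypothetical.

```javascript
// Minimal sketch of the detection described above: play a short probe
// clip and assume an adblocker if `timeupdate` never fires within 5s.
function detectAdblock(probeUrl) {
  return new Promise((resolve) => {
    const container = document.createElement("div");
    container.className = "ad-interrupting";
    const video = document.createElement("video");
    video.className = "html5-main-video";
    video.muted = true;
    video.src = probeUrl; // e.g. a blob: URL for the 3s webm clip

    let timer;
    // If playback progresses, nothing blocked the "ad" video.
    video.addEventListener("timeupdate", () => {
      clearTimeout(timer);
      resolve(false); // no adblocker detected
    }, { once: true });

    // If timeupdate never fires within 5s, assume it was blocked.
    timer = setTimeout(() => resolve(true), 5000);

    container.appendChild(video);
    document.body.appendChild(container);
    video.play();
  });
}
```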
What exact combination of circumstances is required to trigger the multi second wait time?
I just tested this in Firefox on Ubuntu. An initial load followed by three subsequent new-tab tests:
Load: 4.34s, 5.14s, 2.96s, 3.35s
DOMContentLoaded: 3.65s, 4.56s, 2.92s, 3.33s
Finish: 13.14s, 10.77s, 8.49s, 12.02s
So it's getting a bit faster over time, but still heinous, and crucially, it isn't hanging on requests. Individual asset GET/POST requests are taking tens of ms, worst was a few parallel 254ms GETs on a cold start. Usually 50-70ms. But there is a flurry of requests, then a period of very few requests until 5s after init, then another flurry.
Same OS, Chrome 115.0.5790.170, no blockers: YouTube is much snappier. It still definitely takes a few seconds to paint thumbnails, but it's basically done by 5s. DOMContentLoaded is never more than 1.75s, finish <8s.
A Firefox private window with blockers off has similar timing statistics. But requests to doubleclick.net are actually still getting bounced.
I tested in Firefox (uBlock), LibreWolf (uBlock), Safari (AdGuard), and Chromium (no ad blocker), and the initial home page load takes a couple seconds, but I never witnessed a 5s delay. I would say it was actually fastest in Firefox for me, but that may have just been a result of some caching. I am a premium subscriber and have never seen a warning for using an ad blocker, so I'm not sure if premium subscribers get a pass.
Probably because there are other methods for Chrome that don't apply to Firefox.
Like when I noticed that some sites did some URL-rewriting trickery on Firefox and other browsers, but not on Chrome. The trick is to show you the proper URL the link points to, but as you click, it is substituted for one that is a redirection, for tracking purposes (e.g. "https://l.facebook.com/l.php?u=http://actualsite...").
On Chrome, they don't need these tricks, as the browser supports the "ping" attribute on links, so they can do their tracking without rewriting the URL.
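Roughly, the two approaches look like this side by side. The selector, tracker URL, and event choice are made up for illustration:

```javascript
// 1) Chrome path: the `ping` attribute sends a background POST to the
//    tracker on click, so the visible href never has to change:
//    <a href="https://example.com/article"
//       ping="https://tracker.example/log">...</a>

// 2) Rewrite path for other browsers: display the real URL, then swap
//    in a redirector at the last possible moment.
document.querySelectorAll("a.tracked").forEach((link) => {
  link.addEventListener("mousedown", () => {
    link.href = "https://l.facebook.com/l.php?u=" +
      encodeURIComponent(link.href);
  });
});
```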
This kind of BS is why I don't ever click on links directly. I copy/paste them instead, so I can examine and trim them. Often, the actual link is through some sort of redirection service and I need to copy/paste the text the browser shows for the link rather than the actual link.
There's so much trickery and nonsense around this stuff that no link is safe to just click on.
You actually don't need any dedicated extensions for that, as this functionality is built into uBO; you just need to find a filter list (just search for "ublock origin clearurl list" or whatever).
I've also noticed this behavior popping up a lot lately, but I had no idea why. The URL with tracking included was still blocked by uBlock Origin, but having to manually copy-paste the relevant portion was an annoyance.
I have no idea, because I didn't experience anything like that in either Chrome or Firefox (both with uBO, though). But I'm confident that this particular code is not related to the actual slowdown, if it did happen to some Firefox users, because I received the same code even in Chrome.
This is just an anecdote, but sometimes (especially when I'm on slower internet) Safari + AdGuard will glitch [0] on YouTube. It never happened with Firefox + uBlock Origin.
[0] Unable to press play, and showing an image with an ad instead.
I experience the same glitch and I like it, because you can just reload the page (Cmd-R) and then the video starts. So if you're used to it, you can skip ads in less than a second, and you don't get annoyed by the ad sound/video, just an image.
When they first introduced anti-adblock crap, you could evade the banner by switching UAs. I'd say it's fair to assume that switching UAs triggers some other code path and this function never gets called.
I'm not even mad about Google making me artificially wait 5s for using Firefox.
I'm mad that such a big company, with supposedly decent engineers, is making me wait 5s with literally a sleep. How is it even possible to do such a thing in such a rudimentary way? If it were clever I'd be like "damn, that was smart"; instead this feels like... seriously, this is the level?
IMHO, this kind of thing is not done by engineers.
* Marketing/Sales asks engineers to add a feature flag to sleep N milliseconds for their research: "how slowing down impacts your revenue"
* Engineer adds a flag, with different control parameters
* Some genius in Product figures this out and updates the experiment to slow down for competitors
* When the company gets backlash from the public: "oops, we forgot to clean up all the parameters of the feature flag and it accidentally impacted Firefox"
Google stopped testing stuff in Firefox; that is all they did, AFAIK. We all should know how many bugs and "oopsies" you get when you don't test before releasing new features: test code snippets being pushed to prod, etc.
Engineers tend to create paper trails on what they work on; code reviews, bug logs, etc. are everywhere. So I doubt there is any record where they say "make things shit for Firefox to hurt our competitors"; that would net them an easy loss in court. But not testing in browsers with small userbases will hold up in court.
Firefox has a small userbase partly because of the early "oopses" described in the article I linked. Those happened a while ago, when Firefox had more users than Chrome.
But they referred to behaviour that was present pretty much from the start. It's just that Mozilla folks were extremely tolerant and assumed good faith for a very long time.
Google have been disgustingly anticompetitive for a very, very long time at this point.
Yeah, one of the biggest examples being the HTML5 video push and Chrome's claims around H.264: Google promised they were going all-in on WebM and would remove H.264 support soon, but never did. That meant Firefox users not only got more errors outright, but also that for years even sites like YouTube would leave Firefox pegged at 100% CPU, laptop fans on high, doing software WebM decoding while Chrome users got hardware-accelerated H.264. That became moot after Mozilla and Cisco struck that deal and hardware acceleration for other video formats shipped, but there was a multi-year period where Firefox suffered badly in comparison to other browsers.
Another person is claiming that Google writes custom code for Firefox (or other browsers) to enable tracking, because of the feature difference between Firefox and Chrome [1]. Only one of you can be correct.
The company is big enough for both of them to be correct.
I have firsthand knowledge that Cloud, for instance, did not test regularly directly on Firefox. Team couldn't justify the cost of setting up and maintaining a FF test suite to satisfy 1 in 50 users, so they didn't (and nobody up-chain pushed back on that). Testing was done regularly on Chrome, Safari, and Edge, as per the usercounts and "top three browser" guidance (at the time, we didn't even test regularly on mobile, since there was a separate mobile client).
But the analytics team? I'm sure they test directly on Firefox. They're just testing an entirely different piece of the elephant and their end-to-ends won't include how, for example, changes they make interoperate with Firefox in the context of Cloud. Or YouTube. Or etc. Not unless they have a specific reason to be concerned enough to.
Google's like any other megacorp in the sense that costs of cross-team coordination are combinatoric.
Very good point. It's important to recognise that developers in many companies are often not fully aware of the intended use of features they're asked to create.
My initial reaction was astonishment that the engineers would happily implement this. And maybe that is what happened. But the alternative possibility is that product and senior management assigned different parts of the feature to different teams, e.g. one team develops a pattern-recognition system to detect users' professions, another team develops a spoofing system for use in demos, etc.
They have done such research before; Google published this at a time when developers were all "100ms more or less of web load time doesn't matter". Since then webpages have become much more focused on performance.
The prevailing developer discussion going from "load speed doesn't matter, stop complaining about useless stuff" to "load times matter, but here we choose to make it slow for other reasons" is a massive improvement, though. Today speed is valued; it wasn't back then.
There are many such tests being written about in blogs today. So now a developer can get time to optimize load times based on those blog posts, whereas before, managers would say it was worthless.
Of course it always mattered. But at the time lots of people argued it didn't matter, which is why the headline is "Speed matters". You thinking it did matter at the time doesn't mean the general community thought so.
But the general community did care about speed. Everyone worked towards small load times, optimized (for example) image size for optimal load time, everyone cared.
Not so hard to believe tho. I work on a product that has parametrized feature flags. This means that, from a web interface, someone can say things like "activate feature X, on machines running operating system Y, at version Z, and are running product version W with license type Q". This is not a hard thing to build, and once you have it you can mix and match filters without being a software engineer or knowing how it works behind the scenes.
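Something like this, roughly. The flag shape and every field name below are invented for illustration:

```javascript
// A minimal sketch of a parametrized feature flag check, along the
// lines described above. Anyone with access to the flag UI can
// retarget the experiment by editing `filters`, without touching the
// code that reads `params`.
const flag = {
  name: "playback_delay_experiment",
  enabled: true,
  filters: { os: ["Linux"], browser: ["Firefox"], minVersion: 100 },
  params: { delayMs: 5000 },
};

function flagApplies(flag, client) {
  return flag.enabled &&
    flag.filters.os.includes(client.os) &&
    flag.filters.browser.includes(client.browser) &&
    client.version >= flag.filters.minVersion;
}

// Example: flagApplies(flag, { os: "Linux", browser: "Firefox",
// version: 118 }) returns true, so this client gets the 5000ms delay.
```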
When the purpose is to abuse your monopoly to further your business interests in another area, being obtuse and convoluted to get plausible deniability is good engineering. This is just sloppy.
I think this is a good example of corporations being made up of people, rather than being contiguous coordinated entities as many of us sometimes think of them.
An engineer doing "good engineering" on a feature typically depends not only on them being a "good engineer" but also on them having some actual interest in implementing that feature.
I would imagine that in a well coordinated company engaging in this kind of thing, the order wouldn't be "slow down firefox", but something along the lines of "use XYZ feature that firefox doesn't support and then use this polyfill for FF, which happens to be slow". Something that doesn't look too incriminating during any potential discovery process, while still getting you what you want.
That's assuming a degree of engineering competency at the product decision making level that is usually absent in companies that are structured as Google is, with pretty strong demarcations of competencies across teams.
Nah, that's got a risk profile. They could implement whatever your strategy is in the next release. You aren't going to necessarily get the longevity of the naive approach.
Plus a Firefox dev would discover that more easily as opposed to this version which they can just dismiss as some JavaScript bug on YouTube's part
That's the beautiful thing: you make the polyfill contingent on the browser being Firefox rather than probing for the feature, and then you "forget" to remove it once they implement the feature.
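The pattern being described, sketched out; `slowPolyfill` and `fastNativePath` are hypothetical names:

```javascript
// Gated on the browser name, the slow branch keeps running even after
// Firefox ships the feature natively, because nobody ever re-checks.
if (/Firefox/.test(navigator.userAgent)) {
  slowPolyfill();
} else {
  fastNativePath();
}

// Feature probing, by contrast, retires itself the day the feature
// lands: if (!("someNewFeature" in window)) { slowPolyfill(); }
```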
But why do you have to be that clever? If you're caught, the consequences are the same regardless, and both implementations would exhibit equivalent behavior.
The only superior approach here would be one that is consistent enough to be perceived but noisy enough to be robust to analysis.
Also it should be hidden on the server side.
Who knows, maybe there are a bunch of equivalent slowdowns on the server side in the Google property space.
Given this discovery, it would probably be reasonable to do some performance testing while changing the user-agent header string of the request.
Google Docs, image search, and Gmail operations would be the place to hide them.
I dunno. How long has it been there without anybody noticing?
5 years? 7? Longer?
No matter how they approached it, you could demonstrate the pattern through the law of large numbers regardless. Might as well make the implementation straightforward.
Using an idle timer, like window.requestIdleCallback [1], is good engineering. If anything, that is not good engineering; it's laziness.
I'm not even a JS programmer, but I know about timers; idle waiting in UI programming is a common pattern. It's the attitude of mediocre engineers not bothering to look up or learn new things.
If every OS/browser/stock-market dev did whatever "because it works", we wouldn't have a working system. We'd have systemic lags making the system sluggish and eventually unusable as more engineers follow the same mantra.
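For reference, the idle-timer pattern being suggested looks something like this; the task queue is a stand-in:

```javascript
// Sketch of the idle-wait pattern: defer low-priority work until the
// browser has spare time, instead of parking it behind a fixed sleep.
// (requestIdleCallback is not available in every browser, e.g. Safari.)
const tasks = [() => console.log("low-priority work")]; // stand-in queue

window.requestIdleCallback((deadline) => {
  // Only run while the browser reports idle time remaining.
  while (deadline.timeRemaining() > 0 && tasks.length > 0) {
    tasks.shift()();
  }
}, { timeout: 2000 }); // upper bound: run within 2s even if never idle
```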
"It works" is The high engineering bar and it's the hard one to hit.
Oftentimes it's replaced these days with imagined complexity, ideological conformity or some arbitrarily defined set of virtues and then you get a really complicated thing that maybe works some of the time and breaks in really hard to understand ways.
Transcompiled frameworks inside of microservices talking to DBMS adapters over virtual networks to do a "select *" from a table and then pipe things in the reverse direction to talk to a variety of services and providers with their own APIs and separate dependencies sitting in different microservices as it just shepherds a JSON string through a dozen wrapper functions on 5 docker containers to just send it back to the browser is The way things are done these days. This is the crap that passes for "proper" engineering. Like the programming version of the pre-revolutionary French Court.
A simple solution, fit for purpose, that works as intended, easy to understand, remove, debug and modify with a no-bus factor, that's the actual high end solution, not the spaghetti stacked as lasagna that is software haute couture these days.
Sometimes, in practice, the dumb solution can also be the smart one. True mastery is in what you choose Not to do.
I agree with the spirit of your comment; I too hate over-engineering. Choosing your battles is an important step toward mastery, yes, but being lazy can't be chalked up to mastery.
In this particular case I disagree with using `sleep`; the idle timer is not as roundabout as what you describe (_transcompiled frameworks inside of microservices talking to DBMS adapters over virtual networks_). It's a straightforward callback: some lower-level timekeeper signals you and you do your thing. It's nowhere close to the convoluted hoop-jumping you describe.
Mastery comes with balance: putting in the optimal effort, not more, but not less either. Of course, it depends on what one is trying to master: the job, or programming. The former means doing the minimum and getting the maximum benefit from your job/boss; the latter means enjoying learning/programming and arriving at the most optimal solution (for no reason, just because you're passionate).
In programming, sleeps are generally considered... (I'm lacking the word)... distasteful?
If your code needs to wait for something, it's better done with some sort of event system or interrupt or similar. The reason is that a 5s wait is a 5s wait, but if, say, the thing you're waiting for returned in 10ms, an event-based solution lets you carry on immediately instead of waiting the remaining 4.99 seconds. Conversely, if it takes longer than 5s, who knows what happens?
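As a sketch of that alternative; the `video` element and the event name are just examples of something you might wait on:

```javascript
// Wait for an event, with the timeout only as a failure bound rather
// than a fixed delay everyone pays.
function waitForEvent(target, event, timeoutMs) {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(
      () => reject(new Error(`timed out after ${timeoutMs}ms`)),
      timeoutMs
    );
    target.addEventListener(event, () => {
      clearTimeout(timer); // fired early: continue immediately
      resolve();
    }, { once: true });
  });
}

// If `timeupdate` fires after 10ms, we continue after 10ms, not 5s.
waitForEvent(video, "timeupdate", 5000)
  .then(() => console.log("playing"))
  .catch((err) => console.log(err.message));
```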
Sure, but assuming we take it at face value that this is a straightforward attempt to force a UX-destroying delay, I don't see what makes this so terrible. It's meant to force a 5-second wait, and it does. Problem solved.
The 5-second wait is the issue, not the means by which it was obtained: a fixed wait time either wastes the user's time (by making things take longer than necessary) or is prone to bugs (if the awaited task takes >5 seconds, the end of the timer will likely break things). The better question is _why_ a 5-second wait was deemed necessary, and there's almost certainly a better way to handle that need without the fixed wait time.
OP's point, I think, is that wasting the user's time is part of the point of the code. This specific code seems partially meant as a punishment for using an adblocker.
That's somewhat in debate, the last I saw. The initial report was that it affected a user on Firefox, and it didn't when they switched user agents. Since then, there have been reports of users not seeing it in Firefox but seeing it in other (even Chromium-based) browsers. So it seems likely they are A/B testing it, but it's less clear whether they are intentionally targeting non-Chrome browsers.
Their goal, quite clearly, is to prevent (or at least heavily discourage) adblockers. This is one attempt to detect them, and maybe in Chrome they have a different detection mechanism so it doesn't show the same behavior.
It would be a particularly foolish move on their part to push Chrome by punishing everything else right now, while they are in the middle of multiple anti-trust lawsuits. It makes me think that is unlikely to be the intent of this change.
They are a lazy man's solution to race conditions that doesn't actually solve the problem of race conditions; it only makes them less likely to cause a problem, at an often extreme cost to responsiveness, as seen here.
You can't directly sleep in JavaScript because it runs on the same thread as the UI; it would block the user from interacting with the page. This is effectively a sleep because after 5 seconds it runs the code in the passed-in function (not firing an event). The code in the function then resolves a promise, which runs other functions that can be specified later by whatever called the one using setTimeout.
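Spelled out, that's the usual non-blocking "sleep" pattern in JS: setTimeout wrapped in a Promise so callers can `await` it.

```javascript
// setTimeout resolves the promise after `ms` milliseconds; awaiting it
// pauses this async function without blocking the UI thread.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function demo() {
  console.log("before");
  await sleep(5000); // the page stays responsive meanwhile
  console.log("after (5 seconds later)");
}
demo();
```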
This is interesting, as I had noticed this happening to me (in Chrome) when the anti-adblocking started. I assumed it was YT's way of still "annoying" me while no ads were shown... It was eventually replaced with the "You can't use adblockers" modal, and now I just tolerate the ads.
So I wonder if that 5s delay has always been there.
When I ran into the adblocker-blocker (Firefox + uBlock Origin), I noticed that I could watch videos if logged out. So I just stayed logged out, and haven't seen an anti-adblock message since. Or an ad.
Added bonus, I'm less tempted to venture into the comments section...
Same. I use Firefox + uBlock Origin + Unhook for a cleaner interface. I also always watch videos in private windows (my default way of browsing the internet), and I manage subscriptions with the RSS feeds of the channels; much better for tracking what I have watched, since the default YouTube homepage does not display the latest videos from your subscriptions.
Edit: I forgot to add SponsorBlock to the list of extensions.
I've been randomly getting a situation where the video on Firefox doesn't work, but the sound does. It says something like "Sorry, something's gone wrong", but for a brief second I can see the video. I think it's connected to the adblocker changes, but it doesn't actually show a message about having an adblocker on.
I'm using Firefox + uBlock Origin logged in, and it works totally fine. Maybe YouTube removed the anti-adblocker for select accounts? I remember I once entertained myself with writing a report in which I sounded like I was sitting in a retirement home and had no clue what's going on with "ad block." Did someone perhaps actually read this?
I think you have simply been lucky. The full story is that uBlock Origin and YouTube have been trying to out-patch each other, with uBlock rolling out a bypass to the filters every one to two days since late October (https://github.com/stephenhawk8054/misc/commits/main/yt-fix....).
Depending on whether you've set uBlock to auto-update, and when you've watched YouTube relative to when the block filters got updated, you might just not have been hit with the latest detectors while they were active. Personally, I know I got the "accept ads or leave" modal with Firefox + uBlock, locking me out completely on one of my devices.
Same here. No problem with the anti-adblock. It was shown to me twice, I googled „YouTube alternatives“, tried Vimeo, and it was nice. Maybe they did register this? :D
I got the you can't use an adblocker message, but was able to close and/or reload the page to watch videos without ads. After a week or so it stopped popping up.
It's still trivial to block ads, but the delay has recently started for me, after never happening before. So presumably a very intentional volley in the ongoing war to own your attention.
I still use adblockers perfectly fine on YouTube. There was never a real interruption in adblocking either. You just need uBlock Origin + bypass paywalls.
Pi-hole doesn't help, but there are various Android TV apps that do block ads. I still prefer the Roku ecosystem, but I switched after they started putting ads in the middle of music videos.
This is happening to me in Chrome as well so I don't think it's tied to the browser you use.
Curiously, it happens only in one profile; in another Chrome profile (which is also logged in to the same Google account) it does not happen. Both profiles run the code in your comment, but the one without the delay does not wait for it to complete.
The only difference I spotted was that the profile that loads slowly does not include the #player-placeholder element in the initial HTML response. Maybe whether it sends it or not is tied to previous ad-blocker usage?
What does piss me off is that even if you clear cookies and local storage and turn off all extensions in both profiles it still somehow "knows" which profile is which, and I don't know how it's doing it.
Is the use of the "E" notation common in JS? I can see that it (could be) less bytes, obviously more efficient for bigger values... Looking at the script I can see it is minified or whatever we call that these days. I guess my question really is: did someone write "5E3" or did the minifier choose it?
(Sorry this is heading into the weeds, but I'm not really a web developer so maybe someone can tell me!)
In JS I thought 1 == true, and 1 is shorter than !0?
I've never seen exponential notation used for numbers in JS, though (not a surprise; I'm not really a programmer). It seems sensible to me from the point of view of shifting the domain from ms to seconds.
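For what it's worth, minifiers like Terser do commonly emit these shorter spellings; the equivalences are easy to check in a browser console (all of these log "true"):

```javascript
console.log(5E3 === 5000); // exponential literal, one character shorter
console.log(!0 === true);  // !0 is the 2-character spelling of `true`
console.log(!1 === false); // likewise for `false`
console.log(1 == true);    // only with loose equality: 1 !== true
```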
Unlikely. Google has been breaking non-Chromium (or sometimes even just non-Google Chrome) browsers for years on YouTube and their other websites. It was especially egregious when MSFT was trying their own EdgeHTML/Trident-based Edge. Issues would go away by faking user-agent.
> It was especially egregious when MSFT was trying their own EdgeHTML/Trident-based Edge. Issues would go away by faking user-agent.
Why is there more than one user-agent? Does somebody still expect to receive different content based on the user-agent, and furthermore expect that the difference will be beneficial to them?
What was Microsoft trying to achieve by sending a non-Chrome user-agent?
User agents are useful. However, they tend to be abused much more often than effectively used.
1. They are useful for working around bugs. You can match the user agent to work around bugs in known-buggy browser versions. Ideally this would be a handful of specific matches (like Firefox versions 12-14). You can't do feature detection for many bugs because they may only trigger in very specific situations. Ideally this blacklist would contain only confirmed entries, manually retested to see whether new versions have the same problem. (Unfortunately these often end up open-ended, because testing each new release for a bug that isn't on the priority list is tedious.)
2. Diagnosing problems. Oftentimes you see that some specific group of user agents is hammering some API or failing to load a page. It is much easier to track down if the user agent is a precise identifier of the client on which your site doesn't work correctly.
3. Understanding users. For example if you see that a browser you have never heard of is a significant amount of traffic you may want to add it to your testing routine.
But yes, the abuse of `if (/Chrome/.test(navigator.userAgent)) { mainCode() } else { untestedFallback() }` is a major issue.
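The feature-probing counterpart, for contrast, reusing the same hypothetical function names and requestIdleCallback as an example feature:

```javascript
// Browsers that genuinely lack the feature get the fallback, and any
// browser that ships it later automatically gets the main path.
if ("requestIdleCallback" in window) {
  mainCode();
} else {
  untestedFallback();
}
```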
Only option 1 is something that users, who are the people who decide what user-agent to send, might care about. And as you yourself point out, it doesn't happen.
I'm pretty sure that users care that websites can fix bugs affecting their browser. In fact option 1 is very difficult to actually implement when you can't figure out which browser is having problems in the first place.
Why do you think users wouldn't care about sites diagnosing problems that are making pages fail to load (#2) or sites testing the site on the browser that the user uses (#3)?
It is normal practice for each browser to have its own user agent, no? But the fact that Google intentionally detected it and served polyfills, or straight-up invalid JS at the time, was insane. A similar spin today is the "your browser is unsupported" message you see here and there. When a major platform such as YouTube does it, it is really impactful.
It would never do feature detection, would serve lower-quality H.264 video, etc. Back then there was a really nice third-party application, myTube, which made this less of an issue, but it was eventually killed through API changes.
It may have been intended as normal practice, but as far back as IE vs. Netscape, everyone has been mucking with user agents for anti-competitive (and counter-anti-competitive) reasons.
There is no reason for charity with such a large power difference. For Firefox, "bugs" like this can really end up being a lost one-shot game.
It's like people walking by and casually reaching for your phone. It's always meant as a joke, unless you don't pull it away fast enough. Then suddenly it wasn't a joke - and your phone is gone.
This is not rooted in any reservation against Google in particular. If you are a mega-corporation with the power to casually crush competitors, you should really want to be held to a high standard. You do not want to be seen as the accidentally-fucking-others-up-occasionally kind of company.
Without studying the minified code, I wouldn't assume malice just yet. This could just be an inexperienced developer lazily trying to fix some browser-specific bug, or something that accidentally made it to production, like you say.
You think they let inexperienced developers touch the YT code base without proper code review? Even if that were the case, which is an extremely charitable assumption, that itself would be malice in my opinion.
> You think they let inexperienced developers touch the YT code base
Uh, yes? We were all inexperienced at some point. The linked file alone is like 300k lines of unminified code; I doubt it's all written by PhDs with 20 years of experience.
It could even just be a timeout as part of retry logic or similar. A lot of people seem to be saying that there is no reasonable excuse for a `sleep` in a production application, but there are many legitimate reasons to delay execution of some code for a while.
> To clarify it more, it's simply this code in their polymer script link:
> setTimeout(function() { c(); a.resolve(1) }, 5E3);
> which doesn't do anything except make you wait 5s (5E3 = 5000ms = 5s). You can search for it easily in https://www.youtube.com/s/desktop/96766c85/jsbin/desktop_pol...
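For readability, the same line unminified. The roles of `c` and `a` come from the surrounding minified scope, so the comments are a best guess rather than a definitive reading:

```javascript
setTimeout(function () {
  c();          // some deferred callback from the enclosing code
  a.resolve(1); // resolve a promise, unblocking whatever awaits it
}, 5E3);        // 5E3 === 5000 ms: nothing continues for 5 seconds
```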